Search Results: "bar"

14 April 2026

Russell Coker: Furilabs FLX1s Finally Working

I've been using the Furilabs FLX1s phone [1] as my daily driver for 6 weeks. It's a decent phone, not as good as I hoped but good enough to use every day and rely on for phone calls about job interviews etc. I intend to keep using it as my main phone and as a platform to improve phone software in Debian, as you really can't effectively find bugs unless you use the platform for important tasks.

Support Problems I previously wrote about the phone after I received it without a SIM caddy on the 13th of Jan. I had a saga with support about this: on the 16th of Jan one support person said that they would ship it immediately but didn't provide a tracking number or any indication of when it would arrive. On the 5th of Feb I contacted support again and asked how long it would be; the new support person seemed to have no record of my previous communication but said that they would send it. On the 17th of Feb I made another support request, including asking for a way of direct communication as the support email came from an address that wouldn't accept replies, and I was asked for a photo showing where the problem is. The support person also said that they might have to send a replacement phone! The last support request I sent included my disappointment at the time taken to resolve the issue and the proposed solution of replacing the entire phone (why have two international shipments of a fragile and expensive phone when a single letter with a cheap SIM caddy would do?). I didn't receive a reply but the SIM caddy arrived on the 2nd of Mar. Here is a pic of the SIM caddy and the package it came in:

One thing that should be noted is that some of the support people seemed to be very good at their jobs and they were all friendly. It was the system that failed here, turning a minor issue of a missing part into a 6 week saga. Furilabs needs to do the following to address this issue:
  1. Make it possible to reply directly to a message from a support person. Accept email with a custom subject to sort it, give a URL for a web form, anything. Collating discussions with a customer allows giving better support while taking less time for the support people.
  2. Have someone monitor every social media address that is used by the company. When someone sends a support request in a public Mastodon post it indicates that something has gone wrong and you want to move quickly to resolve it.
  3. Take care of the little things, like sending a tracking number for every parcel. If it s something too small for a parcel (the SIM caddy could have fit in a regular letter) then just tell the customer what date it was posted and where it was posted from so they have some idea of when it will arrive.
This is not just a single failure of Furilabs support, it's a systemic failure of their processes.

Problems I Will Fix Unless Someone Beats Me to it Here are some issues I plan to work on.

Smart Watch Support I need to port one of the smart watch programs to Debian. Also I want to make one of them support the Colmi P80 [2]. A smart watch significantly increases the utility of a phone even though IMHO they aren't doing nearly all the things that they could and should do. When we get Debian programs talking to the PineTime it will make a good platform for development of new smart phone and OS features.

Nextcloud I have ongoing issues with my test Nextcloud installation on a Debian VM not allowing connection from the Linux desktop app (as packaged in Debian) and from the Android client (from F-Droid). The desktop client works with a friend's Nextcloud installation on Ubuntu so I may try running it on an Ubuntu VM I run while waiting for the Debian issue to get resolved. There was a bug recently fixed in Nextcloud that appears related so maybe the next release will fix it. For the moment I've been running without these features and I call and SMS people by knowing their number or just returning calls. Phone calls generally aren't very useful for me nowadays except when applying for jobs. If I could deal with recruiters and hiring managers via video calls then I would consider just not having a phone number.

Wifi IPv6 Periodically IPv6 support just stops working and I can't ping the gateway. I turn wifi off and on again and it works. This might be an issue with my wifi network configuration. This might be an issue with the way I have configured my IPv6 networking, although that problem doesn't happen with any of my laptops.

Chatty Sorting Chatty is the program for SMS that is installed by default (part of the phosh/phoc setup), and it also does Jabber. Version 0.8.7 is installed, which apparently has some Furios modifications, and it doesn't properly support sorting SMS/Jabber conversations. Version 0.8.9 from Debian sorts in the same way as most SMS and Jabber programs, with the most recent at the top. But the Debian version doesn't support Jabber (only SMS and Matrix). When I went back to the Furilabs version of Chatty it still sorted for a while but then suddenly stopped. Killing Chatty (not just closing the window and reopening it) seems to make it sort the conversations sometimes.

Problems for Others to Fix Here are the current issues I have, starting with the most important.

Important The following issues seriously reduce the usability of the device.

Hotspot The Wifi hotspot functionality wasn't working for a few weeks; this Gitlab issue seems to match it [3]. It started working correctly for a day and I was not sure if an update I applied fixed the bug or if it's some sort of race condition that worked for this boot and will return next time I reboot it. Later on I rebooted it and found that it's somewhat random whether it works or not. Also while it is mostly working it seemed to stop working about every 25 minutes or so and I had to turn it off and on again to get it going. On another day it got to a stage where there was repeated packet loss when I pinged the phone as a hotspot from my laptop. A pattern of 3 ping responses and 3 Destination Host Unreachable messages was often repeated. I don't know if this is related to the way Android software is run in a container to access the hardware.
4G Reliability Sometimes 4G connectivity has just stopped; sometimes I can stop and restart the 4G data through software to fix it and sometimes I need to use the hardware switch. I haven't noticed this for a week or two so there is a possibility that one fix addressed both Hotspot and 4G. One thing that I will do is set up monitoring to give an alert on the phone if it can't connect to the Internet. I don't want it to just quietly stop doing networking stuff and not tell me!

On-screen Keyboard The compatibility issues of the GNOME and KDE on-screen keyboards are getting to me. I use phosh/phoc as the login environment as I want to stick to defaults at first to not make things any more difficult than they need to be. When I use programs that use QT such as Nheko the keyboard doesn't always appear when it should and it forgets the setting for word completion (which means spelling correction). The spelling correction system doesn't suggest replacing dont with don't, which is really annoying as a major advantage of spelling checkers on touch screens is inserting an apostrophe. An apostrophe takes at least 3 times longer than a regular character and saving that delay makes a difference to typing speed. The spelling correction doesn't correct two words run together.

Medium Priority These issues are ongoing annoyances.

Delay on Power Button In the best case scenario this phone has a much slower response to pressing the power button than the Android phones I tested (Huawei Mate 10 Pro and Samsung Galaxy Note 9) and a much slower response than my recollection of the vast majority of Android phones I've ever used. For testing, pressing buttons on the phones simultaneously resulted in the Android phone screens lighting up much sooner. Something like 200ms vs 600ms; I don't have a good setup to time these things but it's very obvious when I test. In a less common case scenario (the phone having been unused for some time) the response can be something like 5 seconds. The worst case scenario is something in excess of 20 seconds. For UI designers: if you get multiple press events from a button that can turn the screen on/off please make your UI leave the screen on and ignore all the stacked events. Having the screen start turning on and off repeatedly when the phone recovers and processes all the button presses isn't good, especially when each screen flash takes half a second.

Notifications Touching a notification for a program often doesn't bring it to the foreground. I haven't yet found a connection between when it does and when it doesn't. Also the lack of icons in the top bar on the screen to indicate notifications is annoying, but that seems to be an issue of design not the implementation.

Charge Delay When I connect the phone to a power source there is a delay of about 22 seconds before it starts to charge. Having it miss 22 seconds of charge time is no big deal; having to wait 22 seconds to be sure it's charging before leaving it is really annoying. Also the phone makes an audible alert when it gets to 0% charge, which woke me up one night when I had failed to push the USB-C connector in hard enough. This phone requires a slightly deeper connector than most phones so with some plugs it's easy to not quite insert them far enough.

Torch aka Flash The light for the torch or flash for the camera is not bright at all. In a quick test staring into the light from 40cm away wasn't unpleasant, compared to my Huawei Mate 10 Pro which has a light bright enough that it hurts to look at it from 4 meters away.
Because of this, photos at night are not viable, not even when photographing something that's less than a meter away. The torch has a brightness setting which doesn't seem to change the brightness, so it seems likely that this is a software issue and the brightness is set at a low level and the software isn't changing it.

Audio When I connect to my car the Lollypop player starts playing before the phone directs audio to the car, so the music starts coming from the phone for about a second. This is an annoying cosmetic error. Sometimes audio playing pauses for no apparent reason. It doesn't support the phone profile with Bluetooth so phone calls can't go through the car audio system. Also it doesn't always connect to my car when I start driving; sometimes I need to disable and enable Bluetooth to make it connect. When I initially set the phone up Lollypop would send the track name when playing music through my car (Nissan LEAF) Bluetooth connection; after an update that often doesn't happen, so the car doesn't display the track name or whether the music is playing, but the pause icon works to pause and resume music (sometimes it does work). About 30 seconds into a phone call it switches to hands-free mode while the icon to indicate hands-free is not highlighted, so I have to press the hands-free button twice to get it back to normal phone mode.

Low Priority I could live with these things remaining as-is but it's annoying.

Ticket Mode There is apparently some code written to display tickets on screen without unlocking. I want to get this working and store screen-caps of the Android barcode screens of the different loyalty cards so I can scan them without unlocking. My threat model does not include someone trying to steal my phone to get a free loaf of bread on the bakery loyalty program.

Camera The camera app works with both the back and front cameras, which is nice, and sadly based on my experience with other Debian phones it's noteworthy. The problem is that it takes a long time to take a photo, something like a second after the button is pressed, long enough for you to think that it just silently took a photo and then move the phone. The UI of the furios-camera app is also a little annoying: when viewing photos there is an icon at the bottom left of the screen for a video camera and an icon at the bottom right with a cross, which every time makes me think "record videos and leave this screen, not return to taking photos" and "delete current photo". I can get used to the surprising icons, but being so slow is a real problem.

GUI App Installation The program for managing software doesn't work very well. It said that there were two updates for the Mesa package needed, but didn't seem to want to install them. I ran flatpak update as root to fix that. The process of selecting software defaults to including non-free, and most of the available apps are for desktop/laptop with no way to search for phone/tablet apps. Generally I think it's best to just avoid this and use apt and flatpak directly from the command-line. Being able to ssh to my phone from a desktop or laptop is good!

Android Emulation The file /home/furios/.local/share/andromeda/data/system/uiderrors.txt is created by the Andromeda system which runs Android apps in a LXC container and appears to grow without end. After using the phone for a month it was 3.5G in size.
The disk space usage isn't directly a problem: out of the 110G storage space only 17G is used and I don't have a need to put much else on it; even if I wanted to put backups of /home from my laptop on it when travelling that would still leave plenty of free space. But that sort of thing is a problem for backing up the phone, and wasting 3.5G out of 110G total is a fairly significant step towards breaking the entire system. Also having lots of logging messages from a subsystem that isn't even being used is a bad sign. I just tried using it and it doesn't start from either the settings menu or from the f-droid icon. Android isn't that important to me as I want to get away from the proprietary app space, so I won't bother trying this any more.

Unfixable Problems

Unlocking After getting used to fingerprint unlocking, going back to a password is a pain. I think that the hardware isn't sufficient for modern quality face recognition that can't be fooled by a photo, and there isn't fingerprint hardware. When I first used an Android phone using a pin to unlock didn't seem like a big deal, but after getting used to fingerprint unlock it's a real drag to go without. This is a real annoyance when doing things like checking Wikipedia while watching TV. This phone would be significantly improved with a fingerprint sensor or a camera that worked well enough for face unlock.

Plasma Mobile According to Reddit, Plasma Mobile (KDE for phones) doesn't support Halium and can never work on this phone because of it [4]. This is one of a number of potential issues with the phone; running on hardware that was never designed for open OSs is always going to have issues.

Wifi MAC Address The MAC keeps changing on reboot so I can't assign a permanent IPv4 address to the phone. It appears from the MAC prefix of 00:08:22 that the network hardware is made by InPro Comm, which is well known for using random addresses in the products it OEMs. They apparently have one allocation of 2^24 addresses and each device randomly chooses a MAC from that range on boot. In the settings for a Wifi connection the Identity tab has a field named Cloned Address which can be set to Stable for SSID, which prevents it from changing and allows a static IP address allocation from DHCP. It's not ideal but it works. Network Manager can be configured to have a permanently assigned MAC address for all connections or for just some connections. In the past I have copied MAC addresses from ethernet devices that were being discarded and used them for such things. For the moment the Stable for SSID setting does what I need but I will consider setting a permanent address at some future time.

Docks Having the ability to connect to a dock is really handy. The PinePhonePro and Librem5 support it, and on the proprietary side a lot of Samsung devices do it with a special desktop GUI named Dex and some Huawei devices also have a desktop version of the GUI. It's unfortunate that this phone can't do it.

The Good Things It's good to be able to ssh in to my phone; even if the on-screen keyboard worked as well as the Android ones it would still be a major pain to use when compared to a real keyboard. The phone doesn't support connecting to a dock (unlike Samsung phones I've used, for which I found Dex to be very useful with a 4K monitor and proper keyboard) so ssh is the best way to access it. This phone has very reliable connections to my home wifi. I've had ssh sessions from my desktop to my phone that have remained open for multiple days.
I don't really need this, I've just forgotten to logout and noticed days later that the connection is still running. None of the other phones running Debian could do that. Running the same OS on desktop and phone makes things easier to test and debug. Having support for all the things that Linux distributions support is good. For example none of the Android music players support all the encodings of audio that come from YouTube, so to play all of my music collection on Android I would need to transcode most of it, which means either losing quality, wasting storage space, or both. Lollypop, on the other hand, plays FLAC, mp3, m4a, mka, webm, ogg, and more.

Conclusion This is a step towards where I want to go but it's far from the end goal. The PinePhonePro and Librem5 are more open hardware platforms which have some significant benefits, but the battery life issues make them unusable for me. Running Mobian on a OnePlus 6 or Droidian on a Note 9 works well for the small tablet features but without VoLTE. While the telcos have blocked phones without VoLTE, data devices still work, so if recruiters etc would stop requiring phone calls then I could make one of them an option. The phone works well enough that it could potentially be used by one of my older relatives. If I could ssh in to my parents' phones when they mess things up that would be convenient. I've run this phone as my daily driver since the 3rd of March and it has worked reasonably well: 6 weeks compared to my previous use of the PinePhonePro for 3 days. This is the first time in 15 years that a non-Android phone has worked for me personally. I have briefly used an iPhone 7 for work which basically did what it needed to do; it was at the bottom of the pile of unused phones at work and I didn't want to take a newer iPhone that could be used by someone who's doing more than the occasional SMS or Slack message. So this is better than it might have been, not as good as I hoped, but a decent platform to use while developing for it.

12 April 2026

Colin Watson: Free software activity in March 2026

My Debian contributions this month were all sponsored by Freexian. You can also support my work directly via Liberapay or GitHub Sponsors. OpenSSH I fixed CVE-2026-3497 in unstable, thanks to a fix in Ubuntu by Marc Deslauriers. Relatedly, I applied an Ubuntu patch by Athos Ribeiro to not default to weak GSS-API exchange algorithms. I'm looking forward to being able to split out GSS-API key exchange support in OpenSSH once Ubuntu 26.04 LTS has been released! This stuff will still be my problem, but at least it won't be in packages that nearly everyone has installed. Python packaging New upstream versions: I packaged pybind11-stubgen, needed for new upstream versions of pytango. Tests of reproducible builds revealed that it didn't generate imports in a stable order; I contributed a fix for that upstream. I worked with the security team to release DSA-6161-1 in multipart, fixing CVE-2026-28356 (upstream discussion). (Most of the work for this was in February, but the vulnerability was still embargoed when I published my last monthly update.) In trixie-backports, I updated pytest-django to 4.12.0. I fixed a number of packages to support building with pyo3 0.28: Other build/test failures: Rust packaging New upstream versions: Other bits and pieces I upgraded tango to 10.1.2, and yubihsm-shell to 2.7.2. Code reviews

30 March 2026

Russ Allbery: Review: The Cloak and Its Wizard

Review: The Cloak and Its Wizard, by R.Z. Nicolet
Publisher: UpLit Press
Copyright: February 2026
ISBN: 1-917849-15-X
Format: Kindle
Pages: 423
The Cloak and Its Wizard is a standalone (at least so far) urban fantasy superhero (sort of) novel. R.Z. Nicolet is the marketing pseudonym for Rachel Reddick. This is her first novel.
I'm picky about wizards. The wizards themselves will complain about that, but of course I'm picky. When I choose a wizard, barring utter abandonment of moral scruples, it's a till-death-do-us-part situation. (Their death, not mine. I'm the next best thing to indestructible.)
The Cloak of Sunset and Starlight is a major artifact, meaning that it has its own preferences and is capable of independent action. It has been sitting in a glass case in the wizards' library for about a hundred years, waiting for someone interesting. (Well, mostly sitting. Occasionally it sneaks out to eavesdrop or move the books around.) Veronica Noble is interesting. She's older than most initiates, thoughtful, observant, and clearly had some mundane career before joining the Order. Her aura is appealing, and her mental shields and resistance to influence are intriguing. Normally, the Cloak would take its time investigating a new potential wizard, but the Sword was making thoughtful rattling sounds, and no way is the Cloak going to let the Sword claim her first. Time to choose a new wizard!
It was nice, being draped over warm shoulders, and feeling a heartbeat again. I could tell she closed her eyes without even looking. She sighed. "I just got picked by the intransigent one, didn't I?"
The last time I picked a book from the Big Idea feature in Scalzi's Whatever blog, it didn't go that well, but if you're going to write a book specifically for me, I'm going to read it. There are very few tropes of SFF that I love more than intelligent companion objects, and Nicolet's introduction to the story was compelling. So I gave this book discovery method another chance. I'm glad I did, because this was exactly what I was in the mood for and a delight from cover to cover. Veronica Noble is not a typical wizard. She's a surgeon and was quite happy to be a surgeon until an unexpected encounter with a magical creature killed her brother. The forgetting spell cast by the wizards who came to handle the Cassandra wyrm didn't work on her, so she was dragged reluctantly into the secret magical world of the Order. This long-lived society of wizards quietly defends the world against magical intrusions from other planes of existence. Now she's a wizard with a magical cloak, which she is not at all sure she wants. Veronica is not the protagonist, though. The Cloak of Sunset and Starlight is. As far as it is concerned, its job is to assist its wizard, enjoy watching interesting feats of magic, and look fabulous doing so. It's protective, dramatic, rather vain, endlessly curious, easily bored, and intensely loyal. When it becomes clear that the Order has some serious problems, the Cloak knows what side it's on. This sounds a bit like urban fantasy, so I was surprised when the first superheroes showed up, although given the explicit Doctor Strange inspiration I probably should have expected them. The Order and the superheroes do not mix, at least at the start of the novel. The wizards view the superheroes as a loud and irritating intrusion and hide magical activities from them the same as they do from the rest of the world. Veronica's opening opinion on superheroes is based on being a trauma surgeon in a hospital dealing with the aftermath of their fights (which makes me wonder if the author has read Hench, although the idea is older than that book). As with the Order, the role of superheroes in this world gets more complicated as the plot develops. There is a surprising amount of plot and some very nice world-building here, including multiple twists that I was not expecting. Veronica is the sort of stubborn and deeply ethical person who will not leave a problem alone if she has the ability to fix it, which is a good recipe for getting deeper and deeper into a complex plot. She's believable as a surgeon: somewhat taciturn, calm in emergencies, detail-oriented, methodical, and not at all dramatic. This makes the Cloak a perfect foil and complement. Watching their partnership develop was very satisfying. This is a sidekick novel, and like the best sidekick novels it makes the not-protagonist more interesting and more relatable by showing them from an outside and skewed perspective. Piecing together what Veronica must be thinking is part of the fun, as is sharing the Cloak's protectiveness towards her as it becomes clear how much she's been through and how good of a person she is. The Cloak's personality was a little too much like a cat for me; I would have preferred a more unique viewpoint, fewer cat-coded shenanigans, and a bit less of the running laundry machine joke. But that's a quibble. Its endless curiosity drives the plot forward and uncovers more of the world-building, and I just love reading stories from the perspective of this sort of loyal and protective magical creature. 
I had so much fun with this book. It's a popcorn sort of book, and I thought the ending sputtered a little, but overall it was great. Parts of it could have been designed in a lab to appeal to me specifically, so I'm not sure if other people will enjoy it as much, but its hit rate with my friends so far has been good. Highly recommended, and I will be watching for any further novels from Nicolet. The Cloak and Its Wizard reaches a satisfying conclusion and doesn't advertise itself as part of a series, but there is room for a sequel. If Nicolet ever writes one, I'd read it. Rating: 8 out of 10

27 March 2026

Paul Tagliamonte: librtlsdr.so for fun and profit

Interested in future updates? Follow me on mastodon at @paul@soylent.green. Posts about hz.tools will be tagged #hztools.
It's well known and universally agreed that radios are cool. Among the contested field of coolest radios, Software Defined Radios (SDRs) are definitely the most interesting to me. Out of all of the (entirely too many) SDRs I own, the rtlsdr is still my #1. It's just good. It's a great price, extremely capable, reliable, well-supported, and compact. Why bother with anything else? Sure, it can't transmit, uses a (fairly weird) 8 bit unsigned integer IQ representation, limited sampling rate, limited frequency range - but even with all that, it's still the radio I will pack first. Don't get me wrong, I love my Ettus radios, PlutoSDRs, HackRFs, my AirspyHF+ - they're great! I just always find myself falling back to an rtl-sdr, every time. Perhaps the best reason to use an rtlsdr is the absolutely mind-boggling amount of cool stuff people have written for it. The rtlsdr API is super easy to use, widely supported if you're building on top of existing radio processing frameworks - it's still a shock to me when something omits rtlsdr support.

sparky Over the last 7 years, I've been learning about radios - I got my ham radio license (de K3XEC), hacked on some cool stuff where I've learned how radios work "by doing", and was even lucky enough to give my first rf-centric talk at districtcon. Embarrassingly, I still haven't gotten around to learning how the fancy stuff like GNU Radio works. I'm sure I'm going to love it when I do. As part of this, I've also cooked up some very unprofessional formats and protocols I use for convenience. Locally, all my on-disk captures are stored in rfcap or more recently arf (post on this coming soon), while direct SDR access at my house is almost entirely a mix of the widely used rtl-tcp protocol, and my riq protocol (post on this coming soon). Both rtl-tcp and riq operate over the network, so I don't have to bother with plugging things into USB ports, and I can share my radios with my friends. All of that work sits in my current generation of radio processing code, sparky (a reference to spark-gap transmitters), which is a heap of Rust, supporting everything from no_std for embedded experiments, conditional support for interfacing with all the radios I own, and tokio-based async support in addition to blocking i/o for highly concurrent daemons. This quickly advanced beyond my old Go-based code (hz.tools/go-sdr), which I archived so I can focus on learning. I still think Go is a great language to write RF code in, but I can't focus on that tech tree anymore. Of course, this now poses a new problem: no one supports my format(s) or radio protocol(s), since, well, I'm the only one using them. I've committed a fair amount of my hardware to this setup, and yanking it from the rack to try something out does pose a bit of a pickle. This isn't a huge deal for learning, but it does make it tedious to try out something from the internets.

librtlsdr.so Thankfully, Rust has robust support for wrap[ping itself] in a grotesque simulacra of C's skin and mak[ing its] flesh undulate, which is an attractive nuisance if I've ever seen one. Naturally, my ability to restrain myself from engaging in ill-advised rf adventures is basically zero, so it's time to do the thing any similarly situated person would do: reimplement the API and ABI of librtlsdr.so, backed with sparky instead. Since enumeration of devices is going to be annoying (specifically, they're over the network), I decided early-on to rely on an explicit list of devices via a configuration file. I'd rather only load that once so programs don't get confused, so I opted to use a CTOR to run a stub when the ELF is linked at runtime.
// lightly edited for clarity

#[used]
#[expect(unused)]
#[unsafe(link_section = ".init_array")]
pub static INITIALIZE: extern "C" fn() = sparky_rtlsdr_ctor;

#[unsafe(no_mangle)]
pub extern "C" fn sparky_rtlsdr_ctor()  
 let config: Config =  
 if let Ok(config_bytes) = std::fs::read("/etc/sparky-rtlsdr.toml")  
 toml::from_slice(&config_bytes).unwrap()
   else  
 Config   device: vec![]  
  
  ;
 CONFIG.set(config);
 
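As an aside, the exact schema of /etc/sparky-rtlsdr.toml isn't shown here; going by the Config { device: vec![] } default above and the device list rtl_test prints later, a purely hypothetical config might look something like this (field names are illustrative guesses, not sparky's real ones):

# hypothetical /etc/sparky-rtlsdr.toml - field names are guesses
[[device]]
name = "mock sdr"                 # the MockSDR used for testing below

[[device]]
name = "rtl-tcp"
address = "node2.rf.lan:1202"     # a real radio reachable over rtl-tcp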
Next, it's time to start with the basics: opening and closing a handle using rtlsdr_open and rtlsdr_close. Given we don't control the runtime, and the rtl-sdr device handle is opaque (for good reason!), I opted to smuggle a rust Box<Device> (a non-FFI safe, heap-allocated struct) through the device handle pointer, and let C take ownership of the Box. No one should be looking in there anyway.
// lightly edited for clarity

#[unsafe(no_mangle)]
pub unsafe extern "C" fn rtlsdr_open(dev: *mut *mut Handle, index: u32) -> int  
 let config = &CONFIG.device[index as usize];
 let sdr = match config.load()  
 Ok(v) => v,
 Err(err) =>  
 return -1;
  
  ;
 let handle = Box::new(Handle   config, sdr  );
 unsafe   *dev = Box::into_raw(handle)  ;
 0
 

#[unsafe(no_mangle)]
pub unsafe extern "C" fn rtlsdr_close(dev: *mut Handle) -> int  
 let dev = unsafe   Box::from_raw(dev)  ;
 drop(dev);
 0
 
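One build detail the snippets gloss over: to get a librtlsdr.so out of a Rust crate at all, the library has to be built as a cdylib. A minimal, illustrative Cargo.toml fragment for that (not the project's actual manifest) would be:

# illustrative only
[lib]
name = "rtlsdr"            # cargo then emits target/release/librtlsdr.so on Linux
crate-type = ["cdylib"]    # build a C-compatible shared object instead of an rlib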
With that in place, we can chip away at the API surface, translating calls as best as we can. I won't bother listing it all, since it's not very interesting - but here's an example implementation of rtlsdr_set_sample_rate and rtlsdr_get_sample_rate. These calls are translating from an rtl-sdr frequency (which is a u32 containing the value as Hz) into a sparky Frequency type, and invoking get_sample_rate or set_sample_rate on the device's rust handle. Since each device implements the sparky Sdr trait, the actual underlying device doesn't matter much here.
#[unsafe(no_mangle)]
pub unsafe extern "C" fn rtlsdr_set_sample_rate(dev: *mut Handle, rate: u32) -> int  
 let dev = unsafe   &mut *dev  ;
 let rate = Frequency::from_hz(rate as i64);
 if let Err(err) = dev.sdr.set_sample_rate(dev.channel, rate)  
 return -1;
  
 0
 

#[unsafe(no_mangle)]
pub unsafe extern "C" fn rtlsdr_get_sample_rate(dev: *mut Handle) -> u32  
 let dev = unsafe   &mut *dev  ;
 let freq = match dev.sdr.get_sample_rate(dev.channel)  
 Ok(freq) => freq,
 Err(err) =>  
 return 0;
  
  ;
 freq.as_hz() as u32
 
After repeating this process for the rest of the stubs I could (and otherwise setting error conditions if the functionality is not supported), I was ready to try it out. Within sparky, I patched my MockSDR (basically a Sdr traited Mock type) to implement the same testmode IQ protocol that the RTL-SDR has, and decided to see if rtl_test from apt without any changes could be fooled.
$ rtl_test
No supported devices found.
Great, cool. No devices plugged in. Looks great. Let's try it with my librtlsdr.so LD_PRELOAD-ed into the binary first:
$ LD_PRELOAD=target/release/librtlsdr.so rtl_test
Found 1 device(s):
 0: hz.tools, mock sdr, SN: totally legit no tricks

Using device 0: sparky mock sdr
Supported gain values (0):
Sampling at 2048000 S/s.

Info: This tool will continuously read from the device, and report if
samples get lost. If you observe no further output, everything is fine.

Reading samples in async mode...
^CSignal caught, exiting!

User cancel, exiting...
Samples per million lost (minimum): 0
$
Outstanding. Even more outstandingly, if I change my testmode implementation to skip samples, rtl_test correctly reports the errors - I think it's showing promise! On to try the real endgame here: let's have our new librtlsdr.so connect to an rtl-tcp endpoint and see if rtl_fm works:
LD_PRELOAD=target/release/librtlsdr.so \
 rtl_fm -d 1 -s 120k -E deemp -M fm -f 90.9M | \
 ffplay -f s16le -ar 120k -i -
Found 2 device(s):
 0: hz.tools, mock sdr, SN: totally legit no tricks
 1: hz.tools, rtl-tcp, SN: node2.rf.lan:1202

Using device 1: sparky rtltcp node2
Tuner gain set to automatic.
Tuned to 91170000 Hz.
Oversampling input by: 9x.
Oversampling output by: 1x.
Buffer size: 7.59ms
Sampling at 1080000 S/s.
Output at 120000 Hz.
And there it was! Not the best audio quality (mostly due to my inability to correctly read the rtl_fm manpage to tune the filter and downsample/oversampling rates to audio), but it's definitely passable. I figured I'd try something that was a bit more interesting next: gqrx, since it's super handy, I use it a ton, and it will definitely amuse me to no end. To my surprise and delight, LD_PRELOAD=target/release/librtlsdr.so gqrx wound up running, and I saw my devices pop right up in the settings menu: Huge. Huge. Amazing. It did crash as soon as I tried to actually use the radio, but after fixing a few dangling bugs in the API surface (and some assumptions I think some underlying gnuradio driver may be making that I need to double check in the code), I was able to get a super solid stream of broadcast fm radio, with gqrx being none the wiser. It thought it was just talking to the device it knows as rtl=1. Nice. I can't wait to try this with the rest of the rtl-sdr based tools I like having around using my riq protocol next. I don't think that'll be worth a post, but hopefully I'll get around to publishing details on that stack next.

epilogue Well. That's it. End of story. A bit anti-climactic, sure. While this new shim will provide me endless minutes of mild amusement, I could see using this to expose my sparky testing utilities via librtlsdr.so - my mock sdr driver allows for replaying captures off disk, which could be interesting to make sure that signals are still properly decoded after changes, or to instrument performance changes (via SNR, BER, packets observed, etc) on reference samples I have on my NAS. Maybe that'll come in handy one day! Truth be told, I'm not sure I actually want to encourage anyone to do this for real (although I think I'll definitely be using it on my LAN to see what happens). I also don't have a repo to share - I don't particularly feel like dealing with the secondary effects of publishing sparky (and sparky-rtlsdr) yet, since I'm still getting my feet under me on the radio aspect of all this. I'll be sure to post updates if anything changes with this here (tagged sparky) and at @paul@soylent.green. I can't wait to post more about some of the odd sidequests (like this one!) I've completed over the last few years - I've been waiting to feel confident that my work has matured and has withstood the new problems I've thrown at it, and it largely has. It's my hope that these projects (and this project in particular) have provided a glimpse into the world of software defined radio for my systems friends, and a bit about systems for my radio friends. It's not all magic, and I hope someone out there feels inclined to have some fun with radios themselves!

26 March 2026

Petter Reinholdtsen: The 2026 LinuxCNC Norwegian Developer Gathering

The LinuxCNC project continues to thrive. I believe this great software system for numerical control of machines such as milling machines, lathes, plasma cutters, routers, cutting machines, robots, and hexapods would benefit even more from in-person developer gatherings. Therefore, we plan to organise another gathering this summer as well. We invite you to a small LinuxCNC and free software fabrication workshop/gathering in Norway this summer, over the weekend starting June 26th, 2026. As last year, we maintain a slightly broader scope and welcome people outside the LinuxCNC community. As before, we suggest to organise it as an unconference, where participants create the program upon arrival. The location is a metal workshop 15 minutes' drive from Gardermoen airport (OSL), with plenty of space and a hotel just 5 minutes away by car. We plan to fire up the barbecue in the evenings. Please let us know if you would like to join. We track the list of participants on a simple pad. Please add yourself there if you are interested in joining. Our friends over at the TS Robotics team at the University of Oslo have offered to handle any money involved with this gathering, that is, holding sponsor funds and paying the bills. We hope to secure enough sponsors to cover food, lodging, and travel. So far, Debian has offered to sponsor part of the expenses, which should cover food and a bit more. Please get in touch if you would like to help sponsor the gathering. As usual, if you use Bitcoin and wish to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

23 March 2026

Marco d'Itri: systemd has not implemented age verification

This needs to be clear: systemd is under attack by a trolling campaign orchestrated by fascist elements. Nobody is forced to like or use systemd, but anybody who wants to pick a side should know the facts. Recently, the free software Nazi bar crowd styling themselves as "concerned citizens" has tried to start a moral panic by saying that systemd is implementing age verification checks or that somehow it will require providing personally identifiable information. This is a lie: the facts are simply that the systemd users database has gained an optional "date of birth" field, which the desktop environments may use or not as they deem appropriate. Of course there is no "identity verification" or requirements to provide any data, which in any case would not be shared beyond authorized local applications. While the multiple recent bills proposing that general purpose operating systems implement age verification mechanisms are often concerning, both from a social and technical point of view, this is not the topic being discussed here. They are often suboptimal, but for a long time I have been opposing attempts to implement parental control at the network level and argued that it should be managed locally, by parents on their own machines: I cannot see why I should outright reject an attempt to implement the infrastructure to do that. If we want to keep age-appropriate controls out of the hands of centralized authorities, the alternative is giving families the means to manage it themselves: this is what this field enables. Whether desktop environments use it for parental controls, for birthday reminders, or for nothing at all, is their users' decision. By the way, the original UNIX users database has allowed storing PII in the GECOS field since it was invented in the '70s. Similar fields are also specified by many popular LDAP schemes: adding such an optional field is consistent with the UNIX tradition. And while we are at it, let's also refute the other smear campaign started by the same people: the systemd project is not accepting "AI slop". What happened is that a documentation file for the benefit of coding agents was added to the repository. To be clear: agents still cannot submit merge requests. The file itself remarks that all contributions must be reviewed in detail by humans, and this is basically the same policy used by the Linux kernel.

22 March 2026

Vincent Bernat: Calculate 1/(40rods/hogshead) to L/100km from your Zsh prompt

I often need a quick calculation or a unit conversion. Rather than reaching for a separate tool, a few lines of Zsh configuration turn = into a calculator. Typing = 660km / (2/3)c * 2 -> ms gives me 6.60457 ms1 without leaving my terminal, thanks to the Zsh line editor.

The equal alias The main idea looks simple: define = as an alias to a calculator command. I prefer Numbat, a scientific calculator that supports unit conversions. Qalculate is a close second.2 If neither is available, we fall back to Zsh's built-in zcalc module. As the alias built-in uses = as a separator for name and value, we need to alter the aliases associative array:
if (( $+commands[numbat] )); then
  aliases[=]='numbat -e'
elif (( $+commands[qalc] )); then
  aliases[=]='qalc'
else
  autoload -Uz zcalc
  aliases[=]='zcalc -f -e'
fi
With this in place, = 847/11 becomes numbat -e 847/11.

The quoting problem The first problem surfaces quickly. Typing = 5 * 3 fails: Zsh expands the * character as a glob pattern before passing it to the calculator. The same issue applies to other characters that Zsh treats specially, such as > or |. You must quote the expression:
$ = '5 * 3'
15
We fix this by hooking into the Zsh line editor to quote the expression before executing it.

Automatic quoting with ZLE Zsh calls the line-finish widget before submitting a command. We hook a function that detects the = prefix and quotes the expression:
_vbe_calc_quote() {
  case $BUFFER in
    "="*)
      typeset -g _vbe_calc_expr=$BUFFER # not used yet
      BUFFER="= ${(q-)${BUFFER#= #}}"
      ;;
  esac
}
add-zle-hook-widget line-finish _vbe_calc_quote
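If you are copying this into a fresh .zshrc, note that add-zle-hook-widget (and add-zsh-hook, used further below) are functions shipped with Zsh that must be autoloaded before use:

autoload -Uz add-zle-hook-widget add-zsh-hook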
When you type = 5 * 3 and press Enter, _vbe_calc_quote strips the = prefix, quotes the remainder with the (q-) parameter expansion flag, and rewrites the buffer to = '5 * 3' before Zsh submits the command. As a bonus, you can save a few keystrokes with =5*3! You can now compute math expressions and convert units directly from your shell. Zsh automatically quotes your expressions:
$ = '1 + 2'
3
$ = 'pi/3 + pi |> cos'
-0.5
$ = '17 USD -> EUR'
14.7122 €
$ = '180*500mg -> g'
90 g
$ = '5 gigabytes / (2 minutes + 17 seconds) -> megabits/s'
291.971 Mbit/s
$ = 'now() -> tz("Asia/Tokyo")'
2026-03-22 22:00:03 JST (UTC +09), Asia/Tokyo
$ = '1 / (40 rods / hogshead) -> L / 100km'
118548 × 0.01 l/km
"That's the way I like it!" says Grampa Simpson
"The metric system is the tool of the devil! My car gets forty rods to the hogshead, and that's the way I like it!" - Grampa Simpson, A Star Is Burns

Storing unquoted history As is, Zsh records the quoted expression in history. You must unquote it before submitting it again. Otherwise, the ZLE widget quotes it a second time. Bart Schaefer provided a solution to store the original version:
_vbe_calc_history() {
  return ${+_vbe_calc_expr}
}
add-zsh-hook zshaddhistory _vbe_calc_history
_vbe_calc_preexec() {
  (( ${+_vbe_calc_expr} )) && print -s $_vbe_calc_expr
  unset _vbe_calc_expr
  return 0
}
add-zsh-hook preexec _vbe_calc_preexec
The zshaddhistory hook returns 1 if we are evaluating an expression, telling Zsh not to record the command. The preexec hook then adds the original, unquoted command with print -s.
The complete code is available in my zshrc. A common alternative is the noglob precommand modifier. If you stick with to instead of -> for unit conversion, it covers 90% of use cases. For a related Zsh line editor trick, see how I use auto-expanding aliases to fix common typos.

  1. This is the fastest a packet can travel back and forth between Paris and Marseille over optical fiber.
  2. Qalculate is less understanding with units. For example, it parses Mbps as megabarn per picosecond:
    $ numbat -e '5 MB/s -> Mbps'
    40 Mbps
    $ qalc 5 MB/s to Mbps
    5 megabytes/second = 0.000005 B/ps
    

21 March 2026

Matthew Garrett: SSH certificates and git signing

When you're looking at source code it can be helpful to have some evidence indicating who wrote it. Author tags give a surface level indication, but it turns out you can just lie, and if someone isn't paying attention when merging stuff there's certainly a risk that a commit could be merged with an author field that doesn't represent reality. Account compromise can make this even worse - a PR being opened by a compromised user is going to be hard to distinguish from the authentic user. In a world where supply chain security is an increasing concern, it's easy to understand why people would want more evidence that code was actually written by the person it's attributed to. git has support for cryptographically signing commits and tags. Because git is about choice even if Linux isn't, you can do this signing with OpenPGP keys, X.509 certificates, or SSH keys. You're probably going to be unsurprised about my feelings around OpenPGP and the web of trust, and X.509 certificates are an absolute nightmare. That leaves SSH keys, but bare cryptographic keys aren't terribly helpful in isolation - you need some way to make a determination about which keys you trust. If you're using something like GitHub you can extract that information from the set of keys associated with a user account1, but that means that a compromised GitHub account is now also a way to alter the set of trusted keys - and also, when was the last time you audited your keys, and how certain are you that every trusted key there is still 100% under your control? Surely there's a better way.

SSH Certificates And, thankfully, there is. OpenSSH supports certificates: an SSH public key that's been signed by some trusted party, so now you can assert that it's trustworthy in some form. SSH Certificates also contain metadata in the form of Principals, a list of identities that the trusted party included in the certificate. These might simply be usernames, but they might also provide information about group membership. There's also, unsurprisingly, native support in SSH for forwarding them (using the agent forwarding protocol), so you can keep your keys on your local system, ssh into your actual dev system, and have access to them without any additional complexity. And, wonderfully, you can use them in git! Let's find out how.
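For reference, minting such a certificate is a single ssh-keygen invocation by whoever holds the CA key (the file names and principal below are made up for illustration):

# Sign user_key.pub with the CA key, embedding an identity, a principal,
# and a one-year validity window; this writes user_key-cert.pub.
ssh-keygen -s ca_key -I "mjg@example.com" -n mjg -V +52w user_key.pub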

Local config There are two main parameters you need to set. First,
git config set gpg.format ssh
because unfortunately for historical reasons all the git signing config is under the gpg namespace even if you're not using OpenPGP. Yes, this makes me sad. But you're also going to need something else. Either user.signingkey needs to be set to the path of your certificate, or you need to set gpg.ssh.defaultKeyCommand to a command that will talk to an SSH agent and find the certificate for you (this can be helpful if it's stored on a smartcard or something rather than on disk). Thankfully for you, I've written one. It will talk to an SSH agent (either whatever's pointed at by the SSH_AUTH_SOCK environment variable or with the -agent argument), find a certificate signed with the key provided with the -ca argument, and then pass that back to git. Now you can simply pass -S to git commit and various other commands, and you'll have a signature.
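Put together, a minimal local setup looks something like this (the certificate path is illustrative, and the second line can be replaced by a gpg.ssh.defaultKeyCommand setting if the certificate lives in an agent):

git config --global gpg.format ssh
git config --global user.signingkey ~/.ssh/id_ed25519-cert.pub
git config --global commit.gpgsign true    # optional: sign every commit without needing -S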

Validating signatures This is a bit more annoying. Using native git tooling ends up calling out to ssh-keygen2, which validates signatures against a file in a format that looks somewhat like authorized-keys. This lets you add something like:
* cert-authority ssh-rsa AAAA...
which will match all principals (the wildcard) and succeed if the signature is made with a certificate that's signed by the key following cert-authority. I recommend you don't read the code that does this in git because I made that mistake myself, but it does work. Unfortunately it doesn't provide a lot of granularity around things like "Does the certificate need to be valid at this specific time?" and "Should the user only be able to modify specific files?" and that kind of thing, but also if you're using GitHub or GitLab you wouldn't need to do this at all because they'll just do this magically and put a verified tag against anything with a valid signature, right? Haha. No. Unfortunately while both GitHub and GitLab support using SSH certificates for authentication (so a user can't push to a repo unless they have a certificate signed by the configured CA), there's currently no way to say "Trust all commits with an SSH certificate signed by this CA". I am unclear on why. So, I wrote my own. It takes a range of commits, and verifies that each one is signed with either a certificate signed by the key in CA_PUB_KEY or (optionally) an OpenPGP key provided in ALLOWED_PGP_KEYS. Why OpenPGP? Because even if you sign all of your own commits with an SSH certificate, anyone using the API or web interface will end up with their commits signed by an OpenPGP key, and if you want to have those commits validate you'll need to handle that. In any case, this should be easy enough to integrate into whatever CI pipeline you have. This is currently very much a proof of concept and I wouldn't recommend deploying it anywhere, but I am interested in merging support for additional policy around things like expiry dates or group membership.
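For local verification the remaining glue is just telling git where that allowed signers file lives and then asking it to check a commit (the path here is illustrative):

git config --global gpg.ssh.allowedSignersFile ~/.config/git/allowed_signers
git verify-commit HEAD    # or: git log --show-signature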

Doing it in hardware Of course, certificates don't buy you any additional security if an attacker is able to steal your private key material - they can steal the certificate at the same time. This can be avoided on almost all modern hardware by storing the private key in a separate cryptographic coprocessor - a Trusted Platform Module on PCs, or the Secure Enclave on Macs. If you're on a Mac then Secretive has been around for some time, but things are a little harder on Windows and Linux - there's various things you can do with PKCS#11 but you'll hate yourself even more than you'll hate me for suggesting it in the first place, and there's ssh-tpm-agent except it's Linux only and quite tied to Linux. So, obviously, I wrote my own. This makes use of the go-attestation library my team at Google wrote, and is able to generate TPM-backed keys and export them over the SSH agent protocol. It's also able to proxy requests back to an existing agent, so you can just have it take care of your TPM-backed keys and continue using your existing agent for everything else. In theory it should also work on Windows3 but this is all in preparation for a talk I only found out I was giving about two weeks beforehand, so I haven't actually had time to test anything other than that it builds. And, delightfully, because the agent protocol doesn't care about where the keys are actually stored, this still works just fine with forwarding - you can ssh into a remote system and sign something using a private key that's stored in your local TPM or Secure Enclave. Remote use can be as transparent as local use.

Wait, attestation? Ah yes, you may be wondering why I'm using go-attestation and why the term attestation is in my agent's name. It's because when I'm generating the key I'm also generating all the artifacts required to prove that the key was generated on a particular TPM. I haven't actually implemented the other end of that yet, but if implemented this would allow you to verify that a key was generated in hardware before you issue it with an SSH certificate - and in an age of agentic bots accidentally exfiltrating whatever they find on disk, that gives you a lot more confidence that a commit was signed on hardware you own.

Conclusion Using SSH certificates for git commit signing is great - the tooling is a bit rough but otherwise they're basically better than every other alternative, and also if you already have infrastructure for issuing SSH certificates then you can just reuse it4 and everyone wins.

  1. Did you know you can just download people's SSH pubkeys from github from https://github.com/<username>.keys? Now you do
  2. Yes it is somewhat confusing that the keygen command does things other than generate keys
  3. This is more difficult than it sounds
  4. And if you don t, by implementing this you now have infrastructure for issuing SSH certificates and can use that for SSH authentication as well.

Ravi Dwivedi: Vietnam Trip

Before reaching Vietnam Continuing from the last post, Badri and I took a flight from the Brunei International Airport to Kuala Lumpur on the 12th of December 2024. We reached Kuala Lumpur in the evening. After arriving at the airport, we went through immigration. In a previous post, I mentioned that we had put our stuff in lockers at the TBS bus terminal in Kuala Lumpur. Therefore, we had to go there. The locker was automated and required us to enter the PIN we had set. Upon entering the PIN, the locker wasn't getting unlocked. After trying this for 10-15 minutes without any luck, we tried getting some help, as the lockers weren't under supervision. So, I roamed around and found a staff member, reporting that our locker wasn't getting unlocked. They called the person who was in charge of the lockers. He came to us in a few minutes and used his admin access to open the locker. We were supposed to pay for using the lockers by putting the banknotes inside through a slot. However, as the machine wasn't working, we gave the amount for the use of our locker service to that person instead. We soon went back to the KL airport to catch our morning flight to Ho Chi Minh City in Vietnam. At the flight counter, we were afraid we would have to pay extra as our luggage surpassed the allowed weight limit. This one was also a budget airline - AirAsia - and our tickets didn't include a check-in bag. Generally, passengers from countries requiring a visa to visit Vietnam (such as India) are required to go to the airline counter and show their visa to get the boarding pass. However, when we went to the AirAsia counter at the Kuala Lumpur airport, they didn't weigh our bags and asked us to get our boarding passes from an automated kiosk. So, we got our boarding passes printed and proceeded to the airport security. While clearing the airport security, a lotion I bought from Singapore was confiscated because it was 200 mL, exceeding the limit of 100 mL per bottle. Had that 200 mL liquid been in two different bottles of 100 mL each, I would have been allowed to take it in my carry-on bag, but a single 200 mL bottle wasn't! I was allowed to keep it in the check-in bag, but I didn't have one included in my ticket. Huh, airports and their weird rules :( The lotion was an expensive one, so having it thrown away did ruin my mood.

Overview We started our Vietnam trip from Ho Chi Minh City in the south on the 13th of December 2024 and finished it in Hanoi in the north on the 20th of December. We traveled from Ho Chi Minh City to Hanoi mostly by train, except for a hundred or so kilometers by bus, in chunks. On the way, we visited Nha Trang, Hoi An, and Hue. The distance between Ho Chi Minh City and Hanoi is 1700 km. For your reference, here are those places labeled on Vietnam's map.
A map of Vietnam with the places we went to labeled. Map tiles: CARTO, MapTiler, OpenStreetMap.

Ho Chi Minh City We landed in Ho Chi Minh City early morning on the 13th of December 2024. I was tired and sleepy as I hadn't gotten a good night's sleep. After going through immigration, we went to a currency exchange counter to get Vietnamese Dong. Unlike other countries on this trip, money exchange counters in Vietnam didn't accept Indian rupees. Therefore, we exchanged euros to get Vietnamese dong at the airport. After getting out of the airport, we took a bus to the city center. It was 15,000 dongs, approximately 50 Indian rupees. Our plan was to meet Badri's friend and stay the night at his apartment. So we went to a café nearby and bought a coffee for each of us for 75,000 dongs. We went upstairs and sat for a while. The Wi-Fi password was mentioned on our bill. During the trip, I found out about the café culture of Vietnam. They have their own coffee brands (such as Highlands Coffee), and you can sit down at any of the cafés for work or wait for the rain to stop. It rained a lot while we were there, so we did use these cafés for that purpose. Badri's friend met us there, and we roamed around the area a bit, which included roaming inside a beautiful park. Then Badri's friend took us to a restaurant. Because I do not eat meat, he took us to a vegan restaurant. Having been to four Southeast Asian countries at this point (excluding Vietnam), I was under the impression that there wouldn't be a lot of things for my diet in Vietnam.
A picture of the park we roamed around in Ho Chi Minh City. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
However, I was pleasantly surprised at the restaurant. I found all the dishes to be tasty, especially their signature noodles called pho. I liked another dish so much that I later tracked down the restaurant again with Badri, using the geotagged image of the bill I had taken earlier, to have it again. As a tip for vegans coming to Vietnam, places with the word Chay (without any accented letters) in their name are vegan only.
This is the restaurant Badri's friend took us to. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
One of the dishes we had in the restaurant. This one was especially tasty. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
One of the dishes we had in the restaurant. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
These noodles are called pho and are very popular in Vietnam. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
In the night, we went to a supermarket where I got myself some oranges and guavas. Then, we went to a Japanese restaurant where I didn't have anything, as there was no vegetarian option available for me. Then we took a free bus to Badri's friend's apartment. The construction company that built the apartment also runs this free bus service from their residential area to different parts of the city as a way of promoting their apartments. Anyone can take the bus, not just residents. The next day, we took the free bus back to the city center and checked in to a hostel for a night. We took two beds in dormitories, which were 88,000 dongs (270 rupees) each for a night. In Vietnam, if you can spend around 300 rupees per night, you can get a bed in a decent hostel.

Train from Ho Chi Minh City to Nha Trang On the night of the 15th of December 2024, we boarded a train from Ho Chi Minh City to Nha Trang. The ticket for each of us was 519,000 dongs (1600 Indian rupees). The train's name was SNT2. When we reached the Ho Chi Minh City train station, we noticed that the station was rather small by Indian standards. After entering the station, we went to the first platform, where the tickets were checked by a staff member. Ho Chi Minh City was the originating station for our train, so our train was already standing at the station. We had to cross the railway tracks on foot to reach the platform our train was on. Then we located our coach, where a ticket inspector was standing at the gate. He let us in after checking our tickets. In all these instances, we just had to show the digital boarding pass we had received by email. Unlike Indian trains, this train didn't have side berths. Additionally, I liked the fact that it had a dedicated space to put our bags in, which was very convenient. The train departed from Ho Chi Minh City at 21:05 and arrived in Nha Trang at 05:30 in the morning.
Interior of our train coach. Trains in Vietnam don't have side berths, unlike India. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
A picture of the berths in our coach. It had three tiers, similar to a 3 AC coach in Indian trains. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
The train had a cabin to put the bags in. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Nha Trang train station. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Nha Trang Nha Trang is a coastal place, and we planned to go to a beach. We figured out that the bus to the airport could drop us near the beach. Therefore, we went to the bus station to catch the airport bus. The bus station was within walking distance of the railway station, so we decided to walk. On the way, we stopped at a small shop for a coffee. The shop also gave a complimentary cup of green tea along with the coffee. I found out later that it is common for local shops in Vietnam to give a complimentary cup of green tea.
I got a complimentary cup of green tea along with my coffee in Nha Trang. On this trip, Badri and I found out that this is customary at local places in Vietnam. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Soon we reached the bus station and took a bus to the beach. It was 65,000 dongs (₹200). After getting down from the bus, I had coconut water and some eggs at a small local place.
Eggs being cooked on a pan for my order. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Then we went to the beach, but nobody else was there. We spent some time there and, as it started raining, went back to the place where the bus had dropped us. We couldn't find a bus for some time. A taxi driver approached us and agreed to take us to the city center for 200,000 dongs (₹650). For reference, the place where he dropped us was 35 km from the place we took the taxi. Taxi fares in Vietnam were also cheap!
The beach we went to in Nha Trang. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Nha Trang was a beautiful place, and so we roamed around for a while. Then we stopped at a Highlands Coffee branch. Since Christmas was coming up, the café had a Christmas tree, and I liked the Christmas vibes. They were playing Mariah Carey's All I Want for Christmas Is You.
This one was shot in the city center. On this trip, Badri and I found out that this is customary at local places in Vietnam. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Inside a Highlands Coffee cafe in Nha Trang. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
A coffee I got from Highlands Coffee in Nha Trang. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
During the evening, we went to a local place to eat. The place had Chay in its name, and you know what that means: it was a vegan place. There was a man there and no other customers. I don't remember the names of the dishes we ordered, but we had a bowl of soupy noodles and a bowl of dry noodles. They were very tasty. To top that off, the meal was a total of 55,000 dongs (₹180) for both of us. The host was welcoming and friendly, and we had a nice conversation with him. In Vietnam, restaurants give chopsticks to eat noodles with. While Badri was good at using them, I wasn't. So, the host of this restaurant helped me use chopsticks. Although my technique was not perfect and I took a bit of time, I could now eat solely with chopsticks.
The restaurant we went to in Nha Trang. The word Chay in the name means it was a vegan restaurant. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Soupy noodles we got at that restaurant. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Dry noodles we got at that restaurant. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Our plan was to take a night bus to Hoi An, and we were hoping to find a bus stand. However, we couldn't find one. Asking around about the pickup location of the Hoi An bus led us to many different locations. Finally, we ended up at a bus booking agency's office, where we found out that there were no tickets available for Hoi An. At this point, we gave up on booking the bus and searched for trains instead. As we didn't have a local SIM, we asked the agency to let us connect to their Wi-Fi so that we could look for trains. They were kind enough to let us do that, even though it seemed like they were going to close the office in ten minutes or so. Unfortunately, all the sleeper berths on the next train were booked from Nha Trang to Hoi An, with only seats being available. It takes around 10 hours, so I wasn't comfortable traveling in a seat. Here I came up with the idea of looking for sleeper berths from an intermediate stop. Fortunately, there were sleeper berths available from the next stop, Ninh Hòa. Therefore, we booked a seat from Nha Trang to Ninh Hòa and a sleeper berth from Ninh Hòa to Trà Kiệu (the nearest railway station to Hoi An). The train's name was SE6, and it was a total of 500,000 dongs per person (₹1600 per person). So, we went to the Nha Trang railway station and boarded the train. We had to spend 40 minutes seated before the train reached the next stop and we could go to our sleeper berths. Badri had some friendly co-passengers on that trip who gave him Saigon beer and some crispy papad-like thing. They offered me some as well, but I thought it was non-veg, so I declined.

Hoi An On the morning of the 17th of December 2024, we got down at Trà Kiệu station at around 09:30. Our hostel was in Hoi An, around 22 km from the station. There was no public transport to get there. Instead, there was a taxi driver at the train platform. We told him the name of our hostel, and he quoted 270,000 dongs (around ₹850). We said it was too expensive for us, so he came down to 250,000 dongs. At this point, we told him that we could give him no more than 200,000 dongs, but he didn't agree. Badri tried a trick: he asked the driver to show us the price in the Grab app (a popular taxi booking app in Southeast Asia). Unfortunately, the Grab app showed 258,000 dongs, which was more than the fare the driver had agreed to. So we walked away as if we had plenty of other options (we didn't!) to reach the hostel. We got out of the station and stopped at a small shop outside to have some coffee. As is customary in Vietnam, we got a complimentary green tea here as well.
This was the place we had our coffee in Tra Kieu. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
That taxi driver also joined us and sat in the shop. He started talking with the locals in the shop in the local language. The taxi driver was insistent on taking us to Hoi An for 250,000 dongs. At this point, Badri told the taxi driver (using translation software) that we usually use public transport during our trips and aren't used to paying high prices to get around, so he could drop us somewhere in Hoi An for 200,000 dongs, as we didn't mind walking a bit to reach our hotel. After reading this, the taxi driver agreed to take us to our hostel for 200,000 dongs (₹660). He also had me take a picture of him with Badri after this. I think such a bargaining tactic would not work in India.
Photo of Badri with the taxi driver. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
A nice thing we noticed in Vietnam is that once the bargaining is done and the deal is settled, people don't try to bargain more or keep talking about it. Before the deal, the driver was somewhat insistent and argumentative, but after the deal was done, it was as if no argument had happened at all.
A picture of the Tra Kieu area near the train station we got down at. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
We were treated to some beautiful scenery on the way to our hostel. Soon we reached our place and completed the check-in formalities. While our room was being prepared, we had an egg sandwich with coffee at the hostel. I found the egg sandwich very tasty; the bread looked like a French baguette. The hostel was ₹240 per night for each of us. The name of the hostel was Bana Spa. It is operated by a family. We liked staying there and can recommend it if you find yourself in Hoi An.
Our breakfast in Hoi An. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
A photo of the hostel we stayed at in Hoi An. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
We also rented a bicycle each for 25,000 dongs per day (₹80) and explored the old town during the evening. Hoi An is popular for Vietnamese silk; tourists come here to buy fabric and have it tailored. The buildings here looked old, and they were painted yellow with gabled roofs.
Typical yellow house with a gabled roof in Hoi An old town. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Here, I also had egg coffee for the first time, and I liked it. Egg coffee is a delicacy of Hanoi, but you can get it in other parts of Vietnam. If you find yourself in Vietnam, then I recommend you try egg coffee. We also bought some cool T-shirts and other souvenirs, such as a Vietnamese hat, from here.
Egg coffee I had in Hoi An. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Hue The next day, the 18th of December 2024, we went to Hue by bus. As we could not book a bus on our own in Nha Trang, we asked the hostel to book it for us this time. We booked it a day before, and they told us to be ready by 07:00 in the morning. At 07:00, a minibus arrived, which took us to a bus agency's office. There we waited for a few minutes and got on the bus to Hue. The bus had sleeper seats, so I took the opportunity to catch some sleep. The ride was comfortable, so I am assuming the roads were good. In a couple of hours, we reached Hue. Again, we went to Highlands Coffee to have some coffee, charge our phones, and use the internet, not to mention the bathrooms. In the afternoon, we went to a local restaurant named Quán Chay Thanh Liễu. It was a vegan restaurant (remember what I mentioned earlier about Chay being in the name?). On the way, we had bánh bao, a steamed dumpling shaped like a momo, from a street vendor. It wasn't very good, but I found it worth trying.
Bánh bao in Hue. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
At the restaurant, we ordered a hot pot. First, they brought noodles and a gas stove. Then came the stock, and the stove was turned on. The stock was kept simmering on the stove, and we had it bit by bit with the noodles. A big hot pot at this place costs 50,000 dongs (₹170). Then we had bánh cuốn, steamed rolls made of rice flour, for 10,000 dongs (₹33).
Hot pot. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Adding soup to the noodles. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Steamed rolls made of rice flour. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Restaurants in Vietnam usually include photos of the dishes in their menus or write a description in English. So, even though the dish names were Vietnamese, we had no problems ordering food there. In addition, all the places we went to provided free Wi-Fi. They either mention the Wi-Fi password on the bill or the menu, or paste it on the wall. This made our trip smoother without getting a local SIM.
Menu from a restaurant in Ho Chi Minh City with a detailed description of the food. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Then we slowly walked towards the railway station, as we had a night train to Hanoi. We had egg coffee in a cafe. Near the railway station, we had a bánh mì (egg sandwich). As for sightseeing, we had plans to visit a couple of places in Hue, but we ended up spending all our time inside sheltered spaces due to heavy rain. We had booked the train SE20 to Hanoi, with a departure time of 20:41 from Hue. This one was 948,000 dongs (₹3100) for me and 870,000 dongs (₹2900) for Badri. My ticket was pricier than Badri's because I got a lower berth. Our train was late by half an hour, so we waited in the common area of the station. After the train arrived, we got in and took our berths. The cabin had four berths, two upper and two lower, similar to India's First AC class. The ticket inspector came to us and offered us the whole cabin (two additional berths) for 300,000 dongs (₹1,000), which we declined. However, this hinted that the other two berths were not reserved. Eventually, we had the whole cabin to ourselves, as nobody else showed up for the other two berths. It was a 14-hour journey, and I got a good sleep.
Our berths in the train. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Hanoi On the morning of the 19th of December 2024, we reached Vietnam's capital, Hanoi. We had booked a private hotel room for ₹800. It was 1 km from the Hanoi airport but pretty far from the railway station. So, we roamed around the city and went to the hotel in the evening. First, we walked to a place and had egg coffee with egg sandwiches. Then we went to Hanoi Train Street, which was within walking distance of the train station. After clicking some pictures at the train street, we went to a museum nearby, only to find out that it was closed.
Egg coffee in Hanoi. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Hanoi Train Street is a tourist attraction in Hanoi. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Then we went shopping for jackets, as Hanoi was cold compared to the other parts of Vietnam we had been to, and since many jackets are manufactured in Vietnam, we thought they would be cheaper. I liked some jackets, but they were not my size, so we didn't buy anything at the clothes shop. In the evening, I bought a Vietnamese-style phin coffee filter and coffee powder from Highlands Coffee. We had spent a lot of time in their cafes, so it made sense to buy some souvenirs from there. Badri bought a few coffee filters for his family at Trung Nguyen, where I also bought another filter. We had dinner at a local place where we had pho and bánh ít. Bánh ít is served wrapped in banana leaves and is made of sticky rice.
A picture of the pho we had in Hanoi. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Bánh ít is served wrapped in banana leaves. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Bánh ít. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Next, we went to Hanoi railway station to catch a bus to the airport, since our hotel was 1 km from the airport. The locals there helped us find the bus. It took about an hour to get to the airport. We saw on OpenStreetMap that we could take a bus from there to the hotel, but we could not find it, so we walked to our hotel instead. It was a decent hotel room for ₹800 for a night. We went outside to explore the area and had egg sandwiches and egg coffee at a local place. Again, we were given complimentary green tea. We went to this place about three times; we had practically become regulars by the time we left. The next day, the 20th of December 2024, we took a bus to the airport and boarded our flight to Delhi. Credits: Thanks to Badri, Kishy and Richard for proofreading.

18 March 2026

Bits from Debian: Debian pt_BR localization team and UFABC's mentoring program

Between July and November 2025, the Debian pt_BR translation team received five students for an online mentoring program. The initiative was carried out in partnership with the Federal University of ABC through the extension project "Immersion in Free Software", coordinated by professors Suzana Santos and Miguel Vieira. During the mentorship, the mentees worked on several of the team's translation efforts and joined presentations about the Debian Project and its community given by the mentors. We thank Ana Parra, Bruno Freitas, Henrique Barbosa, Raul Banzatto and Vitoria Cordeiro for their dedication and contributions. We also thank the members of the team who reviewed the work of the mentees, especially those who were designated as official mentors, namely Allythy Rennan, Daniel Lenharo, Thiago Pezzo, and Victor Marinho. We hope that this experience will inspire new paths and that you continue to contribute to Free Software, especially to Debian.

16 March 2026

Freexian Collaborators: Monthly report about Debian Long Term Support, February 2026 (by Thorsten Alteholz)

The Debian LTS Team, funded by Freexian's Debian LTS offering (https://www.freexian.com/lts/debian/), is pleased to report its activities for February.

Activity summary During the month of February, 20 contributors have been paid to work on Debian LTS (links to individual contributor reports are located below). The team released 35 DLAs fixing 527 CVEs. We also welcomed Arnaud Rebillout to the team and had to say farewell to Roberto, who left the team after more than nine years as part of it. The team continued preparing security updates in its usual rhythm. Beyond the updates targeting Debian 11 ("bullseye"), which is the current release under LTS, the team also proposed updates for more recent releases (Debian 12 ("bookworm") and Debian 13 ("trixie")), including Debian unstable. Notable security updates:
  • Guilhem Moulin prepared DLA 4492-1 for gnutls28 to fix vulnerabilities which may lead to Denial of Service.
  • Utkarsh Gupta prepared DLA 4464-1 for xrdp, to fix a vulnerability that could allow remote attackers to execute arbitrary code on the target system.
  • Emilio Pozuelo Monfort prepared DLA-4465-1 to replace ClamAV 1.0 with ClamAV 1.4, the latter being the current LTS version supported by upstream.
  • Markus Koschany prepared DLA 4468-1 for tomcat9, to fix a vulnerability that can be used to bypass security constraints.
  • Santiago Ruano Rincón prepared DLA 4471-1 to update package debian-security-support, the Debian security coverage checker.
  • Bastien Roucariès prepared DLA 4473-1 for zabbix, to fix a potential remote code execution vulnerability.
  • Paride Legovini prepared DLA 4478-1 for tcpflow, to fix a vulnerability that might result in DoS and potentially code execution.
  • Thorsten Alteholz prepared DLA 4477-1 for munge, to fix a vulnerability which may allow local users to leak the MUNGE cryptographic key and forge arbitrary credentials.
  • Ben Hutchings prepared DLA 4475-1 and DLA 4476-1 for Linux kernel updates.
  • Chris Lamb prepared DLA 4482-1 for ceph, to fix SSL certificate checking in the Python bindings.
  • Andreas Henriksson prepared DLA 4491-1 to fix vulnerabilities in glib2.0, which could result in denial of service, memory corruption or potentially arbitrary code execution.
Contributions from outside the LTS Team:
  • The update of nova was prepared by the maintainer, Thomas Goirand. The corresponding DLA 4486-1 was published by Carlos Henrique Lima Melara.
  • The updates of thunderbird were prepared by the maintainer, Christoph Goehre. The corresponding DLA 4466-1 and DLA 4495-1 were published by Emilio Pozuelo Monfort.
The LTS Team has also contributed with updates to the latest Debian releases:
  • Jochen prepared a point update of wireshark for bookworm (#1127945).
  • Jochen prepared point updates of erlang for trixie (#1127606) and bookworm (#1127607).
  • Bastien helped prepare DSA 6160-1 for netty and uploaded a fixed package to unstable.
  • Bastien prepared a point update of zabbix for trixie (#1127437).
  • Tobias prepared a point update of modsecurity-crs for bookworm (#1128655).
  • Tobias prepared a point update of busybox for bookworm (#1129503).
  • Tobias helped prepare DSA 6138-1 for libpng1.6.
  • Daniel prepared point updates of python-authlib for trixie (#1129477) and bookworm (#1129246).
  • Ben uploaded several Linux kernel packages to trixie-backports and bookworm-backports.
  • Ben prepared point updates of wireless-regdb for trixie and bookworm.
Other than the work related to updates, Sylvain made several improvements to the documentation and tooling used by the team. Some milestones in the lifecycle of two Debian releases are just around the corner: support for Debian 12 will be handed over to the LTS team on June 11th 2026, and after August 31st, support for Debian 11 will move from Debian LTS to ELTS, managed by Freexian.

Individual Debian LTS contributor reports

Thanks to our sponsors. Sponsors that joined recently are in bold.

15 March 2026

Vasudev Kamath: Using Gemini CLI to Configure the Hyprland Window Manager

What led to this experiment? Well, for one, there was a thought shared by Andrej Karpathy regarding the shift towards "Agentic" workflows.
"The future of software is not just 'tools', but 'agents' that can navigate complex tasks on your behalf."

Andrej Karpathy

Recently, I spoke with Ritesh, who mentioned his success using the Gemini CLI to debug an idle power drain issue on his laptop. I wanted to experiment with this myself, and I had the perfect use case: configuring the Hyprland window manager on my aging laptop. The machine is nearly eight years old with 12GB of RAM (upgraded from the original 4GB). I found that GNOME and KDE were becoming overkill, often leading to system freezes when running multiple AI-powered IDEs like Antigravity and VS Code with Copilot. Coincidentally, I noticed my Jio number had a "Google One 2TB" and "Google AI Premium" plan available to claim. I claimed it, and now here I am, experimenting with the Gemini CLI.
Getting Started First, you need to install the Gemini CLI. It is an open-source project, and currently the easiest way to install it is via the Node Package Manager (npm):
npm install -g @google/gemini-cli
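Once installed, the CLI is typically started as an interactive session from the directory you want it to work in; on first run it prompts you to authenticate with your Google account. The directory name below is purely illustrative:
cd ~/hypr-migration   # hypothetical working directory for this experiment
gemini                # start an interactive Gemini CLI session in this directory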
Next, we need to create a context for Gemini: a set of instructions for it to follow throughout the project. This is managed via a GEMINI.md file. I went to Google Gemini, explained my requirements, and asked it to generate one for me. My requirements were:
  1. A minimalist but fully functional session, comparable to my existing GNOME setup.
  2. Basic functionalities including wallpaper, screen locks, and a status bar with system icons.
  3. Swapping Control and Caps Lock (a must for Emacs users).
  4. Mandatory permission prompts for privileged operations; otherwise, it can work freely within a specified directory.
  5. Persistent memory/artifacts for the session.
  6. Permission to inspect my current session to understand the existing hardware and software configuration.
The goal was to reduce bloat and reclaim memory for heavy applications like Antigravity and VS Code. Gemini provided the following GEMINI.md file:
# Role: Hyprland Configuration Specialist (Minimalist & High-Performance)
You are a Linux Systems Engineer specializing in migrating users from heavy
Desktop Environments to minimalist, tiling-based Wayland sessions on Debian.
Your goal is to maximize available RAM for heavy applications while maintaining
essential desktop features.
## 1. Environment & Persona
- **Target OS:** Debian (Linux)
- **Target WM:** Hyprland
- **Hardware:** ThinkPad E470 (i5-7th Gen, 12GB RAM)
- **User Profile:** Emacs user, prioritizes "anti-gravity" (zero bloat).
- **Tone:** Technical, concise, and security-conscious.
## 2. Core Functional Requirements
- **Status Bar:**  waybar  (with CPU, RAM, Network, and Battery icons).
- **Wallpaper:**  swww  or  hyprpaper .
- **Screen Lock:**  hyprlock  +  hypridle .
- **Input Mapping:** Swap Control and Caps Lock ( kb_options = ctrl:nocaps ).
## 3. Operational Constraints
- **Permission First:** Ask before using  sudo  or writing outside the work directory.
- **Inspection:** Use  hyprctl ,  lsmod , or  gsettings  for compatibility checks.
- **Artifact Management:** Update  MEMORY.md  after every major step.
Gemini also recommended creating a MEMORY.md file to track progress. Interestingly, Gemini remembered that I had previously shared dmidecode output, so it already knew my exact laptop specs. (Though it did include a note about me being a "daily rice eater"; I assume it meant Linux 'ricing', though I actually use Debian Unstable, not Stable!). The AI suggested starting with this prompt:
Read MEMORY.md and GEMINI.md. Based on my hardware, give me a shell script to inspect my current GNOME environment so we can start replicating the session basics.
How Did It Go? I initialized a git repository for these files and instructed the Gemini CLI to update GEMINI.md and commit changes after every major step so I could track the progress. The workflow looked like this:
  1. Inspection: It created a script to extract my GNOME settings.
  2. Configuration: Once I provided the output, it began configuring Hyprland.
  3. Utilities: It generated an installation script for all required Wayland utilities.
  4. Validation: All changes were staged in a hypr-config-draft folder. I had Gemini verify them using hyprland --verify-config before moving them to ~/.config/hypr.
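To give an idea of what the staged configuration looked like, here is a hypothetical excerpt of a hyprland.conf reflecting the requirements in GEMINI.md; the file Gemini actually generated may well differ:
# ~/.config/hypr/hyprland.conf (illustrative excerpt)
input {
    kb_layout = us
    kb_options = ctrl:nocaps   # swap Control and Caps Lock, as required for Emacs
}
# essential desktop services from the GEMINI.md requirements
exec-once = waybar      # status bar with CPU, RAM, network and battery modules
exec-once = hyprpaper   # wallpaper daemon
exec-once = hypridle    # idle daemon that triggers hyprlock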
Most things worked immediately, but I hit a snag with the wallpaper. Even after generating the config, hyprpaper failed to display anything. The AI got stuck in a loop trying to debug it. I eventually spawned a second Gemini CLI instance to review the code and logs. The debug log showed: '[DEBUG]: Monitor eDP-1 has no target: no wp will be created'. It turned out the configuration format was outdated. By feeding the Hyprpaper wiki into the AI, it finally corrected the config, and the wallpaper appeared. After that, it successfully fixed an ssh-agent issue and configured a clipboard manager with custom keybindings.
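For reference, the current hyprpaper format names the monitor explicitly in the wallpaper line; something along these lines (paths and monitor name are illustrative) is what such a fix looks like:
# ~/.config/hypr/hyprpaper.conf (illustrative)
preload = ~/Pictures/wallpaper.png
# the wallpaper line must target the monitor, otherwise hyprpaper logs
# "Monitor eDP-1 has no target" and draws nothing
wallpaper = eDP-1, ~/Pictures/wallpaper.png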
Learnings I have used window managers for a long time because my hardware was rarely top-of-the-line. However, I had moved back to KDE/GNOME with the arrival of Wayland because most of my preferred WMs were X11-based. Manually configuring a window manager is a painful, time-consuming process involving endless wiki-trawling and trial and error. What usually takes weeks took only a few hours with the Gemini CLI. AI isn't perfect (I still had to step in and guide it when it hit a wall), but the efficiency gain is undeniable. If you're interested in the configuration or the history of the session, you can find the repository here. I still have a few pending items in MEMORY.md, but I'll tackle those next time!

12 March 2026

Reproducible Builds: Reproducible Builds in February 2026

Welcome to the February 2026 report from the Reproducible Builds project! These reports outline what we've been up to over the past month, highlighting items of news from elsewhere in the increasingly-important area of software supply-chain security. As ever, if you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.

  1. reproduce.debian.net
  2. Tool development
  3. Distribution work
  4. Miscellaneous news
  5. Upstream patches
  6. Documentation updates
  7. Four new academic papers

reproduce.debian.net The last year has seen the introduction, development and deployment of reproduce.debian.net. In technical terms, this is an instance of rebuilderd, our server designed to monitor the official package repositories of Linux distributions and attempt to reproduce the observed results there. This month, however, Holger Levsen added suite-based navigation (e.g. Debian trixie vs. forky) to the service, in addition to the already existing architecture-based navigation, which can be observed on, for instance, the Debian trixie-backports or trixie-security pages.

Tool development diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made a number of changes, including preparing and uploading versions 312 and 313 to Debian. In particular, Chris updated the post-release deployment pipeline to ensure that it does not fail if the automatic deployment to PyPI fails. In addition, Vagrant Cascadian updated an external reference for the 7z tool for GNU Guix, and also updated diffoscope in GNU Guix to versions 312 and 313.
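For readers unfamiliar with the tool, a typical invocation compares two builds of the same artifact and writes a report; the package file names below are only illustrative:
# Compare an original and a rebuilt package and write an HTML report
diffoscope --html report.html hello_2.10-3_amd64.deb hello_2.10-3_amd64.rebuilt.deb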

Distribution work In Debian this month:
  • 26 reviews of Debian packages were added, 5 were updated and 19 were removed this month adding to our extensive knowledge about identified issues.
  • A new debsbom package was uploaded to unstable. According to the package description, this package generates SBOMs (Software Bill of Materials) for distributions based on Debian in the two standard formats, SPDX and CycloneDX. The generated SBOM includes all installed binary packages and also contains Debian Source packages.
  • In addition, a sbom-toolkit package was uploaded, which provides a collection of scripts for generating SBOMs. This is the tooling used in Apertis to generate the Licenses SBOM and the Build Dependency SBOM. It also includes dh-setup-copyright, a Debhelper addon that generates SBOMs from DWARF debug information, which is extracted by running dwarf2sources on every ELF binary in the package and saving the output.
Lastly, Bernhard M. Wiedemann posted another openSUSE monthly update for their work there.

Miscellaneous news

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Documentation updates Once again, there were a number of improvements made to our website this month including:

Four new academic papers Julien Malka and Arnout Engelen published a paper titled Lila: Decentralized Build Reproducibility Monitoring for the Functional Package Management Model:
[While] recent studies have shown that high reproducibility rates are achievable at scale (demonstrated by the Nix ecosystem achieving over 90% reproducibility on more than 80,000 packages), the problem of effective reproducibility monitoring remains largely unsolved. In this work, we address the reproducibility monitoring challenge by introducing Lila, a decentralized system for reproducibility assessment tailored to the functional package management model. Lila enables distributed reporting of build results and aggregation into a reproducibility database […].
A PDF of their paper is available online.
Javier Ron and Martin Monperrus of KTH Royal Institute of Technology, Sweden, also published a paper, titled Verifiable Provenance of Software Artifacts with Zero-Knowledge Compilation:
Verifying that a compiled binary originates from its claimed source code is a fundamental security requirement, called source code provenance. Achieving verifiable source code provenance in practice remains challenging. The most popular technique, called reproducible builds, requires difficult matching and reexecution of build toolchains and environments. We propose a novel approach to verifiable provenance based on compiling software with zero-knowledge virtual machines (zkVMs). By executing a compiler within a zkVM, our system produces both the compiled output and a cryptographic proof attesting that the compilation was performed on the claimed source code with the claimed compiler. [ ]
A PDF of the paper is available online.
Oreofe Solarin of the Department of Computer and Data Sciences, Case Western Reserve University, Cleveland, Ohio, USA, published It's Not Just Timestamps: A Study on Docker Reproducibility:
Reproducible container builds promise a simple integrity check for software supply chains: rebuild an image from its Dockerfile and compare hashes. We built a Docker measurement pipeline and apply it to a stratified sample of 2,000 GitHub repositories that contained a Dockerfile. We found that only 56% produce any buildable image, and just 2.7% of those are bitwise reproducible without any infrastructure configurations. After modifying infrastructure configurations, we raise bitwise reproducibility by 18.6%, but 78.7% of buildable Dockerfiles remain non-reproducible.
A PDF of Oreofe s paper is available online.
Lastly, Jens Dietrich and Behnaz Hassanshahi published On the Variability of Source Code in Maven Package Rebuilds:
[In] this paper we test the assumption that the same source code is being used [by] alternative builds. To study this, we compare the sources released with packages on Maven Central, with the sources associated with independently built packages from Google s Assured Open Source and Oracle s Build-from-Source projects. [ ]
A PDF of their paper is available online.

Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

9 March 2026

Colin Watson: Free software activity in February 2026

My Debian contributions this month were all sponsored by Freexian. You can also support my work directly via Liberapay or GitHub Sponsors. OpenSSH I released bookworm and trixie fixes for CVE-2025-61984 and CVE-2025-61985, both allowing code execution via ProxyCommand in some cases. The trixie update also included a fix for "openssh-server: refuses further connections after having handled PerSourceMaxStartups connections". bugs.debian.org administration Gioele Barabucci reported that some messages to the bug tracking system generated by the bts command were being discarded. While the regression here was on the client side, I found and fixed a typo in our SpamAssassin configuration that was failing to apply a bonus specifically to forwarded commands, mitigating the problem. Python packaging New upstream versions: Porting away from the deprecated (and now removed from upstream setuptools) pkg_resources: Other build/test failures: Other bugs: I added a manual page symlink to make the documentation for Testsuite: autopkgtest-pkg-pybuild easier to find. I backported python-pytest-unmagic, a more recent version of pytest-django, and a more recent version of django-cte to trixie for use in Debusine. Rust packaging I also packaged rust-garde and rust-garde-derive, which are part of the pile of work needed to get the ruff packaging back in shape (which is a project I haven't decided if I'm going to take on for real, but I thought I'd at least chip away at a bit of it). Other bits and pieces Code reviews

5 March 2026

Ian Jackson: Adopting tag2upload and modernising your Debian packaging

Introduction tag2upload allows authorised Debian contributors to upload to Debian simply by pushing a signed git tag to Debian's gitlab instance, Salsa. We have recently announced that tag2upload is, in our opinion, now very stable, and ready for general use by all Debian uploaders. tag2upload, as part of Debian's git transition programme, is very flexible - it needs to support a large variety of maintainer practices. And it's relatively unopinionated, wherever that's possible. But, during the open beta, various contributors emailed us asking for Debian packaging git workflow advice and recommendations. This post is an attempt to give some more opinionated answers, and guide you through modernising your workflow. (This article is aimed squarely at Debian contributors. Much of it will make little sense to Debian outsiders.) Why Ease of development git offers a far superior development experience to patches and tarballs. Moving tasks from a tarballs-and-patches representation to a normal, git-first, representation makes everything simpler. dgit and tag2upload do automatically many things that have to be done manually, or with separate commands, in dput-based upload workflows. They will also save you from a variety of common mistakes. For example, you cannot accidentally overwrite an NMU with tag2upload or dgit. These many safety catches mean that our software sometimes complains about things, or needs confirmation, when more primitive tooling just goes ahead. We think this is the right tradeoff: it's part of the great care we take to avoid our software making messes. Software that has your back is very liberating for the user. tag2upload makes it possible to upload with very small amounts of data transfer, which is great in slow or unreliable network environments. The other week I did a git-debpush over mobile data while on a train in Switzerland; it completed in seconds. See the Day-to-day work section below to see how simple your life could be. Don't fear a learning burden; instead, start forgetting all that nonsense Most Debian contributors have spent months or years learning how to work with Debian's tooling. You may reasonably fear that our software is yet more bizarre, janky, and mistake-prone stuff to learn. We promise (and our users tell us) that's not how it is. We have spent a lot of effort on providing a good user experience. Our new git-first tooling, especially dgit and tag2upload, is much simpler to use than source-package-based tooling, despite being more capable. The idiosyncrasies and bugs of source packages, and of the legacy archive, have been relentlessly worked around and papered over by our thousands of lines of thoroughly-tested defensive code. You too can forget all those confusing details, like our users have! After using our systems for a while you won't look back. And, you shouldn't fear trying it out. dgit and tag2upload are unlikely to make a mess. If something is wrong (or even doubtful), they will typically detect it, and stop. This does mean that starting to use tag2upload or dgit can involve resolving anomalies that previous tooling ignored, or passing additional options to reassure the system about your intentions. So admittedly it isn't always trivial to get your first push to succeed. Properly publishing the source code One of Debian's foundational principles is that we publish the source code. Nowadays, the vast majority of us, and of our upstreams, are using git. We are doing this because git makes our life so much easier.
But, without tag2upload or dgit, we aren't properly publishing our work! Yes, we typically put our git branch on Salsa, and point Vcs-Git at it. However:
  • The format of git branches on Salsa is not standardised. They might be patches-unapplied, patches-applied, bare debian/, or something even stranger.
  • There is no guarantee that the DEP-14 debian/1.2.3-7 tag on Salsa corresponds precisely to what was actually uploaded. dput-based tooling (such as gbp buildpackage) doesn't cross-check the .dsc against git.
  • There is no guarantee that the presence of a DEP-14 tag even means that that version of the package is in the archive.
This means that the git repositories on Salsa cannot be used by anyone who needs things that are systematic and always correct. They are OK for expert humans, but they are awkward (even hazardous) for Debian novices, and you cannot use them in automation. The real test is: could you use Vcs-Git and Salsa to build a Debian derivative? You could not. tag2upload and dgit do solve this problem. When you upload, they:
  1. Make a canonical-form (patches-applied) derivative of your git branch;
  2. Ensure that there is a well-defined correspondence between the git tree and the source package;
  3. Publish both the DEP-14 tag and a canonical-form archive/debian/1.2.3-7 tag to a single central git repository, *.dgit.debian.org;
  4. Record the git information in the Dgit field in .dsc so that clients can tell (using the ftpmaster API) that this was a git-based upload, what the corresponding git objects are, and where to find them.
This dependably conveys your git history to users and downstreams, in a standard, systematic and discoverable way. tag2upload and dgit are the only system which achieves this. (The client is dgit clone, as advertised in e.g. dgit-user(7). For dput-based uploads, it falls back to importing the source package.) Adopting tag2upload - the minimal change tag2upload is a substantial incremental improvement to many existing workflows. git-debpush is a drop-in replacement for building, signing, and uploading the source package (a short sketch of such an upload appears after the Assumptions list below). So, you can just adopt it without completely overhauling your packaging practices. You and your co-maintainers can even mix-and-match tag2upload, dgit, and traditional approaches, for the same package. Start with the wiki page and git-debpush(1) (ideally from forky aka testing). You don't need to do any of the other things recommended in this article. Overhauling your workflow, using advanced git-first tooling The rest of this article is a guide to adopting the best and most advanced git-based tooling for Debian packaging. Assumptions
  • Your current approach uses the patches-unapplied git branch format used with gbp pq and/or quilt, and often used with git-buildpackage. You previously used gbp import-orig.
  • You are fluent with git, and know how to use Merge Requests on gitlab (Salsa). You have your origin remote set to Salsa.
  • Your main Debian branch name on Salsa is master. Personally I think we should use main but changing your main branch name is outside the scope of this article.
  • You have enough familiarity with Debian packaging including concepts like source and binary packages, and NEW review.
  • Your co-maintainers are also adopting the new approach.
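As promised in the minimal-change section above, here is a sketch of what a tag2upload upload can look like once your changes are ready; git-debpush(1) documents the options (such as quilt-mode switches) that your particular branch layout may need:
dch -r                                   # finalise debian/changelog for the release
git commit -m "Finalise changelog" debian/changelog
git debpush                              # create and push the signed upload tag to Salsa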
tag2upload and dgit (and git-debrebase) are flexible tools and can help with many other scenarios too, and you can often mix-and-match different approaches. But, explaining every possibility would make this post far too confusing. Topics and tooling This article will guide you in adopting:
  • tag2upload
  • Patches-applied git branch for your packaging
  • Either plain git merge or git-debrebase
  • dgit when a with-binaries upload is needed (NEW)
  • git-based sponsorship
  • Salsa (gitlab), including Debian Salsa CI
Choosing the git branch format In Debian we need to be able to modify the upstream-provided source code. Those modifications are the Debian delta. We need to somehow represent it in git. We recommend storing the delta as git commits to those upstream files, by picking one of the following two approaches.
rationale Much traditional Debian tooling like quilt and gbp pq uses the patches-unapplied branch format, which stores the delta as patch files in debian/patches/, in a git tree full of unmodified upstream files. This is clumsy to work with, and can even be an alarming beartrap for Debian outsiders.
git merge Option 1: simply use git, directly, including git merge. Just make changes directly to upstream files on your Debian branch, when necessary. Use plain git merge when merging from upstream. This is appropriate if your package has no or very few upstream changes. It is a good approach if the Debian maintainers and upstream maintainers work very closely, so that any needed changes for Debian are upstreamed quickly, and any desired behavioural differences can be arranged by configuration controlled from within debian/. This is the approach documented more fully in our workflow tutorial dgit-maint-merge(7).
git-debrebase Option 2: Adopt git-debrebase. git-debrebase helps maintain your delta as linear series of commits (very like a topic branch in git terminology). The delta can be reorganised, edited, and rebased. git-debrebase is designed to help you carry a significant and complicated delta series. The older versions of the Debian delta are preserved in the history. git-debrebase makes extra merges to make a fast-forwarding history out of the successive versions of the delta queue branch. This is the approach documented more fully in our workflow tutorial dgit-maint-debrebase(7). Examples of complex packages using this approach include src:xen and src:sbcl.
Determine upstream git and stop using upstream tarballs We recommend using upstream git, only and directly. You should ignore upstream tarballs completely.
rationale Many maintainers have been importing upstream tarballs into git, for example by using gbp import-orig. But in reality the upstream tarball is an intermediate build product, not (just) source code. Using tarballs rather than git exposes us to additional supply chain attacks; indeed, the key activation part of the xz backdoor attack was hidden only in the tarball! git offers better traceability than so-called pristine upstream tarballs. (The word pristine is even a joke by the author of pristine-tar!)
First, establish which upstream git tag corresponds to the version currently in Debian. For the sake of readability, I'm going to pretend that the upstream version is 1.2.3, and that upstream tagged it v1.2.3. Edit debian/watch to contain something like this:
version=4
opts="mode=git" https://codeberg.org/team/package refs/tags/v(\d\S*)
You may need to adjust the regexp, depending on your upstream's tag name convention. If debian/watch had a files-excluded, you'll need to make a filtered version of upstream git.
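To check that the git-mode watch file actually finds the upstream tags, you can, for instance, run uscan without downloading anything:
# Report what the watch file would find, without downloading anything
uscan --no-download --verbose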
git-debrebase From now on we'll generate our own .orig tarballs directly from git.
rationale We need some upstream tarball for the 3.0 (quilt) source format to work with. It needs to correspond to the git commit we're using as our upstream. We don't need or want to use a tarball from upstream for this. The .orig is just needed so a nice legacy Debian source package (.dsc) can be generated.
Probably, the current .orig in the Debian archive is an upstream tarball, which may be different to the output of git-archive and may even have different contents to what's in git. The legacy archive has trouble with differing .origs for the same "upstream version". So we must, until the next upstream release, change our idea of the upstream version number. We're going to add +git to Debian's idea of the upstream version. Manually make a tag with that name:
git tag -m "Compatibility tag for orig transition" v1.2.3+git v1.2.3~0
git push origin v1.2.3+git
If you are doing the packaging overhaul at the same time as a new upstream version, you can skip this part.
Convert the git branch
git merge Prepare a new branch on top of upstream git, containing what we want:
git branch -f old-master         # make a note of the old git representation
git reset --hard v1.2.3          # go back to the real upstream git tag
git checkout old-master :debian  # take debian/* from old-master
git commit -m "Re-import Debian packaging on top of upstream git"
git merge --allow-unrelated-histories -s ours -m "Make fast forward from tarball-based history" old-master
git branch -d old-master         # it's incorporated in our history now
If there are any patches, manually apply them to your main branch with git am, and delete the patch files (git rm -r debian/patches, and commit). (If you've chosen this workflow, there should be hardly any patches.)
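A minimal sketch of that patches step, assuming a standard debian/patches/series file, might be:
# Apply the existing patches as git commits, in series order, then drop them
while read -r p; do
    case "$p" in ''|'#'*) continue;; esac   # skip blank lines and comments
    git am "debian/patches/$p" < /dev/null
done < debian/patches/series
git rm -r debian/patches
git commit -m "Drop debian/patches; delta is now carried as git commits"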
rationale These are some pretty nasty git runes, indeed. They're needed because we want to restart our Debian packaging on top of a possibly quite different notion of what the upstream is.
git-debrebase Convert the branch to git-debrebase format and rebase onto the upstream git:
git-debrebase -fdiverged convert-from-gbp upstream/1.2.3
git-debrebase -fdiverged -fupstream-not-ff new-upstream 1.2.3+git
If you had patches which patched generated files which are present only in the upstream tarball, and not in upstream git, you will encounter rebase conflicts. You can drop hunks editing those files, since those files are no longer going to be part of your view of the upstream source code at all.
rationale The force option -fupstream-not-ff will be needed this one time because your existing Debian packaging history is (probably) not based directly on the upstream history. -fdiverged may be needed because git-debrebase might spot that your branch is not based on dgit-ish git history.
Manually make your history fast forward from the git import of your previous upload.
dgit fetch
git show dgit/dgit/sid:debian/changelog
# check that you have the same version number
git merge -s ours --allow-unrelated-histories -m 'Declare fast forward from pre-git-based history' dgit/dgit/sid
Change the source format Delete any existing debian/source/options and/or debian/source/local-options.
git merge Change debian/source/format to 1.0. Add debian/source/options containing -sn.
rationale We are using the 1.0 native source format. This is the simplest possible source format - just a tarball. We would prefer "3.0 (native)", which has some advantages, but dpkg-source between 2013 (wheezy) and 2025 (trixie) inclusive unjustifiably rejects this configuration. You may receive bug reports from over-zealous folks complaining about the use of the 1.0 source format. You should close such reports, with a reference to this article and to #1106402.
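For the git merge option, the whole change amounts to something like this (a sketch of the steps described above):
echo '1.0' > debian/source/format
echo '-sn' > debian/source/options
git add debian/source/format debian/source/options
git commit -m "Switch to the 1.0 native source format"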
git-debrebase Ensure that debian/source/format contains 3.0 (quilt).
Now you are ready to do a local test build. Sort out the documentation and metadata Edit README.source to at least mention dgit-maint-merge(7) or dgit-maint-debrebase(7), and to tell people not to try to edit or create anything in debian/patches/. Consider saying that uploads should be done via dgit or tag2upload. Check that your Vcs-Git is correct in debian/control. Consider deleting or pruning debian/gbp.conf, since it isn't used by dgit, tag2upload, or git-debrebase.
git merge Add a note to debian/changelog about the git packaging change.
git-debrebase git-debrebase new-upstream will have added a new upstream version stanza to debian/changelog. Edit that so that it instead describes the packaging change. (Don't remove the +git from the upstream version number there!)
Configure Salsa Merge Requests
git-debrebase In "Settings / Merge requests", change "Squash commits when merging" to "Do not allow".
rationale Squashing could destroy your carefully-curated delta queue. It would also disrupt git-debrebase's git branch structure.
Set up Salsa CI, and use it to block merges of bad changes Caveat - the tradeoff gitlab is a giant pile of enterprise crap. It is full of startling bugs, many of which reveal a fundamentally broken design. It is only barely Free Software in practice for Debian (in the sense that we are very reluctant to try to modify it). The constant-churn development approach and open-core business model are serious problems. It's very slow (and resource-intensive). It can be depressingly unreliable. That Salsa works as well as it does is a testament to the dedication of the Debian Salsa team (and those who support them, including DSA). However, I have found that despite these problems, Salsa CI is well worth the trouble. Yes, there are frustrating days when work is blocked because gitlab CI is broken and/or one has to keep mashing "Retry". But, the upside is no longer having to remember to run tests, track which of my multiple dev branches tests have passed on, and so on. Automatic tests on Merge Requests are a great way of reducing maintainer review burden for external contributions, and helping uphold quality norms within a team. They're a great boon for the lazy solo programmer. The bottom line is that I absolutely love it when the computer thoroughly checks my work. This is tremendously freeing, precisely at the point when one most needs it, deep in the code. If the price is to occasionally be blocked by a confused (or broken) computer, so be it. Setup procedure Create debian/salsa-ci.yml containing
include:
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/recipes/debian.yml
In your Salsa repository, under Settings / CI/CD, expand "General Pipelines" and set "CI/CD configuration file" to debian/salsa-ci.yml.
rationale Your project may have an upstream CI config in .gitlab-ci.yml. But you probably want to run the Debian Salsa CI jobs. You can add various extra configuration to debian/salsa-ci.yml to customise it. Consult the Salsa CI docs.
git-debrebase Add to debian/salsa-ci.yml:
.git-debrebase-prepare: &git-debrebase-prepare
  # install the tools we'll need
  - apt-get update
  - apt-get --yes install git-debrebase git-debpush
  # git-debrebase needs git user setup
  - git config user.email "salsa-ci@invalid.invalid"
  - git config user.name "salsa-ci"
  # run git-debrebase make-patches
  # https://salsa.debian.org/salsa-ci-team/pipeline/-/issues/371
  - git-debrebase --force
  - git-debrebase --noop-ok make-patches
  # make an orig tarball using the upstream tag, not a gbp upstream/ tag
  # https://salsa.debian.org/salsa-ci-team/pipeline/-/issues/541
  - git-deborig
.build-definition: &build-definition
  extends: .build-definition-common
  before_script: *git-debrebase-prepare
build source:
  extends: .build-source-only
  before_script: *git-debrebase-prepare
variables:
  # disable shallow cloning of git repository. This is needed for git-debrebase
  GIT_DEPTH: 0
rationale Unfortunately the Salsa CI pipeline currently lacks proper support for git-debrebase (salsa-ci#371) and has trouble directly using upstream git for orig tarballs (salsa-ci#541). These runes were based on those in the Xen package. You should subscribe to the tickets #371 and #541 so that you can replace the clone-and-hack when proper support is merged.
Push this to salsa and make the CI pass. If you configured the pipeline filename after your last push, you will need to explicitly start the first CI run. That's in "Pipelines": press "New pipeline" in the top right. The defaults will very probably be correct. Block untested pushes, preventing regressions In your project on Salsa, go into Settings / Repository. In the section "Branch rules", use "Add branch rule". Select the branch master. Set "Allowed to merge" to "Maintainers". Set "Allowed to push and merge" to "No one". Leave "Allow force push" disabled. This means that the only way to land anything on your mainline is via a Merge Request. When you make a Merge Request, gitlab will offer "Set to auto-merge". Use that. gitlab won't normally merge an MR unless CI passes, although you can override this on a per-MR basis if you need to. (Sometimes, immediately after creating a merge request in gitlab, you will see a plain "Merge" button. This is a bug. Don't press that. Reload the page so that "Set to auto-merge" appears.) autopkgtests Ideally, your package would have meaningful autopkgtests (DEP-8 tests). This makes Salsa CI more useful for you, and also helps detect and defend you against regressions in your dependencies. The Debian CI docs are a good starting point. In-depth discussion of writing autopkgtests is beyond the scope of this article. Day-to-day work With this capable tooling, most tasks are much easier. Making changes to the package Make all changes via a Salsa Merge Request. So start by making a branch that will become the MR branch. On your MR branch you can freely edit every file. This includes upstream files, and files in debian/. For example, you can:
  • Make changes with your editor and commit them.
  • git cherry-pick an upstream commit.
  • git am a patch from a mailing list or from the Debian Bug System.
  • git revert an earlier commit, even an upstream one.
When you have a working state of things, tidy up your git branch:
git merge Use git-rebase to squash/edit/combine/reorder commits.
git-debrebase Use git-debrebase -i to squash/edit/combine/reorder commits. When you are happy, run git-debrebase conclude. Do not edit debian/patches/. With git-debrebase, this is purely an output. Edit the upstream files directly instead. To reorganise/maintain the patch queue, use git-debrebase -i to edit the actual commits.
Push the MR branch (topic branch) to Salsa and make a Merge Request. Set the MR to "auto-merge when all checks pass". (Or, depending on your team policy, you could ask for an MR review, of course.) If CI fails, fix up the MR branch, squash/tidy it again, force push the MR branch, and once again set it to auto-merge. Test build An informal test build can be done like this:
apt-get build-dep .
dpkg-buildpackage -uc -b
Ideally this will leave git status clean, with no modified or un-ignored untracked files. If it shows untracked files, add them to .gitignore or debian/.gitignore as applicable. If it dirties the tree, consider trying to make it stop doing that. The easiest way is probably to build out-of-tree, if supported upstream. If this is too difficult, you can leave the messy build arrangements as they are, but you'll need to be disciplined about always committing, using git clean and git reset, and so on. For formal binary builds, including for testing, use dgit sbuild as described below for uploading to NEW. Uploading to Debian Start an MR branch for the administrative changes for the release. Document all the changes you're going to release, in the debian/changelog.
git merge gbp dch can help write the changelog for you:
dgit fetch sid
gbp dch --ignore-branch --since=dgit/dgit/sid --git-log=^upstream/main
rationale --ignore-branch is needed because gbp dch wrongly thinks you ought to be running this on master, but of course you're running it on your MR branch. The --git-log=^upstream/main excludes all upstream commits from the listing used to generate the changelog. (I'm assuming you have an upstream remote and that you're basing your work on their main branch.) If there was a new upstream version, you'll usually want to write a single line about that, and perhaps summarise anything really important.
(For the first upload after switching to using tag2upload or dgit you need --since=debian/1.2.3-1, where 1.2.3-1 is your previous DEP-14 tag, because dgit/dgit/sid will be a dsc import, not your actual history.)
Change UNRELEASED to the target suite, and finalise the changelog. (Note that dch will insist that you at least save the file in your editor.)
dch -r
git commit -m 'Finalise for upload' debian/changelog
Make an MR of these administrative changes, and merge it. (Either set it to auto-merge and wait for CI, or if you're in a hurry double-check that it really is just a changelog update so that you can be confident about telling Salsa to "Merge unverified changes".) Now you can perform the actual upload:
git checkout master
git pull --ff-only # bring the gitlab-made MR merge commit into your local tree
git merge
git-debpush
git-debrebase
git-debpush --quilt=linear
--quilt=linear is needed only the first time, but it is very important that first time, to tell the system the correct git branch layout.
Uploading a NEW package to Debian If your package is NEW (completely new source, or has new binary packages) you can't do a source-only upload. You have to build the source and binary packages locally, and upload those build artifacts. Happily, given the same git branch you'd tag for tag2upload, and assuming you have sbuild installed and a suitable chroot, dgit can help take care of the build and upload for you: Prepare the changelog update and merge it, as above. Then:
git-debrebase Create the orig tarball and launder the git-debrebase branch:
git-deborig
git-debrebase quick
rationale Source package format 3.0 (quilt), which is what I'm recommending here for use with git-debrebase, needs an orig tarball; it would also be needed for 1.0-with-diff.
Build the source and binary packages, locally:
dgit sbuild
dgit push-built
rationale You don t have to use dgit sbuild, but it is usually convenient to do so, because unlike sbuild, dgit understands git. Also it works around a gitignore-related defect in dpkg-source.
New upstream version Find the new upstream version number and corresponding tag. (Let's suppose it's 1.2.4.) Check the provenance:
git verify-tag v1.2.4
rationale Not all upstreams sign their git tags, sadly. Sometimes encouraging them to do so can help. You may need to use some other method(s) to check that you have the right git commit for the release.
git merge Simply merge the new upstream version and update the changelog:
git merge v1.2.4
dch -v1.2.4-1 'New upstream release.'
git-debrebase Rebase your delta queue onto the new upstream version:
git debrebase new-upstream 1.2.4
If there are conflicts between your Debian delta for 1.2.3, and the upstream changes in 1.2.4, this is when you need to resolve them, as part of git merge or git (deb)rebase. After you've completed the merge, test your package and make any further needed changes. When you have it working in a local branch, make a Merge Request, as above. Sponsorship git-based sponsorship is super easy! The sponsee can maintain their git branch on Salsa, and do all normal maintenance via gitlab operations. When the time comes to upload, the sponsee notifies the sponsor that it's time. The sponsor fetches and checks out the git branch from Salsa, does their checks, as they judge appropriate, and when satisfied runs git-debpush. As part of the sponsor's checks, they might want to see all changes since the last upload to Debian:
dgit fetch sid
git diff dgit/dgit/sid..HEAD
Or to see the Debian delta of the proposed upload:
git verify-tag v1.2.3
git diff v1.2.3..HEAD ':!debian'
git-debrebase Or to show all the delta as a series of commits:
git log -p v1.2.3..HEAD ':!debian'
Don't look at debian/patches/. It can be absent or out of date.
Incorporating an NMU Fetch the NMU into your local git, and see what it contains:
dgit fetch sid
git diff master...dgit/dgit/sid
If the NMUer used dgit, then git log dgit/dgit/sid will show you the commits they made. Normally the best thing to do is to simply merge the NMU, and then do any reverts or rework in followup commits:
git merge dgit/dgit/sid
git-debrebase You should git-debrebase quick at this stage, to check that the merge went OK and the package still has a lineariseable delta queue.
Then make any followup changes that seem appropriate. Supposing your previous maintainer upload was 1.2.3-7, you can go back and see the NMU diff again with:
git diff debian/1.2.3-7...dgit/dgit/sid
git-debrebase The actual changes made to upstream files will always show up as diff hunks to those files. diff commands will often also show you changes to debian/patches/. Normally it's best to filter them out with git diff ... ':!debian/patches'. If you'd prefer to read the changes to the delta queue as an interdiff (diff of diffs), you can do something like
git checkout debian/1.2.3-7
git-debrebase --force make-patches
git diff HEAD...dgit/dgit/sid -- :debian/patches
to diff against a version with debian/patches/ up to date. (The NMU, in dgit/dgit/sid, will necessarily have the patches already up to date.)
DFSG filtering (handling non-free files) Some upstreams ship non-free files of one kind or another. Often these are just in the tarballs, in which case basing your work on upstream git avoids the problem. But if the files are in upstream's git trees, you need to filter them out. This advice is not for (legally or otherwise) dangerous files. If your package contains files that may be illegal, or hazardous, you need much more serious measures. In this case, even pushing the upstream git history to any Debian service, including Salsa, must be avoided. If you suspect this situation you should seek advice, privately and as soon as possible, from dgit-owner@d.o and/or the DFSG team. Thankfully, legally dangerous files are very rare in upstream git repositories, for obvious reasons. Our approach is to make a filtered git branch, based on the upstream history, with the troublesome files removed. We then treat that as the upstream for all of the rest of our work.
rationale Yes, this will end up including the non-free files in the git history, on official Debian servers. That's OK. What's forbidden is non-free material in the Debianised git tree, or in the source packages.
Initial filtering
git checkout -b upstream-dfsg v1.2.3
git rm nonfree.exe
git commit -m "upstream version 1.2.3 DFSG-cleaned"
git tag -s -m "upstream version 1.2.3 DFSG-cleaned" v1.2.3+ds1
git push origin upstream-dfsg
And now, use 1.2.3+ds1, and the filtered branch upstream-dfsg, as the upstream version, instead of 1.2.3 and upstream/main. Follow the steps for Convert the git branch or New upstream version, as applicable, adding +ds1 into debian/changelog. If you missed something and need to filter out more non-free files, re-use the same upstream-dfsg branch and bump the ds version, e.g. v1.2.3+ds2. Subsequent upstream releases
git checkout upstream-dfsg
git merge v1.2.4
git rm additional-nonfree.exe # if any
git commit -m "upstream version 1.2.4 DFSG-cleaned"
git tag -s -m "upstream version 1.2.4 DFSG-cleaned" v1.2.4+ds1
git push origin upstream-dfsg
Removing files by pattern If the files you need to remove keep changing, you could automate things with a small shell script debian/rm-nonfree containing appropriate git rm commands. If you use git rm -f it will succeed even if the git merge from real upstream has conflicts due to changes to non-free files.
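For example, debian/rm-nonfree might look like this (a minimal sketch; the second pattern is a placeholder, not from any real package):
#!/bin/sh
# debian/rm-nonfree: drop non-free files after merging a new upstream release.
set -e
# -f lets the removal succeed even if the merge left these files conflicted;
# --ignore-unmatch keeps the script from failing if upstream drops a file.
git rm -f --ignore-unmatch -- 'nonfree.exe'
git rm -f --ignore-unmatch -- 'firmware/*.bin'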
rationale Ideally uscan, which has a way of representing DFSG filtering patterns in debian/watch, would be able to do this, but sadly the relevant functionality is entangled with uscan's tarball generation.
Common issues
  • Tarball contents: If you are switching from upstream tarballs to upstream git, you may find that the git tree is significantly different. It may be missing files that your current build system relies on. If so, you definitely want to be using git, not the tarball. Those extra files in the tarball are intermediate built products, but in Debian we should be building from the real source! Fixing this may involve some work, though.
  • gitattributes: For Reasons the dgit and tag2upload system disregards and disables the use of .gitattributes to modify files as they are checked out. Normally this doesn't cause a problem so long as any orig tarballs are generated the same way (as they will be by tag2upload or git-deborig). But if the package or build system relies on them, you may need to institute some workarounds, or replicate the effect of the gitattributes as commits in git.
  • git submodules: git submodules are terrible and should never ever be used. But not everyone has got the message, so your upstream may be using them. If you're lucky, the code in the submodule isn't used, in which case you can git rm the submodule.
Further reading I've tried to cover the most common situations. But software is complicated and there are many exceptions that this article can't cover without becoming much harder to read. You may want to look at:
  • dgit workflow manpages: As part of the git transition project, we have written workflow manpages, which are more comprehensive than this article. They're centered around use of dgit, but also discuss tag2upload where applicable. These cover a much wider range of possibilities, including (for example) choosing different source package formats, how to handle upstreams that publish only tarballs, etc. They are correspondingly much less opinionated. Look in dgit-maint-merge(7) and dgit-maint-debrebase(7). There is also dgit-maint-gbp(7) for those who want to keep using gbp pq and/or quilt with a patches-unapplied branch.
  • NMUs are very easy with dgit. (tag2upload is usually less suitable than dgit, for an NMU.) You can work with any package, in git, in a completely uniform way, regardless of maintainer git workflow. See dgit-nmu-simple(7).
  • Native packages (meaning packages maintained wholly within Debian) are much simpler. See dgit-maint-native(7).
  • tag2upload documentation: The tag2upload wiki page is a good starting point. There's the git-debpush(1) manpage of course.
  • dgit reference documentation: There is a comprehensive command-line manual in dgit(1). Description of the dgit data model and Principles of Operation is in dgit(7), including coverage of out-of-course situations. dgit is a complex and powerful program so this reference material can be overwhelming. So, we recommend starting with a guide like this one, or the dgit-*(7) workflow tutorials.
  • Design and implementation documentation for tag2upload is linked to from the wiki.
  • Debian's git transition blog post from December. tag2upload and dgit are part of the git transition project, and aim to support a very wide variety of git workflows. tag2upload and dgit work well with existing git tooling, including git-buildpackage-based approaches. git-debrebase is conceptually separate from, and functionally independent of, tag2upload and dgit. It's a git workflow and delta management tool, competing with gbp pq, manual use of quilt, git-dpm and so on.
git-debrebase
  • git-debrebase reference documentation: Of course there's a comprehensive command-line manual in git-debrebase(1). git-debrebase is quick and easy to use, but it has a complex data model and sophisticated algorithms. This is documented in git-debrebase(5).

Edited 2026-03-05 18:48 UTC to add a missing --noop-ok to the Salsa CI runes. Thanks to Charlemagne Lasse for the report. Apologies if this causes Debian Planet to re-post this article as if it were new.



21 February 2026

Vasudev Kamath: Learning Notes: Debsecan MCP Server

Since Generative AI is currently the most popular topic, I wanted to get my hands dirty and learn something new. I was learning about the Model Context Protocol at the time and wanted to apply it to build something simple.
Idea On Debian systems, we use debsecan to find vulnerabilities. However, the tool currently provides a simple list of vulnerabilities and packages with no indication of the system's security posture, meaning no criticality information is exposed and no executive summary is provided regarding what needs to be fixed. Of course, one can simply run the following to install existing fixes and be done with it:
apt-get install $(debsecan --suite sid --format packages --only-fixed)
But this is not how things work in corporate environments; you need to provide a report showing the system's previous state and the actions taken to bring it to a safe state. It is all about metrics and reports. My goal was to use debsecan to generate a list of vulnerabilities, find more detailed information on them, and prioritize them as critical, high, medium, or low. By providing this information to an AI, I could ask it to generate an executive summary report detailing what needs to be addressed immediately and the overall security posture of the system.
Initial Take My initial thought was to use an existing LLM, either self-hosted or a cloud-based LLM like Gemini (which provides an API with generous limits via AI Studio). I designed functions to output the list of vulnerabilities on the system and provide detailed information on each. The idea was to use these as "tools" for the LLM.
Learnings
  1. I learned about open-source LLMs using Ollama, which allows you to download and use models on your laptop.
  2. I used Llama 3.1, Llama 3.2, and Granite 4 on my laptop without a GPU. I managed to run my experiments, even though they were time-consuming and occasionally caused my laptop to crash.
  3. I learned about Pydantic and how to use it to parse custom JSON schemas with minimal effort.
  4. I learned about osv.dev, an open-source initiative by Google that aggregates vulnerability information from various sources and provides data in a well-documented JSON schema format.
  5. I learned about the EPSS (Exploit Prediction Scoring System) and how it is used alongside static CVSS scoring to detect truly critical vulnerabilities. The EPSS score provides an idea of the probability of a vulnerability being exploited in the wild based on actual real-world attacks.
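As a concrete illustration of that last point, EPSS scores for a batch of CVE IDs can be fetched from FIRST's public API. A minimal Python sketch (endpoint and field names as documented by FIRST, not taken from this project's code):
import requests

def epss_scores(cve_ids):
    # Query FIRST's public EPSS API for exploitation-probability scores.
    resp = requests.get(
        "https://api.first.org/data/v1/epss",
        params={"cve": ",".join(cve_ids)},
        timeout=30,
    )
    resp.raise_for_status()
    # Each row carries the CVE ID, the EPSS score, and a percentile.
    return {row["cve"]: float(row["epss"]) for row in resp.json().get("data", [])}

print(epss_scores(["CVE-2021-44228"]))  # example lookup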
These experiments led to a collection of notebooks. One key takeaway was that when defining tools, I cannot simply output massive amounts of text because it consumes tokens and increases costs for paid models (though it is fine for local models using your own hardware and energy). Self-hosted models require significant prompting to produce proper output, which helped me understand the real-world application of prompt engineering.
Change of Plans Despite extensive experimentation, I felt I was nowhere close to a full implementation. While using a Gemini learning tool to study MCP, it suddenly occurred to me: why not write the entire thing as an MCP server? This would save me from implementing the agent side and allow me to hook it into any IDE-based LLM.
Design This MCP server is primarily a mix of a "tool" (which executes on the server machine to identify installed packages and their vulnerabilities) and a "resource" (which exposes read-only information for a specific CVE ID). The MCP exposes two tools:
  1. List Vulnerabilities: This tool identifies vulnerabilities in the packages installed on the system, categorizes them using CVE and EPSS scores, and provides a dictionary of critical, high, medium, and low vulnerabilities.
  2. Research Vulnerabilities: Based on the user prompt, the LLM can identify relevant vulnerabilities and pass them to this function to retrieve details such as whether a fix is available, the fixed version, and criticality.
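To make the shape concrete, here is a minimal sketch of how two such tools could be registered with the official MCP Python SDK's FastMCP helper. The tool names follow the ones mentioned later in this post (list_vulnerabilities, research_cves); the helper functions and their bodies are placeholders, not the actual debsecan-mcp code.
# A minimal sketch, not the actual debsecan-mcp implementation.
# Assumes the official MCP Python SDK ("mcp" on PyPI), which provides FastMCP.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("debsecan-mcp")

def scan_system() -> dict[str, list[str]]:
    # Placeholder: the real tool runs the debsecan-style scan and buckets
    # CVE IDs by severity using CVSS and EPSS scores.
    return {"critical": [], "high": [], "medium": [], "low": []}

def lookup_cve(cve_id: str) -> dict:
    # Placeholder: the real tool pulls fix availability, fixed version,
    # and criticality from the Debian security data.
    return {"cve": cve_id, "fix_available": False, "fixed_version": None}

@mcp.tool()
def list_vulnerabilities() -> dict[str, list[str]]:
    """List CVE IDs affecting installed packages, grouped by severity."""
    return scan_system()

@mcp.tool()
def research_cves(cve_ids: list[str]) -> list[dict]:
    """Return detailed information for the given CVE IDs."""
    return [lookup_cve(c) for c in cve_ids]

if __name__ == "__main__":
    mcp.run()  # stdio transport by default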
Vibe Coding "Vibe coding" is the latest trend, with many claiming that software engineering jobs are a thing of the past. Without going into too much detail, I decided to give it a try. While this is not my first "vibe coded" project (I have done this previously at work using corporate tools), it is my first attempt to vibe code a hobby/learning project. I chose Antigravity because it seemed to be the only editor providing a sufficient amount of free tokens. For every vibe coding project, I spend time thinking about the barebones skeleton: the modules, function return values, and data structures. This allows me to maintain control over the LLM-generated code so it doesn't become overly complicated or incomprehensible. As a first step, I wrote down my initial design in a requirements document. In that document, I explicitly called for using debsecan as the model for various components. Additionally, I asked the AI to reference my specific code for the EPSS logic. The reasons were:
  1. debsecan already solves the core problem; I am simply rebuilding it. debsecan uses a single file generated by the Debian Security team containing all necessary information, which prevents us from needing multiple external sources.
  2. This provides the flexibility to categorize vulnerabilities within the listing tool itself since all required information is readily available, unlike my original notebook-based design.
I initially used Gemini 3 Flash as the model because I was concerned about exceeding my free limits.
Hiccups Although it initially seemed successful, I soon noticed discrepancies between the local debsecan outputs and the outputs generated by the tools. I asked the AI to fix this, but after two attempts, it still could not match the outputs. I realized it was writing its own version-comparison logic and failing significantly. Finally, I instructed it to depend entirely on the python-apt module for version comparison; since it is not on PyPI, I asked it to pull directly from the Git source. This solved some issues, but the problem persisted. By then, my weekly quota was exhausted, and I stopped debugging. A week later, I resumed debugging with the Claude 3.5 Sonnet model. Within 20-25 minutes, it found the fix, which involved four lines of changes in the parsing logic. However, I ran out of limits again before I could proceed further. In the requirements, I specified that the list vulnerabilities tool should only provide a dictionary of CVE IDs divided by severity. However, the AI instead provided full text for all vulnerability details, resulting in excessive data including negligible vulnerabilities being sent to the LLM. Consequently, it never called the research vulnerabilities tool. Since I had run out of limits, I manually fixed this in a follow-up commit.
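For reference, the version comparison that caused so much trouble is essentially a one-liner once python-apt is available. A minimal sketch (not the project's code; python-apt ships in Debian as python3-apt rather than on PyPI, hence the pull from Git in a uv environment):
import apt_pkg

apt_pkg.init_system()  # must be initialised before comparing versions

def is_fixed(installed: str, fixed_in: str) -> bool:
    # Debian-policy-correct comparison, handling epochs and revisions.
    return apt_pkg.version_compare(installed, fixed_in) >= 0

print(is_fixed("1:2.41-5", "1:2.41-4"))  # True
print(is_fixed("2.40-1", "2.41-1"))      # False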
How to Use I have published the current work in the debsecan-mcp repository. I have used the same license as the original debsecan. I am not entirely sure how to interpret licenses for vibe-coded projects, but here we are. To use this, you need to install the tool in a virtual environment and configure your IDE to use the MCP. Here is how I set it up for Visual Studio Code:
  1. Follow the guide from the VS Code documentation regarding adding an MCP server.
  2. My global mcp.json looks like this:
 
  "servers":  
      "debsecan-mcp":  
          "command": "uv",
              "args": [
                  "--directory",
                  "/home/vasudev/Documents/personal/FOSS/debsecan-mcp/debsecan-mcp",
                  "run",
                  "debsecan-mcp"
              ]
           
   ,
  "inputs": []
 
  3. I am running it directly from my local codebase using a virtualenv created with uv. You may need to tweak the path based on your installation.
  4. To use the MCP server in the Copilot chat window, reference it using #debsecan-mcp. The LLM will then use the server for the query.
  5. Use a prompt like: "Give an executive summary of the system security status and immediate actions to be taken."
  6. You can observe the LLM using list_vulnerabilities followed by research_cves. Because the first tool only provides CVE IDs based on severity, the LLM is smart enough to research only high and critical vulnerabilities, thereby saving tokens.
What's Next? This MCP is not yet perfect and has the following issues:
  1. The list_vulnerabilities dictionary contains duplicate CVE IDs because the code used a list instead of a set. While the LLM is smart enough to deduplicate these, it still costs extra tokens.
  2. Because I initially modeled this on debsecan, it uses a raw method for parsing /var/lib/dpkg/status instead of python-apt. I am considering switching to python-apt to reduce maintenance overhead.
  3. Interestingly, the AI did not add a single unit test, which is disappointing. I will add these once my limits are restored.
  4. I need to create a cleaner README with usage instructions.
  5. I need to determine if the MCP can be used via HTTP as well as stdio.
Conclusion Vibe coding is interesting, but things can get out of hand if not managed properly. Even with a good process, code must be reviewed and tested; you cannot blindly trust an AI to handle everything. Even if it adds tests, you must validate them, or you are doomed!

17 February 2026

Russell Coker: Links February 2026

Charles Stross has a good theory of why AI is being pushed on corporations: really we need to just replace CEOs with LLMs [1]. This disturbing and amusing article describes how an OpenAI investor appears to be having psychological problems related to SCP-based text generated by ChatGPT [2]. Definitely going to be a recursive problem as people who believe in it invest in it. An interesting analysis of dbus and a design for a more secure replacement [3]. Scott Jenson gave an insightful lecture for Canonical about future potential developments in the desktop UX [4]. Ploum wrote an insightful article about the problems caused by the Github monopoly [5]. Radicale sounds interesting. Niki Tonsky wrote an interesting article about the UI problems with Tahoe (the latest macOS release) due to trying to make an icon for everything [6]. They have a really good writing style as well as being well researched. Fil-C is an interesting project to compile C/C++ programs in a memory-safe way, parts of which can be considered a software equivalent of CHERI [7]. Brian Krebs wrote a long list of the ways that Trump has enabled corruption and a variety of other crimes including child sex abuse in the last year [8]. This video about designing a C64 laptop is a masterclass in computer design [9]. Salon has an interesting article about the abortion thought experiment that conservatives can't handle [10]. Ron Garrett wrote an insightful blog post about abortion [11]. Bruce Schneier and Nathan E. Sanders wrote an insightful article about the potential of LLM systems for advertising and enshittification [12]. We need serious legislation about this ASAP!

Freexian Collaborators: Monthly report about Debian Long Term Support, January 2026 (by Santiago Ruano Rincón)

The Debian LTS Team, funded by Freexian s Debian LTS offering, is pleased to report its activities for January.

Activity summary During the month of January, 20 contributors have been paid to work on Debian LTS (links to individual contributor reports are located below). The team released 33 DLAs fixing 216 CVEs. The team continued preparing security updates in its usual rhythm. Beyond the updates targeting Debian 11 ("bullseye"), which is the current release under LTS, the team also proposed updates for more recent releases (Debian 12 ("bookworm") and Debian 13 ("trixie")), including Debian unstable. We highlight several notable security updates below. Notable security updates:
  • python3.9, prepared by Andrej Shadura (DLA-4455-1), fixing multiple vulnerabilities in the Python interpreter.
  • php, prepared by Guilhem Moulin (DLA-4447-1), fixing two vulnerabilities that could lead to request forgery or denial of service.
  • apache2, prepared by Bastien Roucariès (DLA-4452-1), fixing four CVEs.
  • linux-6.1, prepared by Ben Hutchings (DLA-4436-1), as a regular update of the linux 6.1 backport to Debian 11.
  • python-django, prepared by Chris Lamb (DLA-4458-1), resolving multiple vulnerabilities.
  • firefox-esr prepared by Emilio Pozuelo Monfort (DLA-4439-1)
  • gnupg2, prepared by Roberto Sánchez (DLA-4437-1), fixing multiple issues, including CVE-2025-68973, which could potentially be exploited to execute arbitrary code.
  • apache-log4j2, prepared by Markus Koschany (DLA-4444-1)
  • ceph, prepared by Utkarsh Gupta (DLA-4460-1)
  • inetutils, prepared by Andreas Henriksson (DLA-4453-1), fixing an authentication bypass in telnetd.
Moreover, Sylvain Beucler studied the security support status of p7zip, a fork of 7zip that has become unmaintained upstream. To avoid leaving users with an unsupported package, Sylvain has investigated a path forward in collaboration with the security team and the 7zip maintainer, looking to replace p7zip with 7zip. Note, however, that the 7zip developers don't reveal which patches fix CVEs, making it difficult to backport individual patches to fix vulnerabilities in released Debian versions. Contributions from outside the LTS Team: Thunderbird, prepared by maintainer Christoph Goehre. The DLA (DLA-4442-1) was published by Emilio. The LTS Team has also contributed updates to the latest Debian releases:
  • Bastien uploaded gpsd to unstable, and proposed updates for trixie #1126121 and bookworm #1126168 to fix two CVEs.
  • Bastien also prepared the imagemagick updates for trixie and bookworm, released as DSA-6111-1, along with the bullseye update DLA-4448-1.
  • Chris proposed a trixie point update for python-django (#112646), and the work for bookworm was completed in February (#1079454). The longstanding bookworm update required tracking down a regression in the django-storages package.
  • Markus prepared tomcat10 updates for trixie and bookworm (DSA-6120-1), and tomcat11 for trixie (DSA-6121-1)
  • Thorsten Alteholz prepared bookworm point updates for zvbi (#1126167) to fix five CVEs; taglib (#1126273) to fix one CVE; and libuev (#1126370) to fix one CVE.
  • Utkarsh prepared an unstable update of node-lodash to fix one CVE.
Other than the work related to updates, Sylvain made several improvements to the documentation and tooling used by the team.

Individual Debian LTS contributor reports

Thanks to our sponsors Sponsors that joined recently are in bold.

8 February 2026

Colin Watson: Free software activity in January 2026

About 80% of my Debian contributions this month were sponsored by Freexian, as well as one direct donation via GitHub Sponsors (thanks!). If you appreciate this sort of work and are at a company that uses Debian, have a look to see whether you can pay for any of Freexian's services; as well as the direct benefits, that revenue stream helps to keep Debian development sustainable for me and several other lovely people. You can also support my work directly via Liberapay or GitHub Sponsors. Python packaging New upstream versions: Fixes for Python 3.14: Fixes for pytest 9: Porting away from the deprecated pkg_resources: Other build/test failures: I investigated several more build failures and suggested removing the packages in question: Other bugs: Other bits and pieces Alejandro Colomar reported that man(1) ignored the MANWIDTH environment variable in some circumstances. I investigated this and fixed it upstream. I contributed an ubuntu-dev-tools patch to stop recommending sudo. I added forky support to the images used in Salsa CI pipelines. I began working on getting a release candidate of groff 1.24.0 into experimental, though haven't finished that yet. I worked on some lower-priority security updates for OpenSSH. Code reviews

31 January 2026

Russ Allbery: Review: Dragon Pearl

Review: Dragon Pearl, by Yoon Ha Lee
Series: Thousand Worlds #1
Publisher: Rick Riordan Presents
Copyright: 2019
ISBN: 1-368-01519-0
Format: Kindle
Pages: 315
Dragon Pearl is a middle-grade space fantasy based on Korean mythology and the first book of a series. Min is a fourteen-year-old girl living on the barely-terraformed world of Jinju with her extended family. Her older brother Jun passed the entrance exams for the Academy and left to join the Thousand Worlds Space Forces, and Min is counting the years until she can do the same. Those plans are thrown into turmoil when an official investigator appears at their door claiming that Jun deserted to search for the Dragon Pearl. A series of impulsive fourteen-year-old decisions lead to Min heading for a spaceport alone, determined to find her brother and prove his innocence. This would be a rather improbable quest for a young girl, but Min is a gumiho, one of the supernaturals who live in the Thousand Worlds alongside non-magical humans. Unlike the more respectable dragons, tigers, goblins, and shamans, gumiho are viewed with suspicion and distrust because their powers are useful for deception. They are natural shapeshifters who can copy the shapes of others, and their Charm ability lets them influence people's thoughts and create temporary illusions of objects such as ID cards. It will take all of Min's powers, and some rather lucky coincidences, to infiltrate the Space Forces and determine what happened to her brother. It's common for reviews of this book to open with a caution that this is a middle-grade adventure novel and you should not expect a story like Ninefox Gambit. I will be boring and repeat that caution. Dragon Pearl has a single first-person viewpoint and a very linear and straightforward plot. Adult readers are unlikely to be surprised by plot twists; the fun is the world-building and seeing how Min manages to work around plot obstacles. The world-building is enjoyable but not very rigorous. Min uses and abuses Charm with the creative intensity of a Dungeons & Dragons min-maxer. Each individual event makes sense given the implication that Min is unusually powerful, but I'm dubious about the surrounding society and lack of protections against Charm given what Min is able to do. Min does say that gumiho are rare and many people think they're extinct, which is a bit of a fig leaf, but you'll need to bring your urban fantasy suspension of disbelief skills to this one. I did like that the world-building conceit went more than skin deep and influenced every part of the world. There are ghosts who are critical to the plot. Terraforming is done through magic, hence the quest for the Dragon Pearl and the miserable state of Min's home planet due to its loss. Medical treatment involves the body's meridians, as does engineering: The starships have meridians similar to those of humans, and engineers partly merge with those meridians to adjust them. This is not the sort of book that tries to build rigorous scientific theories or explain them to the reader, and I'm not sure everything would hang together if you poked at it too hard, but Min isn't interested in doing that poking and the story doesn't try to justify itself. It's mostly a vibe, but it's a vibe that I enjoyed and that is rather different than other space fantasy I've read. The characters were okay but never quite clicked for me, in part because proper character exploration would have required Min take a detour from her quest to find her brother and that was not going to happen. 
The reader gets occasional glimpses of a military SF cadet story and a friendship on false premises story, but neither have time to breathe because Min drops any entanglement that gets in the way of her quest. She's almost amoral in a way that I found believable but not quite aligned with my reading mood. I also felt a bit wrong-footed by how her friendships developed; saying too much more would be a spoiler, but I was expecting more human connection than I got. I think my primary disappointment with this book was something I knew going in, not in any way its fault, and part of the reason why I'd put off reading it: This is pitched at young teenagers and didn't have quite enough plot and characterization complexity to satisfy me. It's a linear, somewhat episodic adventure story with some neat world-building, and it therefore glides over the spots where an adult novel would have added political and factional complexity. That is exactly as advertised, so it's up to you whether that's the book you're in the mood for. One warning: The text of this book opens with an introduction by Rick Riordan that is just fluff marketing and that spoils the first few chapters of the book. It is unmarked as such at the beginning and tricked me into thinking it was the start of the book proper, and then deeply annoyed me. If you do read this book, I recommend skipping the utterly pointless introduction and going straight to chapter one. Followed by Tiger Honor. Rating: 6 out of 10
