Search Results: "agi"

20 February 2025

Paul Tagliamonte: boot2kier

I can't remember exactly the joke I was making at the time in my work's slack instance (I'm sure it wasn't particularly funny, though; and not even worth re-reading the thread to work out), but it wound up with me writing a UEFI binary for the punchline. Not to spoil the ending, but it worked - no pesky kernel, no messing around with userland. I guess the only part of this you really need to know for the setup here is that it was a Severance joke, which is some fantastic TV. If you haven't seen it, this post will seem perhaps weirder than it actually is. I promise I haven't joined any new cults. For those who have seen it, the payoff to my joke is that I wanted my machine to boot directly to an image of Kier Eagan. As for how to do it, I figured I'd give the uefi crate a shot, and see how it is to use, since this is a low-stakes way of trying it out. In general, this isn't the sort of thing I'd usually post about, except this wound up being easier and way cleaner than I thought it would be. That alone is worth sharing, in the hopes someone comes across this in the future and feels like they, too, can write something fun targeting the UEFI. First thing's first: gotta create a Rust project (I'll leave that part to you depending on your life choices), and add the uefi crate to your Cargo.toml. You can either use cargo add or add a line like this by hand:
uefi = { version = "0.33", features = ["panic_handler", "alloc", "global_allocator"] }
We also need to teach cargo about how to go about building for the UEFI target, so we need to create a rust-toolchain.toml with one (or both) of the UEFI targets we're interested in:
[toolchain]
targets = ["aarch64-unknown-uefi", "x86_64-unknown-uefi"]
Unfortunately, I wasn't able to use the image crate, since it won't build against the uefi target. This looks like it's because rustc had no way to compile the required floating point operations within the image crate without hardware floating point instructions specifically. Rust tends to punt a lot of that to libm usually, so this isn't entirely shocking given we're no_std for a non-hardfloat target. So-called softening requires a software floating point implementation that the compiler can use to polyfill (feels weird to use the term polyfill here, but I guess it's spiritually right?) the lack of hardware floating point operations, which Rust hasn't implemented for this target yet. As a result, I changed tactics, and figured I'd use ImageMagick to pre-compute the pixels from a jpg, rather than doing it at runtime. A bit of a bummer, since I need to do more out-of-band pre-processing and hardcoding, and updating the image kinda sucks as a result, but it's entirely manageable.
$ convert -resize 1280x900 kier.jpg kier.full.jpg
$ convert -depth 8 kier.full.jpg rgba:kier.bin
This will take our input file (kier.jpg), resize it to get as close to the desired resolution as possible while maintaining aspect ratio, then convert it from a jpg to a flat array of 4-byte RGBA pixels. Critically, it's also important to remember that the size of the kier.full.jpg file may not actually be the requested size; it will not change the aspect ratio, so be sure to make a careful note of the resulting size of the kier.full.jpg file. Last step with the image is to compile it into our Rust binary, since we don't want to struggle with trying to read this off disk, which is thankfully real easy to do.
const KIER: &[u8] = include_bytes!("../kier.bin");
const KIER_WIDTH: usize = 1280;
const KIER_HEIGHT: usize = 641;
const KIER_PIXEL_SIZE: usize = 4;
Remember to use the width and height from the final kier.full.jpg file as the values for KIER_WIDTH and KIER_HEIGHT. KIER_PIXEL_SIZE is 4, since we have 4-byte wide values for each pixel as a result of our conversion step into RGBA. We'll only use RGB, and if we ever drop the alpha channel, we can drop that down to 3. I don't entirely know why I kept alpha around, but I figured it was fine. My kier.full.jpg image winds up shorter than the requested height (which is also qemu's default resolution for me), which means we'll get a semi-annoying black band under the image when we go to run it, but it'll work. Anyway, now that we have our image as bytes, we can get down to work, and write the rest of the code to handle moving bytes around from in-memory as a flat block of pixels, and request that they be displayed using the UEFI GOP. We'll just need to hack up a container for the image pixels and teach it how to blit to the display.
/// RGB Image to move around. This isn't the same as an
/// `image::RgbImage`, but we can associate the size of
/// the image along with the flat buffer of pixels.
struct RgbImage {
    /// Size of the image as a tuple, as the
    /// (width, height)
    size: (usize, usize),
    /// raw pixels we'll send to the display.
    inner: Vec<BltPixel>,
}

impl RgbImage {
    /// Create a new `RgbImage`.
    fn new(width: usize, height: usize) -> Self {
        RgbImage {
            size: (width, height),
            inner: vec![BltPixel::new(0, 0, 0); width * height],
        }
    }

    /// Take our pixels and request that the UEFI GOP
    /// display them for us.
    fn write(&self, gop: &mut GraphicsOutput) -> Result {
        gop.blt(BltOp::BufferToVideo {
            buffer: &self.inner,
            src: BltRegion::Full,
            dest: (0, 0),
            dims: self.size,
        })
    }
}

impl Index<(usize, usize)> for RgbImage {
    type Output = BltPixel;
    fn index(&self, idx: (usize, usize)) -> &BltPixel {
        let (x, y) = idx;
        &self.inner[y * self.size.0 + x]
    }
}

impl IndexMut<(usize, usize)> for RgbImage {
    fn index_mut(&mut self, idx: (usize, usize)) -> &mut BltPixel {
        let (x, y) = idx;
        &mut self.inner[y * self.size.0 + x]
    }
}
We also need to do some basic setup to get a handle to the UEFI GOP via the UEFI crate (using uefi::boot::get_handle_for_protocol and uefi::boot::open_protocol_exclusive for the GraphicsOutput protocol), so that we have the object we need to pass to RgbImage in order for it to write the pixels to the display. The only trick here is that the display on the booted system can really be any resolution, so we need to do some capping to ensure that we don't write more pixels than the display can handle. Writing fewer than the display's maximum seems fine, though.
fn praise() -> Result {
    let gop_handle = boot::get_handle_for_protocol::<GraphicsOutput>()?;
    let mut gop = boot::open_protocol_exclusive::<GraphicsOutput>(gop_handle)?;

    // Get the (width, height) that is the minimum of
    // our image and the display we're using.
    let (width, height) = gop.current_mode_info().resolution();
    let (width, height) = (width.min(KIER_WIDTH), height.min(KIER_HEIGHT));

    let mut buffer = RgbImage::new(width, height);
    for y in 0..height {
        for x in 0..width {
            let idx_r = ((y * KIER_WIDTH) + x) * KIER_PIXEL_SIZE;
            let pixel = &mut buffer[(x, y)];
            pixel.red = KIER[idx_r];
            pixel.green = KIER[idx_r + 1];
            pixel.blue = KIER[idx_r + 2];
        }
    }
    buffer.write(&mut gop)?;
    Ok(())
}
Not so bad! A bit tedious; we could solve some of this by turning KIER into an RgbImage at compile time using some clever Cow and const tricks, and by implementing blitting of a sub-image of the image, but this will do for now. This is a joke, after all; let's not go nuts. All that's left with our code is for us to write our main function and try and boot the thing! (For the curious, a rough sketch of what that sub-image blit might look like comes first, just below.)
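To be clear, what follows is my own speculation, not part of the actual boot2kier code: if the uefi crate's BltRegion::SubRectangle variant works the way I understand it (a starting coordinate plus a source-row pixel stride), we could hand GOP the full-width buffer and let it do the cropping, skipping the per-pixel copy loop entirely. Treat the praise_subimage name and the exact field layout as assumptions rather than tested code.
// Hypothetical alternative to praise(): blit a sub-rectangle of the
// full-width image rather than copying pixels into a cropped buffer.
// Assumes BltRegion::SubRectangle { coords, px_stride } behaves as
// described above; untested.
fn praise_subimage(gop: &mut GraphicsOutput) -> Result {
    let (disp_w, disp_h) = gop.current_mode_info().resolution();
    let (width, height) = (disp_w.min(KIER_WIDTH), disp_h.min(KIER_HEIGHT));
    // Convert the whole embedded RGBA blob once, keeping its original
    // KIER_WIDTH-pixel row stride.
    let pixels: Vec<BltPixel> = KIER
        .chunks_exact(KIER_PIXEL_SIZE)
        .map(|px| BltPixel::new(px[0], px[1], px[2]))
        .collect();
    gop.blt(BltOp::BufferToVideo {
        buffer: &pixels,
        // Start at the top-left of the source, and tell GOP how wide the
        // source rows really are so it can skip the cropped-off pixels.
        src: BltRegion::SubRectangle {
            coords: (0, 0),
            px_stride: KIER_WIDTH,
        },
        dest: (0, 0),
        dims: (width, height),
    })
}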
#[entry]
fn main() -> Status {
    uefi::helpers::init().unwrap();
    praise().unwrap();
    boot::stall(100_000_000);
    Status::SUCCESS
}
If you're following along at home and so interested, the final source is over at gist.github.com. We can go ahead and build it using cargo (as is our tradition) by targeting the UEFI platform.
$ cargo build --release --target x86_64-unknown-uefi

Testing the UEFI Blob While I can definitely get my machine to boot these blobs to test, I figured I'd save myself some time by using QEMU to test without a full boot. If you've not done this sort of thing before, we'll need two packages, qemu and ovmf. It's a bit different than most invocations of qemu you may see out there, so I figured it'd be worth writing this down, too.
$ doas apt install qemu-system-x86 ovmf
qemu has a nice feature where it'll create us an EFI partition as a drive and attach it to the VM off a local directory, so let's construct an EFI partition file structure, and drop our binary into the conventional location. If you haven't done this before, and are only interested in running this in a VM, don't worry too much about it; a lot of it is convention and this layout should work for you.
$ mkdir -p esp/efi/boot
$ cp target/x86_64-unknown-uefi/release/*.efi \
 esp/efi/boot/bootx64.efi
With all this in place, we can kick off qemu, booting it in UEFI mode using the ovmf firmware, attaching our EFI partition directory as a drive to our VM to boot off of.
$ qemu-system-x86_64 \
 -enable-kvm \
 -m 2048 \
 -smbios type=0,uefi=on \
 -bios /usr/share/ovmf/OVMF.fd \
 -drive format=raw,file=fat:rw:esp
If all goes well, soon you'll be met with the all-knowing gaze of the Chosen One, Kier Eagan. The thing that really impressed me about all this is this program worked first try; it all went so boringly normal. Truly, kudos to the uefi crate maintainers, it's incredibly well done.

Booting a live system Sure, we could stop here, but anyone can open up an app window and see a picture of Kier Eagan, so I knew I needed to finish the job and boot a real machine up with this. In order to do that, we need to format a USB stick. BE SURE /dev/sda IS CORRECT IF YOU'RE COPYING AND PASTING. All my drives are NVMe, so BE CAREFUL if you use SATA; it may very well be your hard drive! Please do not destroy your computer over this.
$ doas fdisk /dev/sda
Welcome to fdisk (util-linux 2.40.4).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): n
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-4014079, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-4014079, default 4014079):
Created a new partition 1 of type 'Linux' and of size 1.9 GiB.
Command (m for help): t
Selected partition 1
Hex code or alias (type L to list all): ef
Changed type of partition 'Linux' to 'EFI (FAT-12/16/32)'.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
Once that looks good (depending on your flavor of udev you may or may not need to unplug and replug your USB stick), we can go ahead and format our new EFI partition (BE CAREFUL THAT /dev/sda IS YOUR USB STICK) and write our EFI directory to it.
$ doas mkfs.fat /dev/sda1
$ doas mount /dev/sda1 /mnt
$ cp -r esp/efi /mnt
$ find /mnt
/mnt
/mnt/efi
/mnt/efi/boot
/mnt/efi/boot/bootx64.efi
Of course, naturally, devotion to Kier shouldn't mean backdooring your system. Disabling Secure Boot runs counter to the Core Principles, such as Probity, and not doing this would surely run counter to Verve, Wit and Vision. This bit does require that you've taken the step to enroll a MOK and know how to use it; right about now is when we can use sbsign to sign the UEFI binary we want to boot from, to continue enforcing Secure Boot. The details for how this command should be run specifically are likely something you'll need to work out depending on how you've decided to manage your MOK.
$ doas sbsign \
 --cert /path/to/mok.crt \
 --key /path/to/mok.key \
 target/x86_64-unknown-uefi/release/*.efi \
 --output esp/efi/boot/bootx64.efi
I figured I'd leave a signed copy of boot2kier at /boot/efi/EFI/BOOT/KIER.efi on my Dell XPS 13, with Secure Boot enabled and enforcing; it just took a matter of going into my BIOS to add the right boot option, which was no sweat. I'm sure there is a way to do it using efibootmgr, but I wasn't smart enough to do that quickly. I let 'er rip, and it booted up and worked great! It was a bit hard to get a video of my laptop, though; but lucky for me, I have a Minisforum Z83-F sitting around (which, until a few weeks ago, was running the annual http server to control my christmas tree), so I grabbed it out of the christmas bin, wired it up to a video capture card I have sitting around, and figured I'd grab a video of me booting a physical device off the boot2kier USB stick.
Attentive readers will notice the image of Kier is smaller than on the qemu-booted system, which just means our real machine has a larger GOP display resolution than qemu, which makes sense! We could write some fancy resize code (sounds annoying), center the image (can't be assed, but it should be the easy way out here), or resize the original image (pretty hardware-specific workaround). Additionally, you can make out the image being written to the display before us (the Minisforum logo) behind Kier, which is really cool stuff. If we were real fancy we could write blank pixels to the display before blitting Kier, but, again, I don't think I care to do that much work. (If you do, a rough sketch of the centering-plus-blanking idea follows below.)
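Since I brought up centering and blanking: here's a hedged sketch of what both might look like, reusing the RgbImage from earlier. The praise_centered name is mine, and I'm assuming BltOp::VideoFill and the dest offset of BufferToVideo behave the way I remember from the uefi crate; consider it a starting point rather than something I've actually booted.
// Hypothetical: blank the display, then blit the image centered instead
// of pinned to the top-left corner. Not part of the original boot2kier.
fn praise_centered(gop: &mut GraphicsOutput, image: &RgbImage) -> Result {
    let (disp_w, disp_h) = gop.current_mode_info().resolution();
    let (img_w, img_h) = image.size;
    // Clear whatever the firmware left on screen (e.g. a vendor logo).
    gop.blt(BltOp::VideoFill {
        color: BltPixel::new(0, 0, 0),
        dest: (0, 0),
        dims: (disp_w, disp_h),
    })?;
    // Offset by half the leftover space in each dimension; zero if the
    // image is at least as large as the display.
    let dest = (
        disp_w.saturating_sub(img_w) / 2,
        disp_h.saturating_sub(img_h) / 2,
    );
    gop.blt(BltOp::BufferToVideo {
        buffer: &image.inner,
        src: BltRegion::Full,
        dest,
        dims: image.size,
    })
}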

But now I must away If I wanted to keep this joke going, I'd likely try and find a copy of the original video when Helly 100%s her file and boot into that, or maybe play a terrible midi PC speaker rendition of Kier, Chosen One, Kier after rendering the image. I, unfortunately, don't have any friends involved with production (yet?), so I reckon all that's out for now. I'll likely stop playing with this; the joke was done, and I'm only writing this post because of how great everything was along the way. All in all, this reminds me so much of building a homebrew kernel to boot a system into, but like, good, though, and it's a nice reminder of both how fun this stuff can be, and how far we've come. UEFI protocols are light-years better than how we did it in the dark ages, and the tooling for this is SO much more mature. Booting a custom UEFI binary is miles ahead of trying to boot your own kernel, and I can't believe how good the uefi crate is specifically. Praise Kier! Kudos to everyone involved in making this so delightful.

12 February 2025

Jonathan Dowland: FOSDEM 2025

I'm going to FOSDEM 2025! As usual, I'll be in the Java Devroom for most of that day, which this time around is Saturday. Please recommend me any talks! This is my shortlist so far:

11 February 2025

Bálint Réczey: Supercharge Your Installs with apt-eatmydata: Because Who Needs Crash Safety Anyway?

APT eatmydata super cow powers
Tired of waiting for apt to finish installing packages? Wish there were a way to make your installations blazingly fast without caring about minor things like, oh, data integrity? Well, today is your lucky day!
I'm thrilled to introduce apt-eatmydata, now available for Debian and all supported Ubuntu releases!

What Is apt-eatmydata? If you've ever used libeatmydata, you know it's a nifty little hack that disables fsync() and friends, making package installations way faster by skipping unnecessary disk writes. Normally, you'd have to remember to wrap apt commands manually, like this:
eatmydata apt install texlive-full
But who has time for that? apt-eatmydata takes care of this automagically by integrating eatmydata seamlessly into apt itself! That means every package install is now turbocharged, no extra typing required.

How to Get It

Debian If you're on Debian unstable/testing (or possibly soon in stable-backports), you can install it directly with:
sudo apt install apt-eatmydata

Ubuntu Ubuntu users already enjoy faster package installation thanks to zstd-compressed packages, and to switch into an even higher gear I've backported apt-eatmydata to all supported Ubuntu releases. Just add this PPA and install:
sudo add-apt-repository ppa:firebuild/apt-eatmydata
sudo apt install apt-eatmydata
And boom! Your apt install times are getting a serious upgrade. Let's run some tests:
# pre-download package to measure only the installation
$ sudo apt install -d linux-headers-6.8.0-53-lowlatency
...
# installation time is 9.35s without apt-eatmydata:
$ sudo time apt install linux-headers-6.8.0-53-lowlatency
...
2.30user 2.12system 0:09.35elapsed 47%CPU (0avgtext+0avgdata 174680maxresident)k
32inputs+1495216outputs (0major+196945minor)pagefaults 0swaps
$ sudo apt install apt-eatmydata
...
$ sudo apt purge linux-headers-6.8.0-53-lowlatency
# installation time is 3.17s with apt-eatmydata:
$ sudo time eatmydata apt install linux-headers-6.8.0-53-lowlatency
2.30user 0.88system 0:03.17elapsed 100%CPU (0avgtext+0avgdata 174692maxresident)k
0inputs+205664outputs (0major+198099minor)pagefaults 0swaps
apt-eatmydata just made installing Linux headers 3x faster!

But Wait, There's More! If you're automating CI builds, there's even a GitHub Action to make your workflows faster, essentially doing what apt-eatmydata does, just setting it up in less than a second! Check it out here:
 GitHub Marketplace: apt-eatmydata

Should You Use It? Warning: apt-eatmydata is not for all production environments. If your system crashes mid-install, you might end up with a broken package database. But for throwaway VMs, containers, and CI pipelines? It's an absolute game-changer. I use it on my laptop, too. So go forth and install recklessly fast! If you run into any issues, feel free to file a bug or drop a comment. Happy hacking! (To accelerate your CI pipeline or local builds, check out Firebuild, which speeds up the builds, too!)

10 February 2025

Petter Reinholdtsen: Some of my 2024 free software activities

It is a while since I posted a summary of the free software and open culture activities and projects I have worked on. Here is a quick summary of the major ones from last year. I guess the biggest project of the year has been migrating orphaned packages in Debian without a version control system to have a git repository on salsa.debian.org. When I started in April, around 450 orphaned packages needed git. I've since migrated around 250 of the packages to a salsa git repository, and around 40 packages were left when I took a break. Not sure who did the around 160 conversions I was not involved in, but I am very glad I got some help on the project. I stopped partly because some of the remaining packages needed more disk space to build than I have available on my development machine, and partly because some had a strange build setup I could not figure out. I had a time budget of 20 minutes per package; if the package proved problematic and likely to take longer, I moved to another package. Might continue later, if I manage to free up some disk space.
Another rather big project was the translation to Norwegian Bokmål and publishing of the first book ever published by a Sámi woman, the Møter vi liv eller død? book by Elsa Laula, with a PD0 and CC-BY license. I released it during the summer, and to my surprise it has already sold several copies. As I suck at marketing, I did not expect to sell any.
A smaller, but more long-term project (for more than 10 years now), and related to orphaned packages in Debian, is my project to ensure a simple way to install hardware related packages in Debian when the relevant hardware is present in a machine. It made a fairly big advance forward last year, partly because I have been poking and begging package maintainers and upstream developers to include AppStream metadata XML in their packages. I've also released a few new versions of the isenkram system with some robustness improvements. Today 127 packages in Debian provide such information, allowing isenkram-lookup to propose them. Will keep pushing until the around 35 package names currently hard coded in the isenkram package are down to zero, so only information provided by individual packages is used for this feature.
As part of the work on AppStream, I have sponsored several packages into Debian where the maintainer wanted to fix the issue but lacked direct upload rights. I've also sponsored a few other packages, when approached by the maintainer. I would also like to mention two hardware related packages in particular where I have been involved, the megactl and mfi-util packages. Both work with the hardware RAID systems in several Dell PowerEdge servers; the first one is already available in Debian (and of course, proposed by isenkram when used on the appropriate Dell server), the other has been waiting for NEW processing since this autumn. I manage several such Dell servers and would like the tools needed to monitor and configure these RAID controllers to be available from within Debian out of the box.
Vaguely related to hardware support in Debian, I have also been trying to find ways to help out the Debian ROCm team, to improve the support in Debian for my artificial idiocy (AI) compute node. So far I have only uploaded one package, helped test the initial packaging of llama.cpp and tried to figure out how to get good speech recognition like Whisper into Debian.
I am still involved in the LinuxCNC project, and organised a developer gathering in Norway last summer. A new one is planned for the summer of 2025. I've also helped evaluate patches and uploaded new versions of LinuxCNC into Debian.
After a 10-year-long break, we managed to get a new and improved upstream version of lsdvd released just before Christmas. As I use it regularly to maintain my DVD archive, I was very happy to finally get out a version supporting DVDDiscID, useful for uniquely identifying DVDs. I am dreaming of an Internet service mapping DVD IDs to IMDB movie IDs, to make life as a DVD collector easier.
My involvement in Norwegian archive standardisation and the free software implementation of the vendor neutral Noark 5 API continued for the entire year. I've been pushing patches into both the API and the test code for the API, participated in several editorial meetings regarding the Noark 5 Tjenestegrensesnitt specification, and submitted several proposals for improvements to the same. We also organised a small seminar for Noark 5 interested people, and are organising a new seminar in a month.
Part of the year was spent working on and coordinating a Norwegian Bokmål translation of the marvellous children's book Ada and Zangemann, which focuses on the right to repair and control your own property, and the value of controlling the software on the devices you own. The translation is mostly complete, and is now waiting for a transformation of the project and manuscript to use Docbook XML instead of a home made semi-text based format. Great progress is being made and the new book build process is almost complete.
I have also been looking at how companies in Norway can use free software to report their accounting summaries to the Norwegian government. Several new regulations make it very hard for companies to use free software for accounting, and I would like to change this. Found a few drafts for opening up the reporting process, and have read up on some of the specifications, but nothing much is working yet.
These were just the tip of the iceberg, but I guess this blog post is long enough now. If you would like to help with any of these projects, please get in touch, either directly on the project mailing lists and forums, or with me via email, IRC or Signal. :) As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

9 February 2025

Philipp Kern: 20 years

20 years ago, I got my Debian Developer account. I was 18 at the time, it was Shrove Tuesday and - as is customary - I was drunk when I got the email. There was so much that I did not know - which is also why the process took 1.5 years from the time I applied. I mostly only maintained a package or two. I'm still amazed that Christian Perrier and Joerg Jaspert put sufficient trust in me at that time. Nevertheless now feels like a good time for a personal reflection of my involvement in Debian.
During my studies I took on more things. In January 2008 I joined the Release Team as an assistant, which taught me a lot of code review. I have been an Application Manager on the side.
Going to my first Debconf was really a turning point. My first one was Mar del Plata in Argentina in August 2008, when I was 21. That was quite an excitement, traveling that far from Germany for the first time. The personal connections I made there made quite the difference. It was also a big boost for motivation. I attended 8 (Argentina), 9 (Spain), 10 (New York), 11 (Bosnia and Herzegovina), 12 (Nicaragua), 13 (Switzerland), 14 (Portland), 15 (Germany), 16 (South Africa), and hopefully I'll make it to this year's in Brest. At all of them I did not see much of the countries as I prioritized all of my time focused on Debian, even skipping some of the day trips in favor of team meetings. Yet I am very grateful to the project (and to my employer) for shipping me there.
I ended up as Stable Release Manager for a while, from August 2008 - when Martin Zobel-Helas moved into DSA - until I got dropped in March 2020. I think my biggest achievements were pushing for the creation of -updates in favor of a separate volatile archive and a change of the update policy to allow for more common sense updates in the main archive vs. the very strict "breakage or security" policy we had previously. I definitely need to call out Adam D. Barratt for being the partner in crime, holding up the fort for even longer.
In 2009 I got too annoyed at the existing wanna-build team not being responsive anymore and pushed for the system to be given to a new team. I did not build it and significant contributions were done by other people (like Andreas Barth and Joachim Breitner, and later Aurelien Jarno). I mostly reworked the way the system was triggered, investigated when it broke and was around when people wanted things merged.
In the meantime I worked sys/netadmin jobs while at university, both paid and as a volunteer with the students' council. For a year or two I was the administrator of a System z mainframe IBM donated to my university. We had a mainframe course and I attended two related conferences. That's where my s390(x) interest came from, although credit for the port needs to go to Aurelien Jarno.
Since completing university in 2013 I have been working for a company for almost 12 years. Debian experience was very relevant to the job and I went on maintaining a Linux distro or two at work - before venturing off into security hardening. People in megacorps - in my humble opinion - disappear from the volunteer projects because a) they might previously have been studying and thus had a lot more time on their hands and b) the job is too similar to the volunteer work and thus the same brain cells used for work are exhausted and can't be easily reused for volunteer work. I kept maintaining a couple of things (buildds, some packages) - mostly because of a sense of commitment and responsibility, but otherwise kind of scaled down my involvement. I also felt less connected as I dropped off IRC.
Last year I finally made it to Debian events again: MiniDebconf in Berlin, where we discussed the aftermath of the xz incident, and the Debian BSP in Salzburg. I rejoined IRC using the Matrix bridge. That also rekindled my involvement, with me guiding a new DD through NM and ending up in DSA. To be honest, only in the last two or three years I felt like a (more) mature old-timer.
I have a new gig at work lined up to start soon and next to that I have sysadmining for Debian. It is pretty motivating to me that I can just get things done - something that is much harder to achieve at work due to organizational complexities. It balances out some frustration I'd otherwise have. The work is different enough to be enjoyable and the people I work with are great.

The future
I still think the work we do in Debian is important, as much as I see a lack of appreciation in a world full of containers. We are reaping most of the benefits of standing on the shoulders of giants and of great decisions made in the past (e.g. the excellent Debian policy, but also the organizational model) that made Debian what it is today.
Given the increase in size and complexity of what Debian ships - and the somewhat dwindling resource of developer time - it would benefit us to have better processes for large-scale changes across all packages. I greatly respect the horizontal effects that are currently being driven and that suck up a lot of energy.
A lot of our infrastructure is also aging and not super well maintained. Many take it for granted that the services we have keep existing, but most are only maintained by a person or two, if even. Software stacks are aging and it is even a struggle to have all necessary packages in the next release.
Hopefully I can contribute a bit or two to these efforts in the future.

Antoine Beaupré: A slow blogging year

Well, 2024 will be remembered, won't it? I guess 2025 already wants to make its mark too, but let's not worry about that right now, and instead let's talk about me. A little over a year ago, I was gloating over how I had such a great blogging year in 2022, and was considering 2023 to be average, then went on to gather more stats and traffic analysis... Then I said, and I quote:
I hope to write more next year. I've been thinking about a few posts I could write for work, about how things work behind the scenes at Tor, that could be informative for many people. We run a rather old setup, but things hold up pretty well for what we throw at it, and it's worth sharing that with the world...
What a load of bollocks.

A bad year for this blog 2024 was the second worst year ever in my blogging history, tied with 2009 at a measly 6 posts for the year:
anarcat@angela:anarc.at$ curl -sSL https://anarc.at/blog/ | grep 'href="\./' | grep -o 20[0-9][0-9] | sort | uniq -c | sort -nr | grep -v 2025 | tail -3
      6 2024
      6 2009
      3 2014
I did write about my work though, detailing the migration from Gitolite to GitLab we completed that year. But after August, total radio silence until now.

Loads of drafts It's not that I have nothing to say: I have no less than five drafts in my working tree here, not counting three actual drafts recorded in the Git repository here:
anarcat@angela:anarc.at$ git s blog
## main...origin/main
?? blog/bell-bot.md
?? blog/fish.md
?? blog/kensington.md
?? blog/nixos.md
?? blog/tmux.md
anarcat@angela:anarc.at$ git grep -l '\!tag draft'
blog/mobile-massive-gallery.md
blog/on-dying.mdwn
blog/secrets-recovery.md
I just don't have time to wrap those things up. I think part of me is disgusted by seeing my work stolen by large corporations to build proprietary large language models while my idols have been pushed to suicide for trying to share science with the world. Another part of me wants to make those things just right. The "tagged drafts" above are nothing more than a huge pile of chaotic links, far from being useful for anyone else than me, and even then. The on-dying article, in particular, is becoming my nemesis. I've been wanting to write that article for over 6 years now, I think. It's just too hard.

Writing elsewhere There's also the fact that I write for work already. A lot. Here are the top-10 contributors to our team's wiki:
anarcat@angela:help.torproject.org$ git shortlog --numbered --summary --group="format:%al" | head -10
  4272  anarcat
   423  jerome
   117  zen
   116  lelutin
   104  peter
    58  kez
    45  irl
    43  hiro
    18  gaba
    17  groente
... but that's a bit unfair, since I've been there half a decade. Here's the last year:
anarcat@angela:help.torproject.org$ git shortlog --since=2024-01-01 --numbered --summary --group="format:%al" | head -10
   827  anarcat
   117  zen
   116  lelutin
    91  jerome
    17  groente
    10  gaba
     8  micah
     7  kez
     5  jnewsome
     4  stephen.swift
So I still write the most commits! But to truly get a sense of the amount I wrote in there, we should count actual changes. Here it is by number of lines (from commandlinefu.com):
anarcat@angela:help.torproject.org$ git ls-files | xargs -n1 git blame --line-porcelain | sed -n 's/^author //p' | sort -f | uniq -ic | sort -nr | head -10
  99046 Antoine Beaupré
   6900 Zen Fu
   4784 Jérôme Charaoui
   1446 Gabriel Filion
   1146 Jerome Charaoui
    837 groente
    705 kez
    569 Gaba
    381 Matt Traudt
    237 Stephen Swift
That, of course, is the entire history of the git repo, again. We should take only the last year into account, and probably ignore the tails directory, as sneaky Zen Fu imported the entire docs from another wiki there...
anarcat@angela:help.torproject.org$ find [d-s]* -type f -mtime -365 | xargs -n1 git blame --line-porcelain 2>/dev/null | sed -n 's/^author //p' | sort -f | uniq -ic | sort -nr | head -10
  75037 Antoine Beaupré
   2932 Jérôme Charaoui
   1442 Gabriel Filion
   1400 Zen Fu
    929 Jerome Charaoui
    837 groente
    702 kez
    569 Gaba
    381 Matt Traudt
    237 Stephen Swift
Pretty good! 75k lines. But those are the files that were modified in the last year. If we go a little more nuts, we find that:
anarcat@angela:help.torproject.org$ git-count-words-range.py | sort -k6 -nr | head -10
parsing commits for words changes from command: git log '--since=1 year ago' '--format=%H %al'
anarcat 126116 - 36932 = 89184
zen 31774 - 5749 = 26025
groente 9732 - 607 = 9125
lelutin 10768 - 2578 = 8190
jerome 6236 - 2586 = 3650
gaba 3164 - 491 = 2673
stephen.swift 2443 - 673 = 1770
kez 1034 - 74 = 960
micah 772 - 250 = 522
weasel 410 - 0 = 410
I wrote 126,116 words in that wiki, only in the last year. I also deleted 37k words, so the final total is more like 89k words, but still: that's about forty (40!) articles of the average size (~2k) I wrote in 2022. (And yes, I did go nuts and write a new log parser, essentially from scratch, to figure out those word diffs. I did get the courage only after asking GPT-4o for an example first, I must admit.) Let's celebrate that again: I wrote 90 thousand words in that wiki in 2024. According to Wikipedia, a "novella" is 17,500 to 40,000 words, which would mean I wrote about a novella and a novel, in the past year. But interestingly, if I look at the repository analytics, I certainly didn't write that much more in the past year. So that alone cannot explain the lull in my production here.

Arguments Another part of me is just tired of the bickering and arguing on the internet. I have at least two articles in there that I suspect are going to get me a lot of push-back (NixOS and Fish). I know how to deal with this: you need to write well, consider the controversy, spell it out, and defuse things before they happen. But that's hard work and, frankly, I don't really care that much about what people think anymore. I'm not writing here to convince people. I stopped evangelizing a long time ago. Now, I'm more into documenting, and teaching. And, while teaching, there's a two-way interaction: when you give out a speech or workshop, people can ask questions, or respond, and you all learn something. When you document, you quickly get told "where is this? I couldn't find it" or "I don't understand this" or "I tried that and it didn't work" or "wait, really? shouldn't we do X instead", and you learn. Here, it's static. It's my little soapbox where I scream in the void. The only thing people can do is scream back.

Collaboration So. Let's see if we can work together here. If you don't like something I say, disagree, or find something wrong or to be improved, instead of screaming on social media or ignoring me, try contributing back. This site here is backed by a git repository and I promise to read everything you send there, whether it is an issue or a merge request. I will, of course, still read comments sent by email or IRC or social media, but please, be kind. You can also, of course, follow the latest changes on the TPA wiki. If you want to catch up with the last year, some of the "novellas" I wrote include: (Well, no, you can't actually follow changes on a GitLab wiki. But we have a wiki-replica git repository where you can see the latest commits, and subscribe to the RSS feed.) See you there!

6 February 2025

Dominique Dumont: Drawbacks of using Cookiecutter with Cruft

Hi. Cookiecutter is a tool for building coding project templates. It's often used to provide a scaffolding to build lots of similar projects. I've seen it used to create Symfony projects and several cloud infrastructures deployed with Terraform. This tool was useful to accelerate the creation of new projects. Since these templates were bound to evolve, the teams providing these templates relied on cruft to update the code provided by the template in their users' code. In other words, they wanted their users to apply a diff of the template modification to their code. At the beginning, all was fine. But problems began to appear during the lifetime of these projects. What went wrong? In both cases, we had the following scenario: a user team giving up on updates is a major problem, because these updates may bring security or compliance fixes. Note that code conflicts seen with cruft are similar to git merge conflicts, but harder to resolve because, unlike with a git merge, there's no common ancestor, so 3-way merges are not possible. From an organisation point of view, the main problem is the ambiguous ownership of the functionalities brought by template code: who owns this code? The provider team who writes the template, or the user team who owns the repository of the code generated from the template? Conflicts are bound to happen. Possible solutions to get out of this tar pit: of course your users won't be happy to be faced with a manual migration from the old big template to the new one with external dependencies. On the other hand, this may be easier to sell than updates based on cruft, since the painful work will happen once. Further updates will be done by incrementing dependency versions (which can be automated with renovate). If many projects are to be created with this template, it may be more practical to provide a CLI that will create a skeleton project. See for instance the terragrunt scaffold command. My name is Dominique Dumont, I'm a devops freelancer. You can find the devops and audit services I propose on my website or reach out to me on LinkedIn. All the best

5 February 2025

Reproducible Builds: Reproducible Builds in January 2025

Welcome to the first report in 2025 from the Reproducible Builds project! Our monthly reports outline what we've been up to over the past month and highlight items of news from elsewhere in the world of software supply-chain security when relevant. As usual, though, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. Table of contents:
  1. reproduce.debian.net
  2. Two new academic papers
  3. Distribution work
  4. On our mailing list
  5. Upstream patches
  6. diffoscope
  7. Website updates
  8. Reproducibility testing framework

reproduce.debian.net The last few months saw the introduction of reproduce.debian.net. Announced at the recent Debian MiniDebConf in Toulouse, reproduce.debian.net is an instance of rebuilderd operated by the Reproducible Builds project. Powering that is rebuilderd, our server designed to monitor the official package repositories of Linux distributions and attempt to reproduce the observed results there. This month, however, we are pleased to announce that in addition to the existing amd64.reproduce.debian.net and i386.reproduce.debian.net architecture-specific pages, we now build for three more architectures (for a total of five): arm64, armhf and riscv64.

Two new academic papers Giacomo Benedetti, Oreofe Solarin, Courtney Miller, Greg Tystahl, William Enck, Christian Kästner, Alexandros Kapravelos, Alessio Merlo and Luca Verderame published an interesting article recently. Titled An Empirical Study on Reproducible Packaging in Open-Source Ecosystem, the abstract outlines its optimistic findings:
[We] identified that with relatively straightforward infrastructure configuration and patching of build tools, we can achieve very high rates of reproducible builds in all studied ecosystems. We conclude that if the ecosystems adopt our suggestions, the build process of published packages can be independently confirmed for nearly all packages without individual developer actions, and doing so will prevent significant future software supply chain attacks.
The entire PDF is available online to view.
In addition, Julien Malka, Stefano Zacchiroli and Théo Zimmermann of Télécom Paris' in-house research laboratory, the Information Processing and Communications Laboratory (LTCI), published an article asking the question: Does Functional Package Management Enable Reproducible Builds at Scale?. Answering strongly in the affirmative, the article's abstract reads as follows:
In this work, we perform the first large-scale study of bitwise reproducibility, in the context of the Nix functional package manager, rebuilding 709,816 packages from historical snapshots of the nixpkgs repository[. We] obtain very high bitwise reproducibility rates, between 69 and 91% with an upward trend, and even higher rebuildability rates, over 99%. We investigate unreproducibility causes, showing that about 15% of failures are due to embedded build dates. We release a novel dataset with all build statuses, logs, as well as full diffoscopes: recursive diffs of where unreproducible build artifacts differ.
As above, the entire PDF of the article is available to view online.

Distribution work There has been the usual work in various distributions this month, such as:
  • 10+ reviews of Debian packages were added, 11 were updated and 10 were removed this month adding to our knowledge about identified issues. A number of issue types were updated also.
  • The FreeBSD Foundation announced that a planned project to deliver zero-trust builds has begun in January 2025. Supported by the Sovereign Tech Agency, this project is centered on the various build processes, and that the primary goal of this work is to enable the entire release process to run without requiring root access, and that build artifacts build reproducibly; that is, that a third party can build bit-for-bit identical artifacts. The full announcement can be found online, which includes an estimated schedule and other details.

On our mailing list On our mailing list this month:
  • Following up on a substantial amount of previous work pertaining to the Sphinx documentation generator, James Addison asked a question about the relationship between the SOURCE_DATE_EPOCH environment variable and testing that generated a number of replies.
  • Adithya Balakumar of Toshiba asked a question about whether it is possible to make ext4 filesystem images reproducible. Adithya's issue is that even the smallest amount of post-processing of the filesystem results in the modification of the Last mount and Last write timestamps.
  • James Addison also investigated an interesting issue surrounding our disorderfs filesystem. In particular:
    FUSE (Filesystem in USErspace) filesystems such as disorderfs do not delete files from the underlying filesystem when they are deleted from the overlay. This can cause seemingly straightforward tests (for example, cases that expect directory contents to be empty after deletion is requested for all files listed within them) to fail.

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

diffoscope diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 285, 286 and 287 to Debian:
  • Security fixes:
    • Validate the --css command-line argument to prevent a potential Cross-site scripting (XSS) attack. Thanks to Daniel Schmidt from SRLabs for the report. [ ]
    • Prevent XML entity expansion attacks. Thanks to Florian Wilkens from SRLabs for the report. [ ][ ]
    • Print a warning if we have disabled XML comparisons due to a potentially vulnerable version of pyexpat. [ ]
  • Bug fixes:
    • Correctly identify changes to only the line-endings of files; don't mark them as Ordering differences only. [ ]
    • When passing files on the command line, don't call specialize( ) before we've checked that the files are identical or not. [ ]
    • Do not exit with a traceback if paths are inaccessible, either directly, via symbolic links or within a directory. [ ]
    • Don't cause a traceback if cbfstool extraction failed. [ ]
    • Use the surrogateescape mechanism to avoid a UnicodeDecodeError and crash when decoding any zipinfo output that is not UTF-8 compliant. [ ]
  • Testsuite improvements:
    • Don't mangle newlines when opening test fixtures; we want them untouched. [ ]
    • Move to assert_diff in test_text.py. [ ]
  • Misc improvements:
    • Drop unused subprocess imports. [ ][ ]
    • Drop an unused function in iso9660.py. [ ]
    • Inline a call and check of Config().force_details; no need for an additional variable in this particular method. [ ]
    • Remove an unnecessary return value from the Difference.check_for_ordering_differences method. [ ]
    • Remove unused logging facility from a few comparators. [ ]
    • Update copyright years. [ ][ ]
In addition, fridtjof added support for the ASAR .tar-like archive format. [ ][ ][ ][ ] Lastly, Vagrant Cascadian updated diffoscope in GNU Guix to version 285 [ ][ ] and 286 [ ][ ].
strip-nondeterminism is our sister tool to remove specific non-deterministic results from a completed build. This month version 1.14.1-1 was uploaded to Debian unstable by Chris Lamb, making the following changes:
  • Clarify the --verbose and non --verbose output of bin/strip-nondeterminism so we don't imply we are normalizing files that we are not. [ ]
  • Bump Standards-Version to 4.7.0. [ ]

Website updates There were a large number of improvements made to our website this month, including:

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In January, a number of changes were made by Holger Levsen, including:
  • reproduce.debian.net-related:
    • Add support for rebuilding the armhf architecture. [ ][ ]
    • Add support for rebuilding the arm64 architecture. [ ][ ][ ][ ]
    • Add support for rebuilding the riscv64 architecture. [ ][ ]
    • Move the i386 builder to the osuosl5 node. [ ][ ][ ][ ]
    • Don't run our rebuilders on a public port. [ ][ ]
    • Add database backups on all builders and add links. [ ][ ]
    • Rework and dramatically improve the statistics collection and generation. [ ][ ][ ][ ][ ][ ]
    • Add contact info to the main page [ ], thumbnails [ ] as well as the new, missing architectures. [ ]
    • Move the amd64 worker to the osuosl4 node. [ ]
    • Run the underlying debrebuild script under nice. [ ]
    • Try to use TMPDIR when calling debrebuild. [ ][ ]
  • buildinfos.debian.net-related:
    • Stop creating buildinfo-pool_${suite}_${arch}.list files. [ ]
    • Temporarily disable automatic updates of pool links. [ ]
  • FreeBSD-related:
    • Fix the sudoers to actually permit builds. [ ]
    • Disable debug output for FreeBSD rebuilding jobs. [ ]
    • Upgrade to FreeBSD 14.2 [ ] and document that bmake was installed on the underlying FreeBSD virtual machine image [ ].
  • Misc:
    • Update the real year to 2025. [ ]
    • Don't try to install a Debian bookworm kernel from backports on the infom08 node, which is running Debian trixie. [ ]
    • Don't warn about system updates for systems running Debian testing. [ ]
    • Fix a typo in the ZOMBIES definition. [ ][ ]
In addition:
  • Ed Maste modified the FreeBSD build system to clean the object directory before commencing a build. [ ]
  • Gioele Barabucci updated the rebuilder stats to first add a category for network errors [ ] as well as to categorise failures without a diffoscope log [ ].
  • Jessica Clarke also made some FreeBSD-related changes, including:
    • Ensuring we clean up the object directory for the second build as well. [ ][ ]
    • Updating the sudoers for the relevant rm -rf command. [ ]
    • Update the cleanup_tmpdirs method to match other removals. [ ]
  • Jochen Sprickerhof:
  • Roland Clobus:
    • Update the reproducible_debstrap job to call Debian's debootstrap with the full path [ ] and to use eatmydata as well [ ][ ].
    • Make some changes to deduce the CPU load in the debian_live_build job. [ ]
Lastly, both Holger Levsen [ ] and Vagrant Cascadian [ ] performed some node maintenance.
If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

2 February 2025

Colin Watson: Free software activity in January 2025

Most of my Debian contributions this month were sponsored by Freexian. If you appreciate this sort of work and are at a company that uses Debian, have a look to see whether you can pay for any of Freexian's services; as well as the direct benefits, that revenue stream helps to keep Debian development sustainable for me and several other lovely people. You can also support my work directly via Liberapay. Python team We finally made Python 3.13 the default version in testing! I fixed various bugs that got in the way of this: As with last month, I fixed a few more build regressions due to the removal of a deprecated intersphinx_mapping syntax in Sphinx 8.0: I ported a few packages to Django 5.1: I ported python-pypump to IPython 8.0. I fixed python-datamodel-code-generator to handle isort 6, and contributed that upstream. I fixed some packages to tolerate future versions of dh-python that will drop their dependency on python3-setuptools: I removed the old python-celery-common transitional package from celery, since nothing in Debian needs it any more. I fixed or helped to fix various other build/test failures: I upgraded these packages to new upstream versions: Rust team I fixed rust-pyo3-ffi to avoid explicit Python version dependencies that were getting in the way of making Python 3.13 the default version. Security tools packaging team I uploaded libevt to fix a build failure on i386 and to tolerate future versions of dh-python that will drop their dependency on python3-setuptools. Installer team I helped with some testing of a debian-installer-utils patch as part of the /usr move. I need to get around to uploading this, since it looks OK now. Other small things Helmut Grohne reached out for help debugging a multi-arch coinstallability problem (you know it's going to be complicated when even Helmut can't figure it out on his own) in binutils, and we had a call about that. I reviewed and applied a new Romanian translation of debconf's manual pages. I did my twice-yearly refresh of debmirror's mirror_size documentation, and applied a contribution to improve the example debmirror.conf. I fixed an arguable preprocessor string handling bug in man-db, and applied a fix for out-of-tree builds.

Joachim Breitner: Coding on my eInk Tablet

For many years I wished I had a setup that would allow me to work (that is, code) productively outside in the bright sun. It's winter right now, but when it's summer again it's always a bit. This weekend I got closer to that goal. TL;DR: Using code-server on a beefy machine seems to be quite neat.
Passively lit coding

Personal history Looking back at my own old blog entries I find one from 10 years ago describing how I bought a Kobo eBook reader with the intent of using it as an external monitor for my laptop. It seems that I got a proof-of-concept setup working, using VNC, but it was tedious to set up, and I never actually used that. I subsequently noticed that the eBook reader is rather useful to read eBooks, and it has been in heavy use for that ever since. Four years ago I gave this old idea another shot and bought an Onyx BOOX Max Lumi. This is an A4-sized tablet running Android, and it had the very promising feature of an HDMI input. So hopefully I'd attach it to my laptop and it "just works". Turns out that this never worked as well as I hoped: even if I set the resolution to exactly the tablet's screen's resolution I got blurry output, and it also drained the battery a lot, so I gave up on this. I subsequently noticed that the tablet is rather useful to take notes, and it has been in sporadic use for that. Going off on this tangent: I later learned that the HDMI input of this device appears to the system like a camera input, and I don't have to use Boox's monitor app but could use other apps like FreeDCam as well. This somehow managed to fix the resolution issues, but the setup still wasn't convenient enough to be used regularly. I also played around with pure terminal approaches, e.g. SSH'ing into a system, but since my usual workflow was never purely text-based (I was at least used to using a window manager instead of a terminal multiplexer like screen or tmux) that never led anywhere either.

VSCode, working remotely Since these attempts I have started a new job working on the Lean theorem prover, and working on or with Lean basically means using VSCode. (There is a very good neovim plugin as well, but I'm using VSCode nevertheless, if only to make sure I am dogfooding our default user experience). My colleagues have said good things about using VSCode with the remote SSH extension to work on a beefy machine, so I gave this a try now as well, and while it's not a complete game changer for me, it does make certain tasks (rebuilding everything after switching branches, running the test suite) very convenient. And it's a bit spooky to run these work loads without the laptop's fan spinning up. In this setup, the workspace is remote, but VSCode still runs locally. But it made me wonder about my old goal of being able to work reasonably efficiently on my eInk tablet. Can I replicate this setup there? VSCode itself doesn't run on Android directly. There are projects that run a Linux chroot or run in termux on the Android system, and you can then use VNC to connect to it (e.g. Andronix), but that did not seem promising. It seemed fiddly, and I probably should take it easy on the tablet's system.

code-server, running remotely A more promising option is code-server. This is a fork of VSCode (actually of VSCodium) that runs completely on the remote machine, and the client machine just needs a browser. I set that up this weekend and found that I was able to do a little bit of work reasonably.

Access With code-server one has to decide how to expose it safely enough. I decided against the tunnel-over-SSH option, as I expected that to be somewhat tedious to set up (both initially and for each session) on the Android system, and I liked the idea of being able to use any device to work in my environment. I also decided against the more involved setups with a reverse proxy behind a proper hostname with SSL, because they involve a few extra steps, and some of them I cannot do as I do not have root access on the shared beefy machine I wanted to use. That left me with the option of using code-server's built-in support for self-signed certificates and a password:
$ cat .config/code-server/config.yaml
bind-addr: 1.2.3.4:8080
auth: password
password: xxxxxxxxxxxxxxxxxxxxxxxx
cert: true
With trust-on-first-use this seems reasonably secure. Update: I noticed that the browsers would forget that I trust this self-signed cert after restarting the browser, and also that I cannot install the page (as a Progressive Web App) unless it has a valid certificate. But since I don't have superuser access to that machine, I can't just follow the official recommendation of using a reverse proxy on port 80 or 443 with automatic certificates. Instead, I pointed a hostname that I control to that machine, obtained a certificate manually on my laptop (using acme.sh) and copied the files over, so the configuration now reads as follows:
bind-addr: 1.2.3.4:3933
auth: password
password: xxxxxxxxxxxxxxxxxxxxxxxx
cert: .acme.sh/foobar.nomeata.de_ecc/foobar.nomeata.de.cer
cert-key: .acme.sh/foobar.nomeata.de_ecc/foobar.nomeata.de.key
(This is getting very specific to my particular needs and constraints, so I'll spare you the details.)

Service To keep code-server running I created a systemd service that's managed by my user's systemd instance:
~ $ cat ~/.config/systemd/user/code-server.service
[Unit]
Description=code-server
After=network-online.target
[Service]
Environment=PATH=/home/joachim/.nix-profile/bin:/nix/var/nix/profiles/default/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
ExecStart=/nix/var/nix/profiles/default/bin/nix run nixpkgs#code-server
[Install]
WantedBy=default.target
(I am using nix as a package manager on a Debian system there, hence the additional PATH and complex ExecStart. If you have a more conventional setup then you do not have to worry about Environment and can likely use ExecStart=code-server.) For this to survive me logging out I had to ask the system administrator to run loginctl enable-linger joachim, so that systemd allows my jobs to linger.
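With the unit file in place it can be managed like any other user unit using the standard systemd commands; the loginctl step mentioned above still has to be done by an administrator:
$ systemctl --user daemon-reload
$ systemctl --user enable --now code-server.service
$ systemctl --user status code-server.service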

Git credentials The next issue to be solved was how to access the git repositories. The work is all in public repositories, but I still need a way to push my work. With the classic VSCode-SSH-remote setup from my laptop this is no problem: my local SSH key is forwarded using the SSH agent, so I can seamlessly use it on the other side. But with code-server there is no SSH key involved. I could create a new SSH key and store it on the server. That did not seem appealing, though, because SSH keys on GitHub always have full access. It wouldn't be horrible, but I still wondered if I could do better. I thought of creating fine-grained personal access tokens that only allow me to push code to specific repositories, and nothing else, and just storing them permanently on the remote server. Still a neat and convenient option, but creating PATs for our org requires approval and I didn't want to bother anyone on the weekend. So I am experimenting with GitHub's git-credential-manager now. I have configured it to use git's credential cache with an elevated timeout, so that once I log in, I don't have to log in again for a whole workday.
$ nix-env -iA nixpkgs.git-credential-manager
$ git-credential-manager configure
$ git config --global credential.credentialStore cache
$ git config --global credential.cacheOptions "--timeout 36000"
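Note that git-credential-manager hooks into git's HTTPS credential flow, so the remotes have to use https:// URLs rather than SSH ones; a hypothetical example (the repository URL is a placeholder):
$ git remote set-url origin https://github.com/example/repo.git
$ git push   # the first push triggers the device-code login described below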
To log in, I have to visit https://github.com/login/device on an authenticated device (e.g. my phone) and enter an 8-character code. Not too shabby in terms of security. I only wish that webpage would not require me to press Tab after each character. This still grants rather broad permissions to the code-server, but at least only temporarily.

Android setup On the client side I could now open https://host.example.com:8080 in Firefox on my eInk Android tablet, click through the warning about self-signed certificates, log in with the fixed password mentioned above, and start working! I switched to a theme that is supposedly eInk-optimized (eInk by Mufanza). It's not perfect (e.g. git diffs are unhelpful because it is not possible to distinguish deleted from added lines), but it's a start. There are more eInk themes on the official Visual Studio Marketplace, but because code-server is a fork it cannot use that marketplace, and for example this theme isn't on Open VSX.
For some reason the F11 key doesn't work, but going fullscreen is crucial, because screen real estate is scarce in this setup. I can go fullscreen using VSCode's command palette (Ctrl-P) and invoking the command there, but Firefox often jumps out of fullscreen mode, which is annoying. I still have to pay attention to when that happens; maybe it's the Esc key, which I am of course using a lot due to my vim bindings.
A more annoying problem was that on my Boox tablet, sometimes the on-screen keyboard would pop up, which is seriously annoying! It took me a while to track this down: the Boox has two virtual keyboards installed, the usual Google AOSP keyboard and the Onyx Keyboard. The former is clever enough to stay hidden when there is a physical keyboard attached, but the latter isn't. Moreover, pressing Shift-Ctrl on the physical keyboard rotates through the virtual keyboards. Now, VSCode has many keyboard shortcuts that require Shift-Ctrl (especially on an eInk device, where you really want to avoid using the mouse), and the limited settings exposed by the Boox Android system do not allow you to configure that or disable the Onyx keyboard! To solve this, I had to install the KISS Launcher, which allowed me to see more Android settings and in particular to disable the Onyx keyboard. So this is fixed.
I was hoping to improve the experience even more by opening the web page as a Progressive Web App (PWA), as described in the code-server FAQ. Unfortunately, that did not work at first. Firefox on Android did not recognize the site as a PWA (even though it recognizes a PWA test page). And I couldn't use Chrome either because (unlike Firefox) it would not consider a site with a self-signed certificate a secure context, and then code-server does not work fully. Maybe this is just some bug that gets fixed in later versions. Now that I use a proper certificate, I can use it as a Progressive Web App, and with Firefox on Android this starts the app in full-screen mode (no system bars, no location bar). The F11 key still doesn't work, and using the command palette to enter fullscreen does nothing visible, but then Esc leaves that fullscreen mode and I suddenly have the system bars again. But maybe if I just don't do that I get the full-screen experience. We'll see.
I have not worked enough with this yet to assess how much the smaller screen, the lack of colors and the slower refresh rate will bother me. I probably need to hide Lean's InfoView more often, and maybe use the Error Lens extension, to avoid having to split my screen vertically. I also cannot easily work on a park bench this way, with a tablet and a separate external keyboard. I'd need at least a table, or some additional piece of hardware that turns tablet + keyboard into some laptop-like structure that I can put on my, well, lap. There are cases for Onyx products that include a keyboard, and maybe they work on the lap, but they don't have the TrackPoint that I have on my ThinkPad TrackPoint Keyboard II, and how can you live without that?

Conclusion After this initial setup, chances are good that entering and using this environment is convenient enough for me to actually use it; we will see when it gets warmer. A few bits could be better. In particular, logging in and authenticating GitHub access could be both more convenient and more secure. I could imagine that when I open the page I confirm that on my phone (maybe with a fingerprint), and that this temporarily grants access to the code-server and to specific GitHub repositories only. Is that easily possible?

31 January 2025

Divine Attah-Ohiemi: Seeking Opportunities: Building a Career in Software Engineering and Beyond

My journey in CS has always been driven by curiosity, determination, and a deep love for understanding software solutions at their tiniest, most complex levels. Taking the ALX Africa Software Engineer track after high school was where it all started for me. During the 1-year intensive bootcamp, I delved into the intricacies of Linux programming and low-level programming with C, which solidified my foundational knowledge. This experience not only enhanced my technical skills but also taught me the importance of adaptability and self-directed learning. I discovered how to approach challenges with curiosity, igniting a passion for exploring software solutions in their most intricate forms. Each module pushed me to think critically and creatively, transforming my understanding of technology and its capabilities. Let's just say that I have always been drawn to asking, "How does this happen?" And I just go on and on until I find an answer eventually, and sometimes I don't, but that's okay. That curiosity, combined with a deep commitment to learning, has guided my journey. Debian Webmaster My drive has led me to get involved in open-source contributions, where I can put my knowledge to the test while helping my community. Engaging with real-world experts and learning from my mistakes has been invaluable. One of the highlights of this journey was joining the Debian Webmasters team as an intern through Outreachy. Here, I have the honor of working on redesigning and migrating the old Debian webpages to make them more user-friendly. This experience not only allows me to apply my skills in a practical setting but also deepens my understanding of collaborative software development. Building My Skills: The Foundation of My Experience Throughout my academic and professional journey, I have taken on many roles that have shaped my skills and prepared me for what's ahead, I believe. I am definitely not a one-trick pony, and maybe not completely a jack of all trades either, but I am a bit diverse, I'd like to think. Here are the key roles that have defined my journey so far: Volunteer Developer at Yoris Africa (June 2022 - August 2023) I began my career by volunteering at Yoris, where I collaborated with a talented team to design and build the frontend for a mobile app. My contributions extended beyond just the frontend; I also worked on backend solutions and microservices, gaining hands-on experience in full-stack development. This role was instrumental in shaping my understanding of software architecture, allowing me to contribute meaningfully to projects while learning from experienced developers in a dynamic environment. Freelance Academic Software Developer (September 2023 - October 2024) I freelanced as an academic software developer, pitching and developing software solutions for universities in my community. One of my most notable projects was creating a Computer-Based Testing (CBT) software for a medical school, which featured a unique questionnaire and scoring system tailored to their specific needs. This experience not only allowed me to apply my technical skills in a real-world setting but also deepened my understanding of educational software requirements and user experience, ultimately enhancing the learning process for students. Open Source Intern at Debian Webmaster Team (November 2024 -) Perhaps the most transformative experience has been my role as an intern at Debian Webmasters. This opportunity allowed me to delve into the fascinating world of open source.
As an intern, I have the chance to work on a project where we are redesigning and migrating the Debian webpages to utilize a new and faster technology: Go templates with Hugo. For a detailed look at the work and progress I made during my internship, as well as information on this project and how to get involved, you can check out the wiki. My ultimate goal with this role is to build a vibrant community for Debian in Africa and, if given the chance, to host a debian-cd mirror for faster installations in my region. You can connect with me through LinkedIn, or X (formerly Twitter), or reach out via email.

29 January 2025

Sergio Talens-Oliag: Testing DeepSeek with Ollama and Open WebUI

With all the recent buzz about DeepSeek and its capabilities, I've decided to give it a try using Ollama and Open WebUI on my work laptop, which has an NVIDIA GPU:
$ lspci | grep NVIDIA
0000:01:00.0 3D controller: NVIDIA Corporation GA107GLM [RTX A2000 8GB Laptop GPU]
             (rev a1)
For the installation I initially looked into the approach suggested in this article, but after reviewing it I decided to go for a docker-only approach, as it leaves my system clean and updates are easier.

Step 0: Install docker I already had it on my machine, so nothing to do here.

Step 1: Install the nvidia-container-toolkit package As it is needed to use the NVIDIA GPU with docker, I followed the instructions to install the package using apt from the NVIDIA website.

Step 2: Run the Open WebUI container bundled with Ollama I could install ollama directly on Linux or run it in docker, but I found out that there is a container with Open WebUI bundled with Ollama, so I decided to use it instead. To start the container I executed the following command:
docker run -d \
  -e OLLAMA_HOST="0.0.0.0:11434" -p 127.0.0.1:11434:11434 \
  -p 127.0.0.1:3000:8080 \
  -v ollama:/root/.ollama \
  -v open-webui:/app/backend/data \
  --gpus=all --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:ollama
Once the container is up and running, I can access the Open WebUI at http://localhost:3000/ and the Ollama API at http://localhost:11434/ (I added the OLLAMA_HOST environment variable to bind the server to all the container IPs, as that is needed to export it to the host).
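As a quick sanity check that the GPU is actually visible inside the container and that the API is reachable from the host, something like the following should work (the NVIDIA container runtime normally injects nvidia-smi into the container):
$ docker exec open-webui nvidia-smi
$ curl -s http://localhost:11434/api/tags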

Step 3: Install the DeepSeek-R1 model from Open WebUI After creating an admin user on the Open WebUI, I went to the "select a model" button and asked it to download the deepseek-r1 model (at the time of writing this matches version deepseek-r1:7b, which uses 4.7GB of space). After the download was completed I was able to start chatting with the model on my local machine.
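Models can also be pulled without the web interface, either via the bundled ollama binary or via the Ollama HTTP API; both of the following are sketches assuming the defaults of the bundled image:
$ docker exec -it open-webui ollama pull deepseek-r1:7b
$ curl http://localhost:11434/api/pull -d '{"name": "deepseek-r1:7b"}'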

Step 4: Ask a question The first question I asked was "Can you tell me what is Debian?" and the answer I got was:
Debian is one of the most popular Linux distributions used worldwide. It is an
open-source operating system that has been around since 1993 and is known for
its stability, security, and reliability. Debian is widely used in servers, data
centers, and enterprise environments because it provides a robust base platform
with excellent documentation, tools, and community support.
### Key Features of Debian:

1. **History**:
   - Debian started as a project called "Pop!!" (Pootle) led by Lars Knutsen, a
     developer from Denmark.
   - It was initially forked from the Linux distribution Red Hat's RH-Debian to
     gain independence and reduce reliance on Red Hat.

2. **Use Cases**:
   - Debian is commonly used as an operating system for servers due to its
     reliability and ease of use in enterprise environments.
   - It is also used for individual user machines (laptops, desktops) because it
     offers a smooth and stable experience.
   - Debian provides the foundation for many popular derivatives like Ubuntu,
     which are designed for consumer use.

3. **Supported Architectures**:
   - Debian supports all major Linux architectures, including 32-bit and 64-bit
     systems, and runs on a wide range of hardware from old servers to modern
     smartphones.

4. **Package Management**:
   - Debian comes with a robust package management system called **Debian Policy
     Language (DPL)**, which allows users to easily install, update, and manage
     software packages.
   - This makes it easy for users to keep their systems up to date without
     needing to download multiple repositories or deal with version conflicts.

5. **Security Focus**:
   - Debian is known for its strong security practices. It has a reputation for
     being one of the most stable Linux distributions, making it a favorite
     among organizations that prioritize security.
   - The distribution regularly updates its software packages to patch
     vulnerabilities and protect against common attacks.

6. **Compatibility with Servers**:
   - Debian is widely used in data centers because it provides a consistent and
     reliable environment for running servers and applications.
   - Many open-source projects are compatible with Debian, making it an ideal
     choice for developers who want to work on servers or distributed systems.

7. **Popularity**:
   - Debian has a large and active community of users and contributors.
   - It is one of the most widely used Linux distributions in the world, with
     many enterprise organizations relying on it as their primary operating
     system.
### Comparison with Other Distributions:
- **Ubuntu**: Ubuntu is based on Debian but adds graphical user interfaces (like
  Gutsy Gibbon), modern tools (like GIMP and LibreOffice), and integrates with
  Microsoft services.
- **RHEL (Red Hat Enterprise Linux)**: RHEL is a derivative of Red Hat's
  distribution, which forked from Debian. It focuses more heavily on enterprise
  security and support compared to Debian.
### Why Debian?
Debian is popular because:
- It is free and open-source, making it accessible to anyone.
- It provides excellent documentation and tools for system administrators.
- It has a strong focus on stability and reliability.
- It supports a wide range of hardware and software needs.
If you're looking for a reliable and flexible Linux distribution that works well
in both personal and enterprise environments, Debian is an excellent choice.
I haven't tried the same question with ChatGPT. The History, Package Management and Comparison with Other Distributions sections are hilarious, but they provided me with important data:
  • Now I know that APT is the real Debian Project Leader; what I don't know is why we have elections each year ;)
  • Linux distributions are recursive: Debian was a fork of Red Hat, which was a fork of Debian, which was a fork of Red Hat, ...
As everybody is testing the model I will not talk more about the chat and the results; I just thought that this answer was really funny.

Step 5: Install the DeepSeek Coder and DeepSeek Coder v2 models from Open WebUI As done before, to download the models I went to the "select a model" button and asked to download the deepseek-coder and deepseek-coder-v2 models (the default version of the first one is said to be really quick and small, while version two is supposed to be better but slower and bigger, so I decided to install both for testing).

Step 6: Integrate Ollama with Neovim For some months now I've been using GitHub Copilot with Neovim; I don't feel it has been very helpful in the general case, but I wanted to try it, and it comes in handy when you need to perform repetitive tasks while programming. It seems that there are multiple Neovim plugins that support ollama; for now I've installed and configured the codecompanion plugin in my config.lua file using packer:
require('packer').startup(function()
  [...]
  -- Codecompanion plugin
  use {
    "olimorris/codecompanion.nvim",
    requires = {
      "nvim-lua/plenary.nvim",
      "nvim-treesitter/nvim-treesitter",
    }
  }
  [...]
end)
[...]
-- --------------------------------
-- BEG: Codecompanion configuration
-- --------------------------------
-- Module setup
local codecompanion = require('codecompanion').setup({
  adapters = {
    ollama = function()
      return require('codecompanion.adapters').extend('ollama', {
        schema = {
          model = {
            default = 'deepseek-coder-v2:latest',
          },
        },
      })
    end,
  },
  strategies = {
    chat = { adapter = 'ollama' },
    inline = { adapter = 'ollama' },
  },
})
-- --------------------------------
-- END: Codecompanion configuration
-- --------------------------------
I've tested it a little bit and it seems to work fine, but I'll have to test it more to see if it is really useful; I'll try to do that in future projects.

Conclusion At a personal level I don't like nor trust AI systems, but as long as they are treated as tools and not as a magical thing you must trust, they have their uses, and I'm happy to see open source tools like Ollama and models like DeepSeek available for everyone to use.

Russ Allbery: Review: The Sky Road

Review: The Sky Road, by Ken MacLeod
Series: Fall Revolution #4
Publisher: Tor
Copyright: 1999
Printing: August 2001
ISBN: 0-8125-7759-0
Format: Mass market
Pages: 406
The Sky Road is the fourth book in the Fall Revolution series, but it represents an alternate future that diverges after (or during?) the events of The Star Fraction. You probably want to read that book first, but I'm not sure reading The Stone Canal or The Cassini Division adds anything to this book other than frustration. Much more on that in a moment. Clovis colha Gree is an aspiring doctoral student in history with a summer job as a welder. He works on the platform for the project, which the reader either slowly discovers from the book or quickly discovers from the cover is a rocket to get to orbit. As the story opens, he meets (or, as he describes it, is targeted by) a woman named Merrial, a tinker who works on the guidance system. The early chapters provide only a few hints about Clovis's world: a statue of the Deliverer on a horse that forms the backdrop of their meeting, the casual carrying of weapons, hints that tinkers are socially unacceptable, and some division between the white logic and the black logic in programming. Also, because this is a Ken MacLeod novel, everyone is obsessed with smoking and tobacco the way that the protagonists of erotica are obsessed with sex. Clovis's story is one thread of this novel. The other, told in the alternating chapters, is the story of Myra Godwin-Davidova, chair of the governing Council of People's Commissars of the International Scientific and Technical Workers' Republic, a micronation embedded in post-Soviet Kazakhstan. Series readers will remember Myra's former lover, David Reid, as the villain of The Stone Canal and the head of the corporation Mutual Protection, which is using slave labor (sort of) to support a resurgent space movement and its attempt to take control of a balkanized Earth. The ISTWR is in decline and a minor power by all standards except one: They still have nuclear weapons. So, first, we need to talk about the series divergence. I know from reading about this book on-line that The Sky Road is an alternate future that does not follow the events of The Stone Canal and The Cassini Division. I do not know this from the text of the book, which is completely silent about even being part of a series. More annoyingly, while the divergence in the Earth's future compared to The Cassini Division is obvious, I don't know what the Jonbar hinge is. Everything I can find on-line about this book is maddeningly coy. Wikipedia claims the divergence happens at the end of The Star Fraction. Other reviews and the Wikipedia talk page claim it happens in the middle of The Stone Canal. I do have a guess, but it's an unsatisfying one and I'm not sure how to test its correctness. I suppose I shouldn't care and instead take each of the books on their own terms, but this is the type of thing that my brain obsesses over, and I find it intensely irritating that MacLeod didn't explain it in the books themselves. It's the sort of authorial trick that makes me feel dumb, and books that gratuitously make me feel dumb are less enjoyable to read. The second annoyance I have with this book is also only partly its fault. This series, and this book in particular, is frequently mentioned as good political science fiction that explores different ways of structuring human society. This was true of some of the earlier books in a surprisingly superficial way. Here, I would call it hogwash.
This book, or at least the Myra portion of it, is full of people doing politics in a tactical sense, but like the previous books of this series, that politics is mostly embedded in personal grudges and prior romantic relationships. Everyone involved is essentially an authoritarian whose ability to act as they wish is only contested by other authoritarians and is largely unconstrained by such things as persuasion, discussions, elections, or even theory. Myra and most of the people she meets are profoundly cynical and almost contemptuous of any true discussion of political systems. This is the trappings and mechanisms of politics without the intellectual debate or attempt at consensus, turning it into a zero-sum game won by whoever can threaten the others more effectively. Given the glowing reviews I've seen in relatively political SF circles, presumably I am missing something that other people see in MacLeod's approach. Perhaps this level of pettiness and cynicism is an accurate depiction of what it's like inside left-wing political movements. (What an appalling condemnation of left-wing political movements, if so.) But many of the on-line reviews lead me to instead conclude that people's understanding of "political fiction" is stunted and superficial. For example, there is almost nothing Marxist about this book it contains essentially no economic or class analysis whatsoever but MacLeod uses a lot of Marxist terminology and sets half the book in an explicitly communist state, and this seems to be enough for large portions of the on-line commentariat to conclude that it's full of dangerous, radical ideas. I find this sadly hilarious given that MacLeod's societies tend, if anything, towards a low-grade libertarianism that would be at home in a Robert Heinlein novel. Apparently political labels are all that's needed to make political fiction; substance is optional. So much for the politics. What's left in Clovis's sections is a classic science fiction adventure in which the protagonist has a radically different perspective from the reader and the fun lies in figuring out the world-building through the skewed perspective of the characters. This was somewhat enjoyable, but would have been more fun if Clovis had any discernible personality. Sadly he instead seems to be an empty receptacle for the prejudices and perspective of his society, which involve a lot of quasi-religious taboos and an essentially magical view of the world. Merrial is a more interesting character, although as always in this series the romance made absolutely no sense to me and seemed to be conjured by authorial fiat and weirdly instant sexual attraction. Myra's portion of the story was the part I cared more about and was more invested in, aided by the fact that she's attempting to do something more interesting than launch a crewed space vehicle for no obvious reason. She at least faces some true moral challenges with no obviously correct response. It's all a bit depressing, though, and I found Myra's unwillingness to ground her decisions in a more comprehensive moral framework disappointing. If you're going to make a protagonist the ruler of a communist state, even an ironic one, I'd like to hear some real political philosophy, some theory of sociology and economics that she used to justify her decisions. The bits that rise above personal animosity and vibes were, I think, said better in The Cassini Division. This series was disappointing, and I can't say I'm glad to have read it. 
There is some small pleasure in finishing a set of award-winning genre books so that I can have a meaningful conversation about them, but the awards failed to find me better books to read than I would have found on my own. These aren't bad books, but the amount of enjoyment I got out of them didn't feel worth the frustration. Not recommended, I'm afraid. Rating: 6 out of 10

28 January 2025

Russ Allbery: Review: Moose Madness

Review: Moose Madness, by Mar Delaney
Publisher: Kalikoi
Copyright: May 2021
ASIN: B094HGT1ZB
Format: Kindle
Pages: 68
Moose Madness is a sapphic shifter romance novella (on the short side for a novella) by the same author as Wolf Country. It was originally published in the anthology Her Wild Soulmate, which appears to be very out of print. Maggie (she hates the nickname Moose) grew up in Moose Point, a tiny fictional highway town in (I think) Alaska. (There is, unsurprisingly, an actual Moose Point in Alaska, but it's a geographic feature and not a small town.) She stayed after graduation and is now a waitress in the Moose Point Pub. She's also a shifter; specifically, she is a moose shifter like her mother, the town mayor. (Her father is a fox shifter.) As the story opens, the annual Moose Madness festival is about to turn the entire town into a blizzard of moose kitsch. Fiona Barton was Maggie's nemesis in high school. She was the cool, popular girl, a red-headed wolf shifter whose friend group teased and bullied awkward and uncoordinated Maggie mercilessly. She was also Maggie's impossible crush, although the very idea seemed laughable. Fi left town after graduation, and Maggie hadn't thought about her for years. Then she walks into Moose Point Pub dressed in biker leathers, with piercings and one side of her head shaved, back in town for a wedding in her pack. Much to the shock of both Maggie and Fi, they realize that they're soulmates as soon as their eyes meet. Now what? If you thought I wasn't going to read the moose and wolf shifter romance once I knew it existed, you do not know me very well. I have been saving it for when I needed something light and fun. It seemed like the right palette cleanser after a very disappointing book. Moose Madness takes place in the same universe as Wolf Country, which means there are secret shifters all over Alaska (and presumably elsewhere) and they have the strong magical version of love at first sight. If one is a shifter, one knows immediately as soon as one locks eyes with one's soulmate and this feeling is never wrong. This is not my favorite romance trope, but if I get moose shifter romance out of it, I'll endure. As you can tell from the setup, this is enemies-to-lovers, but the whole soulmate thing shortcuts the enemies to lovers transition rather abruptly. There's a bit of apologizing and air-clearing at the start, but most of the novella covers the period right after enemies have become lovers and are getting to know each other properly. If you like that part of the arc, you will probably enjoy this, but be warned that it's slight and somewhat obvious. There's a bit of tension from protective parents and annoying pack mates, but it's sorted out quickly and easily. If you want the characters to work for the relationship, this is not the novella for you. It's essentially all vibes. I liked the vibes, though! Maggie is easy to like, and Fi does a solid job apologizing. I wish there was quite a bit more moose than we get, but Delaney captures the combination of apparent awkwardness and raw power of a moose and has a good eye for how beautiful large herbivores can be. This is not the sort of book that gives a moment's thought to wolves being predators and moose being, in at least some sense, prey animals, so if you are expecting that to be a plot point, you will be disappointed. As with Wolf Country, Delaney elides most of the messier and more ethically questionable aspects of sometimes being an animal. 
This is a sweet, short novella about two well-meaning and fundamentally nice people who are figuring out that middle school and high school are shitty and sometimes horrible but don't need to define the rest of one's life. It's very forgettable, but it made me smile, and it was indeed a good palette cleanser. If you are, like me, the sort of person who immediately thought "oh, I have to read that" as soon as you saw the moose shifter romance, keep your expectations low, but I don't think this will disappoint. If you are not that sort of person, you can safely miss this one. Rating: 6 out of 10

27 January 2025

Russ Allbery: Review: The House That Fought

Review: The House That Fought, by Jenny Schwartz
Series: Uncertain Sanctuary #3
Publisher: Jenny Schwartz
Copyright: December 2020
Printing: September 2024
ASIN: B0DBX6GP8Z
Format: Kindle
Pages: 199
The House That Fought is the third and final book of the self-published space fantasy trilogy starting with The House That Walked Between Worlds. I read it as part of the Uncertain Sanctuary omnibus, which is reflected in the sidebar metadata. At the end of the last book, one of Kira's random and vibe-based trust decisions finally went awry. She has been betrayed! She's essentially omnipotent, the betrayal does not hurt her in any way, and, if anything, it helps the plot resolution, but she has to spend some time feeling bad about it first. Eventually, though, the band of House residents return to the problem of Earth's missing magic. By Earth here, I mean our world, which technically isn't called Earth in the confusing world-building of this series. Earth within this universe is an archetypal world that is the origin world for humans, the two types of dinosaurs, and Neanderthals. There are numerous worlds that have split off from it, including Human, the one world where humans are dominant, which is what we think of as Earth and what Kira calls Earth half the time. And by worlds, I mean entire universes (I think?), because traveling between "worlds" is dimensional travel, not space travel. But there is also space travel? The world building started out confusing and has degenerated over the course of the series. Given that the plot, such as it is, revolves around a world-building problem, this is not a good sign. Worse, though, is that the quality of the writing has become unedited, repetitive drivel. I liked the first book and enjoyed a few moments of the second book, but this conclusion is just bad. This is the sort of book that the maxim "show, don't tell" was intended to head off. The dull, thudding description of the justification for every character emotion leaves no room for subtlety or reader curiosity.
Evander was elf and I was human. We weren't the same. I had magic. He had the magic I'd unconsciously locked into his augmentations. We were different and in love. Speaking of our differences could be a trigger. I peeked at him, worried. My customary confidence had taken a hit. "We're different," he answered my unspoken question. "And we work anyway. We'll work to make us work."
There is page after page after page of this sort of thing: facile emotional processing full of cliches and therapy-speak, built on the most superficial of relationships. There's apparently a romance now, which happened with very little build-up, no real discussion or communication between the characters, and only the most trite and obvious relationship work. There is a plot underneath all this, but it's hard to make it suspenseful given that Kira is essentially omnipotent. Schwartz tries to turn the story into a puzzle that requires Kira figure out what's going on before she can act, but this is undermined by the confusing world-building. The loose ends the plot has accumulated over the previous two books are mostly dropped, sometimes in a startlingly casual way. I thought Kira would care who killed her parents, for example; apparently, I was wrong. The previous books caught my attention with a more subtle treatment of politics than I expect from this sort of light space fantasy. The characters had, I thought, a healthy suspicion of powerful people and a willingness to look for manipulation or ulterior motives. Unfortunately, we discover here that this is not due to an appreciation of the complexity of power and motive in governments. Instead, it's a reflexive bias against authority and structured society that sounds like an Internet libertarian complaining about taxes. Powerful people should be distrusted because all governments are corrupt and bad and steal your money in order to waste it. Oh, except for the cops and the military; they're generally good people you should trust. In retrospect, I should have expected this turn given the degree to which Schwartz stressed the independence of sorcerers. I thought that was going somewhere more interesting than sorcerers as self-appointed vigilantes who are above the law and can and should do anything they damn well please. Sadly, it was not. Adding to the lynch mob feeling, the ending of this book is a deeply distasteful bit of magical medieval punishment that I thought was vile, and which is, of course, justified by bad things happening to children. No societal problems were solved, but Kira got her petty revenge and got to be gleeful and smug about it. This is apparently what passes for a happy ending. I don't even know what to say about the bizarre insertion of Christianity, which makes little sense given the rest of the world-building. It's primarily a way for Kira to avoid understanding or thinking about an important part of the plot. As sadly seems to often be the case in books like this, Kira's faith doesn't appear to prompt any moral analysis or thoughtful ethical concern about her unlimited power, just certainty that she's right and everyone else is wrong. This was dire. It is one of those self-published books that I feel a little bad about writing this negative of a review about, because I think most of the problem was that the author's skill was not up to the story that she wanted to tell. This happens a lot in self-published fiction, particularly since Kindle Unlimited has started rewarding quantity over quality. But given how badly the writing quality degraded over the course of the series, and how offensive the ending was, I do want to warn other people off of the series. There is so much better fiction out there. Avoid this one, and probably the rest of the series unless you're willing to stop after the first book. Rating: 2 out of 10

26 January 2025

Otto Kek l inen: 10 habits to help becoming a Debian Maintainer

Becoming a Debian maintainer is a journey that combines technical expertise, community collaboration, and continuous learning. In this post, I'll share 10 key habits that will both help you navigate the complexities of Debian packaging without getting lost, and also enable you to contribute more effectively to one of the world's largest open source projects.

1. Read and re-read the Debian Policy, the Developer's Reference and the git-buildpackage manual Anyone learning Debian packaging and aspiring to become a Debian maintainer is likely to wade through a lot of documentation, only to realize that much of it is outdated or sometimes outright incorrect. Therefore, it is important to learn right from the start which sources are the most reliable and truly worth reading and re-reading. I recommend these documents, in order of importance:
  • The Debian Policy Manual: Describes the structure of the operating system, the package archive, and requirements for packages to be included in the Debian archive.
  • The Developer's Reference: A collection of best practices and process descriptions Debian packagers are expected to follow while interacting with one another.
  • The git-buildpackage man pages: While the Policy focuses on the end result and is intentionally void of practical instructions on creating or maintaining Debian packages, the Developer's Reference goes into greater detail. However, it too lacks step-by-step instructions. For the exact commands, consult the man pages of git-buildpackage and its subcommands (e.g., gbp clone, gbp import-orig, gbp pq, gbp dch, gbp push). See also my post on Debian source package git branches and tags for easy-to-understand diagrams.

2. Make reading man pages a habit In addition to the above, try to make a habit of checking out the man page of every new tool you use to ensure you are using it as intended. The best place to read accurate and up-to-date documentation is manpages.debian.org. The manual pages are maintained alongside the tools by their developers, ensuring greater accuracy than any third-party documentation. If you are using a tool in the way the tool author documented, you can be confident you are doing the right thing, even if it wasn't explicitly mentioned in some third-party guide about Debian packaging best practices.

3. Read and write emails While members of the Debian community have many channels of communication, the mailing lists are by far the most prominent. Asking questions on the appropriate list is a good way to get current advice from other people doing Debian packaging. Staying subscribed to lists of interest is also a good way to read about new developments as they happen. Note that every post is public and archived permanently, so the discussions on the mailing lists also form a body of documentation that can later be searched and referred to. Regularly writing short and well-structured emails on the mailing lists is great practice for improving technical communication skills, a useful ability in general. For Debian specifically, being active on mailing lists helps build a reputation that can later attract collaborators and supporters for more complex initiatives.

4. Create and use an OpenPGP key Related to reputation and identity, OpenPGP keys play a central role in the Debian community. OpenPGP is used to various degrees to sign git commits and tags, sign and encrypt email, and most importantly to sign Debian packages so their origin can be verified. The process of becoming a Debian Maintainer and eventually a Debian Developer culminates in getting your OpenPGP key included in the Debian keyring, which is used to control who can upload packages into the Debian archive. The earlier you create a key and start using it to gain reputation for that specific key that is used to sign your work, the better. Note that due to a recent schism in the OpenPGP standards working group, it is safest to create an OpenPGP key using GnuPG version 2.2.x (not 2.4.x), or using Sequoia-PGP.
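A minimal sketch of creating and exporting such a key with GnuPG 2.2.x (name and email are placeholders; the interactive dialog lets you choose the algorithm and expiry date):
$ gpg --full-generate-key
$ gpg --list-secret-keys --keyid-format=long
$ gpg --armor --export you@example.org > mykey.asc
The exported public key is what you eventually get signed by existing Debian Developers and submit for inclusion in the keyring.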

5. Integrate Salsa CI in all work One reason Debian remains popular, even 30 years after its inception, is due to its culture of maintaining high standards. For a newcomer, learning all the quality assurance tools such as Lintian, Piuparts, Adequate, various build variations, and reproducible builds may be overwhelming. However, these tasks are easier to manage thanks to Salsa CI, the continuous integration pipeline in Debian that runs tests on every commit at salsa.debian.org. The earlier you activate Salsa CI in the package repository you are working on, the faster you will achieve high quality in your package with fewer missteps. You can also further customize a package-specific salsa-ci.yml to have more testing coverage. Example Salsa CI pipeline with customizations
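A minimal debian/salsa-ci.yml usually just includes the pipeline maintained by the Salsa CI team; the exact form below is only a sketch, so check the salsa-ci-team/pipeline documentation for the currently recommended include and the available SALSA_CI_DISABLE_* switches:
$ cat debian/salsa-ci.yml
include:
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/recipes/debian.yml
variables:
  # example customization: skip the (slow) reprotest job
  SALSA_CI_DISABLE_REPROTEST: 1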

6. Fork on Salsa and use draft Merge Requests to solicit feedback All modern Debian packages are hosted on salsa.debian.org. If you want to make a change to any package, it is easy to fork, make an initial attempt at the change, and publish it as a draft Merge Request (MR) on Salsa to solicit feedback. People might have surprising reasons to object to the change you propose, or they might need time to get used to the idea before agreeing to it. Also, some people might object to a vague idea out of suspicion but agree once they see the exact implementation. There may also be a surprising number of people supporting your idea, and if there is an MR, they have a place to show their support. Don't expect every Merge Request to be accepted. However, proposing an idea as running code in an MR is far more effective than raising the idea on a mailing list or in a bug report. Get into the habit of publishing plenty of merge requests to solicit feedback and drive discussions toward consensus.

7. Use git rebase frequently Linear git history is much easier to read. The ease of reading git log and git blame output is vital in Debian, where packages often have updates from multiple people spanning many years, even decades. Debian packagers likely spend more time than the average software developer reading git history. Make sure you master git commands such as gitk --all, git citool --amend, git commit -a --fixup <commit id>, git rebase -i --autosquash <target branch>, git cherry-pick <commit id 1> <id 2> <id 3>, and git pull --rebase. If rebasing is not done on your initiative, rest assured others will ask you to do it. Thus, if the commands above are familiar, rebasing will be quick and easy for you.
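As an example, the typical fixup-and-rebase cycle with these commands looks roughly like this (the commit id and branch name are placeholders):
$ git commit -a --fixup 1234abc
$ git rebase -i --autosquash debian/master
$ git push --force-with-lease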

8. Reviews: give some, get some In open source, the larger a project becomes, the more it attracts contributions, and the bottleneck for its growth isn't how much code developers can create but how much code submissions can be properly reviewed. At the time of writing, the main Salsa group, Debian, has over 800 open merge requests pending reviews and approvals. Feel free to read and comment on any merge request you find. You don't have to be a subject matter expert to provide valuable feedback. Even if you don't have specific feedback, your comment as another human acknowledging that you read the MR and found no issues is viewed positively by the author. Besides, if you spend enough time reviewing MRs in a specific domain, you will eventually become an expert in it. Code reviews are not just about providing feedback to the submitter; they are also great learning opportunities for the reviewer. As a rule of thumb, you should review at least twice as many merge requests as you submit yourself.

9. Improve Debian by improving upstream It is common that while packaging software for Debian, bugs are uncovered and patched in Debian. Do not forget to submit the fixes upstream, and add a Forwarded field to the file in debian/patches! As the person building and packaging something in Debian, you automatically become an authority on that software, and the upstream is likely glad to receive your improvements. While submitting patches upstream is a bit of work initially, getting improvements merged upstream eventually saves time for everyone and makes packaging in Debian easier, as there will be fewer patches to maintain with each new upstream release.
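The Forwarded field is part of the DEP-3 patch header; a hypothetical patch could start like this (author, bug number and URL are made up):
$ head -n 5 debian/patches/fix-ftbfs-on-hurd.patch
Description: Fix build failure on hurd-i386
Author: Jane Doe <jane@example.org>
Bug-Debian: https://bugs.debian.org/123456
Forwarded: https://github.com/example/project/pull/42
Last-Update: 2025-01-26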

10. Don't hold any habits too firmly Last but not least: once people learn a specific way of working, they tend to stick to it for decades. Learning how to create and maintain Debian packages requires significant effort, and people tend to stop learning once they feel they've reached a sufficient level. This tendency to get stuck in a local optimum is understandable and natural, but try to resist it. It is likely that better techniques will evolve over time, so stay humble and re-evaluate your beliefs and practices every few years. Mastering these habits takes time, but each small step brings you closer to making a meaningful impact on Debian. By staying curious, collaborative, and adaptable, you can ensure your contributions stand the test of time just like Debian itself. Good luck on your journey toward becoming a Debian Maintainer!

22 January 2025

Jonathan McDowell: Christmas Movies

I watch a lot of films. Since completing the IMDB Top 250 back in 2016 I've kept an eye on it, and while I don't go out of my way to watch the films that newly appear in it I generally sit at over 240 watched. I should note I don't consider myself a film buff/critic, however. I watch things for enjoyment, and a lot of the time that's kicking back and relaxing and disengaging my brain. So I don't get into writing reviews, just high-level lists of things I've watched, sometimes with a few comments. With that in mind, let's talk about Christmas movies. Yes, I appreciate it's the end of January, but generally during December we watch things that have some sort of Christmas theme. New ones if we can find them, but also some of what we consider classics. This almost always starts with Scrooged after we've put up the tree. I don't always like Bill Murray (I couldn't watch The Life Aquatic with Steve Zissou and I think Lost in Translation is overrated), but he's in a bunch of things I really like, and Scrooged is one of those. I don't care where you sit on whether Die Hard is a Christmas movie or not; it's a good movie and therefore regularly gets a December watch. Die Hard 2 also fits into that category of sequels at least as good as the original, though Helen doesn't agree. We watched it anyway, and I finally made the connection between the William Sadler hotel scene and Michael Rooker's in Mallrats. It turns out I'm a Richard Curtis fan. Love Actually has not aged well; most times I watch it I find something new to question about it, and I always end up hating Alan Rickman for cheating on Emma Thompson, but I do like watching it. He had a new one, That Christmas, out this year, so we watched it as well. Another new-to-us film this year was Spirited. I generally like Ryan Reynolds, and Will Ferrell is good as long as he's not too overboard, so I had high hopes. I enjoyed it, but for some reason not as much as I'd expected, and I doubt it's getting added to the regular watch list. Larry doesn't generally like watching full-length films, but he (and we) enjoyed The Grinch, which I actually hadn't seen before. He's not as fussed on The Muppet Christmas Carol, but we watch it every year, generally on Christmas or Boxing Day. Favourite thing I saw on the Fediverse in December was: Do you know there's a book of The Muppet Christmas Carol, and they don't mention that there's muppets in it once? There are various other light-hearted Christmas films we regularly watch. This year included The Holiday (I have so many issues with even just the practicalities of a short-notice house swap), and Last Christmas (lots of George Michael music, what's not to love? Also it was only on this watch-through that we realised the lead character is the Mother of Dragons). We started, but could not finish, Carry-On. I saw it described somewhere as copaganda, and that feels accurate. It does not accurately reflect any of my interactions with TSA at airports, especially during busy periods. Things we didn't watch this year, but are regularly in the mix, include Fatman, Violent Night (looking forward to the sequel, hopefully this year), and Lethal Weapon. Klaus is kinda at the other end of the spectrum, but very touching, and we've watched it a couple of years now. Given what we seem to like, any suggestions for other films to add? It's nice to have enough in the mix that we get some variety every year.

21 January 2025

Dirk Eddelbuettel: ttdo 0.0.10 on CRAN: Small Extension

A new minor release of our ttdo package arrived on CRAN a few days ago. The ttdo package extends the excellent (and very minimal / zero depends) unit testing package tinytest by Mark van der Loo with the very clever and well-done diffobj package by Brodie Gaslam to give us test results with visual diffs (as shown in the screenshot below), which seemingly is so compelling an idea that it eventually got copied by another package which shall remain unnamed... And as of this release, we also support visual diffs as provided by tinysnapshot by Vincent Arel-Bundock. ttdo screenshot This release chiefly adds support for visual diff plots. As we extend tinytest for use in the R autograder we maintain and deploy within the lovely PrairieLearn framework, we need to encode the diffplot that has been produced by the autograder (commonly in a separate Docker process) to communicate it back to the controlling (also Docker) instance of PrairieLearn. We also did the usual bits of package maintenance, keeping badges, URLs, continuous integration and minor R coding standards current. As usual, the NEWS entry follows.

Changes in ttdo version 0.0.10 (2025-01-21)
  • Regular packaging updates to badges and continuous integration setup
  • Add support for a 'visual difference' taking advantage of a tinysnapshot feature returning a ttvd-typed result
  • Added (versioned) dependency on tinysnapshot (to also get the plot 'style' enhancement in the most recent version) and base64enc
  • Return ttdo-typed result if diffobj-printed result is returned
  • Ensure all package-documentation links in manual page have anchors to the package

Courtesy of my CRANberries, there is also a diffstat report for this release. For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

20 January 2025

Divine Attah-Ohiemi: Progress Report: First Half of My Outreachy Internship

Hello everyone! I'm excited to share a progress report on my Outreachy internship with the Debian community. As I reach the halfway point of this journey, I want to reflect on what I've accomplished so far and outline my modified goals for the second half of the internship. In truth, there wasn't a strict timeline for my project, migrating Debian webpage content to Hugo, because the original repository contained thousands of pages. The initial goal was to develop a proof of concept for: Thanks to our daily standups, where we brainstorm and revise contributions, we've made significant progress. The wiki documentation discussing the technical decisions taken to meet these goals is currently in progress here. During the first half of my internship, I have improved and refined my skills in several areas. I learned new Markdown syntaxes, studied and utilized Apache's mod_rewrite, and have halfway studied GNU Make, using Perl scripts for processing data for dynamic content. I recommend Managing Projects with GNU Make by Robert Mecklenburg; it's a great book for beginners! While I didn't get stuck on any particular goal, the most challenging aspect was adding Hugo aliases to help with Apache's multilingual content negotiation. The way the webwml repository generates multilingual content differs from debianhugo. For instance, in webwml, the structure looks like this: english/index.wml -> /index.en.html (with a symlink from index.html to index.en.html) and french/index.wml -> /index.fr.html. In contrast, debianhugo uses en/_index.md -> /index.html and fr/_index.md -> /fr/index.html. Apache's multilingual content negotiation checks for index.<user preferred lang code>.html in the current directory, which works well with webwml since all related translations are generated in the same directory. However, with debianhugo using subdirectories for languages other than English, we had to set up aliases in the frontmatter for every other language page to be generated. For example, in fr/_index.md, we added this to the front matter:
...
aliases:
  - /index.fr.html
...
This setup allows Hugo to generate multilingual HTML files in the initial home directory solely for the purpose of setting up a 301 redirect to the same page in the language subdirectory. However, if the client sets their preferred language to English, Apache content negotiation tries to find /index.en.html. If it doesn't find it, it defaults to any other language-suffixed file, which can lead to unexpected behavior. For example, if English is set as the preferred language, accessing the site may serve /index.fr.html, which then redirects to /fr/index.html. This was a significant challenge, and you can see a demo of this hosted here. If I were to start the project over, I would document every decision in the wiki as I make it, no matter how rough the documentation turns out. Waiting until the midpoint of the project to document was not a good idea. As I move into the second half of my internship, the goals we've set include improving our project wiki documentation and continuing the migration process while enhancing the user experience of complicated sections. I'm looking forward to making even more progress and sharing my journey with you all. Happy coding!

15 January 2025

Dirk Eddelbuettel: RcppFastFloat 0.0.5 on CRAN: New Upstream, Updates

A new minor release of RcppFastFloat just arrived on CRAN. The package wraps fast_float, another nice library by Daniel Lemire. For details, see the arXiv preprint or published paper showing that one can convert character representations of numbers into floating point at rates at or exceeding one gigabyte per second. This release updates the underlying fast_float library version to the current version 7.0.0, and updates a few packaging aspects.

Changes in version 0.0.5 (2025-01-15)
  • No longer set a compilation standard
  • Updates to continuous integration, badges, URLs, DESCRIPTION
  • Update to fast_float 7.0.0
  • Per CRAN Policy comment-out compiler 'diagnostic ignore' instances

Courtesy of my CRANberries, there is also a diffstat report for this release. For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.
