Search Results: "micha"

21 May 2024

Michael Ablassmeier: lvm thin send/recv

A few days ago I found this mail on the LKML that introduces support for userspace access to LVM thin-provisioned metadata snapshots. I didn't know this was possible. Using the thin provisioning tools you can then export the metadata information for your LVM snapshots to track changed regions between them. The workflow is pretty straightforward, yet not really documented:
# lvcreate -ay -Ky --snapshot -n full_backup thingroup/vol1
# dmsetup message /dev/mapper/thingroup-thinpool-tpool 0 reserve_metadata_snap
# lvcreate -ay -Ky --snapshot -n inc_backup thingroup/vol1
# thin_delta -m --snap1 $(lvs --noheadings -o thin_id thingroup/full_backup) --snap2 $(lvs --noheadings -o thin_id thingroup/inc_backup) > delta_dump
# dmsetup message /dev/mapper/thingroup-thinpool-tpool 0 release_metadata_snap
All of this has already been implemented by a nice utility called thin-send-recv, which, based on this functionality, allows you to (incrementally) send LVM snapshots to remote systems, much like zfs send and zfs recv.

13 April 2024

Paul Tagliamonte: Domo Arigato, Mr. debugfs

Years ago, at what I think I remember was DebConf 15, I hacked for a while on debhelper to write build-ids to Debian binary control files, so that the build-id (more specifically, the ELF note .note.gnu.build-id) wound up in the Debian apt archive metadata. I've always thought this was super cool, and seeing as how Michael Stapelberg blogged some great pointers around the ecosystem, including the fancy new debuginfod service, and the find-dbgsym-packages helper, which uses these same headers, I don't think I'm the only one. At work I've been using a lot of Rust; specifically, async Rust using tokio. To try and work on my style, and to dig deeper into the how and why of the decisions made in these frameworks, I've decided to hack up a project that I've wanted to do ever since 2015: write a debug filesystem. Let's get to it.

Back to the Future Time to admit something. I really love Plan 9. It's just so good. So many ideas from Plan 9 are just so prescient, and everything just feels right. Not just right like, feels good like, correct. The bit that I've always liked the most is 9p, the network protocol for serving a filesystem over a network. This leads to all sorts of fun programs, like the Plan 9 ftp client being a 9p server: you mount the ftp server and access files like any other files. It's kinda like if fuse were more fully a part of how the operating system worked, but fuse is all running client-side. With 9p there's a single client, and different servers that you can connect to, which may be backed by a hard drive, remote resources over something like SFTP, FTP, HTTP or even purely synthetic. The interesting (maybe sad?) part here is that 9p wound up outliving Plan 9 in terms of adoption: 9p is in all sorts of places folks don't usually expect. For instance, the Windows Subsystem for Linux uses the 9p protocol to share files between Windows and Linux. ChromeOS uses it to share files with Crostini, and qemu uses 9p (virtio-9p) to share files between guest and host. If you're noticing a pattern here, you'd be right; for some reason 9p is the go-to protocol to exchange files between hypervisor and guest. Why? I have no idea, except that it's maybe well designed, simple to implement, and a lot easier to validate the data being shared and validate security boundaries. Simplicity has its value. As a result, there's a lot of lingering 9p support kicking around. Turns out Linux can even handle mounting 9p filesystems out of the box. This means that I can deploy a filesystem to my LAN or my localhost by running a process on top of a computer that needs nothing special, and mount it over the network on an unmodified machine, unlike fuse, where you'd need client-specific software to run in order to mount the directory. For instance, let's mount a 9p filesystem running on my localhost machine, serving requests on 127.0.0.1:564 (tcp) that goes by the name mountpointname to /mnt.
$ mount -t 9p \
-o trans=tcp,port=564,version=9p2000.u,aname=mountpointname \
127.0.0.1 \
/mnt
Linux will mount away, and attach to the filesystem as the root user, and by default, attach to that mountpoint again for each local user that attempts to use it. Nifty, right? I think so. The server is able to keep track of per-user access and authorization along with the host OS.

WHEREIN I STYX WITH IT Since I wanted to push myself a bit more with Rust and tokio specifically, I opted to implement the whole stack myself, without third party libraries on the critical path where I could avoid it. The 9p protocol (sometimes called Styx, the original name for it) is incredibly simple. It's a series of client-to-server requests, which receive a server-to-client response. These are, respectively, T messages, which transmit a request to the server, which trigger an R message in response (Reply messages). These messages are a TLV payload with a very straightforward structure; so straightforward, in fact, that I was able to implement a working server off nothing more than a handful of man pages. Later on, after the basics worked, I found a more complete spec page that contains more information about the unix-specific variant that I opted to use (9P2000.u rather than 9P2000) due to the level of Linux-specific support for the 9P2000.u variant over the 9P2000 protocol.
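As a sketch of just the framing (this is not arigato's actual parser, only an illustration of the wire format): every 9p message starts with a four-byte little-endian size that counts itself, a one-byte message type (T types are even, and the matching R type is one higher), and a two-byte tag the client uses to pair replies with requests. In Rust that boils down to something like:
struct MessageHeader {
    size: u32, // total length of the message in bytes, including these 7 header bytes
    ty: u8,    // message type; T-messages are even, the matching R-message is ty + 1
    tag: u16,  // chosen by the client so it can pair an R-message with its T-message
}

fn parse_header(buf: &[u8; 7]) -> MessageHeader {
    MessageHeader {
        size: u32::from_le_bytes([buf[0], buf[1], buf[2], buf[3]]),
        ty: buf[4],
        tag: u16::from_le_bytes([buf[5], buf[6]]),
    }
}

fn main() {
    // The first 7 bytes of a Tversion message (type 100) carrying the NOTAG tag (0xffff);
    // the full message is 19 bytes once msize and the "9P2000" version string follow.
    let raw = [19u8, 0, 0, 0, 100, 0xff, 0xff];
    let hdr = parse_header(&raw);
    println!("size={} type={} tag={:#x}", hdr.size, hdr.ty, hdr.tag);
}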

MR ROBOTO The backend stack over at zoo is Rust and tokio running i/o for an HTTP and WebRTC server. I figured I'd pick something fairly similar to write my filesystem with, since 9P can be implemented on basically anything with I/O. That means tokio tcp server bits, which construct and use a 9p server, which has an idiomatic Rusty API that partially abstracts the raw R and T messages, but not so much as to cause issues with hiding implementation possibilities. At each abstraction level, there's an escape hatch allowing someone to implement any of the layers if required. I called this framework arigato, which can be found over on docs.rs and crates.io.
/// Simplified version of the arigato File trait; this isn't actually
/// the same trait; there's some small cosmetic differences. The
/// actual trait can be found at:
///
/// https://docs.rs/arigato/latest/arigato/server/trait.File.html
trait File {
    /// OpenFile is the type returned by this File via an Open call.
    type OpenFile: OpenFile;

    /// Return the 9p Qid for this file. A file is the same if the Qid is
    /// the same. A Qid contains information about the mode of the file,
    /// version of the file, and a unique 64 bit identifier.
    fn qid(&self) -> Qid;

    /// Construct the 9p Stat struct with metadata about a file.
    async fn stat(&self) -> FileResult<Stat>;

    /// Attempt to update the file metadata.
    async fn wstat(&mut self, s: &Stat) -> FileResult<()>;

    /// Traverse the filesystem tree.
    async fn walk(&self, path: &[&str]) -> FileResult<(Option<Self>, Vec<Self>)>;

    /// Request that a file's reference be removed from the file tree.
    async fn unlink(&mut self) -> FileResult<()>;

    /// Create a file at a specific location in the file tree.
    async fn create(
        &mut self,
        name: &str,
        perm: u16,
        ty: FileType,
        mode: OpenMode,
        extension: &str,
    ) -> FileResult<Self>;

    /// Open the File, returning a handle to the open file, which handles
    /// file i/o. This is split into a second type since it is genuinely
    /// unrelated -- and the fact that a file is Open or Closed can be
    /// handled by the arigato server for us.
    async fn open(&mut self, mode: OpenMode) -> FileResult<Self::OpenFile>;
}
/// Simplified version of the arigato OpenFile trait; this isn't actually
/// the same trait; there's some small cosmetic differences. The
/// actual trait can be found at:
///
/// https://docs.rs/arigato/latest/arigato/server/trait.OpenFile.html
trait OpenFile {
    /// iounit to report for this file. The iounit reported is used for Read
    /// or Write operations to signal, if non-zero, the maximum size that is
    /// guaranteed to be transferred atomically.
    fn iounit(&self) -> u32;

    /// Read some number of bytes up to buf.len() from the provided
    /// offset of the underlying file. The number of bytes read is
    /// returned.
    async fn read_at(
        &mut self,
        buf: &mut [u8],
        offset: u64,
    ) -> FileResult<u32>;

    /// Write some number of bytes up to buf.len() from the provided
    /// offset of the underlying file. The number of bytes written
    /// is returned.
    fn write_at(
        &mut self,
        buf: &mut [u8],
        offset: u64,
    ) -> FileResult<u32>;
}

Thanks, decade-ago paultag! Let's do it! Let's use arigato to implement a 9p filesystem we'll call debugfs that will serve all the debug files shipped according to the Packages metadata from the apt archive. We'll fetch the Packages file and construct a filesystem based on the reported Build-Id entries. For those who don't know much about how an apt repo works, here's the 2-second crash course on what we're doing. The first step is to fetch the Packages file, which is specific to a binary architecture (such as amd64, arm64 or riscv64). That architecture is specific to a component (such as main, contrib or non-free). That component is specific to a suite, such as stable, unstable or any of its aliases (bullseye, bookworm, etc). Let's take a look at the Packages.xz file for the unstable-debug suite, main component, for all amd64 binaries.
$ curl \
https://deb.debian.org/debian-debug/dists/unstable-debug/main/binary-amd64/Packages.xz \
  | unxz
This will return the Debian-style rfc2822-like headers, which are an export of the metadata contained inside each .deb file, which apt (or other tools that can use the apt repo format) uses to fetch information about debs. Let's take a look at the debug headers for the netlabel-tools package in unstable, which is a package named netlabel-tools-dbgsym in unstable-debug.
Package: netlabel-tools-dbgsym
Source: netlabel-tools (0.30.0-1)
Version: 0.30.0-1+b1
Installed-Size: 79
Maintainer: Paul Tagliamonte <paultag@debian.org>
Architecture: amd64
Depends: netlabel-tools (= 0.30.0-1+b1)
Description: debug symbols for netlabel-tools
Auto-Built-Package: debug-symbols
Build-Ids: e59f81f6573dadd5d95a6e4474d9388ab2777e2a
Description-md5: a0e587a0cf730c88a4010f78562e6db7
Section: debug
Priority: optional
Filename: pool/main/n/netlabel-tools/netlabel-tools-dbgsym_0.30.0-1+b1_amd64.deb
Size: 62776
SHA256: 0e9bdb087617f0350995a84fb9aa84541bc4df45c6cd717f2157aa83711d0c60
So here, we can parse the package headers in the Packages.xz file and store, for each Build-Id, the Filename from which we can fetch the .deb. Each .deb contains a number of files, but we're only really interested in the files inside the .deb located at or under /usr/lib/debug/.build-id/, which you can find in debugfs under rfc822.rs. It's crude, and very single-purpose, but I'm feeling a bit lazy.
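As a rough idea of what that index could look like (this is not the actual rfc822.rs; it is a sketch that ignores folded continuation lines and other corner cases), we can split the decompressed Packages data into blank-line-separated stanzas and map every Build-Id to the stanza's Filename:
use std::collections::HashMap;

// Sketch: build a Build-Id -> Filename map from a decompressed Packages file.
fn build_id_index(packages: &str) -> HashMap<String, String> {
    let mut index = HashMap::new();
    for stanza in packages.split("\n\n") {
        let mut build_ids: Vec<String> = Vec::new();
        let mut filename: Option<String> = None;
        for line in stanza.lines() {
            if let Some(rest) = line.strip_prefix("Build-Ids:") {
                // A dbgsym package may carry several space-separated Build-Ids.
                build_ids = rest.split_whitespace().map(|s| s.to_string()).collect();
            } else if let Some(rest) = line.strip_prefix("Filename:") {
                filename = Some(rest.trim().to_string());
            }
        }
        if let Some(filename) = filename {
            for id in build_ids {
                index.insert(id, filename.clone());
            }
        }
    }
    index
}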

Who needs dpkg?! For folks who haven't seen it yet, a .deb file is a special type of ar file that contains (usually) three files inside: debian-binary, control.tar.xz and data.tar.xz. The core of an ar file is a fixed-size (60 byte) entry header, followed by the specified number of bytes of data.
[8 byte .ar file magic]
[60 byte entry header]
[N bytes of data]
[60 byte entry header]
[N bytes of data]
[60 byte entry header]
[N bytes of data]
...
First up was to implement a basic ar parser in ar.rs. Before we get into using it to parse a deb, as a quick diversion, let's break apart a .deb file by hand, something that is a bit of a rite of passage (or at least it used to be? I'm getting old) during the Debian nm (new member) process, to take a look at where exactly the .debug file lives inside the .deb file; a sketch of the 60-byte entry header parsing follows the transcript below.
$ ar x netlabel-tools-dbgsym_0.30.0-1+b1_amd64.deb
$ ls
control.tar.xz debian-binary
data.tar.xz netlabel-tools-dbgsym_0.30.0-1+b1_amd64.deb
$ tar --list -f data.tar.xz | grep '.debug$'
./usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
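Back to the ar parser. For illustration (this is not the ar.rs from the post, just a sketch of the layout), each 60-byte entry header is plain ASCII: 16 bytes of space-padded name, 12 of mtime, 6 each of uid and gid, 8 of octal mode, 10 of decimal size, and a two-byte terminator, with the member data that follows padded to an even offset:
struct ArEntry {
    name: String,
    size: u64, // size in bytes of the member data that follows this header
}

fn parse_ar_entry(header: &[u8; 60]) -> Result<ArEntry, String> {
    // The header always ends with the two magic bytes 0x60 0x0a ("`\n").
    if header[58] != 0x60 || header[59] != 0x0a {
        return Err("bad ar entry terminator".into());
    }
    let name = String::from_utf8_lossy(&header[0..16]).trim_end().to_string();
    let size = String::from_utf8_lossy(&header[48..58])
        .trim()
        .parse::<u64>()
        .map_err(|e| e.to_string())?;
    // The next entry header starts at the member data offset plus size,
    // rounded up to an even byte boundary.
    Ok(ArEntry { name, size })
}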
Since we know quite a bit about the structure of a .deb file, and I had to implement support from scratch anyway, I opted to implement a (very!) basic debfile parser using HTTP Range requests. HTTP Range requests, if supported by the server (denoted by an accept-ranges: bytes HTTP header in response to an HTTP HEAD request to that file), mean that we can add a header such as range: bytes=8-68 to specifically request that the returned GET body be the byte range provided (in the above case, the bytes starting from byte offset 8 until byte offset 68). This means we can fetch just the ar file entries from the .deb file until we get to the file inside the .deb we are interested in (in our case, the data.tar.xz file), at which point we can request the body of that file with a final range request. I wound up writing a struct to handle a read_at-style API surface in hrange.rs, which we can pair with ar.rs above and start to find our data in the .deb remotely without downloading and unpacking the .deb at all. After we have the body of the data.tar.xz coming back through the HTTP response, we get to pipe it through an xz decompressor (this kinda sucked in Rust, since a tokio AsyncRead is not the same as an http Body response, is not the same as std::io::Read, is not the same as an async (or sync) Iterator, is not the same as what the xz2 crate expects; leading me to read blocks of data to a buffer and stuff them through the decoder by looping over the buffer for each lzma2 packet), and a tarfile parser (similarly troublesome). From there we get to iterate over all entries in the tarfile, stopping when we reach our file of interest. Since we can't seek, but gdb needs to, we'll pull it out of the stream into an in-memory Cursor<Vec<u8>> and pass a handle to it back to the user. From here on out it's a matter of gluing together a File-traited struct in debugfs, and serving the filesystem over TCP using arigato. Done deal!
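As a hedged sketch of the range-request part (this is not the hrange.rs from the post; it assumes the reqwest and tokio crates, and that the mirror serves ranges for the pool path from the Packages stanza above), fetching just the 60-byte entry header that follows the 8-byte ar magic looks roughly like this:
use reqwest::header::RANGE;

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    // Hypothetical URL assembled from the Filename field in the Packages stanza above.
    let url = "https://deb.debian.org/debian-debug/pool/main/n/netlabel-tools/netlabel-tools-dbgsym_0.30.0-1+b1_amd64.deb";
    let client = reqwest::Client::new();
    // Request only the first ar entry header, i.e. the bytes right after the
    // 8-byte "!<arch>\n" magic at the start of the .deb.
    let body = client
        .get(url)
        .header(RANGE, "bytes=8-67")
        .send()
        .await?
        .bytes()
        .await?;
    println!("fetched {} bytes of ar entry header", body.len());
    Ok(())
}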

A quick diversion about compression I was originally hoping to avoid transferring the whole tar file over the network (and therefore also reading the whole debug file into ram, which objectively sucks), but quickly hit issues with figuring out a way around seeking around an xz file. What's interesting is that xz has a great primitive to solve this specific problem (specifically, use a block size that allows you to seek to the block closest to, and just before, your desired seek position, only discarding at most block size - 1 bytes), but data.tar.xz files generated by dpkg appear to have a single mega-huge block for the whole file. I don't know why I would have expected any different, in retrospect. That means that this now devolves into the base case of "How do I seek around an lzma2 compressed data stream?", which is a lot more complex of a question. Thankfully, the notoriously brilliant tianon was nice enough to introduce me to Jon Johnson, who did something super similar: he adapted a technique to seek inside a compressed gzip file, which lets his service oci.dag.dev seek through Docker container images super fast, based on some prior work such as soci-snapshotter, gztool, and zran.c. He also pulled this party trick off for apk-based distros over at apk.dag.dev, which seems apropos. Jon was nice enough to publish a lot of his work on this specifically in a central place under the name targz on his GitHub, which has been a ton of fun to read through. The gist is that, by dumping the decompressor's state (window of previous bytes, in-memory data derived from the last N-1 bytes) at specific checkpoints along with the compressed data stream offset in bytes and decompressed offset in bytes, one can seek to that checkpoint in the compressed stream and pick up where you left off, creating a similar block mechanism against the wishes of gzip. It means you'd need to do an O(n) run over the file, but every request after that will be sped up according to the number of checkpoints you've taken. Given the complexity of xz and lzma2, I don't think this is possible for me at the moment, especially given most of the files I'll be requesting will not be loaded from again, and especially when I can just cache the debug header by Build-Id. I want to implement this (because I'm generally curious and Jon has a way of getting someone excited about compression schemes, which is not a sentence I thought I'd ever say out loud), but for now I'm going to move on without this optimization. Such a shame, since it kills a lot of the work that went into seeking around the .deb file in the first place, given the debian-binary and control.tar.gz members are so small.
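To make that checkpointing idea a little more concrete, here is a rough sketch of the bookkeeping it implies (nothing from targz or debugfs, just an illustration of the data involved): each checkpoint records both stream offsets plus the decoder's window, and a seek resumes decompression from the last checkpoint at or before the target offset:
// Hypothetical bookkeeping for checkpoint-based seeking in a compressed stream.
struct Checkpoint {
    compressed_offset: u64,   // byte offset into the compressed stream
    uncompressed_offset: u64, // byte offset into the decompressed data
    window: Vec<u8>,          // dictionary / history the decoder needs to resume
}

/// Find the last checkpoint at or before `target`, the desired decompressed
/// offset; checkpoints are assumed to be sorted by uncompressed_offset.
fn nearest_checkpoint(index: &[Checkpoint], target: u64) -> Option<&Checkpoint> {
    index
        .iter()
        .take_while(|c| c.uncompressed_offset <= target)
        .last()
}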

The Good First, the good news, right? It works! That's pretty cool. I'm positive my younger self would be amused and happy to see this working; as is current-day paultag. Let's take debugfs out for a spin! First, we need to mount the filesystem. It even works on an entirely unmodified, stock Debian box on my LAN, which is huge. Let's take it for a spin:
$ mount \
-t 9p \
-o trans=tcp,version=9p2000.u,aname=unstable-debug \
192.168.0.2 \
/usr/lib/debug/.build-id/
And, let's prove to ourselves that this actually mounted before we go trying to use it:
$ mount | grep build-id
192.168.0.2 on /usr/lib/debug/.build-id type 9p (rw,relatime,aname=unstable-debug,access=user,trans=tcp,version=9p2000.u,port=564)
Slick. We've got an open connection to the server, where our host will keep a connection alive as root, attached to the filesystem provided in aname. Let's take a look at it.
$ ls /usr/lib/debug/.build-id/
00 0d 1a 27 34 41 4e 5b 68 75 82 8E 9b a8 b5 c2 CE db e7 f3
01 0e 1b 28 35 42 4f 5c 69 76 83 8f 9c a9 b6 c3 cf dc E7 f4
02 0f 1c 29 36 43 50 5d 6a 77 84 90 9d aa b7 c4 d0 dd e8 f5
03 10 1d 2a 37 44 51 5e 6b 78 85 91 9e ab b8 c5 d1 de e9 f6
04 11 1e 2b 38 45 52 5f 6c 79 86 92 9f ac b9 c6 d2 df ea f7
05 12 1f 2c 39 46 53 60 6d 7a 87 93 a0 ad ba c7 d3 e0 eb f8
06 13 20 2d 3a 47 54 61 6e 7b 88 94 a1 ae bb c8 d4 e1 ec f9
07 14 21 2e 3b 48 55 62 6f 7c 89 95 a2 af bc c9 d5 e2 ed fa
08 15 22 2f 3c 49 56 63 70 7d 8a 96 a3 b0 bd ca d6 e3 ee fb
09 16 23 30 3d 4a 57 64 71 7e 8b 97 a4 b1 be cb d7 e4 ef fc
0a 17 24 31 3e 4b 58 65 72 7f 8c 98 a5 b2 bf cc d8 E4 f0 fd
0b 18 25 32 3f 4c 59 66 73 80 8d 99 a6 b3 c0 cd d9 e5 f1 fe
0c 19 26 33 40 4d 5a 67 74 81 8e 9a a7 b4 c1 ce da e6 f2 ff
Outstanding. Let's try using gdb to debug a binary that was provided by the Debian archive, and see if it'll load the ELF by build-id from the right .deb in the unstable-debug suite:
$ gdb -q /usr/sbin/netlabelctl
Reading symbols from /usr/sbin/netlabelctl...
Reading symbols from /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug...
(gdb)
Yes! Yes it will!
$ file /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
/usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, interpreter *empty*, BuildID[sha1]=e59f81f6573dadd5d95a6e4474d9388ab2777e2a, for GNU/Linux 3.2.0, with debug_info, not stripped

The Bad Linux's support for 9p is mainline, which is great, but it's not robust. Network issues or server restarts will wedge the mountpoint (Linux can't reconnect when the tcp connection breaks), and things that work fine on local filesystems get translated in a way that causes a lot of network chatter; for instance, just due to the way the syscalls are translated, doing an ls will result in a stat call for each file in the directory, even though Linux had just gotten a stat entry for every file while it was resolving directory names. On top of that, Linux will serialize all I/O with the server, so there are no concurrent requests for file information, writes, or reads pending at the same time to the server; and read and write throughput will degrade as latency increases due to increasing round-trip time, even though there are offsets included in the read and write calls. It works well enough, but is frustrating to run up against, since there's not a lot you can do server-side to help with this beyond implementing the 9P2000.L variant (which maybe is worth it).

The Ugly Unfortunately, we don't know the file size(s) until we've actually opened the underlying tar file and found the correct member, so for most files we don't know the real size to report when getting a stat. We can't parse the tarfiles for every stat call, since that'd make ls even slower (bummer). The only hiccup is that when I report a filesize of zero, gdb throws a bit of a fit; let's try with a size of 0 to start:
$ ls -lah /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
-r--r--r-- 1 root root 0 Dec 31 1969 /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
$ gdb -q /usr/sbin/netlabelctl
Reading symbols from /usr/sbin/netlabelctl...
Reading symbols from /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug...
warning: Discarding section .note.gnu.build-id which has a section size (24) larger than the file size [in module /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug]
[...]
This obviously won't work, since gdb will throw away all our hard work because of stat's output, and neither will loading the real size of the underlying file. That only leaves us with hardcoding a file size and hoping nothing else breaks significantly as a result. Let's try it again:
$ ls -lah /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
-r--r--r-- 1 root root 954M Dec 31 1969 /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
$ gdb -q /usr/sbin/netlabelctl
Reading symbols from /usr/sbin/netlabelctl...
Reading symbols from /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug...
(gdb)
Much better. I mean, terrible but better. Better for now, anyway.

Kilroy was here Do I think this is a particularly good idea? I mean; kinda. I'm probably going to make some fun 9p arigato-based filesystems for use around my LAN, but I don't think I'll be moving to use debugfs until I can figure out how to ensure the connection is more resilient to changing networks and server restarts, and until the i/o performance issues are fixed. I think it was a useful exercise and is a pretty great hack, but I don't think this'll be shipping anywhere anytime soon. Along with me publishing this post, I've pushed up all my repos; so you should be able to play along at home! There's a lot more work to be done on arigato; but it does handshake and successfully export a working 9P2000.u filesystem. Check it out on my GitHub at arigato, debugfs and also on crates.io and docs.rs. At least I can say I was here and I got it working after all these years.

23 March 2024

Dirk Eddelbuettel: littler 0.3.20 on CRAN: Moar Features!

max-heap image The twenty-first release of littler as a CRAN package landed on CRAN just now, following in the now eighteen-year history (!!) as a package started by Jeff in 2006, and joined by me a few weeks later. littler is the first command-line interface for R as it predates Rscript. It allows for piping as well as for shebang scripting via #!, uses command-line arguments more consistently and still starts faster. It also always loaded the methods package, which Rscript only began to do in recent years. littler lives on Linux and Unix, has its difficulties on macOS due to yet-another-braindeadedness there (who ever thought case-insensitive filesystems as a default were a good idea?) and simply does not exist on Windows (yet the build system could be extended; see RInside for an existence proof, and volunteers are welcome!). See the FAQ vignette on how to add it to your PATH. A few examples are highlighted at the GitHub repo, as well as in the examples vignette. This release contains another fair number of small changes and improvements to some of the scripts I use daily to build or test packages, adds a new front-end ciw.r for the recently-released ciw package offering a 'CRAN Incoming Watcher', a new helper installDeps2.r (extending installDeps.r), a new doi-to-bib converter, allows a different temporary directory setup I find helpful, deals with one corner-case deployment use, and more. The full change description follows.

Changes in littler version 0.3.20 (2024-03-23)
  • Changes in examples scripts
    • New (dependency-free) helper installDeps2.r to install dependencies
    • Scripts rcc.r, tt.r, tttf.r, tttlr.r use env argument -S to set -t to r
    • tt.r can now fill in inst/tinytest if it is present
    • New script ciw.r wrapping new package ciw
    • tttf.r can now use devtools and its loadall
    • New script doi2bib.r to call the DOI converter REST service (following a skeet by Richard McElreath)
  • Changes in package
    • The CI setup uses checkout@v4 and the r-ci-setup action
    • The Suggests: is a little tighter as we do not list all packages optionally used in the examples (as R does not check for it either)
    • The package load message can account for the rare build of R under a different architecture (Berwin Turlach in #117 closing #116)
    • In non-vanilla mode, the temporary directory initialization is re-run, allowing for a non-standard temp dir via config settings

My CRANberries service provides a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page, and also on the package docs website. The code is available via the GitHub repo, from tarballs and now of course also from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian as well as (in a day or two) Ubuntu binaries at CRAN thanks to the tireless Michael Rutter. Comments and suggestions are welcome at the GitHub repo. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

7 February 2024

Reproducible Builds: Reproducible Builds in January 2024

Welcome to the January 2024 report from the Reproducible Builds project. In these reports we outline the most important things that we have been up to over the past month. If you are interested in contributing to the project, please visit our Contribute page on our website.

How we executed a critical supply chain attack on PyTorch John Stawinski and Adnan Khan published a lengthy blog post detailing how they executed a supply-chain attack against PyTorch, a popular machine learning platform used by titans like Google, Meta, Boeing, and Lockheed Martin:
Our exploit path resulted in the ability to upload malicious PyTorch releases to GitHub, upload releases to [Amazon Web Services], potentially add code to the main repository branch, backdoor PyTorch dependencies the list goes on. In short, it was bad. Quite bad.
The attack pivoted on PyTorch's use of self-hosted runners, as well as submitting a pull request to address a trivial typo in the project's README file, to gain access to repository secrets and API keys that could subsequently be used for malicious purposes.

New Arch Linux forensic filesystem tool On our mailing list this month, long-time Reproducible Builds developer kpcyrd announced a new tool designed to forensically analyse Arch Linux filesystem images. Called archlinux-userland-fs-cmp, the tool is supposed to be used from a rescue image (any Linux) with an Arch install mounted to, [for example], /mnt. Crucially, however, at no point is any file from the mounted filesystem eval'd or otherwise executed. Parsers are written in a memory-safe language. More information about the tool can be found in their announcement message, as well as on the tool's homepage. A GIF of the tool in action is also available.

Issues with our SOURCE_DATE_EPOCH code? Chris Lamb started a thread on our mailing list summarising some potential problems with the source code snippet the Reproducible Builds project has been using to parse the SOURCE_DATE_EPOCH environment variable:
I'm not 100% sure who originally wrote this code, but it was probably sometime in the ~2015 era, and it must be in a huge number of codebases by now. Anyway, Alejandro Colomar was working on the shadow security tool and pinged me regarding some potential issues with the code. You can see this conversation here.
Chris ends his message with a request that those with intimate or low-level knowledge of time_t, C types, overflows and the various parsing libraries in the C standard library (etc.) contribute further info.

Distribution updates In Debian this month, Roland Clobus posted another detailed update of the status of reproducible ISO images on our mailing list. In particular, Roland helpfully summarised that all major desktops build reproducibly with bullseye, bookworm, trixie and sid provided they are built for a second time within the same DAK run (i.e. [within] 6 hours). Additionally, 7 of the 8 bookworm images from the official download link build reproducibly at any later time. In addition to this, three reviews of Debian packages were added, 17 were updated and 15 were removed this month, adding to our knowledge about identified issues. Elsewhere, Bernhard posted another monthly update for his work elsewhere in openSUSE.

Community updates A number of improvements were made to our website, including Bernhard M. Wiedemann fixing a number of typos of the term nondeterministic [ ] and Jan Zerebecki adding a substantial and highly welcome section to our page about SOURCE_DATE_EPOCH to document its interaction with distribution rebuilds [ ].
diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made a number of changes, such as uploading versions 254 and 255 to Debian, but focused on triaging and/or merging code from other contributors. This included adding support for comparing eXtensible ARchive (.XAR/.PKG) files courtesy of Seth Michael Larson [ ][ ], as well as considerable work from Vekhir in order to fix compatibility between various and subtly incompatible versions of the progressbar libraries in Python [ ][ ][ ][ ]. Thanks!

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework (available at tests.reproducible-builds.org) in order to check packages and other artifacts for reproducibility. In January, a number of changes were made by Holger Levsen:
  • Debian-related changes:
    • Reduce the number of arm64 architecture workers from 24 to 16. [ ]
    • Use diffoscope from the Debian release being tested again. [ ]
    • Improve the handling when killing unwanted processes [ ][ ][ ] and be more verbose about it, too [ ].
    • Don't mark a job as failed if a process marked as to-be-killed is already gone. [ ]
    • Display the architecture of builds that have been running for more than 48 hours. [ ]
    • Reboot arm64 nodes when they hit an OOM (out of memory) state. [ ]
  • Package rescheduling changes:
    • Reduce IRC notifications to 1 when rescheduling due to package status changes. [ ]
    • Correctly set SUDO_USER when rescheduling packages. [ ]
    • Automatically reschedule packages regressing to FTBFS (build failure) or FTBR (build success, but unreproducible). [ ]
  • OpenWrt-related changes:
    • Install the python3-dev and python3-pyelftools packages as they are now needed for the sunxi target. [ ][ ]
    • Also install the libpam0g-dev which is needed by some OpenWrt hardware targets. [ ]
  • Misc:
    • As it's January, set the real_year variable to 2024 [ ] and bump various copyright years as well [ ].
    • Fix a large (!) number of spelling mistakes in various scripts. [ ][ ][ ]
    • Prevent Squid and Systemd processes from being killed by the kernel's OOM killer. [ ]
    • Install the iptables tool everywhere, else our custom rc.local script fails. [ ]
    • Cleanup the /srv/workspace/pbuilder directory on boot. [ ]
    • Automatically restart Squid if it fails. [ ]
    • Limit the execution of chroot-installation jobs to a maximum of 4 concurrent runs. [ ][ ]
Significant amounts of node maintenance were performed by Holger Levsen (eg. [ ][ ][ ][ ][ ][ ][ ] etc.) and Vagrant Cascadian (eg. [ ][ ][ ][ ][ ][ ][ ][ ]). Indeed, Vagrant Cascadian handled an extended power outage for the network running the Debian armhf architecture test infrastructure. This provided the incentive to replace the UPS batteries and consolidate infrastructure to reduce future UPS load. [ ] Elsewhere in our infrastructure, however, Holger Levsen also adjusted the email configuration for @reproducible-builds.org to deal with a new SMTP email attack. [ ]

Upstream patches The Reproducible Builds project tries to detect, dissect and fix as many (currently) unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including: Separate to this, Vagrant Cascadian followed up with the relevant maintainers when reproducibility fixes were not included in newly-uploaded versions of the mm-common package in Debian; this was quickly fixed, however. [ ]

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

29 January 2024

Michael Ablassmeier: qmpbackup 0.28

Over the last weekend I had some spare time to improve qmpbackup a little more, the new version: and some minor code reworks. Hope it's useful for someone.

22 January 2024

Dirk Eddelbuettel: x13binary 1.1.60 on CRAN: Upstream Update, Updated Build

The x13binary team is thrilled to share the availability of Release 1.1.60-1 of the x13binary package providing the X-13ARIMA-SEATS program by the US Census Bureau, which arrived on CRAN earlier today. This release brings the package up to speed with the most current release by the Census Bureau. More importantly, we finally made good on an old promise to ourselves and now install the binary by compiling from its Fortran sources! No more pre-made binaries. This required some work by Kirill, Michael, and Jeroen to finalize matters because, as we all know, the CRAN build processes and tool chains can be a little byzantine in their details. Use on platforms not covered by binaries from CRAN (or r-universe) should just work too, as the demands on the (Fortran) compiler are fairly standard. All in all, the build is fairly lightweight and quick even when rebuilding from source. Courtesy of my CRANberries, there is also a diffstat report for this release showing changes to the previous release. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

19 January 2024

Reproducible Builds (diffoscope): diffoscope 254 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 254. This version includes the following changes:
[ Chris Lamb ]
* Reflow some code according to black.
[ Seth Michael Larson ]
* Add support for comparing the 'eXtensible ARchive' (.XAR/.PKG) file format.
[ Vagrant Cascadian ]
* Add external tool on GNU Guix for 7z.
You can find out more by visiting the project homepage.

12 January 2024

Dirk Eddelbuettel: digest 0.6.34 on CRAN: Maintenance

Release 0.6.34 of the digest package arrived at CRAN today and has also been uploaded to Debian already. digest creates hash digests of arbitrary R objects. It can use a number of different hashing algorithms (md5, sha-1, sha-256, sha-512, crc32, xxhash32, xxhash64, murmur32, spookyhash, blake3, and crc32c), and enables easy comparison of (potentially large and nested) R language objects as it relies on the native serialization in R. It is a mature and widely-used package (with 63.8 million downloads just on the partial cloud mirrors of CRAN which keep logs) as many tasks may involve caching of objects for which it provides convenient general-purpose hash key generation to quickly identify the various objects. (Oh, and we also just passed the 20th anniversary of the initial CRAN upload. Time flies, as they say.) This release contains small (build-focussed) enhancements contributed by Michael Chirico, and another set of fixes for printf format warnings, this time on Windows. My CRANberries provides a summary of changes to the previous version. For questions or comments use the issue tracker off the GitHub repo. If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

10 January 2024

Dirk Eddelbuettel: Rcpp 1.0.12 on CRAN: New Maintenance / Update Release

rcpp logo The Rcpp Core Team is once again thrilled to announce a new release 1.0.12 of the Rcpp package. It arrived on CRAN early today, and has since been uploaded to Debian as well. Windows and macOS builds should appear at CRAN in the next few days, as will builds in different Linux distributions, and of course r2u should catch up tomorrow. The release was uploaded yesterday, and its reverse dependencies were run overnight. Rcpp always gets flagged no matter what because of the grandfathered .Call(symbol), but we had not a single change for the worse among over 2700 reverse dependencies! This release continues with the six-months January-July cycle started with release 1.0.5 in July 2020. As a reminder, we do of course make interim snapshot dev or rc releases available via the Rcpp drat repo and strongly encourage their use and testing; I run my systems with these versions, which tend to work just as well, and are also fully tested against all reverse dependencies. Rcpp has long established itself as the most popular way of enhancing R with C or C++ code. Right now, 2791 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with 254 in BioConductor. On CRAN, 13.8% of all packages depend (directly) on Rcpp, and 59.9% of all compiled packages do. From the cloud mirror of CRAN (which is but a subset of all CRAN downloads), Rcpp has been downloaded 78.1 million times. The two published papers (also included in the package as preprint vignettes) have, respectively, 1766 (JSS, 2011) and 292 (TAS, 2018) citations, while the book (Springer useR!, 2013) has another 617. This release is incremental as usual, generally preserving existing capabilities faithfully while smoothing out corners and/or extending slightly, sometimes in response to changing and tightened demands from CRAN or R standards. The full list below details all changes, their respective PRs and, if applicable, issue tickets. Big thanks from all of us to all contributors!

Changes in Rcpp release version 1.0.12 (2024-01-08)
  • Changes in Rcpp API:
    • Missing header includes as spotted by some recent tools were added in two places (Michael Chirico in #1272 closing #1271).
    • Casts to avoid integer overflow in matrix row/col selections have been added (Aaron Lun #1281).
    • Three print format corrections uncovered by R-devel were applied with thanks to Tomas Kalibera (Dirk in #1285).
    • Correct a print format correction in the RcppExports glue code (Dirk in #1288 fixing #1287).
    • The upcoming OBJSXP addition to R 4.4.0 is supported in the type2name mapper (Dirk and Iñaki in #1293).
  • Changes in Rcpp Attributes:
    • Generated interface code from base R that fails under LTO is now corrected (Iñaki in #1274 fixing a StackOverflow issue).
  • Changes in Rcpp Documentation:
    • The caption for third figure in the introductory vignette has been corrected (Dirk in #1277 fixing #1276).
    • A small formatting issue was corrected in an Rd file as noticed by R-devel (Dirk in #1282).
    • The Rcpp FAQ vignette has been updated (Dirk in #1284).
    • The Rcpp.bib file has been refreshed to current package versions.
  • Changes in Rcpp Deployment:
    • The RcppExports file for an included test package has been updated (Dirk in #1289).

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments etc. should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues). If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

4 January 2024

Michael Ablassmeier: Migrating a system to Hetzner cloud using REAR and kexec

I needed to migrate an existing system to a Hetzner cloud VPS. While it is possible to attach KVM consoles and custom ISO images to dedicated servers, I didn't find any way to do so with regular cloud instances. For system migrations I usually use REAR, which has never failed me (and has also saved my ass during recovery multiple times). It's an awesome utility! It's possible to do this using the Hetzner recovery console too, but using REAR is very convenient here, because it handles things like re-creating the partition layout and network settings automatically! The steps are:

Example To create a rescue image on the source system:
apt install rear
echo OUTPUT=ISO > /etc/rear/local.conf
rear mkrescue -v
[..]
Wrote ISO image: /var/lib/rear/output/rear-debian12.iso (185M)
My source system had a 128 GB disk, so I registered an instance on Hetzner cloud with a greater disk size to make things easier: image Now copy the ISO image to the newly created instance and extract its data:
 apt install kexec-tools
 scp rear-debian12.iso root@49.13.193.226:/tmp/
 modprobe loop
 mount -o loop rear-debian12.iso /mnt/
 cp /mnt/isolinux/kernel /tmp/
 cp /mnt/isolinux/initrd.cgz /tmp/
Install kexec if not installed already:
 apt install kexec-tools
Note down the current gateway configuration; this is required later on to make the REAR recovery console reachable via SSH:
root@testme:~# ip route
default via 172.31.1.1 dev eth0
172.31.1.1 dev eth0 scope link
Reboot the running VPS instance into the REAR recovery image using roughly the same kernel cmdline:
root@testme:~# cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-6.1.0-13-amd64 root=UUID=5174a81e-5897-47ca-8fe4-9cd19dc678c4 ro consoleblank=0 systemd.show_status=true console=tty1 console=ttyS0
kexec --initrd /tmp/initrd.cgz --command-line="consoleblank=0 systemd.show_status=true console=tty1 console=ttyS0" /tmp/kernel
Connection to 49.13.193.226 closed by remote host.
Connection to 49.13.193.226 closed
Now watch the system on the console booting into the REAR system: image Log in to the recovery console (root, without password) and fix its default route to make it reachable:
ip addr
[..]
2: enp1s0
..
$ ip route add 172.31.1.1 dev enp1s0
$ ip route add default via 172.31.1.1
ping 49.13.193.226
64 bytes from 49.13.193.226: icmp_seq=83 ttl=52 time=27.7 ms
The network configuration might differ; the source system in this example used DHCP, as does the target. If REAR detects a changed static network configuration, it guides you through the setup pretty nicely. Log in via SSH (REAR will store your ssh public keys in the image) and start the recovery process, following the steps as suggested by REAR:
ssh -l root 49.13.193.226
Welcome to Relax-and-Recover. Run "rear recover" to restore your system !
RESCUE debian12:~ # rear recover
Relax-and-Recover 2.7 / Git
Running rear recover (PID 673 date 2024-01-04 19:20:22)
Using log file: /var/log/rear/rear-debian12.log
Running workflow recover within the ReaR rescue/recovery system
Will do driver migration (recreating initramfs/initrd)
Comparing disks
Device vda does not exist (manual configuration needed)
Switching to manual disk layout configuration (GiB sizes rounded down to integer)
/dev/vda had size 137438953472 (128 GiB) but it does no longer exist
/dev/sda was not used on the original system and has now 163842097152 (152 GiB)
Original disk /dev/vda does not exist (with same size) in the target system
Using /dev/sda (the only available of the disks) for recreating /dev/vda
Current disk mapping table (source => target):
  /dev/vda => /dev/sda
Confirm or edit the disk mapping
1) Confirm disk mapping and continue 'rear recover'
[..]
User confirmed recreated disk layout
[..]
This step re-creates your original disk layout and mounts it to /mnt/local/ (this example uses a pretty lame layout, but usually REAR will handle things like lvm/btrfs just nicely):
mount
/dev/sda3 on /mnt/local type ext4 (rw,relatime,errors=remount-ro)
/dev/sda1 on /mnt/local/boot type ext4 (rw,relatime)
Now clone your source system's data to /mnt/local/ with whatever utility you like to use and exit the recovery step. After confirming everything went well, REAR will set up the bootloader (and all other config details like fstab entries and adjusted network configuration) for you as required:
rear> exit
Did you restore the backup to /mnt/local ? Are you ready to continue recovery ? yes
User confirmed restored files
Updated initramfs with new drivers for this system.
Skip installing GRUB Legacy boot loader because GRUB 2 is installed (grub-probe or grub2-probe exist).
Installing GRUB2 boot loader...
Determining where to install GRUB2 (no GRUB2_INSTALL_DEVICES specified)
Found possible boot disk /dev/sda - installing GRUB2 there
Finished 'recover'. The target system is mounted at '/mnt/local'.
Exiting rear recover (PID 7103) and its descendant processes ...
Running exit tasks
Now reboot the recovery console and watch it boot into your target system's configuration: image Being able to use this procedure for complete disaster recovery within a Hetzner cloud VPS (using off-site backups) gives me a better feeling, too.

1 January 2024

Russ Allbery: 2023 Book Reading in Review

In 2023, I finished and reviewed 53 books, continuing a trend of year-over-year increases and of reading the most books since 2012 (the last year I averaged five books a month). Reviewing continued to be uneven, with a significant slump in the summer and smaller slumps in February and November, and a big clump of reviews finished in October in addition to my normal year-end reading and reviewing vacation. The unevenness this year was mostly due to finishing books and not writing reviews immediately. Reviews are much harder to write when the finished books are piling up, so one goal for 2024 is to not let that happen again. I enter the new year with one book finished and not yet reviewed, after reading a book about every day and a half during my December vacation. I read two all-time favorite books this year. The first was Emily Tesh's debut novel Some Desperate Glory, which is one of the best space opera novels I have ever read. I cannot improve on Shelley Parker-Chan's blurb for this book: "Fierce and heartbreakingly humane, this book is for everyone who loved Ender's Game, but Ender's Game didn't love them back." This is not hard science fiction but it is fantastic character fiction. It was exactly what I needed in the middle of a year in which I was fighting a "burn everything down" mood. The second was Night Watch by Terry Pratchett, the 29th Discworld and 6th Watch novel. Throughout my Discworld read-through, Pratchett felt like he was on the cusp of a truly stand-out novel, one where all the pieces fit and the book becomes something more than the sum of its parts. This was that book. It's a book about ethics and revolutions and governance, but also about how your perception of yourself changes as you get older. It does all of the normal Pratchett things, just... better. While I would love to point new Discworld readers at it, I think you do have to read at least the Watch novels that came before it for it to carry its proper emotional heft. This was overall a solid year for fiction reading. I read another 15 novels I rated 8 out of 10, and 12 that I rated 7 out of 10. The largest contributor to that was my Discworld read-through, which was reliably entertaining throughout the year. The run of Discworld books between The Fifth Elephant (read late last year) and Wintersmith (my last of this year) was the best run of Discworld novels so far. One additional book I'll call out as particularly worth reading is Thud!, the Watch novel after Night Watch and another excellent entry. I read two stand-out non-fiction books this year. The first was Oliver Darkshire's delightful memoir about life as a rare book seller, Once Upon a Tome. One of the things I will miss about Twitter is the regularity with which I stumbled across fascinating people and then got to read their books. I'm off Twitter permanently now because the platform is designed to make me incoherently angry and I need less of that in my life, but it was very good at finding delightfully quirky books like this one. My other favorite non-fiction book of the year was Michael Lewis's Going Infinite, a profile of Sam Bankman-Fried. I'm still bemused at the negative reviews that this got from people who were upset that Lewis didn't turn the story into a black-and-white morality play. Bankman-Fried's actions were clearly criminal; that's not in dispute. Human motivations can be complex in ways that are irrelevant to the law, and I thought this attempt to understand that complexity by a top-notch storyteller was worthy of attention. 
Also worth a mention is Tony Judt's Postwar, the first book I reviewed in 2023. A sprawling history of post-World-War-II Europe will never have the sheer readability of shorter, punchier books, but this was the most informative book that I read in 2023. 2024 should see the conclusion of my Discworld read-through, after which I may return to re-reading Mercedes Lackey or David Eddings, both of which I paused to make time for Terry Pratchett. I also have another re-read similar to my Chronicles of Narnia reviews that I've been thinking about for a while. Perhaps I will start that next year; perhaps it will wait for 2025. Apart from that, my intention as always is to read steadily, write reviews as close to when I finished the book as possible, and make reading time for my huge existing backlog despite the constant allure of new releases. Here's to a new year full of more new-to-me books and occasional old favorites. The full analysis includes some additional personal reading statistics, probably only of interest to me.

30 December 2023

Russ Allbery: Review: The Hound of Justice

Review: The Hound of Justice, by Claire O'Dell
Series: Janet Watson Chronicles #2
Publisher: Harper Voyager
Copyright: July 2019
ISBN: 0-06-269938-5
Format: Kindle
Pages: 325
The Hound of Justice is a near-future thriller novel with Sherlock Holmes references. It is a direct sequel to A Study in Honor. This series is best read in order. Janet Watson is in a much better place than she was in the first book. She has proper physical therapy, a new arm, and a surgeon's job waiting for her as soon as she can master its features. A chance meeting due to an Inauguration Day terrorist attack may even develop into something more. She just needs to get back into the operating room and then she'll feel like her life is back on track. Sara Holmes, on the other hand, is restless, bored, and manic, rudely intruding on Watson's date. Then she disappears, upending Watson's living arrangements. She's on the trail of something. When mysterious destructible notes start appearing in Watson's books, it's clear that she wants help. The structure of this book didn't really work for me. The first third or so is a slice-of-life account of Watson's attempt to resume her career as a surgeon against a backdrop of ongoing depressing politics. This part sounds like the least interesting, but I was thoroughly engrossed. Watson is easy to care about, hospital politics are strangely interesting, and while the romance never quite clicked for me, it had potential. I was hoping for another book like A Study in Honor, where Watson's life and Holmes's investigations entwine and run in parallel. That was not to be. The middle third of the book pulls Watson away to Georgia and a complicated mix of family obligations and spy-novel machinations. If this had involved Sara's fae strangeness, verbal sparring, and odd tokens of appreciation, maybe it would have worked, but Sara Holmes is entirely off-camera. Watson is instead dealing with a minor supporting character from the first book, who drags her through disguises, vehicle changes, and border stops in a way that felt excessive and weirdly out of place. (Other reviews say that this character is the Mycroft Holmes equivalent; the first initial of Micha's name fits, but nothing else does so far as I can tell.) Then the last third of the novel turns into a heist. I like a heist novel as much as the next person, but a good heist story needs a team with chemistry and interplay, and I didn't know any of these people. There was way too little Sara Holmes, too much of Watson being out of her element in a rather generic way, and too many steps that Watson is led through without giving the reader a chance to enjoy the competence of the team. It felt jarring and disconnected, like Watson got pulled out of one story and dropped into an entirely different story without a proper groundwork. The Hound of Justice still has its moments. Watson is a great character and I'm still fully invested in her life. She was pulled into this mission because she's the person Holmes knows with the appropriate skills, and when she finally gets a chance to put those skills to use, it's quite satisfying. But, alas, the magic of A Study in Honor simply isn't here, in part because Sara Holmes is missing for most of the book and her replacements and stand-ins are nowhere near as intriguing. The villain's plan seems wildly impractical and highly likely to be detected, and although I can come up with some explanations to salvage it, those don't appear in the book. And, as in the first book, the villain seems very one-dimensional and simplistic. This is certainly not a villain worthy of Holmes. 
Fittingly, given the political movements O'Dell is commenting on, a lot of this book is about racial politics. O'Dell contrasts the microaggressions and more subtle dangers for Watson as a black woman in Washington, D.C., with the more explicit and active racism of the other places to which she travels over the course of the story. She's trying very hard to give the reader a feeling for what it's like to be black in the United States. I don't have any specific complaints about this, and I'm glad she's attempting it, but I came away from this book with a nagging feeling that Watson's reactions were a tiny bit off. It felt like a white person writing about racism rather than a black person writing about racism: nothing is entirely incorrect, but the emotional beats aren't quite where black authors would put them. I could be completely wrong about this, and am certainly much less qualified to comment than O'Dell is, but there were enough places that landed slightly wrong that I wanted to note it. I would still recommend A Study in Honor, but I'm not sure I can recommend this book. This is one of those series where the things that I enjoyed the most about the first book weren't what the author wanted to focus on in subsequent books. I would read more about the day-to-day of Watson's life, and I would certainly read more of Holmes and Watson sparring and circling and trying to understand each other. I'm less interested in somewhat generic thrillers with implausible plots and Sherlock Holmes references. At the moment, this is academic, since The Hound of Justice is the last book of the series so far. Rating: 6 out of 10

22 December 2023

Gunnar Wolf: Pushing some reviews this way

Over roughly the last year and a half I have been participating as a reviewer in ACM's Computing Reviews, and have even been honored as a Featured Reviewer. Given I have long enjoyed reading friends' reviews of their reading material (particularly, hats off to the very active Russ Allbery, who both beats all of my frequency expectations (I could never sustain the rhythm he reads at!) and holds documented records for his >20 years as a book reader, with far more clarity and readability than I can aim for!), I decided to explicitly share my reviews via this blog, as the audience is somewhat congruent; I will also link here some reviews that were not approved for publication, clearly marking them so. I will probably work on wrangling my Jekyll site to display an (auto-)updated page and RSS feed for the reviews. In the meantime, the reviews I have published are:

17 December 2023

Dirk Eddelbuettel: littler 0.3.19 on CRAN: Several Updates

The twentieth release of littler as a CRAN package landed a few minutes ago, following in the now seventeen year history (!!) as a package started by Jeff in 2006, and joined by me a few weeks later. littler is the first command-line interface for R as it predates Rscript. It allows for piping as well as for shebang scripting via #!, uses command-line arguments more consistently and still starts faster. It also always loaded the methods package, which Rscript only began to do in recent years. littler lives on Linux and Unix, has its difficulties on macOS due to yet-another-braindeadedness there (who ever thought case-insensitive filesystems as a default were a good idea?) and simply does not exist on Windows (yet the build system could be extended; see RInside for an existence proof, and volunteers are welcome!). See the FAQ vignette on how to add it to your PATH. A few examples are highlighted at the GitHub repo, as well as in the examples vignette; a small illustration of the piping and shebang styles also follows the change summary below. This release contains a fair number of small changes and improvements to some of the example scripts that are run daily. The full change description follows.

Changes in littler version 0.3.19 (2023-12-17)
  • Changes in examples scripts
    • The help or usage text display for r2u.r, ttt.r, check.r has been improved, expanded or corrected, respectively
    • installDeps.r has a new argument for dependency selection
    • An initial 'single test file' runner tttf.r has been added
    • r2u.r has two new options for setting / varying the Debian build version of the package that is built, one for BioConductor builds, one for a 'dry run' build, and a new --compile option
    • installRSPM.r, installPPM.r, installP3M.r have been updated to reflect the name changes
    • installRub.r now understands 'package@universe' too
    • tt.r flips the default of the --effects switch
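
To illustrate the piping and shebang styles mentioned above, here is a tiny, hypothetical sketch (the sum.r script and its contents are mine for illustration, not taken from the package's example scripts; littler exposes the command-line arguments as the character vector argv):

$ echo 'cat(2 + 2, "\n")' | r      # pipe an R expression straight into littler's r binary
$ cat sum.r
#!/usr/bin/env r
# argv holds the command-line arguments as a character vector
cat(sum(as.numeric(argv)), "\n")
$ chmod +x sum.r && ./sum.r 1 2 3
6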

My CRANberries service provides a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page, and also on the package docs website. The code is available via the GitHub repo, from tarballs and now of course also from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian as well as (in a day or two) Ubuntu binaries at CRAN thanks to the tireless Michael Rutter. Comments and suggestions are welcome at the GitHub repo. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

5 December 2023

Louis-Philippe Véronneau: Montreal's Debian & Stuff - November 2023

Hello from a snowy Montréal! My life has been pretty busy lately1 so please forgive this late report. On November 19th, our local Debian User Group met at Montreal's most prominent hackerspace, Foulab. We've been there a few times already, but since our last visit, Foulab has had some membership/financial troubles. Happy to say things are going well again and a new team has taken over the space. This meetup wasn't the most productive day for me (something about being exhausted apparently makes it hard to concentrate), but other people did a bunch of interesting stuff :) Pictures: here are a bunch of pictures I took! Foulab is always a great place to snap quirky things :) [Photos: a sign on a whiteboard that says 'Bienvenue aux laboratoires qui rends fou' (roughly, 'welcome to the labs that drive you mad'); the entrance of the bio-hacking house, with a list of rules; an exploded keyboard with a 'Press F1 to continue' sign; an inflatable Tux with a Foulab T-shirt; the woodworking workshop]

  1. More busy than the typical end of semester rush... At work, we are currently renegotiating our collective bargaining agreement and things aren't going so well. We went on strike for a few days already and we're planning on another 7 days starting on Friday 8th.

4 December 2023

Russ Allbery: Cumulative haul

I haven't done one of these in quite a while, long enough that I've already read and reviewed many of these books.
John Joseph Adams (ed.) The Far Reaches (sff anthology)
Poul Anderson The Shield of Time (sff)
Catherine Asaro The Phoenix Code (sff)
Catherine Asaro The Veiled Web (sff)
Travis Baldree Bookshops & Bonedust (sff)
Sue Burke Semiosis (sff)
Jacqueline Carey Cassiel's Servant (sff)
Rob Copeland The Fund (nonfiction)
Mar Delaney Wolf Country (sff)
J.S. Dewes The Last Watch (sff)
J.S. Dewes The Exiled Fleet (sff)
Mike Duncan Hero of Two Worlds (nonfiction)
Mike Duncan The Storm Before the Storm (nonfiction)
Kate Elliott King's Dragon (sff)
Zeke Faux Number Go Up (nonfiction)
Nicola Griffith Menewood (sff)
S.L. Huang The Water Outlaws (sff)
Alaya Dawn Johnson The Library of Broken Worlds (sff)
T. Kingfisher Thornhedge (sff)
Naomi Kritzer Liberty's Daughter (sff)
Ann Leckie Translation State (sff)
Michael Lewis Going Infinite (nonfiction)
Jenna Moran Magical Bears in the Context of Contemporary Political Theory (sff collection)
Ari North Love and Gravity (graphic novel)
Ciel Pierlot Bluebird (sff)
Terry Pratchett A Hat Full of Sky (sff)
Terry Pratchett Going Postal (sff)
Terry Pratchett Thud! (sff)
Terry Pratchett Wintersmith (sff)
Terry Pratchett Making Money (sff)
Terry Pratchett Unseen Academicals (sff)
Terry Pratchett I Shall Wear Midnight (sff)
Terry Pratchett Snuff (sff)
Terry Pratchett Raising Steam (sff)
Terry Pratchett The Shepherd's Crown (sff)
Aaron A. Reed 50 Years of Text Games (nonfiction)
Dashka Slater Accountable (nonfiction)
Rory Stewart The Marches (nonfiction)
Emily Tesh Silver in the Wood (sff)
Emily Tesh Drowned Country (sff)
Valerie Valdes Chilling Effect (sff)
Martha Wells System Collapse (sff)
Martha Wells Witch King (sff)

16 November 2023

Dimitri John Ledkov: Ubuntu 23.10 significantly reduces the installed kernel footprint


Ubuntu systems typically have up to 3 kernels installed before they are auto-removed by apt on classic installs. Historically the installation was optimized for metered download size only. However, kernel size growth and usage no longer warrant such optimizations. During the 23.10 Mantic Minotaur cycle, I led a coordinated effort across multiple teams to implement lots of optimizations that together achieved unprecedented install footprint improvements.

Given a typical install of 3 generic kernel ABIs in the default configuration on a regular-sized VM (2 CPU cores, 8 GB of RAM), the following metrics are achieved in Ubuntu 23.10 versus Ubuntu 22.04 LTS:

  • 2x less disk space used (1,417MB vs 2,940MB, including initrd)

  • 3x less peak RAM usage for the initrd boot (68MB vs 204MB)

  • 0.5x increase in download size (949MB vs 600MB)

  • 2.5x faster initrd generation (4.5s vs 11.3s)

  • approximately the same total time (103s vs 98s, hardware dependent)


For minimal cloud images that do not install either linux-firmware or modules extra, the numbers are:

  • 1.3x less disk space used (548MB vs 742MB)

  • 2.2x less peak RAM usage for initrd boot (27MB vs 62MB)

  • 0.4x increase in download size (207MB vs 146MB)
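
If you want to sanity-check comparable numbers on your own system, the disk and initrd figures can be approximated with standard tools. This is a rough sketch only; the paths are the usual Ubuntu locations and may differ on your install:

$ du -sh /boot /lib/modules /usr/lib/firmware    # on-disk kernel, module and firmware footprint
$ ls -lh /boot/initrd.img-$(uname -r)            # size of the current initrd
$ time sudo update-initramfs -u                  # rough initrd generation time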


Hopefully, the compromise on download size, relative to the disk space & initrd savings, is a win for the majority of platforms and use cases. For users on extremely expensive and metered connections, the best saving is likely to receive air-gapped updates or to skip updates.

This was achieved by precompressing kernel modules & firmware files with the maximum level of Zstd compression at package build time; shipping the actual .deb files uncompressed; assembling the initrd using split cpio archives - uncompressed for the pre-compressed files, whilst compressing only the userspace portions of the initrd; enabling in-kernel module decompression support with matching kmod; fixing bugs in all of the above; and landing all of these things in time for the feature freeze. All of this leverages the experience and some of the design choices and implementations we have already been shipping on Ubuntu Core. Some of these changes are backported to Jammy, but only enough to support smooth upgrades to Mantic and later; the complete gains can only be experienced on Mantic and later.
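
The split-cpio idea can be sketched roughly as follows. This is an illustration of the concept only, not the actual Ubuntu build tooling; the directory names and the compression level are assumptions:

# Pre-compress kernel modules once, at package build time:
find modules/ -name '*.ko' -exec zstd -19 --rm {} \;
# First cpio segment: the already pre-compressed payload, stored uncompressed:
( cd early && find . | cpio -o -H newc ) > initrd-early.cpio
# Second cpio segment: the userspace portion, compressed as a whole:
( cd main && find . | cpio -o -H newc ) | zstd -19 > initrd-main.cpio.zst
# The kernel accepts concatenated cpio segments as a single initrd:
cat initrd-early.cpio initrd-main.cpio.zst > initrd.img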

The bugs discovered in the kernel module loading code likely affect systems that use the LoadPin LSM together with in-kernel module decompression, as used on ChromeOS systems. Hopefully, Kees Cook or other ChromeOS developers pick up the kernel fixes from the stable trees. Or, you know, just use Ubuntu kernels, as they do get fixes and features like these first.

The team that designed and delivered these changes is large: Benjamin Drung, Andrea Righi, Juerg Haefliger, Julian Andres Klode, Steve Langasek, Michael Hudson-Doyle, Robert Kratky, Adrien Nader, Tim Gardner, Roxana Nicolescu - and myself, Dimitri John Ledkov, ensuring that the most optimal solution was implemented, that everything landed on time, and even implementing portions of the final solution.

Hi, it's me. I am a Staff Engineer at Canonical, and we are hiring: https://canonical.com/careers.

Lots of additional technical details and benchmarks on a huge range of diverse hardware and architectures, as well as bikeshedding of all the things, can be found in the discussion linked below.

For questions and comments, please post to the Kernel section on Ubuntu Discourse.



25 October 2023

Russ Allbery: Review: Going Infinite

Review: Going Infinite, by Michael Lewis
Publisher: W.W. Norton & Company
Copyright: 2023
ISBN: 1-324-07434-5
Format: Kindle
Pages: 255
My first reaction when I heard that Michael Lewis had been embedded with Sam Bankman-Fried working on a book when Bankman-Fried's cryptocurrency exchange FTX collapsed into bankruptcy after losing billions of dollars of customer deposits was "holy shit, why would you talk to Michael Lewis about your dodgy cryptocurrency company?" Followed immediately by "I have to read this book." This is that book. I wasn't sure how Lewis would approach this topic. His normal (although not exclusive) area of interest is financial systems and crises, and there is lots of room for multiple books about cryptocurrency fiascoes using someone like Bankman-Fried as a pivot. But Going Infinite is not like The Big Short or Lewis's other financial industry books. It's a nearly straight biography of Sam Bankman-Fried, with just enough context for the reader to follow his life. To understand what you're getting in Going Infinite, I think it's important to understand what sort of book Lewis likes to write. Lewis is not exactly a reporter, although he does explain complicated things for a mass audience. He's primarily a storyteller who collects people he finds fascinating. This book was therefore never going to be like, say, Carreyrou's Bad Blood or Isaac's Super Pumped. Lewis's interest is not in a forensic account of how FTX or Alameda Research were structured. His interest is in what makes Sam Bankman-Fried tick, what's going on inside his head. That's not a question Lewis directly answers, though. Instead, he shows you Bankman-Fried as Lewis saw him and was able to reconstruct from interviews and sources and lets you draw your own conclusions. Boy did I ever draw a lot of conclusions, most of which were highly unflattering. However, one conclusion I didn't draw, and had been dubious about even before reading this book, was that Sam Bankman-Fried was some sort of criminal mastermind who intentionally plotted to steal customer money. Lewis clearly doesn't believe this is the case, and with the caveat that my study of the evidence outside of this book has been spotty and intermittent, I think Lewis has the better of the argument. I am utterly fascinated by this, and I'm afraid this review is going to turn into a long summary of my take on the argument, so here's the capsule review before you get bored and wander off: This is a highly entertaining book written by an excellent storyteller. I am also inclined to believe most of it is true, but given that I'm not on the jury, I'm not that invested in whether Lewis is too credulous towards the explanations of the people involved. What I do know is that it's a fantastic yarn with characters who are too wild to put in fiction, and I thoroughly enjoyed it. There are a few things that everyone involved appears to agree on, and therefore I think we can take as settled. One is that Bankman-Fried, and most of the rest of FTX and Alameda Research, never clearly distinguished between customer money and all of the other money. It's not obvious that their home-grown accounting software (written entirely by one person! who never spoke to other people! in code that no one else could understand!) was even capable of clearly delineating between their piles of money. Another is that FTX and Alameda Research were thoroughly intermingled. There was no official reporting structure and possibly not even a coherent list of employees. The environment was so chaotic that lots of people, including Bankman-Fried, could have stolen millions of dollars without anyone noticing. 
But it was also so chaotic that they could, and did, literally misplace millions of dollars by accident, or because Bankman-Fried had problems with object permanence. Something that was previously less obvious from news coverage but that comes through very clearly in this book is that Bankman-Fried seriously struggled with normal interpersonal and societal interactions. We know from multiple sources that he was diagnosed with ADHD and depression (Lewis describes it specifically as anhedonia, the inability to feel pleasure). The ADHD in Lewis's account is quite severe and does not sound controlled, despite medication; for example, Bankman-Fried routinely played timed video games while he was having important meetings, forgot things the moment he stopped dealing with them, was constantly on his phone or seeking out some other distraction, and often stimmed (by bouncing his leg) to a degree that other people found it distracting. Perhaps more tellingly, Bankman-Fried repeatedly describes himself in diary entries and correspondence to other people (particularly Caroline Ellison, his employee and on-and-off secret girlfriend) as being devoid of empathy and unable to access his own emotions, which Lewis supports with stories from former co-workers. I'm very hesitant to diagnose someone via a book, but, at least in Lewis's account, Bankman-Fried nearly walks down the symptom list of antisocial personality disorder in his own description of himself to other people. (The one exception is around physical violence; there is nothing in this book or in any of the other reporting that I've seen to indicate that Bankman-Fried was violent or physically abusive.) One of the recurrent themes of this book is that Bankman-Fried never saw the point in following rules that didn't make sense to him or worrying about things he thought weren't important, and therefore simply didn't. By about a third of the way into this book, before FTX is even properly started, very little about its eventual downfall will seem that surprising. There was no way that Sam Bankman-Fried was going to be able to run a successful business over time. He was extremely good at probabilistic trading and spotting exploitable market inefficiencies, and extremely bad at essentially every other aspect of living in a society with other people, other than a hit-or-miss ability to charm that worked much better with large audiences than one-on-one. The real question was why anyone would ever entrust this man with millions of dollars or decide to work for him for longer than two weeks. The answer to those questions changes over the course of this story. Later on, it was timing. Sam Bankman-Fried took the techniques of high frequency trading he learned at Jane Street Capital and applied them to exploiting cryptocurrency markets at precisely the right time in the cryptocurrency bubble. There was far more money than sense, the most ruthless financial players were still too leery to get involved, and a rising tide was lifting all boats, even the ones that were piles of driftwood. When cryptocurrency inevitably collapsed, so did his businesses. In retrospect, that seems inevitable. The early answer, though, was effective altruism. A full discussion of effective altruism is beyond the scope of this review, although Lewis offers a decent introduction in the book. 
The short version is that a sensible and defensible desire to use stronger standards of evidence in evaluating charitable giving turned into a bizarre navel-gazing exercise in making up statistical risks to hypothetical future people and treating those made-up numbers as if they should be the bedrock of one's personal ethics. One of the people most responsible for this turn is an Oxford philosopher named Will MacAskill. Sam Bankman-Fried was already obsessed with utilitarianism, in part due to his parents' philosophical beliefs, and it was a presentation by Will MacAskill that converted him to the effective altruism variant of extreme utilitarianism. In Lewis's presentation, this was like joining a cult. The impression I came away with feels like something out of a science fiction novel: Bankman-Fried knew there was some serious gap in his thought processes where most people had empathy, was deeply troubled by this, and latched on to effective altruism as the ethical framework to plug into that hole. So much of effective altruism sounds like a con game that it's easy to think the participants are lying, but Lewis clearly believes Bankman-Fried is a true believer. He appeared to be sincerely trying to make money in order to use it to solve existential threats to society, he does not appear to be motivated by money apart from that goal, and he was following through (in bizarre and mostly ineffective ways). I find this particularly believable because effective altruism as a belief system seems designed to fit Bankman-Fried's personality and justify the things he wanted to do anyway. Effective altruism says that empathy is meaningless, emotion is meaningless, and ethical decisions should be made solely on the basis of expected value: how much return (usually in safety) does society get for your investment. Effective altruism says that all the things that Sam Bankman-Fried was bad at were useless and unimportant, so he could stop feeling bad about his apparent lack of normal human morality. The only thing that mattered was the thing that he was exceptionally good at: probabilistic reasoning under uncertainty. And, critically to the foundation of his business career, effective altruism gave him access to investors and a recruiting pool of employees, things he was entirely unsuited to acquiring the normal way. There's a ton more of this book that I haven't touched on, but this review is already quite long, so I'll leave you with one more point. I don't know how true Lewis's portrayal is in all the details. He took the approach of getting very close to most of the major players in this drama and largely believing what they said happened, supplemented by startling access to sources like Bankman-Fried's personal diary and Caroline Ellison's personal diary. (He also seems to have gotten extensive information from the personal psychiatrist of most of the people involved; I'm not sure if there's some reasonable explanation for this, but based solely on the material in this book, it seems to be a shocking breach of medical ethics.) But Lewis is a storyteller more than he's a reporter, and his bias is for telling a great story. It's entirely possible that the events related here are not entirely true, or are skewed in favor of making a better story. It's certainly true that they're not the complete story. But, that said, I think a book like this is a useful counterweight to the human tendency to believe in moral villains.
This is, frustratingly, a counterweight extended almost exclusively to higher-class white people like Bankman-Fried. This is infuriating, but that doesn't make it wrong. It means we should extend that analysis to more people. Once FTX collapsed, a lot of people became very invested in the idea that Bankman-Fried was a straightforward embezzler. Either he intended from the start to steal everyone's money or, more likely, he started losing money, panicked, and stole customer money to cover the hole. Lots of people in history have done exactly that, and lots of people involved in cryptocurrency have tenuous attachments to ethics, so this is a believable story. But people are complicated, and there's also truth in the maxim that every villain is the hero of their own story. Lewis is after a less boring story than "the crook stole everyone's money," and that leads to some bias. But sometimes the less boring story is also true. Here's the thing: even if Sam Bankman-Fried never intended to take any money, he clearly did intend to mix customer money with Alameda Research funds. In Lewis's account, he never truly believed in them as separate things. He didn't care about following accounting or reporting rules; he thought they were boring nonsense that got in his way. There is obvious criminal intent here in any reading of the story, so I don't think Lewis's more complex story would let him escape prosecution. He refused to follow the rules, and as a result a lot of people lost a lot of money. I think it's a useful exercise to leave mental space for the possibility that he had far less obvious reasons for those actions than that he was a simple thief, while still enforcing the laws that he quite obviously violated. This book was great. If you like Lewis's style, this was some of the best entertainment I've read in a while. Highly recommended; if you are at all interested in this saga, I think this is a must-read. Rating: 9 out of 10

15 October 2023

Michael Ablassmeier: Testing system updates using libvirts checkpoint feature

If you want to test upgrades on virtual machines (running on libvirt/qemu/kvm), the most common approach is to create a full clone of the machine, run the upgrade on the clone, and throw it away afterwards. Recent versions of both libvirt and qemu have full support for dirty bitmaps (so-called checkpoints). These checkpoints, once created, track changes at the block layer and can be exported via the NBD protocol. Usually one creates these checkpoints using virsh checkpoint-create[-as] with a proper XML description. Using the pull-based model it is then possible to create a checkpoint, export it via NBD, and point a qcow2 overlay image at that export for the upgrade test. The overlay image will only use disk space for the blocks changed during the upgrade: no need to create a full clone, which may waste a lot of disk space. In order to simplify the first step, it's possible to use virtnbdbackup for creating the required consistent checkpoint and exporting its data over a unix domain socket. Update: as an alternative, I've just created a small utility called vircpt to create and export checkpoints. In my example I'm using a debian11 virtual machine with the qemu guest agent configured:
# virsh list --all
 Id   Name               State
 ------------------------------------------
 1    debian11_default   running
Now let virtnbdbackup create a checkpoint, freeze the filesystems during creation, and tell libvirt to provide us with a usable NBD server listening on a unix socket:
# virtnbdbackup -d debian11_default -o /tmp/foo -s
INFO lib common - printVersion [MainThread]: Version: 1.9.45 Arguments: ./virtnbdbackup -d debian11_default -o /tmp/foo -s
[..] 
INFO root virtnbdbackup - main [MainThread]: Local NBD Endpoint socket: [/var/tmp/virtnbdbackup.5727] 
INFO root virtnbdbackup - startBackupJob [MainThread]: Starting backup job.
INFO fs fs - freeze [MainThread]: Freezed [2] filesystems. 
INFO fs fs - thaw [MainThread]: Thawed [2] filesystems. 
INFO root virtnbdbackup - main [MainThread]: Started backup job for debugging, exiting.
We can now use nbdinfo to display some information about the NBD export:
# nbdinfo "nbd+unix:///vda?socket=/var/tmp/virtnbdbackup.5727" 
    protocol: newstyle-fixed without TLS, using structured packets 
    export="vda": 
    export-size: 137438953472 (128G) 
    content: DOS/MBR boot sector
    uri: nbd+unix:///vda?socket=/var/tmp/virtnbdbackup.5727
And create a backing image that we can use to test an in-place upgrade:
# qemu-img create -F raw -b nbd+unix:///vda?socket=/var/tmp/virtnbdbackup.5727 -f qcow2 upgrade.qcow2
Now we have various ways for booting the image:
# qemu-system-x86_64 -hda upgrade.qcow2 -m 2500 --enable-kvm
After performing the required tests within the virtual machine we can simply kill the active NBD backup job:
# virtnbdbackup -d debian11_default -o /tmp/foo -k
INFO lib common - printVersion [MainThread]: Version: 1.9.45 Arguments: ./virtnbdbackup -d debian11_default -o /tmp/foo -k 
[..]
INFO root virtnbdbackup - main [MainThread]: Stopping backup job
And remove the created qcow image:
# rm -f upgrade.qcow2
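
As an aside: if you prefer to manage the checkpoint yourself rather than relying on virtnbdbackup, the virsh checkpoint commands mentioned at the beginning can be used directly to create, inspect and remove checkpoints (the NBD export itself still needs a pull-mode backup job, which is exactly what virtnbdbackup set up above). A rough, untested sketch with a made-up checkpoint name:
# virsh checkpoint-create-as debian11_default --name pre-upgrade
# virsh checkpoint-list debian11_default
(run the upgrade test against an overlay image as shown above)
# virsh checkpoint-delete debian11_default pre-upgrade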

7 October 2023

Louis-Philippe Véronneau: Montreal's Debian & Stuff - "September" 2023

Last Sunday, our local Debian user group gathered to chat, to work on Debian and to do other, non-Debian related hacking. A "Debian & Stuff"! It had been a while since we held a proper meetup. Our last event was the Montreal BSP we organised back in March 2023... We somewhat missed the window for a June meetup and summer events never seem to gather a good crowd, so I didn't try to organise one. All this to say it was nice to see folks from the Montreal Debian community :) This event was also the first time we were hosted by L'Espace des possibles - Petite Patrie, a social venue that aims to provide a space for not-for-profit activities, like repair cafés, sewing classes, board game nights, etc. It was really nice and we will surely meet there again in the future. [Photo: a group picture during the event] Many people came to the event, including some new ones. Although people always tend to come and go during the day, a total of 12 people attended the event. As always, people worked on very different projects! One of the focuses of this D&S was assembling AirGradient DIY basic kits. Our local community has been talking a lot about air quality metrics in the past few months1. Tiago thus decided to have a company print the PCBs for this kit and graciously gave away a few spares. Michael then took it upon himself to order parts on AliExpress and a few of us ended up soldering the kits together while chatting. [Photo: an AirGradient DIY basic kit, semi-assembled] Otherwise, some Debian work was also done: The whole event was super fun, the tacos we had for lunch were delicious (and very authentic!), and we ended up at a local microbrewery to share a pint later in the evening. Looking forward to the next event!

  1. Mostly as a result of the large forest fires in Canada this summer. I myself blogged twice about air quality-related projects recently.

Next.