Search Results: "jon"

10 July 2025

Tianon Gravi: Yubi Whati? (YubiKeys, ECDSA, and X.509)

Off-and-on over the last several weeks, I've been spending time trying to learn/understand YubiKeys better, especially from the perspective of ECDSA and signing. I had a good mental model for how "slots" work (canonically referenced by their hexadecimal names such as 9C), but found that it had a gap related to "objects"; while closing that gap, I was annoyed that the main reference table for it lives primarily in either a PDF or inside several implementations, so I figured I should create the reference I want to see in the world, and that it would also be useful to write down some of my understanding for my own (and maybe others') future reference.

So, to that end, I'm going to start with a bit of background information, with the heavy caveat that this only applies to "PIV" ("FIPS 201") usage of YubiKeys, and that I only actually care about ECDSA, although I've been reassured that it's the same for at least RSA (anything outside this is firmly Here Be Not Tianon; "gl hf dd"). (Incidentally, learning all this helped me actually appreciate the simplicity of cloud-based KMS solutions, which was an unexpected side effect.)

At a really high level, ECDSA is like many other (asymmetric) cryptographic solutions: you've got a public key and a private key. The private key can be used to "sign" data (tiny amounts of data, in fact; P-256 can only reasonably sign 256 bits of data, which is where cryptographic hashes like SHA256 come in as secure analogues for larger data in small bit sizes), and the public key can then be used to verify that the data was indeed signed by the private key, and only someone with the private key could've done so. There's some complex math and RNGs involved, but none of that's actually relevant to this post, so find that information elsewhere.

Unfortunately, this is where things go off the rails: PIV is X.509 ("x509") heavy, and there's no X.509 in the naïve view of my use case. In a YubiKey (or any other PIV-signing-supporting smart card? do they actually have competitors in this specific niche?), a given "slot" can hold one single private key. There are ~24 slots which can hold a private key and be used for signing, although "Slot 9c" is officially designated as the "Digital Signature" slot and is encouraged for signing purposes.

One of the biggest gotchas is that with pure-PIV (and older YubiKey firmware) the public key for a given slot is only available at the time the key is generated, and the whole point of the device in the first place is that the private key is never, ever available from it (all cryptographic operations happen inside the device), so if you don't save that public key when you first ask the device to generate a private key in a particular slot, the public key is lost forever (asterisk).
$ # generate a new ECDSA P-256 key in "slot 9c" ("Digital Signature")
$ # WARNING: THIS WILL GLEEFULLY WIPE SLOT 9C WITHOUT PROMPTING
$ yubico-piv-tool --slot 9c --algorithm ECCP256 --action generate
-----BEGIN PUBLIC KEY-----
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEtGoWRGyjjUlJFXpu8BL6Rnx8jjKR
5+Mzl2Vepgor+k7N9q7ppOtSMWefjFVR0SEPmXqXINNsCi6LpLtNEigIRg==
-----END PUBLIC KEY-----
Successfully generated a new private key.
$ # this is the only time/place we (officially) get this public key
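Since that PEM block is the one official copy you'll get, it's worth capturing it straight to disk and sanity-checking it. A quick sketch (the file name here is my own choice; openssl pkey just parses and prints the key):
$ # capture the public key at generation time (remember: generate wipes the slot)
$ yubico-piv-tool --slot 9c --algorithm ECCP256 --action generate > slot-9c-pub.pem
$ # sanity-check that the result parses as an EC public key
$ openssl pkey -pubin -in slot-9c-pub.pem -text -noout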
With that background, now let's get to the second aspect of "slots" and how X.509 fits in. For every aforementioned slot, there is a corresponding "object" (read: place to store arbitrary data), and the correspondence exists only by convention. For all these "key" slots, the (again, by convention) corresponding "object" is explicitly supposed to be an X.509 certificate (see also the PDF reference linked above). It turns out this is a useful and topical place to store that public key we need to keep handy! It's also an interesting place to shove additional details about what the key in a given slot is being used for, if that's your thing. Converting the raw public key into a (likely self-signed) X.509 certificate is an exercise for the reader, but if you want to follow the conventions, you need some way to convert a given "slot" to the corresponding "object", and that is the lookup table I wish existed in more forms. So, without further ado, here is the anti-climax:
Slot Object Description
0x9A 0x5FC105 X.509 Certificate for PIV Authentication
0x9E 0x5FC101 X.509 Certificate for Card Authentication
0x9C 0x5FC10A X.509 Certificate for Digital Signature
0x9D 0x5FC10B X.509 Certificate for Key Management
0x82 0x5FC10D Retired X.509 Certificate for Key Management 1
0x83 0x5FC10E Retired X.509 Certificate for Key Management 2
0x84 0x5FC10F Retired X.509 Certificate for Key Management 3
0x85 0x5FC110 Retired X.509 Certificate for Key Management 4
0x86 0x5FC111 Retired X.509 Certificate for Key Management 5
0x87 0x5FC112 Retired X.509 Certificate for Key Management 6
0x88 0x5FC113 Retired X.509 Certificate for Key Management 7
0x89 0x5FC114 Retired X.509 Certificate for Key Management 8
0x8A 0x5FC115 Retired X.509 Certificate for Key Management 9
0x8B 0x5FC116 Retired X.509 Certificate for Key Management 10
0x8C 0x5FC117 Retired X.509 Certificate for Key Management 11
0x8D 0x5FC118 Retired X.509 Certificate for Key Management 12
0x8E 0x5FC119 Retired X.509 Certificate for Key Management 13
0x8F 0x5FC11A Retired X.509 Certificate for Key Management 14
0x90 0x5FC11B Retired X.509 Certificate for Key Management 15
0x91 0x5FC11C Retired X.509 Certificate for Key Management 16
0x92 0x5FC11D Retired X.509 Certificate for Key Management 17
0x93 0x5FC11E Retired X.509 Certificate for Key Management 18
0x94 0x5FC11F Retired X.509 Certificate for Key Management 19
0x95 0x5FC120 Retired X.509 Certificate for Key Management 20
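As a concrete (hedged) sketch of putting the convention to use: yubico-piv-tool can wrap the saved public key in a self-signed certificate and store it in the slot's companion object, so the public key survives on the device itself. The subject string and file names below are my own placeholders:
$ # wrap the saved public key in a self-signed certificate (requires PIN verification)
$ yubico-piv-tool --slot 9c --action verify-pin --action selfsign-certificate \
    --subject '/CN=example-signing-key/' --input slot-9c-pub.pem --output slot-9c-cert.pem
$ # store it in object 0x5FC10A, the conventional companion of slot 9C
$ yubico-piv-tool --slot 9c --action import-certificate --input slot-9c-cert.pem
$ # later, recover the public key by reading the certificate back
$ yubico-piv-tool --slot 9c --action read-certificate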
See also "piv-objects.json" for a machine-readable copy of this data. (Major thanks to paultag and jon gzip johnson for helping me learn and generally putting up with me, but especially dealing with my live-stream-of-thoughts while I stumble through the dark. )

30 June 2025

Russell Coker: Links June 2025

Jonathan McDowell wrote part 2 of his blog series about setting up a voice assistant on Debian, I look forward to reading further posts [1]. I'm working on some related things for Debian that will hopefully work with this. I'm testing out OpenSnitch on Trixie inspired by this blog post, it's an interesting package [2]. Valerie wrote an informative article about creating mesh networks using LORA for emergency use [3]. Interesting article about Signal and Windows Recall. That gives us some things to consider regarding ML features on Linux systems [4]. Insightful article about AI and the end of prestige [5]. We should all learn about LLMs. Jonathan Dowland wrote an informative blog post about how to manage namespaces on Linux [6]. The Consumer Rights wiki is a great resource for raising awareness of corporations exploiting their customers for computer related goods and services [7]. Interesting article about Schizophrenia and the cliff-edge function of evolution [8].

27 June 2025

Jonathan Dowland: Viva

On Monday I had my Viva Voce (PhD defence), and passed (with minor corrections).
Post-viva refreshment
It's a relief to have passed after 8 years of work. I'm not quite done of course, as I have the corrections to make! Once those are accepted I'll upload my thesis here.

19 June 2025

Jonathan Carter: My first tag2upload upload

Tag2upload?

The tag2upload service has finally gone live for Debian Developers in an open beta. If you've never heard of tag2upload before, here is a great primer presented by Ian Jackson and prepared by Ian Jackson and Sean Whitton. In short, the world has moved on to hosting and working with source code in Git repositories. In Debian, we work with source packages that are used to generate the binary artifacts that users know as .deb files, and there is so much tooling and culture built around this. For example, our workflow passes what we call the island test: you could take every source package in Debian along with you to an island with no Internet, and you'll still be able to rebuild or modify every package. When changing the workflows, you risk losing benefits like this, and over the years there have been a number of different ideas on how to move to a purely or partially git flow for Debian, none of which really managed to gain enough momentum or project-wide support. Tag2upload makes a lot of sense. It doesn't take away any of the benefits of the current way of working (whether technical or social), but it does make some aspects of Debian packaging significantly simpler and faster. Even so, if you're a Debian Developer and more familiar with how the sausage is made, you'll have noticed that this has been a very long road for the tag2upload maintainers; they've hit multiple speed bumps since 2019, but with a lot of patience and communication and persistence from all involved (and almost even a GR), it is finally materializing.

Performing my first tag2upload

So, first, I needed to choose which package I want to upload. We're currently in hard freeze for the trixie release, so I'll look for something simple that I can upload to experimental.
I chose bundlewrap; it's quite a straightforward Python package, and updates are usually just as straightforward, so it's probably a good package to work on without having to deal with extra complexities while learning how to use tag2upload. So, I do the usual uscan and dch -i to update my package.
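For anyone unfamiliar, that's the standard devscripts pair (uscan fetches the new upstream release as described by debian/watch; dch -i opens a new changelog entry):
$ uscan     # fetch the new upstream tarball, per debian/watch
$ dch -i    # start a new debian/changelog entry for the new version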
And then I realise that I still want to build a source package to test it in cowbuilder. Hmm, I remember that Helmut showed me that building a source package isn't necessary in sbuild, but I have a habit of breaking my sbuild configs somehow, though I guess I should revisit that. So, I do a dpkg-buildpackage -S -sa and test it out with cowbuilder, because that's just how I roll (at least for now; fixing my local sbuild setup is yak shaving for another day, let's focus!). I end up with a binary that looks good, so I'm satisfied that I can upload this package to the Debian archives. So, time to configure tag2upload. The first step is to set up the webhook in Salsa. I was surprised to find two webhooks already configured:
I know of KGB, which posts to IRC, but didn't know that this was the mechanism it uses to do that. Nice! I also don't know what the tagpending one does; I'll go look into that some other time. Configuring a tag2upload webhook is quite simple: add a URL, call the name tag2upload, and select only tag push events:
I run the test webhook, and it returned a code 400 message about a missing message header, which the documentation says is normal. Next, I install git-debpush from experimental. The wiki page simply states that you can use the git-debpush command to upload, but doesn't give any examples of how to use it, and its manpage doesn't either. And when I run just git-debpush I get:
jonathan@lapcloud:~/devel/debian/python-team/bundlewrap/bundlewrap-4.23.1$ git-debpush
git-debpush: check failed: upstream tag upstream/4.22.0 is not an ancestor of refs/heads/debian/master; probably a mistake ('upstream-nonancestor' check)
pristine-tar is /usr/bin/pristine-tar
git-debpush: some check(s) failed; you can pass --force to ignore them
I have no idea what that's supposed to mean. I was also not sure whether I should tag anything to begin with, or whether some part of the tag2upload machinery automatically does it. I think I might have tagged debian/4.23-1 before tagging upstream/4.23, and perhaps it didn't like that; I reverted and did it the other way around and got a new error message. Progress!
jonathan@lapcloud:~/devel/debian/python-team/bundlewrap/bundlewrap-4.23.1$ git-debpush
git-debpush: could not determine the git branch layout
git-debpush: please supply a --quilt= argument
Looking at the manpage, it looks like quilt=baredebian matches my package the best, so I try that:
jonathan@lapcloud:~/devel/debian/python-team/bundlewrap/bundlewrap-4.23.1$ git-debpush --quilt=baredebian
Enumerating objects: 70, done.
Counting objects: 100% (70/70), done.
Delta compression using up to 12 threads
Compressing objects: 100% (37/37), done.
Writing objects: 100% (37/37), 8.97 KiB   2.99 MiB/s, done.
Total 37 (delta 30), reused 0 (delta 0), pack-reused 0 (from 0)
To salsa.debian.org:python-team/packages/bundlewrap.git
6f55d99..3d5498f debian/master -> debian/master
 * [new tag] upstream/4.23.1 -> upstream/4.23.1
 * [new tag] debian/4.23.1-1_exp1 -> debian/4.23.1-1_exp1
Ooh! That looked like it did something! And a minute later I received the notification of the upload in my inbox:
So, I'm not 100% sure that this makes things much easier for me than doing a dput, but it's not any more difficult or more work either (once you know how it works), so I'll be using git-debpush from now on, and I'm sure that as I get more used to the git workflow of doing things I'll understand more of the benefits. And at last, my one last use case for using FTP is now properly dead. RIP FTP :)

31 May 2025

Russell Coker: Links May 2025

Christopher Biggs gave an informative Everything Open lecture about voice recognition [1]. We need this on Debian phones. Guido wrote an informative blog post about booting a custom Android kernel on a Pixel 3a [2]. Good work in writing this up, but a pity that Google made the process so difficult. Interesting to read about an expert being a victim of a phishing attack [3]. It can happen to anyone; everyone has moments when they aren't concentrating. Interesting advice on how to leak to a journalist [4]. Brian Krebs wrote an informative article about the ways that Trump is deliberately reducing the cyber security of the US government [5]. Brian Krebs wrote an interesting article about the main smishing groups from China [6]. Louis Rossmann (who is known for high quality YouTube videos about computer repair) made an informative video about a scammy Australian company run by a child sex offender [7]. The Helmover was one of the wildest engineering projects of WW2, an over-the-horizon guided torpedo that could one-shot a battleship [8]. Interesting blog post about DDoSecrets and the utter failure of the company Telemessages which was used by the US government [9]. Jonathan McDowell wrote an interesting blog post about developing a free software competitor to Alexa etc; the listening hardware costs $13US per node [10]. Noema Magazine published an insightful article about Rewilding the Internet, it has some great ideas [11].

28 May 2025

Jonathan Dowland: Linux Mount Namespaces

I've been refreshing myself on the low-level guts of Linux container technology. Here's some notes on mount namespaces. In the below examples, I will use more than one root shell simultaneously. To disambiguate them, the examples will feature a numbered shell prompt: 1# for the first shell, and 2# for the second.

Preliminaries

Namespaces are normally associated with processes and are removed when the last associated process terminates. To make them persistent, you have to bind-mount the corresponding virtual file from an associated process's entry in /proc, to another path¹. The receiving path needs to have its "propagation" property set to "private". Most likely your system's existing mounts are mostly "public". You can check the propagation setting for mounts with:
1# findmnt -o+PROPAGATION
We'll create a new directory to hold mount namespaces we create, and set its Propagation to private, via a bind-mount of itself to itself.
1# mkdir /root/mntns
1# mount --bind --make-private /root/mntns /root/mntns
The namespace itself needs to be bind-mounted over a file rather than a directory, so we'll create one.
1# touch /root/mntns/1
Creating and persisting a new mount namespace
1# unshare --mount=/root/mntns/1
We are now 'inside' the new namespace in a new shell process. We'll change the shell prompt to make this clearer:
PS1='inside# '
We can make a filesystem change, such as mounting a tmpfs:
inside# mount -t tmpfs /mnt /mnt
inside# touch /mnt/hi-there
And observe it is not visible outside that namespace:
2# findmnt /mnt
2# stat /mnt/hi-there
stat: cannot statx '/mnt/hi-there': No such file or directory
Back to the namespace shell, we can find an integer identifier for the namespace via the shell process's /proc entry:
inside# readlink /proc/$$/ns/mnt
It will be something like mnt:[4026533646]. From another shell, we can list namespaces and see that it exists:
2# lsns -t mnt
        NS TYPE NPROCS   PID USER             COMMAND
 
4026533646 mnt       1 52525 root             -bash
If we exit the shell that unshare created,
inside# exit
running lsns again should² still list the namespace, albeit with the NPROCS column now reading 0.
2# lsns -t mnt
We can see that a virtual filesystem of type nsfs is mounted at the path we selected when we ran unshare:
2# grep /root/mntns/1 /proc/mounts 
nsfs /root/mntns/1 nsfs rw 0 0
Entering the namespace from another process

This is relatively easy:
1# nsenter --mount=/root/mntns/1
1# stat /mnt/hi-there
  File: /mnt/hi-there
 
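To discard a persisted namespace once you're done with it (a sketch, assuming nothing else holds a reference): exit any shells inside it, then remove the pinning bind-mount; with no processes and no nsfs mount left, the kernel frees the namespace.
1# exit
1# umount /root/mntns/1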
More to come in future blog posts!

References

These were particularly useful in figuring this out:

  1. This feels really weird to me. At least at first. I suppose it fits with the "everything is a file" philosophy.
  2. I've found lsns in util-linux 2.38.1 (from 2022-08-04) doesn't list mount namespaces with no associated processes; but 2.41 (from 2025-03-18) does. The fix landed in 2022-11-08. For extra fun, I notice that a namespace can be held persistent with a file descriptor which is unlinked from the filesystem.

14 May 2025

Jonathan McDowell: Local Voice Assistant Step 3: A Detour into Tensorflow

To build our local voice satellite on a Debian system, rather than using the ATOM Echo device, we need something that can handle the wake word component; the piece that means we only send audio to the Home Assistant server for processing by whisper.cpp when we've detected someone is trying to talk to us. openWakeWord seems to be one of the better ways to do this, and is well supported. However, it relies on TensorFlow Lite (now LiteRT), which is a complicated mess of machine learning code. tflite-runtime is available from PyPI, but that's prebuilt, and we're trying to avoid that. Despite it looking, on initial impressions, quite complicated to deal with building TensorFlow (Bazel is an immediate warning), it turns out to be incredibly simple to build your own .deb:
$ wget -O tensorflow-v2.15.1.tar.gz https://github.com/tensorflow/tensorflow/archive/refs/tags/v2.15.1.tar.gz
 
$ tar -axf tensorflow-v2.15.1.tar.gz
$ cd tensorflow-2.15.1/
$ BUILD_NUM_JOBS=$(nproc) BUILD_DEB=y tensorflow/lite/tools/pip_package/build_pip_package_with_cmake.sh
 
$ find . -name '*.deb'
./tensorflow/lite/tools/pip_package/gen/tflite_pip/python3-tflite-runtime-dbgsym_2.15.1-1_amd64.deb
./tensorflow/lite/tools/pip_package/gen/tflite_pip/python3-tflite-runtime_2.15.1-1_amd64.deb
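The result then installs like any other local .deb (apt treats an argument with a ./ prefix as a local file and resolves its dependencies):
$ sudo apt install ./tensorflow/lite/tools/pip_package/gen/tflite_pip/python3-tflite-runtime_2.15.1-1_amd64.deb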
This is hiding an awful lot of complexity, however. In particular the number of 3rd party projects that are being downloaded in the background (and compiled, to be fair, rather than using binary artefacts). We can build the main C++ wrapper .so directly with cmake, allowing us to investigate a bit more:
mkdir tf-build
cd tf-build/
cmake \
    -DCMAKE_C_FLAGS="-I/usr/include/python3.11" \
    -DCMAKE_CXX_FLAGS="-I/usr/include/python3.11" \
    ../tensorflow-2.15.1/tensorflow/lite/
cmake --build . -t _pywrap_tensorflow_interpreter_wrapper
 
[100%] Built target _pywrap_tensorflow_interpreter_wrapper
$ ldd _pywrap_tensorflow_interpreter_wrapper.so
    linux-vdso.so.1 (0x00007ffec9588000)
    libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f22d00d0000)
    libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f22cf600000)
    libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f22d00b0000)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f22cf81f000)
    /lib64/ld-linux-x86-64.so.2 (0x00007f22d01d1000)
Looking at the output we can see that pthreadpool, FXdiv, FP16 + PSimd are all downloaded, and seem to have ways to point to a local copy. That seems positive. However, there are even more hidden dependencies, which we can see if we look in the _deps/ subdirectory of the build tree. These don't appear to be as easy to override, and not all of them have packages already in Debian. First, the ones that seem to be available: abseil-cpp, cpuinfo, eigen, farmhash, flatbuffers, gemmlowp, ruy + xnnpack (lots of credit to the Debian Deep Learning Team for these, and in particular Mo Zhou). Dependencies I couldn't see existing packages for are: OouraFFT, ml_dtypes & neon2sse.

At this point I just used the package I built with the initial steps above. I live in hope someone will eventually package this properly for Debian, or that I'll find the time to try and help out, but that's not going to be today. I wish upstream developers made it easier to use system copies of their library dependencies. I wish library developers made it easier to build and install system copies of their work. pkgconf is not new tech these days (pkg-config appears to date back to 2000), and has decent support in CMake. I get that there can be issues with incompatibilities even in minor releases, or awkwardness in doing builds of multiple connected projects, but at least give me the option to do so.
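As a general CMake aside (an assumption on my part: this only helps where a build uses the stock FetchContent module rather than its own download logic, and I haven't verified which of TensorFlow's dependencies that covers): CMake lets you substitute a local checkout for a FetchContent dependency without patching, via a FETCHCONTENT_SOURCE_DIR_<NAME> cache variable. The dependency name and path here are illustrative:
cmake \
    -DFETCHCONTENT_SOURCE_DIR_FARMHASH=/usr/src/farmhash \
    ../tensorflow-2.15.1/tensorflow/lite/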

Jonathan Dowland: Orbital

Orbital at NX, Newcastle in 2023
I'm on a bit of an Orbital kick at the moment. Last year they re-issued their 1991 debut album with 43 extra tracks. Later this month they're doing the same for their 1993 sophomore album. I thought I'd try to narrow down some tracks to recommend. I seem to have settled on roughly 5 in previous posts (for Underworld, The Cure, Coil and Gazelle Twin). This time I've done 6 (I borrowed one from Underworld). As always it's a hard choice. I've tried to select some tracks I really enjoy that don't often come up on best-of compilation albums. For a more conventional choice of best-of tracks, I recommend the recent-ish 30 something "compilation" (of sorts, previously written about).
  1. The Naked and the Dead (1992) From an early EP, Radiccio, which is being re-issued this month. Digital versions of the re-issue will feature a new recording, "Deepest", featuring Tilda Swinton. Sadly this isn't making it onto the pressed version. She performed with them live at Glastonbury 2024. That entire performance was a real pick-me-up during my convalescence, and is recommended. Anyway, I've now written more about a song I haven't recommended than the one I did.
  2. Remind (1993) From the Brown Album. I first heard this as the encore from their "final show", for John Peel, when they split up in 2004. "Remind" wasn't broadcast, but an audience recording was circulated on the fan site Loopz. Remarkably, 21 years on, it's still there. In writing this I discovered that it's a re-working of a remix Orbital did for Meat Beat Manifesto: MindStream (Mind The Bend The Mind).
  3. You Lot (2004) From the unfairly-maligned "final" Blue album. Featuring a sample of pre-Doctor Who Christopher Eccleston, from another Russell T Davies production, Second Coming.
  4. Beached (2000) Co-written by Angelo Badalamenti, it's built around a sample of Badalamenti's score for the movie "The Beach". Orbital's re-work adds some grit to the orchestral instrumentation and opens with a monologue, delivered by Leonardo DiCaprio, sampled from the movie.
  5. Spare Parts Express (1999) Critics had started to be quite unfair to Orbital by this point. The band themselves said that they'd run out of ideas (pointing at album closer "Style", built around a Stylophone melody, as proof). Their malaise continued right up to the Blue Album, at which point they split up; ostensibly for good, before regrouping 8 years later. Spare Parts Express is a hatchet job of various bits that they didn't develop into full songs on their own. Despite this I think it works. I love long-form electronica, and this clocks in at 10:07. My favourite segment (06:37) is adjacent to a reference (05:05) to John Baker's theme for the BBC children's program Newsround (sadly they aren't using it today; here's a rundown of Newsround themes over time).
  6. Attached (1994) This originally debuted on a Peel session before appearing on the subsequent album, Snivilisation, a few months later. An album closer, and a good come-down song to close this list.

7 May 2025

Jonathan Dowland: procmail versus exim filters

I've been using Procmail to filter mail for a long time. Reading Antoine's blog post procmail considered harmful, I felt motivated (and shamed) into migrating to something else. Luckily, Enrico's shared a detailed roadmap for moving to Sieve, in particular Dovecot's Sieve implementation (which provides "pipe" and "filter" extensions). My MTA is Exim, and for my first foray into this, I didn't want to change that¹. Exim provides two filtering languages for users: an implementation of Sieve, and its own filter language.

Requirements

A good first step is to look at what I'm using Procmail for:
  1. I invoke external mail filters: processes which read the mail and emit a possibly altered mail (headers added, etc.). In particular, crm114 (which has worked remarkably well for me) to classify mail as spam or not, and dsafilter, to mark up Debian Security Advisories
  2. I file messages into different folders depending on the outcome of the above filters
  3. I drop mail ("killfile") some sender addresses (persistent pests on mailing lists); and mails containing certain hosts in the References header (as an imperfect way of dropping mailing list threads which are replies to someone I've killfiled); and mail encoded in a character set for a language I can't read (Russian, Korean, etc.), and several other simple static rules
  4. I move mailing list mail into folders, semi-automatically (see list filtering)
  5. I strip "tagged" subjects for some mailing lists: i.e., incoming mail has subjects like "[cs-historic-committee] help moving several tons of IBM360", and I don't want the "[cs-historic-committee]" bit.
  6. I file a copy of some messages, the name of which is partly derived from the current calendar year
Exim Filters

I want to continue to do (1), which rules out Exim's implementation of Sieve, which does not support invoking external programs. Exim's own filter language has a pipe function that might do what I need, so let's look at how to achieve the above with Exim Filters.

autolists

Here's an autolist recipe for Debian's mailing lists, in Exim filter language. Contrast with the Procmail in list filtering:
if $header_list-id matches "(debian.*)\.lists\.debian\.org"
then
  save Maildir/l/$1/
  finish
endif
Hands down, the exim filter is nicer (although some of the rules on escape characters in exim filters, not demonstrated here, are byzantine).

killfile

An ideal chunk of configuration for kill-filing a list of addresses is light on boilerplate, and easy to add more addresses to in the future. This is the best I could come up with:
if foranyaddress "someone@example.org,\
                  another@example.net,\
                  especially-bad.example.com,\
                 "
   ($reply_address contains $thisaddress
    or $header_references contains $thisaddress)
then finish endif
I won't bother sharing the equivalent Procmail but it's pretty comparable: the exim filter is no great improvement. It would be lovely if the list of addresses could be stored elsewhere, such as a simple text file, one line per address, or even a database. Exim's own configuration language (distinct from this filter language) has some nice mechanisms for reading lists of things like addresses from files or databases. Sadly it seems the filter language lacks anything similar.

external filters

With Procmail, I pass the mail to an external program, and then read the output of that program back, as the new content of the mail, which continues to be filtered: subsequent filter rules inspect the headers to see what the outcome of the filter was (is it spam?) and to decide what to do accordingly. Crucially, we also check the return status of the filter, to handle the case when it fails. With Exim filters, we can use pipe to invoke an external program:
pipe "$home/mail/mailreaver.crm -u $home/mail/"
However, this is not a filter: the mail is sent to the external program, and the exim filter's job is complete. We can't write further filter rules to continue to process the mail: the external program would have to do that; and we have no way of handling errors. Here's Exim's documentation on what happens when the external command fails:
Most non-zero codes are treated by Exim as indicating a failure of the pipe. This is treated as a delivery failure, causing the message to be returned to its sender.
That is definitely not what I want: if the filter broke (even temporarily), Exim would seemingly generate a bounce to the sender address, which could be anything, and I wouldn't have a copy of the message. The documentation goes on to say that some shell return codes (defaulting to 73 and 75) cause Exim to treat it as a temporary error, spool the mail and retry later on. That's a much better behaviour for my use-case. Having said that, on the rare occasions I've broken the filter, the thing which made me notice most quickly was spam hitting my inbox, which my Procmail recipe achieves.

removing subject tagging

Here, Exim's filter language comes unstuck. There is no way to add or alter headers for a message in a user filter. Exim uses the same filter language for system-wide message filtering, and in that context, it has some extra functions: headers add <string>, headers remove <string>, but (for reasons I don't know) these are not available for user filters.

copy mail to archive folder

I can't see a way to derive a folder name from the calendar year (the Procmail approach I'm replacing is sketched below).

next steps

Exim's Sieve implementation and its filter language are ruled out as Procmail replacements because they can't do at least two of the things I need to do. However, based on Enrico's write-up, it looks like Dovecot's Sieve implementation probably can. I was also recommended maildrop, which I might look at if Dovecot Sieve doesn't pan out.
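As promised above, the calendar-year filing looks something like this in Procmail (a sketch: the folder name is illustrative, and it relies on procmailrc's backtick command substitution):
YEAR=`date +%Y`

:0 c:
archive/$YEAR/inbox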

  1. I should revisit this requirement because I could probably reconfigure exim to run my spam classifier at the system level, obviating the need to do it in a user filter, and also raising the opportunity to do smtp-time rejection based on the outcome.

2 May 2025

Jonathan Dowland: Korg Minilogue XD

I didn't buy the Arturia Microfreak or the Behringer Model-D; I bought a Korg Minilogue XD.
Korg Minilogue XD, and Zoom R8 Korg Minilogue XD, and Zoom R8
I wanted an all-in-one unit which meant a built-in keyboard. I was keen on analogue oscillators, partly for the sound, but mostly to ensure that most of the controls were immediately accessible. The Minilogue-XD has two analogue oscillators and an analogue filter. It also has some useful, pure digital stuff: post-effects (chorus, flanger, echo, etc.); and a third, digital oscillator. The digital oscillator is programmable. There's an SDK, shared between the Minilogue-XD and some other Korg synths (at least the Prologue and NTS-1). There's a cottage industry of independent musicians writing and selling digital patches, e.g. STRING User Oscillator. Here's an example of a drone programmed using the SDK for the NTS-1:
Eventually I expect to have fun exploring the SDK, but for now I'm keeping it firmly away from computers (hence the Zoom R8 multitrack recorder in the above image: more on that in a future blog post). The Korg has been gathering dust whilst I was writing up, but now I hope to find some time to play.

1 May 2025

Jonathan McDowell: Local Voice Assistant Step 2: Speech to Text and back

Having set up an ATOM Echo Voice Satellite and hooked it up to Home Assistant, we now need to actually do something with the captured audio. Home Assistant largely deals with voice assistants using the Wyoming Protocol, which describes itself as essentially JSONL + PCM audio. It works nicely in terms of meaning everything can exist as separate modules that then just communicate over network sockets, and there are a whole bunch of Python implementations of the pieces necessary.

The first bit I looked at was speech to text: how do I get what I say to the voice satellite into something that Home Assistant can try and parse? There is a nice self-contained speech recognition tool called whisper.cpp, which is a low-dependency implementation of inference using OpenAI's Whisper model. This is wrapped up for Wyoming as part of wyoming-whisper-cpp. Here we get into something that unfortunately seems common in this space: the repo contains a forked copy of whisper.cpp with enough differences that I couldn't trivially make it work with regular whisper.cpp. That means missing out on new development and potential improvements (the fork appears to be at v1.5.4; upstream is up to v1.7.5 at the time of writing). However, it was possible to get up and running easily enough. [I note there is a Wyoming Whisper API client that can use the whisper.cpp server, and that might be a cleaner way to go in the future, especially if whisper.cpp ends up in Debian.]

I stated previously I wanted all of this to be as cleanly installed on Debian stable as possible. Given most of this isn't packaged, that's meant I've packaged things up as I go. I'm not at the stage anything is suitable for upload to Debian proper, but equally I've tried to make them a reasonable starting point. No pre-built binaries available, just Salsa git repos. https://salsa.debian.org/noodles/wyoming-whisper-cpp in this case. You need python3-wyoming from trixie if you're building for bookworm, but it doesn't need rebuilt. You need a Whisper model that's been converted to ggml format; they can be found on Hugging Face. I've ended up using the base.en model. I found small.en gave more accurate results, but took a little longer, when doing random testing, but it doesn't seem to make much of a difference for voice control rather than plain transcribing. [One of the open questions about uploading this to Debian is around the use of a prebuilt AI model. I don't know what the right answer is here, and whether the voice infrastructure could ever be part of Debian proper, but the current discussion on the interpretation of the DFSG on AI models is very relevant.]

I run this in the same container as my Home Assistant install, using a systemd unit file dropped in /etc/systemd/system/wyoming-whisper-cpp.service:
[Unit]
Description=Wyoming whisper.cpp server
After=network.target
[Service]
Type=simple
DynamicUser=yes
ExecStart=wyoming-whisper-cpp --uri tcp://localhost:10030 --model base.en
MemoryDenyWriteExecute=false
ProtectControlGroups=true
PrivateDevices=false
ProtectKernelTunables=true
ProtectSystem=true
RestrictRealtime=true
RestrictNamespaces=true
[Install]
WantedBy=multi-user.target
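With the unit file in place, starting it and enabling it at boot is standard systemctl fare:
$ sudo systemctl daemon-reload
$ sudo systemctl enable --now wyoming-whisper-cpp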
It needs the Wyoming Protocol integration enabled in Home Assistant; you can Add Entry and enter localhost + 10030 for host + port and it'll get added. Then in the Voice Assistant configuration there'll be a whisper.cpp option available. Text to speech turns out to be weirdly harder. The right answer is something like Wyoming Piper, but that turns out to be hard on bookworm. I'll come back to that in a future post. For now I took the easy option and used the built-in Google Translate option in Home Assistant. That needed an extra stanza in configuration.yaml that wasn't entirely obvious:
media_source:
With this, and the ATOM voice satellite, I could now do basic voice control of my Home Assistant setup, with everything except the text-to-speech piece happening locally! Things such as "Hey Jarvis, turn on the study light" work out of the box. I haven't yet got into defining my own phrases, partly because I know some of the things I want ("What time is it?") are already added in later Home Assistant versions than the one I'm running. Overall I found this initially complicated to set up given my self-imposed constraints about actually understanding the building blocks and compiling them myself, but I've been pretty impressed with the work that's gone into it all. Next step: running a voice satellite on a Debian box.

30 April 2025

Utkarsh Gupta: FOSS Activites in April 2025

Here's my 67th monthly but brief update about the activities I've done in the F/L/OSS world.

Debian
This was my 76th month of actively contributing to Debian. I became a DM in late March 2019 and a DD on Christmas '19! \o/ There's a bunch of things I do, both technical and non-technical. Here's what I did:
  • Updating Matomo to v5.3.1.
  • Lots of bursary stuff for DC25. We rolled out the results for the first batch.
  • Helping Andreas Tille with and around FTP team bits.
  • Mentoring for newcomers.
  • Moderation of -project mailing list.

Ubuntu
This was my 51st month of actively contributing to Ubuntu. I joined Canonical to work on Ubuntu full-time back in February 2021. Whilst I can't give a full, detailed list of things I did (there's so much and some of it might not be public yet!), here's a quick TL;DR of what I did:
  • Released 25.04 Plucky Puffin! \o/
  • Helped open the 25.10 Questing Quokka archive. Let the development begin!
  • Jon, VP of Engineering, asked me to lead the Canonical Release team - that was definitely not something I saw coming. :)
  • We're now doing Ubuntu monthly releases for the devel releases - I'll be the tech lead for the project.
  • Preparing for the May sprints - too many new things and new responsibilities. :)

Debian (E)LTS
Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success. And Debian Extended LTS (ELTS) is its sister project, extending support to the stretch and jessie releases (+2 years after LTS support). This was my 67th month as a Debian LTS and 54th month as a Debian ELTS paid contributor.
Due to DC25 bursary work, Ubuntu 25.04 release, and other travel bits, I only worked for 2.00 hours for LTS and 4.50 hours for ELTS. I did the following things:
  • [ELTS] Had already backported patches for adminer for the following CVEs:
    • CVE-2023-45195: an SSRF attack.
    • CVE-2023-45196: a denial of service attack.
    • Salsa repository: https://salsa.debian.org/lts-team/packages/adminer.
    • As the same CVEs affect LTS, we decided to release for LTS first and then for ELTS, but since I had no hours for LTS, I decided to do a bit more testing for ELTS to make sure things don't regress in buster.
    • Will prepare LTS (and also s-p-u, sigh) updates this month and get back to ELTS thereafter.
  • [LTS] Started to prepare the LTS update for adminer for the same CVEs as for ELTS:
    • CVE-2023-45195: an SSRF attack.
    • CVE-2023-45196: a denial of service attack.
    • Haven't fully backported the patch yet, but this is what I intend to do for this month (now that I have hours :D).
  • [LTS] Partially attended the LTS meeting on Jitsi. Summary here.
    • Partially because I was fighting SSO auth issues with Jitsi. Looks like there were some upstream issues/activity and it was resulting in gateway crashes but all good now.
    • I was following the running notes and keeping up with things as much as I could. :)

Until next time.
:wq for today.

24 April 2025

Jonathan McDowell: Local Voice Assistant Step 1: An ATOM Echo voice satellite

Back when I set up my home automation I ended up with one piece that used an external service: Amazon Alexa. I'd rather not have done this, but voice control is extremely convenient, both for us and for guests. Since then Home Assistant has done a lot of work in developing the capability of a local voice assistant - 2023 was their Year of Voice. I've had brief looks at this in the past, but never quite had the time to dig into setting it up, and was put off by the fact a lot of the setup instructions were just "Download our prebuilt components". While I admire the efforts to get Home Assistant fully packaged for Debian, I accept that's a tricky proposition, and settle for running it in a venv on a Debian stable container. Voice requires a lot more binary components, and I want to have voice satellites in more than one location, so I set about trying to understand a bit better what I was deploying, and actually building the binary bits myself.

This is the start of a write-up of that. I'll break it into a bunch of posts, trying to cover one bit in each, because otherwise this will get massive. Let's start with some requirements. My house server is an AMD Ryzen 7 5700G, so my expectation was that I'd have enough local processing power to be able to do this. That turned out to be a valid assumption - speech to text really has come a long way in recent years. I'm still running Home Assistant 2024.3.3 - the last one that supports (but complains about) Python 3.11. Trixie has started the freeze process, so once it releases I'll look at updating the HA install. For now what I have has turned out to be Good Enough, but I know there have been improvements upstream I'm missing. Finally, before I get into the details, I should point out that if you just want to get started with a voice assistant on Home Assistant and don't care about what's under the hood, there are a bunch of more user-friendly details on Home Assistant's site itself, and they have pre-built images you can just deploy.

My first step was sorting out a "voice satellite". This is the device that actually has a microphone and speaker and communicates with the main Home Assistant setup. I'd seen the post about a $13 voice assistant, and as a result had an ATOM Echo sitting on my desk that I hadn't got around to setting up. Here, we skip a bit of the delving into exactly what's going on under the hood, even if we're compiling locally: this is a constrained embedded device, and while I'm familiar with the ESP32 IDF build system, I just accepted that using ESPHome and letting it do its thing was the quickest way to get up and running. It is possible to do this all via the web with a pre-built image, but I wanted to change the wake word to "Hey Jarvis" rather than the default "Okay Nabu", and that was a good reason to bother doing a local build. We'll get into actually building a voice satellite on Debian in later posts. I started with the default upstream assistant config and tweaked it a little for my setup:
diff of my configuration tweaks
$ diff -u m5stack-atom-echo.yaml assistant.yaml
--- m5stack-atom-echo.yaml    2025-04-18 13:41:21.812766112 +0100
+++ assistant.yaml  2025-01-20 17:33:24.918585244 +0000
@@ -1,7 +1,7 @@
 substitutions:
-  name: m5stack-atom-echo
+  name: study-atom-echo
   friendly_name: M5Stack Atom Echo
-  micro_wake_word_model: okay_nabu  # alexa, hey_jarvis, hey_mycroft are also supported
+  micro_wake_word_model: hey_jarvis  # alexa, hey_jarvis, hey_mycroft are also supported
 
 esphome:
   name: $ name 
@@ -16,15 +16,26 @@
     version: 4.4.8
     platform_version: 5.4.0
 
+# Enable logging
 logger:
+
+# Enable Home Assistant API
 api:
+  encryption:
+    key: "TGlrZVRoaXNJc1JlYWxseUl0Rm9vbGlzaFBlb3BsZSE="
 
 ota:
   - platform: esphome
-    id: ota_esphome
+    password: "itsnotarealthing"
 
 wifi:
+  ssid: "My Wifi Goes Here"
+  password: "AndThePasswordGoesHere"
+
+  # Enable fallback hotspot (captive portal) in case wifi connection fails
   ap:
+    ssid: "Study-Atom-Echo Fallback Hotspot"
+    password: "ThisIsRandom"
 
 captive_portal:

(I note that the current upstream config has moved on a bit since I first did this, but I double-checked the above instructions still work at the time of writing. I end up pinning ESPHome to the right version below due to that.) It turns out to be fairly easy to set up ESPHome in a venv and get it to build + flash the image for you:
Instructions for building + flashing ESPHome to ATOM Echo
noodles@sevai:~$ python3 -m venv esphome-atom-echo
noodles@sevai:~$ . esphome-atom-echo/bin/activate
(esphome-atom-echo) noodles@sevai:~$ cd esphome-atom-echo/
(esphome-atom-echo) noodles@sevai:~/esphome-atom-echo$  pip install esphome==2024.12.4
Collecting esphome==2024.12.4
  Using cached esphome-2024.12.4-py3-none-any.whl (4.1 MB)
 
Successfully installed FontTools-4.57.0 PyYAML-6.0.2 appdirs-1.4.4 attrs-25.3.0 bottle-0.13.2 defcon-0.12.1 esphome-2024.12.4 esphome-dashboard-20241217.1 freetype-py-2.5.1 fs-2.4.16 gflanguages-0.7.3 glyphsLib-6.10.1 glyphsets-1.0.0 openstep-plist-0.5.0 pillow-10.4.0 platformio-6.1.16 protobuf-3.20.3 puremagic-1.27 ufoLib2-0.17.1 unicodedata2-16.0.0
(esphome-atom-echo) noodles@sevai:~/esphome-atom-echo$ esphome compile assistant.yaml 
INFO ESPHome 2024.12.4
INFO Reading configuration assistant.yaml...
INFO Updating https://github.com/esphome/esphome.git@pull/5230/head
INFO Updating https://github.com/jesserockz/esphome-components.git@None
 
Linking .pioenvs/study-atom-echo/firmware.elf
/home/noodles/.platformio/packages/toolchain-xtensa-esp32@8.4.0+2021r2-patch5/bin/../lib/gcc/xtensa-esp32-elf/8.4.0/../../../../xtensa-esp32-elf/bin/ld: missing --end-group; added as last command line option
RAM:   [=         ]  10.6% (used 34632 bytes from 327680 bytes)
Flash: [========  ]  79.8% (used 1463813 bytes from 1835008 bytes)
Building .pioenvs/study-atom-echo/firmware.bin
Creating esp32 image...
Successfully created esp32 image.
esp32_create_combined_bin([".pioenvs/study-atom-echo/firmware.bin"], [".pioenvs/study-atom-echo/firmware.elf"])
Wrote 0x176fb0 bytes to file /home/noodles/esphome-atom-echo/.esphome/build/study-atom-echo/.pioenvs/study-atom-echo/firmware.factory.bin, ready to flash to offset 0x0
esp32_copy_ota_bin([".pioenvs/study-atom-echo/firmware.bin"], [".pioenvs/study-atom-echo/firmware.elf"])
==================================================================================== [SUCCESS] Took 130.57 seconds ====================================================================================
INFO Successfully compiled program.
(esphome-atom-echo) noodles@sevai:~/esphome-atom-echo$ esphome upload --device /dev/serial/by-id/usb-Hades2001_M5stack_9552AF8367-if00-port0 assistant.yaml 
INFO ESPHome 2024.12.4
INFO Reading configuration assistant.yaml...
INFO Updating https://github.com/esphome/esphome.git@pull/5230/head
INFO Updating https://github.com/jesserockz/esphome-components.git@None
 
INFO Upload with baud rate 460800 failed. Trying again with baud rate 115200.
esptool.py v4.7.0
Serial port /dev/serial/by-id/usb-Hades2001_M5stack_9552AF8367-if00-port0
Connecting....
Chip is ESP32-PICO-D4 (revision v1.1)
Features: WiFi, BT, Dual Core, 240MHz, Embedded Flash, VRef calibration in efuse, Coding Scheme None
Crystal is 40MHz
MAC: 64:b7:08:8a:1b:c0
Uploading stub...
Running stub...
Stub running...
Configuring flash size...
Auto-detected Flash size: 4MB
Flash will be erased from 0x00010000 to 0x00176fff...
Flash will be erased from 0x00001000 to 0x00007fff...
Flash will be erased from 0x00008000 to 0x00008fff...
Flash will be erased from 0x00009000 to 0x0000afff...
Compressed 1470384 bytes to 914252...
Wrote 1470384 bytes (914252 compressed) at 0x00010000 in 82.0 seconds (effective 143.5 kbit/s)...
Hash of data verified.
Compressed 25632 bytes to 16088...
Wrote 25632 bytes (16088 compressed) at 0x00001000 in 1.8 seconds (effective 113.1 kbit/s)...
Hash of data verified.
Compressed 3072 bytes to 134...
Wrote 3072 bytes (134 compressed) at 0x00008000 in 0.1 seconds (effective 383.7 kbit/s)...
Hash of data verified.
Compressed 8192 bytes to 31...
Wrote 8192 bytes (31 compressed) at 0x00009000 in 0.1 seconds (effective 813.5 kbit/s)...
Hash of data verified.
Leaving...
Hard resetting via RTS pin...
INFO Successfully uploaded program.

And then you can watch it boot (this is mine already configured up in Home Assistant):
Watching the ATOM Echo boot
$ picocom --quiet --imap lfcrlf --baud 115200 /dev/serial/by-id/usb-Hades2001_M5stack_9552AF8367-if00-port0
I (29) boot: ESP-IDF 4.4.8 2nd stage bootloader
I (29) boot: compile time 17:31:08
I (29) boot: Multicore bootloader
I (32) boot: chip revision: v1.1
I (36) boot.esp32: SPI Speed      : 40MHz
I (40) boot.esp32: SPI Mode       : DIO
I (45) boot.esp32: SPI Flash Size : 4MB
I (49) boot: Enabling RNG early entropy source...
I (55) boot: Partition Table:
I (58) boot: ## Label            Usage          Type ST Offset   Length
I (66) boot:  0 otadata          OTA data         01 00 00009000 00002000
I (73) boot:  1 phy_init         RF data          01 01 0000b000 00001000
I (81) boot:  2 app0             OTA app          00 10 00010000 001c0000
I (88) boot:  3 app1             OTA app          00 11 001d0000 001c0000
I (96) boot:  4 nvs              WiFi data        01 02 00390000 0006d000
I (103) boot: End of partition table
I (107) esp_image: segment 0: paddr=00010020 vaddr=3f400020 size=58974h (362868) map
I (247) esp_image: segment 1: paddr=0006899c vaddr=3ffb0000 size=03400h ( 13312) load
I (253) esp_image: segment 2: paddr=0006bda4 vaddr=40080000 size=04274h ( 17012) load
I (260) esp_image: segment 3: paddr=00070020 vaddr=400d0020 size=f5cb8h (1006776) map
I (626) esp_image: segment 4: paddr=00165ce0 vaddr=40084274 size=112ach ( 70316) load
I (665) boot: Loaded app from partition at offset 0x10000
I (665) boot: Disabling RNG early entropy source...
I (677) cpu_start: Multicore app
I (677) cpu_start: Pro cpu up.
I (677) cpu_start: Starting app cpu, entry point is 0x400825c8
I (0) cpu_start: App cpu up.
I (695) cpu_start: Pro cpu start user code
I (695) cpu_start: cpu freq: 160000000
I (695) cpu_start: Application information:
I (700) cpu_start: Project name:     study-atom-echo
I (705) cpu_start: App version:      2024.12.4
I (710) cpu_start: Compile time:     Apr 18 2025 17:29:39
I (716) cpu_start: ELF file SHA256:  1db4989a56c6c930...
I (722) cpu_start: ESP-IDF:          4.4.8
I (727) cpu_start: Min chip rev:     v0.0
I (732) cpu_start: Max chip rev:     v3.99 
I (737) cpu_start: Chip rev:         v1.1
I (742) heap_init: Initializing. RAM available for dynamic allocation:
I (749) heap_init: At 3FFAE6E0 len 00001920 (6 KiB): DRAM
I (755) heap_init: At 3FFB8748 len 000278B8 (158 KiB): DRAM
I (761) heap_init: At 3FFE0440 len 00003AE0 (14 KiB): D/IRAM
I (767) heap_init: At 3FFE4350 len 0001BCB0 (111 KiB): D/IRAM
I (774) heap_init: At 40095520 len 0000AAE0 (42 KiB): IRAM
I (781) spi_flash: detected chip: gd
I (784) spi_flash: flash io: dio
I (790) cpu_start: Starting scheduler on PRO CPU.
I (0) cpu_start: Starting scheduler on APP CPU.
[I][logger:171]: Log initialized
[C][safe_mode:079]: There have been 0 suspected unsuccessful boot attempts
[D][esp32.preferences:114]: Saving 1 preferences to flash...
[D][esp32.preferences:143]: Saving 1 preferences to flash: 0 cached, 1 written, 0 failed
[I][app:029]: Running through setup()...
[C][esp32_rmt_led_strip:021]: Setting up ESP32 LED Strip...
[D][template.select:014]: Setting up Template Select
[D][template.select:023]: State from initial (could not load stored index): On device
[D][select:015]: 'Wake word engine location': Sending state On device (index 1)
[D][esp-idf:000]: I (100) gpio: GPIO[39]  InputEn: 1  OutputEn: 0  OpenDrain: 0  Pullup: 0  Pulldown: 0  Intr:0 
[D][binary_sensor:034]: 'Button': Sending initial state OFF
[C][light:021]: Setting up light 'M5Stack Atom Echo 8a1bc0'...
[D][light:036]: 'M5Stack Atom Echo 8a1bc0' Setting:
[D][light:041]:   Color mode: RGB
[D][template.switch:046]:   Restored state ON
[D][switch:012]: 'Use listen light' Turning ON.
[D][switch:055]: 'Use listen light': Sending state ON
[D][light:036]: 'M5Stack Atom Echo 8a1bc0' Setting:
[D][light:047]:   State: ON
[D][light:051]:   Brightness: 60%
[D][light:059]:   Red: 100%, Green: 89%, Blue: 71%
[D][template.switch:046]:   Restored state OFF
[D][switch:016]: 'timer_ringing' Turning OFF.
[D][switch:055]: 'timer_ringing': Sending state OFF
[C][i2s_audio:028]: Setting up I2S Audio...
[C][i2s_audio.microphone:018]: Setting up I2S Audio Microphone...
[C][i2s_audio.speaker:096]: Setting up I2S Audio Speaker...
[C][wifi:048]: Setting up WiFi...
[D][esp-idf:000]: I (206) wifi:
[D][esp-idf:000]: wifi driver task: 3ffc8544, prio:23, stack:6656, core=0
[D][esp-idf:000]: 
[D][esp-idf:000][wifi]: I (1238) system_api: Base MAC address is not set
[D][esp-idf:000][wifi]: I (1239) system_api: read default base MAC address from EFUSE
[D][esp-idf:000][wifi]: I (1274) wifi:
[D][esp-idf:000][wifi]: wifi firmware version: ff661c3
[D][esp-idf:000][wifi]: 
[D][esp-idf:000][wifi]: I (1274) wifi:
[D][esp-idf:000][wifi]: wifi certification version: v7.0
[D][esp-idf:000][wifi]: 
[D][esp-idf:000][wifi]: I (1286) wifi:
[D][esp-idf:000][wifi]: config NVS flash: enabled
[D][esp-idf:000][wifi]: 
[D][esp-idf:000][wifi]: I (1297) wifi:
[D][esp-idf:000][wifi]: config nano formating: disabled
[D][esp-idf:000][wifi]: 
[D][esp-idf:000][wifi]: I (1317) wifi:
[D][esp-idf:000][wifi]: Init data frame dynamic rx buffer num: 32
[D][esp-idf:000][wifi]: 
[D][esp-idf:000][wifi]: I (1338) wifi:
[D][esp-idf:000][wifi]: Init static rx mgmt buffer num: 5
[D][esp-idf:000][wifi]: 
[D][esp-idf:000][wifi]: I (1348) wifi:
[D][esp-idf:000][wifi]: Init management short buffer num: 32
[D][esp-idf:000][wifi]: 
[D][esp-idf:000][wifi]: I (1368) wifi:
[D][esp-idf:000][wifi]: Init dynamic tx buffer num: 32
[D][esp-idf:000][wifi]: 
[D][esp-idf:000][wifi]: I (1389) wifi:
[D][esp-idf:000][wifi]: Init static rx buffer size: 1600
[D][esp-idf:000][wifi]: 
[D][esp-idf:000][wifi]: I (1399) wifi:
[D][esp-idf:000][wifi]: Init static rx buffer num: 10
[D][esp-idf:000][wifi]: 
[D][esp-idf:000][wifi]: I (1419) wifi:
[D][esp-idf:000][wifi]: Init dynamic rx buffer num: 32
[D][esp-idf:000][wifi]: 
[D][esp-idf:000]: I (1441) wifi_init: rx ba win: 6
[D][esp-idf:000]: I (1441) wifi_init: tcpip mbox: 32
[D][esp-idf:000]: I (1450) wifi_init: udp mbox: 6
[D][esp-idf:000]: I (1450) wifi_init: tcp mbox: 6
[D][esp-idf:000]: I (1460) wifi_init: tcp tx win: 5760
[D][esp-idf:000]: I (1471) wifi_init: tcp rx win: 5760
[D][esp-idf:000]: I (1481) wifi_init: tcp mss: 1440
[D][esp-idf:000]: I (1481) wifi_init: WiFi IRAM OP enabled
[D][esp-idf:000]: I (1491) wifi_init: WiFi RX IRAM OP enabled
[C][wifi:061]: Starting WiFi...
[C][wifi:062]:   Local MAC: 64:B7:08:8A:1B:C0
[D][esp-idf:000][wifi]: I (1513) phy_init: phy_version 4791,2c4672b,Dec 20 2023,16:06:06
[D][esp-idf:000][wifi]: I (1599) wifi:
[D][esp-idf:000][wifi]: mode : sta (64:b7:08:8a:1b:c0)
[D][esp-idf:000][wifi]: 
[D][esp-idf:000][wifi]: I (1600) wifi:
[D][esp-idf:000][wifi]: enable tsf
[D][esp-idf:000][wifi]: 
[D][esp-idf:000][wifi]: I (1605) wifi:
[D][esp-idf:000][wifi]: Set ps type: 1
[D][esp-idf:000][wifi]: 
[D][wifi:482]: Starting scan...
[D][esp32.preferences:114]: Saving 1 preferences to flash...
[D][esp32.preferences:143]: Saving 1 preferences to flash: 1 cached, 0 written, 0 failed
[W][micro_wake_word:151]: Wake word detection can't start as the component hasn't been setup yet
[D][esp-idf:000][wifi]: I (1646) wifi:
[D][esp-idf:000][wifi]: Set ps type: 1
[D][esp-idf:000][wifi]: 
[W][component:157]: Component wifi set Warning flag: scanning for networks
 
[I][wifi:617]: WiFi Connected!
 
[D][wifi:626]: Disabling AP...
[C][api:026]: Setting up Home Assistant API server...
[C][micro_wake_word:062]: Setting up microWakeWord...
[C][micro_wake_word:069]: Micro Wake Word initialized
[I][app:062]: setup() finished successfully!
[W][component:170]: Component wifi cleared Warning flag
[W][component:157]: Component api set Warning flag: unspecified
[I][app:100]: ESPHome version 2024.12.4 compiled on Apr 18 2025, 17:29:39
 
[C][logger:185]: Logger:
[C][logger:186]:   Level: DEBUG
[C][logger:188]:   Log Baud Rate: 115200
[C][logger:189]:   Hardware UART: UART0
[C][esp32_rmt_led_strip:187]: ESP32 RMT LED Strip:
[C][esp32_rmt_led_strip:188]:   Pin: 27
[C][esp32_rmt_led_strip:189]:   Channel: 0
[C][esp32_rmt_led_strip:214]:   RGB Order: GRB
[C][esp32_rmt_led_strip:215]:   Max refresh rate: 0
[C][esp32_rmt_led_strip:216]:   Number of LEDs: 1
[C][template.select:065]: Template Select 'Wake word engine location'
[C][template.select:066]:   Update Interval: 60.0s
[C][template.select:069]:   Optimistic: YES
[C][template.select:070]:   Initial Option: On device
[C][template.select:071]:   Restore Value: YES
[C][gpio.binary_sensor:015]: GPIO Binary Sensor 'Button'
[C][gpio.binary_sensor:016]:   Pin: GPIO39
[C][light:092]: Light 'M5Stack Atom Echo 8a1bc0'
[C][light:094]:   Default Transition Length: 0.0s
[C][light:095]:   Gamma Correct: 2.80
[C][template.switch:068]: Template Switch 'Use listen light'
[C][template.switch:091]:   Restore Mode: restore defaults to ON
[C][template.switch:057]:   Optimistic: YES
[C][template.switch:068]: Template Switch 'timer_ringing'
[C][template.switch:091]:   Restore Mode: always OFF
[C][template.switch:057]:   Optimistic: YES
[C][factory_reset.button:011]: Factory Reset Button 'Factory reset'
[C][factory_reset.button:011]:   Icon: 'mdi:restart-alert'
[C][captive_portal:089]: Captive Portal:
[C][mdns:116]: mDNS:
[C][mdns:117]:   Hostname: study-atom-echo-8a1bc0
[C][esphome.ota:073]: Over-The-Air updates:
[C][esphome.ota:074]:   Address: study-atom-echo.local:3232
[C][esphome.ota:075]:   Version: 2
[C][esphome.ota:078]:   Password configured
[C][safe_mode:018]: Safe Mode:
[C][safe_mode:020]:   Boot considered successful after 60 seconds
[C][safe_mode:021]:   Invoke after 10 boot attempts
[C][safe_mode:023]:   Remain in safe mode for 300 seconds
[C][api:140]: API Server:
[C][api:141]:   Address: study-atom-echo.local:6053
[C][api:143]:   Using noise encryption: YES
[C][micro_wake_word:051]: microWakeWord:
[C][micro_wake_word:052]:   models:
[C][micro_wake_word:015]:     - Wake Word: Hey Jarvis
[C][micro_wake_word:016]:       Probability cutoff: 0.970
[C][micro_wake_word:017]:       Sliding window size: 5
[C][micro_wake_word:021]:     - VAD Model
[C][micro_wake_word:022]:       Probability cutoff: 0.500
[C][micro_wake_word:023]:       Sliding window size: 5
[D][api:103]: Accepted 192.168.39.6
[W][component:170]: Component api cleared Warning flag
[W][component:237]: Component api took a long time for an operation (58 ms).
[W][component:238]: Components should block for at most 30 ms.
[D][api.connection:1446]: Home Assistant 2024.3.3 (192.168.39.6): Connected successfully
[D][ring_buffer:034]: Created ring buffer with size 2048
[D][micro_wake_word:399]: Resetting buffers and probabilities
[D][micro_wake_word:195]: State changed from IDLE to START_MICROPHONE
[D][micro_wake_word:107]: Starting Microphone
[D][micro_wake_word:195]: State changed from START_MICROPHONE to STARTING_MICROPHONE
[D][esp-idf:000]: I (11279) I2S: DMA Malloc info, datalen=blocksize=1024, dma_buf_count=4
[D][micro_wake_word:195]: State changed from STARTING_MICROPHONE to DETECTING_WAKE_WORD

That's enough to get a voice satellite that can be configured up in Home Assistant; you'll need the ESPHome Integration added, then for the noise_psk key you use the same string as I have under api/encryption/key in my diff above (obviously do your own; I used dd if=/dev/urandom bs=32 count=1 | base64 to generate mine). If you're like me and a compulsive VLANer and firewaller even within your own network, then you need to allow Home Assistant to connect on TCP port 6053 to the ATOM Echo, and also allow access to/from UDP port 6055 on the Echo (it'll send audio from that port to Home Assistant, then receive audio back on the same port). At this point you can shout "Hey Jarvis, what time is it?" at the Echo, and the white light will turn flashing blue (indicating it's heard the wake word). Which means we're ready to teach Home Assistant how to do something with the incoming audio.
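As a concrete sketch of those two steps (the address 192.168.39.50 and the iptables rule placement below are placeholders of my own for illustration, not from my actual setup):
$ # generate a random 32-byte, base64-encoded noise PSK for api/encryption/key
$ dd if=/dev/urandom bs=32 count=1 2>/dev/null | base64
$ # hypothetical firewall rules on the inter-VLAN router; the Echo assumed at 192.168.39.50
$ iptables -A FORWARD -p tcp -d 192.168.39.50 --dport 6053 -j ACCEPT   # HA -> Echo, ESPHome API
$ iptables -A FORWARD -p udp -d 192.168.39.50 --dport 6055 -j ACCEPT   # audio towards the Echo
$ iptables -A FORWARD -p udp -s 192.168.39.50 --sport 6055 -j ACCEPT   # audio from the Echo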

17 April 2025

Jonathan Dowland: Hledger UI themes

Last year I intended to write an update on my use of hledger, but that was waylaid for various reasons, and I need to revisit how (if) I'm using it, so that's put off for longer. I do want to mention one contribution I made upstream: a dark theme for the UI, and some unfinished work on consistent colours. Consistent terminal colours are an interesting issue: the most common terminal colour modes (8 and 256) use indexing into a palette, but the definition of the colours is ambiguous: the 8-colour palette is formally specified by ANSI as names (red, green, etc.); the 256-colour palette is effectively defined by xterm (a useful chart) but I'm not sure all terminal emulators that support it have chosen the same colour values. A consequence of indexed colour is that the end-user may redefine what the colour values are. Whether this is a good thing or a bad thing depends on your point of view. As an end-user, it's attractive to be able to tune the colour scheme; but as a software author, it means you have no real idea what your users are going to see, and matters like ensuring contrast are impossible. Some terminals support 24-bit "true" colour, in which the colours are specified as an RGB triplet. Using these means the software author can be reasonably sure all users will see the same thing (for a fungible definition of "same"), at the cost of user configurability. However, since it's less well supported, we start having to worry about fallback behaviour. In the case of hledger-ui, which provides several colour schemes, that's probably OK, because the user configurability is achieved by choosing one of the schemes (or writing your own, in extremis). However, the dark theme I contributed uses the 8-colour palette, in common with the other themes, and my explorations into using predictable colours are unfinished.
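To make the three modes concrete, here is a quick sketch using the standard SGR escape sequences (whether the last one renders correctly depends on your terminal emulator):
$ # 8-colour mode: index 1 is named "red"; the terminal decides the actual value
$ printf '\033[31mindexed red\033[0m\n'
$ # 256-colour mode: index 208 is orange in the xterm palette; other emulators may differ
$ printf '\033[38;5;208mpalette 208\033[0m\n'
$ # 24-bit "true" colour: an exact RGB triplet, bypassing the palette entirely
$ printf '\033[38;2;255;135;0mrgb(255,135,0)\033[0m\n'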

15 April 2025

Jonathan Dowland: submitted

Today I submitted my PhD thesis, 8 years since I started (give or take). Next step: Viva. Normal service may resume shortly.

25 March 2025

Dirk Eddelbuettel: littler 0.3.21 on CRAN: Lots Moar Features!

The twenty-second release of littler as a CRAN package landed on CRAN just now, following in the now nineteen-year history (!!) as an (initially non-CRAN) package started by Jeff in 2006, and joined by me a few weeks later. littler is the first command-line interface for R as it predates Rscript. It allows for piping as well as for shebang scripting via #!, uses command-line arguments more consistently, and still starts faster. It also always loaded the methods package, which Rscript only began to do in recent years. littler lives on Linux and Unix, has its difficulties on macOS due to some braindeadedness there (who ever thought case-insensitive filesystems as a default were a good idea?) and simply does not exist on Windows (yet the build system could be extended; see RInside for an existence proof, and volunteers are welcome!). See the FAQ vignette on how to add it to your PATH. A few examples are highlighted at the GitHub repo, as well as in the examples vignette. This release, the first in almost exactly one year, brings enhancements to six scripts as well as three new ones. Of the new ones, crup.r offers CRan UPloads from the command-line, deadliners.r lists CRAN packages by CRAN deadline, and wb.r uploads to win-builder (replacing an older shell script of mine). Among the updated ones, kitten.r now creates more complete DESCRIPTION files in the packages it makes, and several scripts support additional options. A number of changes were made to packaging as well, some of which were contributed by Jon and Michael, which is of course always greatly appreciated. The trigger for the release was, just like for RQuantLib earlier today, a CRAN nag on bashisms, half of which were actually false; here it was in a comment only. Oh well. The full change description follows.

Changes in littler version 0.3.21 (2025-03-24)
  • Changes in examples scripts
    • Usage text for ciw.r is improved, new options were added (Dirk)
    • The noble release is supported by r2u.r (Dirk)
    • The installRub.r script has additional options (Dirk)
    • The ttlt.r script has a new load_package argument (Dirk)
    • A new script deadliners.r showing CRAN packages 'under deadline' has been added, and then refined (Dirk)
    • The kitten.r script can now use whoami and argument githubuser on the different *kitten helpers it calls (Dirk)
    • A new script wb.r can upload to win-builder (Dirk)
    • A new script crup.r can upload a CRAN submission (Dirk)
    • In rcc.r, the return from rcmdcheck is now explicitly printed (Dirk)
    • In r2u.r the dry-run option is passed to the build command (Dirk)
  • Changes in package
    • Regular updates to badges, continuous integration, DESCRIPTION and configure.ac (Dirk)
    • Errant osVersion return values are handled more robustly (Michael Chirico in #121)
    • The current run-time path is available via variable LITTLER_SCRIPT_PATH (Jon Clayden in #122)
    • The cleanup script removes macOS debug symbols (Jon Clayden in #123)

My CRANberries service provides a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page, and also on the package docs website. The code is available via the GitHub repo, from tarballs and now of course also from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian as well as (in a day or two) Ubuntu binaries at CRAN thanks to the tireless Michael Rutter. Comments and suggestions are welcome at the GitHub repo.
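To make the piping and shebang points concrete, here is a small sketch; the sum.r script and its contents are my own illustration, not something shipped with the release:
$ # piping: littler's r binary reads a program from stdin
$ echo 'cat(2 + 2, "\n")' | r
4
$ # shebang scripting: littler exposes trailing command-line arguments as argv
$ cat sum.r
#!/usr/bin/env r
cat(sum(as.numeric(argv)), "\n")
$ chmod +x sum.r && ./sum.r 1 2 3
6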

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

24 March 2025

Jonathan McDowell: Who pays the cost of progress in software?

I am told, by friends who have spent time at Google, about the reason Google Reader finally disappeared. Apparently it had become a 20% Project for those who still cared about it internally, and there was some major change happening to one of its upstream dependencies that was either going to cause a significant amount of work rearchitecting Reader to cope, or create additional ongoing maintenance burden. It was no longer viable to support it as a side project, so it had to go. This was a consequence of an internal culture at Google where service owners are able to make changes that can break downstream users, and the downstream users are the ones who have to adapt. My experience at Meta goes the other way. If you own a service or other dependency and you want to make a change that will break things for the users, it's on you to do the migration, or at the very least provide significant assistance to those who own the code. You don't just get to drop your new release and expect others to clean up; doing that tends to lead to changes being reverted. The culture flows the other way; if you break it, you fix it (nothing is someone else's problem). There are pluses and minuses to both approaches. Users having to drive the changes to things they own stops them from blocking progress. Service/code owners having to drive the changes avoids the situation where a widely used component drops a new release that causes a lot of high-priority work for folk in order to adapt. I started thinking about this in the context of Debian a while back, and a few incidents since have resulted in my feeling that we're closer to the Google model than the Meta model. Anyone can upload a new version of their package to unstable, and that might end up breaking all the users of it. It's not quite as extreme as rolling out a new service, because it's unstable that gets affected (the clue is in the name, I really wish more people would realise that), but it can still result in release critical bugs for lots of other Debian contributors. A good example of this is toolchain changes. Major updates to GCC and friends regularly result in FTBFS issues in lots of packages. Now in this instance the maintainer is usually diligent about a heads-up before the default changes, but it's still a whole bunch of work for other maintainers to adapt (see the list of FTBFS bugs for GCC 15 for instance - these are important, but not serious yet). Worse is when a dependency changes and hasn't managed to catch everyone who might be affected, so by the time it's discovered it's release critical, because at least one package no longer builds in unstable. Commercial organisations try to avoid this with a decent CI/CD setup that either vendors all dependencies, or tracks changes to them and tries rebuilds before allowing things to land. This is one of the instances where a monorepo can really shine; if everything you need is in there, it's easier to track the interconnections between different components. Debian doesn't have a CI/CD system that runs for every upload, allowing us to track exact causes of regressions. Instead we have Lucas, who does a tremendous job of running archive-wide rebuilds to make sure we can still build everything. Unfortunately that means I am often unfairly grumpy at him; my heart sinks when I see a bug come in with his name attached, because it often means one of my packages has a new RC bug where I'm going to have to figure out what changed elsewhere to cause it. However he's just (very usefully) surfacing an issue someone else created, rather than actually being the cause of the problem. I don't know if I have a point to this post. I think it's probably that I wish folk in Free Software would try and be mindful of the incompatible changes they might be introducing, and the toil they create for other volunteer developers, often not directly visible to the person making the change. The approach taken by the Debian toolchain maintainers strikes me as a good balance; they do a bunch of work up front to try and flag all the places that might need to make changes, far enough in advance of the breaking change actually landing. However they don't then allow a tardy developer to block progress.

8 March 2025

Debian Brasil: MiniDebConf Belo Horizonte 2024 - a brief report

From April 27th to 30th, 2024, MiniDebConf Belo Horizonte 2024 was held at the Pampulha Campus of UFMG - Federal University of Minas Gerais, in Belo Horizonte city. This was the fifth time that a MiniDebConf (as an exclusive in-person event about Debian) took place in Brazil. Previous editions were in Curitiba (2016, 2017, and 2018), and in Brasília in 2023. We had other MiniDebConf editions held within Free Software events such as FISL and Latinoware, and other online events. See our event history. Parallel to MiniDebConf, on the 27th (Saturday) FLISOL - Latin American Free Software Installation Festival took place. It's the largest event in Latin America to promote Free Software, and it has been held since 2005 simultaneously in several cities. MiniDebConf Belo Horizonte 2024 was a success (as were previous editions) thanks to the participation of everyone, regardless of their level of knowledge about Debian. We value the presence of both beginner users who are familiarizing themselves with the system and the official project developers. The spirit of welcome and collaboration was present during the whole event. During the four days of the event, several activities took place for all levels of users and collaborators of the Debian project. The final numbers for MiniDebConf Belo Horizonte 2024 show that we had a record number of participants. Of the 224 participants, 15 were official Brazilian contributors, 10 being DDs (Debian Developers) and 5 DMs (Debian Maintainers), in addition to several unofficial contributors. The organization was carried out by 14 people who started working at the end of 2023, including Prof. Loïc Cerf from the Computing Department who made the event possible at UFMG, and 37 volunteers who helped during the event. As MiniDebConf was held at UFMG facilities, we had the help of more than 10 University employees. See the list with the names of people who helped in some way in organizing MiniDebConf Belo Horizonte 2024. The difference between the number of people registered and the number of attendees at the event is probably explained by the fact that there is no registration fee, so if a person decides not to go to the event, they will not suffer financial losses. The 2024 edition of MiniDebConf Belo Horizonte was truly grand and shows the result of the constant efforts made over the last few years to attract more contributors to the Debian community in Brazil. With each edition the numbers only increase, with more attendees, more activities, more rooms, and more sponsors/supporters.

Activities: The MiniDebConf schedule was intense and diverse. On the 27th, 29th and 30th (Saturday, Monday and Tuesday) we had talks, discussions, workshops and many practical activities. On the 28th (Sunday), the Day Trip took place, a day dedicated to sightseeing around the city. In the morning we left the hotel and went, on a chartered bus, to the Belo Horizonte Central Market. People took the opportunity to buy various things such as cheeses, sweets, cachaças and souvenirs, as well as tasting some local foods. After a 2-hour tour of the Market, we got back on the bus and hit the road for lunch at a typical Minas Gerais food restaurant. With everyone well fed, we returned to Belo Horizonte to visit the city's main tourist attraction: Lagoa da Pampulha and Capela São Francisco de Assis, better known as Igrejinha da Pampulha. We went back to the hotel and the day ended in the hacker space that we set up in the events room for people to chat, package, and eat pizzas. Crowdfunding: For the third time we ran a crowdfunding campaign, and it was incredible how people contributed! The initial goal was to raise the amount equivalent to a gold tier of R$ 3,000.00. When we reached this goal, we defined a new one, equivalent to one gold tier + one silver tier (R$ 5,000.00). And again we achieved this goal. So we proposed as a final goal the value of gold + silver + bronze tiers, which would be equivalent to R$ 6,000.00. The result was that we raised R$ 7,239.65 (~ USD 1,400) with the help of more than 100 people! Thank you very much to the people who contributed any amount. As a thank you, we list the names of the people who donated. Food, accommodation and/or travel grants for participants: Each edition of MiniDebConf brought some innovation, or some different benefit for the attendees. In this year's edition in Belo Horizonte, as with DebConfs, we offered bursaries for food, accommodation and/or travel to help those people who would like to come to the event but who would need some kind of help. In the registration form, we included the option for the person to request a food, accommodation and/or travel bursary, but to do so, they would have to identify themselves as a contributor (official or unofficial) to Debian and write a justification for the request. The food bursary provided lunch and dinner every day. The lunches included attendees who live in Belo Horizonte and the region. Dinners were paid for attendees who also received accommodation and/or travel. The accommodation was at the BH Jaraguá Hotel. And the travel grants included airplane or bus tickets, or fuel (for those who came by car or motorbike). Much of the money to fund the bursaries came from the Debian Project, mainly for travel. We sent a budget request to the former Debian leader Jonathan Carter, and he promptly approved our request. In addition to this event budget, the leader also approved individual requests sent by some DDs who preferred to request directly from him. The experience of offering the bursaries was really good because it allowed several people to come from other cities.
Photos and videos: You can watch recordings of the talks at the links below. Thanks: We would like to thank all the attendees, organizers, volunteers, sponsors and supporters who contributed to the success of MiniDebConf Belo Horizonte 2024.

2 March 2025

Jonathan McDowell: RIP: Steve Langasek

[I'd like to stop writing posts like this. I've been trying to work out what to say now for nearly 2 months (writing the mail to -private to tell the Debian project about his death is one of the hardest things I've had to write, and I bottled out and wrote something that was mostly just factual, because it wasn't the place), and I've decided I just have to accept this won't be the post I want it to be, but posted is better than languishing in drafts.] Last weekend I was in Portland, for the Celebration of Life of my friend Steve, who sadly passed away at the start of the year. It wasn't entirely unexpected, but that doesn't make it any easier. I've struggled to work out what to say about Steve. I've seen many touching comments from others in Debian about their work with him, but what that's mostly brought home to me is that while I met Steve through Debian, he was first and foremost my friend rather than someone I worked with in Debian. And so everything I have to say is more about that friendship (and thus feels a bit self-centred). My first memory of Steve is getting lost with him in Porto Alegre, Brazil, during DebConf4. We'd decided to walk to a local mall to meet up with some other folk (I can't recall how they were getting there, but it wasn't walking), ended up deep in conversation (ISTR it was about shared library transitions), and then it took a bit longer than we expected. I don't know how that managed to cement a friendship (neither of us saw it as the near-death experience others feared we'd had), but it did. Unlike others I never texted Steve much; we'd occasionally chat on IRC, but nothing major. That didn't seem to matter when we actually saw each other in person though; we just picked up like we'd seen each other the previous week. DebConf became a recurring theme of when we'd see each other. Even outside DebConf we went places together. The first time I went somewhere in the US that wasn't the Bay Area, it was to Portland to see Steve. He, and his family, came to visit me in Belfast a couple of times, and I did a road trip from Dublin to Cork with him. He took me to a volcano. Steve saw injustice in the world and actually tried to do something about it. I still have a copy of the US constitution sitting on my desk that he gave me. He made me want to be a better person. The world is a worse place without him in it, and while I am better for having known him, I am sadder for the fact he's gone.

28 February 2025

Jonathan Dowland: printables.com feed

I wanted to follow new content posted to Printables.com with a feed reader, but Printables.com doesn't provide one. Neither do the other obvious 3D model catalogues. So, I started building one. I have something that spits out an Atom feed, and a couple of beta testers gave me some valuable feedback. I had planned to make it public, with the ultimate goal being to convince Printables.com to implement feeds themselves. Meanwhile, I stumbled across someone else who has done basically the same thing and published third-party feeds. The format of their feeds is JSON Feed, which is new to me. FreshRSS and NetNewsWire seem happy with it. (I went with Atom.) I may still release my take, if I find time to make one improvement that my beta testers suggested.
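JSON Feed is plain JSON over HTTP, so it's easy to poke at from the command line; a hypothetical sketch (the URL is a placeholder, not one of the real feeds):
$ # fetch a JSON Feed and show its title plus the newest item's title
$ curl -s https://example.com/models.json | jq '{feed: .title, latest: .items[0].title}'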
