Search Results: "Jonathan McDowell"

22 January 2025

Jonathan McDowell: Christmas Movies

I watch a lot of films. Since completing the IMDB Top 250 back in 2016 I've kept an eye on it, and while I don't go out of my way to watch the films that newly appear in it I generally sit at over 240 watched. I should note I don't consider myself a film buff/critic, however. I watch things for enjoyment, and a lot of the time that's kicking back and relaxing and disengaging my brain. So I don't get into writing reviews, just high level lists of things I've watched, sometimes with a few comments. With that in mind, let's talk about Christmas movies. Yes, I appreciate it's the end of January, but generally during December we watch things that have some sort of Christmas theme. New ones if we can find them, but also some of what we consider "classics".

This almost always starts with Scrooged after we've put up the tree. I don't always like Bill Murray (I couldn't watch The Life Aquatic with Steve Zissou and I think Lost in Translation is overrated), but he's in a bunch of things I really like, and Scrooged is one of those. I don't care where you sit on whether Die Hard is a Christmas movie or not; it's a good movie and therefore regularly gets a December watch. Die Hard 2 also fits into that category of "sequel at least as good as the original", though Helen doesn't agree. We watched it anyway, and I finally made the connection between the William Sadler hotel scene and Michael Rooker's in Mallrats.

It turns out I'm a Richard Curtis fan. Love Actually has not aged well; most times I watch it I find something new to question about it, and I always end up hating Alan Rickman for cheating on Emma Thompson, but I do like watching it. He had a new one, That Christmas, out this year, so we watched it as well. Another new-to-us film this year was Spirited. I generally like Ryan Reynolds, and Will Ferrell is good as long as he's not too overboard, so I had high hopes. I enjoyed it, but for some reason not as much as I'd expected, and I doubt it's getting added to the regular watch list.

Larry doesn't generally like watching full length films, but he (and we) enjoyed The Grinch, which I actually hadn't seen before. He's not as fussed on The Muppet Christmas Carol, but we watch it every year, generally on Christmas or Boxing Day. Favourite thing I saw on the Fediverse in December was "Do you know there's a book of The Muppet Christmas Carol, and they don't mention that there's muppets in it once?"

There are various other light hearted Christmas films we regularly watch. This year included The Holiday (I have so many issues with even just the practicalities of a short notice house swap), and Last Christmas (lots of George Michael music, what's not to love? Also it was only on this watch through that we realised the lead character is the Mother of Dragons). We started, but could not finish, Carry-On. I saw it described somewhere as copaganda, and that feels accurate. It does not accurately reflect any of my interactions with TSA at airports, especially during busy periods.

Things we didn't watch this year, but are regularly in the mix, include Fatman, Violent Night (looking forward to the sequel, hopefully this year), and Lethal Weapon. Klaus is kinda at the other end of the spectrum, but very touching, and we've watched it a couple of years now. Given what we seem to like, any suggestions for other films to add? It's nice to have enough in the mix that we get some variety every year.

5 January 2025

Jonathan McDowell: Free Software Activities for 2024

I tailed off on blog posts towards the end of the year; I blame a bunch of travel (personal + business), catching the flu, then December being its usual busy self. Anyway, to try and start off the year a bit better I thought I'd do my annual recap of my Free Software activities. For previous years see 2019, 2020, 2021, 2022 + 2023.

Conferences

In 2024 I managed to make it to FOSDEM again. It's a hectic conference, and I know there are legitimate concerns about it being a super spreader event, but it has the advantage of being relatively close and having a lot of different groups of people I want to talk to / see talk at it. I'm already booked to go this year as well. I spoke at All Systems Go in Berlin about "Using TPMs at scale for protecting keys". It was nice to actually be able to talk publicly about some of the work stuff my team and I have been working on. I'd a talk submission in for FOSDEM about our use of attestation and why it's not necessarily the evil some folk claim, but there were a lot of good talks submitted and I wasn't selected. Maybe I'll find somewhere else suitable to do it. BSides Belfast may or may not count - it's a security conference, but there's a lot of overlap with various bits of Free software, so I feel it deserves a mention. I skipped DebConf for 2024 for a variety of reasons, but I'm expecting to make DebConf25 in Brest, France in July.

Debian

Most of my contributions to Free software continue to happen within Debian. In 2023 I'd done a bunch of work on retrogaming with Kodi on Debian, so I made an effort to try and keep those bits more up to date, even if I'm not actually regularly using them at present. RetroArch got 1.18.0+dfsg-1 and 1.19.1+dfsg-1 uploads. libretro-core-info got associated 1.18.0-1 and 1.19.0-1 uploads too. I note 1.20.0 has been released recently, so I'll have to find some time to build the appropriate DFSG tarball and update it. rcheevos saw 11.2.0-1, 11.5.0-1 + 11.6.0-1 uploaded. kodi-game-libretro itself had 20.2.7-1 uploaded, then 21.0.7-1. Latest upstream is 22.1.0, but that's tracking Kodi 22 and we're still on Kodi 21, so I plan to follow the Omega branch for now. Which I've just noticed had a 21.0.8 release this week. Finally in the games space I uploaded mgba 0.10.3+dfsg-1 and 0.10.3+dfsg-2 for Ryan Tandy, before realising he was already a Debian Maintainer and granting him the appropriate ACL access so he can upload it himself (a dcut sketch for that is at the end of this section); I've had zero concerns about any of his packaging.

The Debian Electronics Packaging Team continues to be home for a bunch of packages I care about. There was nothing big there, for me, in 2024, but a few bits of cleanup here and there. I seem to have become one of the main uploaders for sdcc - I have some interest in the space, and the sigrok firmware requires it to build, so I at least like to ensure it's in a half decent state. I uploaded 4.4.0+dfsg-1, 4.4.0+dfsg-2, and, just in time to count for 2024, 4.4.0+dfsg-3. The sdcc 4.4 upload led to some compilation issues for sigrok-firmware-fx2lafw, so I uploaded 0.1.7-2 fixing that, then 0.1.7-3 doing some further cleanups. OpenOCD had 0.12.0-2 uploaded to disable the libgpiod backend thanks to incompatible changes upstream. There were some in-discussion patches with OpenOCD upstream at the time, but they didn't seem to be ready yet so I held off on pulling them in. 0.12.0-3 fixed builds with more recent versions of jimtcl. It looks like the next upstream release is about a year away, so Trixie will in all probability ship with 0.12.0 as well. libjaylink had a new upstream release, so 0.4.0-1 was uploaded. libserialport also had a new upstream release, leading to 0.1.2-1. I finally cracked and uploaded sg3-utils 1.48-1 into experimental. I'm not the primary maintainer, but 1.46 is nearly 4 years old now and I wanted to get it updated in enough time to shake out any problems before we get to a Trixie freeze.

Outside of team owned packages, libcli had compilation issues with GCC 14, leading to 1.10.7-2. I also added a new package, sedutil 1.20.0-2, back in April; it looks fairly unmaintained upstream (there's been some recent activity, but it doesn't seem to be release quality), but there was an outstanding ITP and I've some familiarity with the space, as we've been using it at work as part of investigating TCG OPAL encryption.

I continue to keep an eye on Debian New Members, even though I'm mostly inactive as an application manager - we generally seem to have enough available recently. Mostly my involvement is via Front Desk activities, helping out with queries to the team alias, and contributing to internal discussions. Finally the 3 month rotation for Debian Keyring continues to operate smoothly. I dealt with 2024.03.24, 2024.06.24, 2024.09.22 + 2024.11.24.
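As an aside, granting a Debian Maintainer upload rights for a package, as mentioned for mgba above, is a one-liner with dput-ng's dcut; a rough sketch (the fingerprint below is a placeholder, not a real key):

$ dcut dm --uid 0xDEADBEEFCAFEF00D --allow mgba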

Linux

I'd a single kernel contribution this year, to "Clean up TPM space after command failure". That was based on some issues we saw at work. I've another fix in progress that I hope to submit in 2025, but it's for an intermittent failure, so confirming the fix is necessary + sufficient is taking a little while.

Personal projects

I didn't end up doing much in the way of externally published personal project work in 2024. Despite the release of OpenPGP v6 in RFC 9580 I did not manage to really work on onak. I started on the v6 support, but have not had sufficient time to complete anything worth pushing externally yet. listadmin3 got some minor updates based on external feedback / MRs. It's nice to know it's useful to other folk even in its basic state.

That wraps up 2024. I've got no particular goals for this year at present. Ideally I'd get v6 support into onak, and it would be nice to implement some of the wishlist items people have provided for listadmin3, but I'll settle for making sure all my Debian packages are in a reasonable state for Trixie.

24 October 2024

Emmanuel Kasper: back to blogging and running a feed reader as a containerized systemd service

After reading about Jonathan McDowell's feed reader install and the back to blogging initiative, I decided to install a feed reader to follow all those nice blog posts. With a feed reader you can compose your own feed of news based on blog posts, websites, Mastodon toots. And then you are independent from the ad oriented ranking algorithms of social networks. Since Jonathan used FreshRSS as a feed reader, I started with the same software. At a quick glance at its GitHub page, it looked like a good project:
  • active contributions
  • different channels for stable and latest version of the software
  • container images pointing to the stable release
  • support multiple databases for storage, including PostgreSQL
  • correct documentation mentioning security caveats
I prefer to do the container image installation using podman since:
  • upgrades of FreshRSS are easy to do and can be done separately from operating system upgrades
  • I do not mess up my base operating system with PHP (subjective), and in case of a compromised FreshRSS, the freshrss/apache install would still be confined to its own Linux namespaces, separated from the rest of the system.
Podman is image compatible with Docker, as they both implement the OCI runtime specification, and they have a nearly identical command line interface. This installation will be done on a Debian server, but should also work on any other Linux distribution.

Initial setup
  • start a container image based on the start command provided by the FreshRSS project. The podman command line is nearly identical to the docker command line, except that podman expects the fully qualified domain name associated with the container image, and I chose to run the freshrss container on the localhost interface only. I also use a defined version tag, because using the latest tag makes it complicated to track which exact version I have installed.
# podman pull docker.io/freshrss/freshrss:1.20.1
# podman run --detach --restart unless-stopped --log-opt max-size=10m \
  --publish 127.0.0.1:8081:80 \
  --env TZ=Europe/Paris \
  --env 'CRON_MIN=1,31' \
  --volume freshrss_data:/var/www/FreshRSS/data \
  --volume freshrss_extensions:/var/www/FreshRSS/extensions \
  --name freshrss \
  docker.io/freshrss/freshrss:1.20.1
  • verify where the podman volumes have been created. This is where the user data of freshrss will be stored.
# podman volume ls
# podman volume inspect freshrss_data
  • now that freshrss is installed, you can start its configuration wizard at localhost:8081. You should keep the default SQLite choice
  • finally, after running the wizard, you can log in again and add some feeds
  • verify that your config has been stored outside the container, and inside the volume (so that it will not be erased in case of upgrades)
# ls -l /var/lib/containers/storage/volumes/freshrss_data/_data/users/
  • verify the state of sqlite database
echo '.tables' | sqlite3 /var/lib/containers/storage/volumes/freshrss_data/_data/users/<your freshrss user>/db.sqlite
category  entry     entrytag  entrytmp  feed      tag
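Before moving on, it is also worth a quick glance at the container itself; podman ps and podman logs behave like their Docker counterparts (a quick sketch):

# podman ps --filter name=freshrss
# podman logs --tail 20 freshrss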
Going with FreshRSS in Production

Podman has this very nice feature that it can generate a systemd unit from a running container, and use systemd to start the container on boot. This is in contrast to Docker, where the Docker daemon does the stop/start of containers on boot. I prefer the systemd approach as it treats containers the same way as other system services. Once the freshrss container is running we can generate a systemd unit for it with:
# podman generate systemd --new --name freshrss | tee /etc/systemd/system/container-freshrss.service
Let's stop the container we started previously, and use systemd to manage it:
# podman stop freshrss
# systemctl enable --now container-freshrss.service
We can verify that we have a listening socket on the localhost interface, on source port 8081:
# systemctl status container-freshrss.service
  ...
# ss --listening --numeric --process '( sport = 8081 )'
Netid         State           Recv-Q          Send-Q                   Local Address:Port                   Peer Address:Port         Process         
tcp           LISTEN          0               4096                         127.0.0.1:8081                        0.0.0.0:*             users:(("conmon",pid=4464,fd=5))
Nota Bene: conmon(8) is the process managing the network namespace in which FreshRSS is running, hence it is displayed as the process owning the listening socket.

Exposing FreshRSS to the external world

We now have a running service, but we need to make it reachable from the internet. The simplest, classical way is to create a subdomain and a VirtualHost configured as a reverse proxy to access the service at 127.0.0.1:8081. Fortunately the FreshRSS authors have documented this setup in https://github.com/FreshRSS/FreshRSS/tree/edge/Docker#alternative-reverse-proxy-using-apache and those steps are no different from any standard application behind a web reverse proxy.

Upgrading the freshrss container to a newer version

Documentation showing how to install a piece of software is worth little when it does not show how to upgrade said software. Installing is easy, upgrading is where the challenge is. Fortunately, thanks to the good stateless design of FreshRSS (everything is in the SQLite database, which is backed by a non-ephemeral volume in our setup), switching versions is a piece of cake:
# podman pull docker.io/freshrss/freshrss:1.20.2
# systemctl stop container-freshrss.service
# sed -i 's,docker.io/freshrss/freshrss:1.20.1,docker.io/freshrss/freshrss:1.20.2,' /etc/systemd/system/container-freshrss.service
# systemctl daemon-reload
# systemctl start container-freshrss.service
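Should the new version misbehave, rolling back is the same dance in reverse (a sketch, assuming the old 1.20.1 image is still available locally):

# systemctl stop container-freshrss.service
# sed -i 's,docker.io/freshrss/freshrss:1.20.2,docker.io/freshrss/freshrss:1.20.1,' /etc/systemd/system/container-freshrss.service
# systemctl daemon-reload
# systemctl start container-freshrss.service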
If you need to roll back, you just need to revert the version numbers, as sketched above. Enjoy your own feed reader! I will add the feeds of blogs I like; let us see if I follow them better with a feed reader!

23 September 2024

Jonathan McDowell: The (lack of a) return-to-office conspiracy

During COVID companies suddenly found themselves able to offer remote working where it hadn't previously been on offer. That's changed over the past 2 or so years, with most places I'm aware of moving back from a fully remote situation to either some sort of hybrid, or even full time office attendance. For example last week Amazon announced a full return to office, having already pulled remote-hired workers in for 3 days a week. I've seen a lot of folk stating they'll never work in an office again, and that RTO is insanity. Despite being lucky enough to work fully remotely (for a role I'd been approached about before, but was never prepared to relocate for), I feel the objections from those who are pro-remote often fail to consider the nuances involved. So let's talk about some of the reasons why companies might want to enforce some sort of RTO.

Real estate value

Let's clear this one up first. It's not about real estate value, for most companies. City planners and real estate investors might care, but even if your average company owned their building they'd close it in an instant, all other things being equal. An unoccupied building costs a lot less to maintain. And plenty of companies rent, and would save money even if there's a substantial exit fee.

Occupancy levels

That said, once you have anyone in the building the equation changes. If you're having to provide power, heating, internet, security/front desk staff etc., you want to make sure you're getting your money's worth. There's no point heating a building that can seat 100 when only 10 people are present. One option is to downsize the building, but that leads to not being able to assign everyone a desk, for example. No one I know likes hot desking. There are also scheduling problems around ensuring there are enough desks for everyone who might turn up on a certain day, and you've ruled out the option of company/office wide events.

Coexistence builds relationships

As a remote worker I wish it wasn't true that most people find it easier to form relationships in person, but it is. Some of this can be worked on with specific teambuilding style events, rather than in office working, but I know plenty of folk who hate those as much as they hate the idea of being in the office. I am lucky in that I work with a bunch of folk who are terminally online, so it's much easier to have those casual conversations even being remote, but I also accept I miss out on some things because I'm just not in the office regularly enough. You might not care about this ("I just need to put my head down and code, not talk to people"), but don't discount it as a valid reason why companies might want their workers to be in the office. This often matters even more for folk at the start of their career, where having a bunch of experienced folk around to help them learn and figure things out ends up working much better in person (my first job offered to let me go mostly remote when I moved to Norwich, but I said no as I knew I wasn't ready for it yet).

Coexistence allows for unexpected interactions

People hate the phrase "water cooler chat", and I get that, but it covers the idea of casual conversations that just won't happen the same way when people are remote. I experienced this while running Black Cat; every time Simon and I met up in person we had a bunch of useful conversations, even though we were on IRC together normally, and had a VoIP setup that meant we regularly talked too. Equally when I was at Nebulon there were conversations I overheard in the office where I was able to correct a misconception or provide extra context. Some of this can be replicated with the right online chat culture, but I've found many places end up with folk taking conversations to DMs, or they happen in private channels. It happens more naturally in an office environment.

It's easier for bad managers to manage bad performers

Again, this falls into the category of things that shouldn't be true, but are. Remote working has increased the ability for people who want to slack off to do so without being easily detected. Ideally what you want is that these folk, if they fail to perform, are then performance managed out of the organisation. That's hard though; there are (rightly) a bunch of rights workers have (I'm writing from a UK perspective) around the procedure that needs to be followed. Managers need organisational support in this to make sure they get it right (and folk are given a chance to improve), which is often lacking.

Summary

Look, I get there are strong reasons why offering remote is a great thing from the company perspective, but what I've tried to outline here is that a return-to-office mandate can have some compelling reasons behind it too. Some of those might be things that wouldn't exist in an ideal world, but unfortunately fixing them is a bigger issue than just changing where folk work from. Not acknowledging that just makes any reaction against office work seem ill-informed, to me.

22 August 2024

Jonathan McDowell: Thoughts on Advent of Code + Rust

Diego wrote about his dislike for Advent of Code and that reminded me I hadn't written up my experience from 2023. Mostly because, spoiler, I never actually completed it, and I always intended to do so and then write it up. I think it's time to accept I'm not going to do that, and write down some thoughts before I forget all of them. These are somewhat vague, given the time that's elapsed, but I think still relevant. You might also find Roger's problem write up interesting. I've tried AoC a couple of times before; I think I had a very brief attempt back in 2021, and I got 4 days in for 2022. For Advent of Code 2023 I tried much harder to actually complete the challenges, and got most of the way there. I didn't allow myself to move on to the next day until fully completing the previous day, and didn't end up doing the second half of December 24th, or any of December 25th.

Rust

First I want to talk about Rust, which is the language I chose to use for the problems. I've dabbled a little in it, but I'd like more familiarity with the basic language, and some programming problems seemed like a good way to get that. It's a language I want to like; I've spent a lot of my career writing C, do more in Go these days, and generally think Rust promises a low level, run-time light environment like C but with the rough edges taken off. I set myself the challenge of using just bare Rust; no external crates, no use of cargo. I was accused of playing on hard mode by doing this, but it really wasn't the intention - I figured that I should be able to do what I needed without recourse to anything outside the core language, and didn't want what seemed like the extra complexity of dealing with cargo.

That caused problems, however. I'm used to by-default generic error handling in Go through the error type, but Rust seems to have much more tightly typed errors. I was pointed at anyhow as the right way to do this in Rust. I still find this surprising; I ended up using unwrap() a lot when I think with more generic error handling I could have used ?. The other thing I discovered is that by default rustc is heavy on the debug output. I got significantly better results on some of the solutions with rustc -O -C target-cpu=native source.rs. I probably shouldn't be surprised by this, but it's worth noting.

Rust, to me, has a syntax only a C++ programmer could love. I am not a C++ programmer. Coming from C I found Go to be a nice, simple syntax to learn. Rust has not been the same. There's a lot more punctuation, and it's not always clear to me what it's doing. This applies more when reading other people's code than when writing it myself, obviously, but I see a lot of Rust code that could give Perl a run for its money in terms of looking like line noise. The borrow checker didn't bug me too much, but did add overhead to my thinking. The Rust compiler is generally very good at outputting helpful error messages when the programmer is an idiot. I ended up having to use a RefCell for one solution, and using .iter() for loops rather than explicit iterators (why, why is this different?). I also kept forgetting to explicitly mark variables as mutable when declaring them.

Things I liked? There's a rich set of first class data types. Look, I'm a C programmer, I'm easily pleased. You give me some sort of hash array and I'll be happy. Rust manages that, tuples, strings, all the standard bits any modern language can provide. The whole impl thing for adding methods to structures I like as a way of providing some abstraction, though I think Go has a nicer syntax for it. The compiler, as mentioned, is great at spitting out useful errors for the most part. Also, although I wasn't using external crates for AoC, I do appreciate there's a decent ecosystem there now (though that brings up another gripe: Rust seems to still be a fairly fast moving target, to the extent I can no longer rely on the compiler in Debian stable to be able to compile random projects I find).

Advent of Code

Let's talk about the Advent of Code bit now. Hopefully it's long enough since it came out that this won't be spoilers for anyone, but if you haven't attempted the 2023 AoC and might, you might want to stop reading here.

First, a refresher on the format for those who might not be aware of it. Problems are posted daily from December 1st until the 25th. Each is in 2 parts; the second part is not viewable until you have provided the correct answer for the first part. There's a whole leaderboard thing going on, but the puzzle opens at midnight UTC-5, so generally by the time I wake up and have time to look the problem has been solved many times over; no chance of getting listed. Credit to the AoC creator, Eric Wastl, for writing up the set of problems in an entertaining fashion. I quite enjoyed seeing how the puzzle would be phrased each day, and the whole thing obviously brings a lot of joy to folk I know.

I always start AoC thinking it'll be a fun set of puzzles to solve. Then something happens and I miss a day or two, and all of a sudden I've a bunch of catching up to do and it's all a bit more of a chore. I hit that at some points this time, but made a concerted effort to try and power through it. That perseverance was required up front, because I found the second part of Day 1 to be ill specified, and had to iterate a few times to actually calculate the desired solution (IIRC, issues about whether "sevenone" at the end of a line ended up as 7 or 1 really tripped me up). I don't recall any other problems that bit me as hard on the specification as this one, but it happening up front was unfortunate. The short example input doesn't always help with this either; either it's not enough to be able to extrapolate patterns, or it doesn't show all the variations you need to account for (that aren't fully specified in the text), or in a few cases it turned out I needed to understand the shape of the actual data to produce a solution that could actually complete in a reasonable time.

Which brings me to another matter: sometimes brute force doesn't actually work. This is fine, but the second part of the day's problem can change the approach you'd take. So sometimes I got lucky in the way I handled the first half, and doing the second half was a simple 5 minute tweak, and sometimes I had to entirely change the way I was storing data. You might claim that if I was a better programmer I'd have always produced a first half solution that was amenable to extension for the second half. First, I dispute that; I think there are always situations where the problem domain can change in enough directions that you can't handle all of them without a lot of effort. Secondly, I didn't find AoC an environment that encouraged me to optimise for generic solutions. Maybe some of the puzzles in isolation would allow for that, but a month of daily problems to solve while still engaging in regular life meant I hacked things up, took short cuts based on the knowledge I had of the input data, etc, etc.

Overall I can see the appeal, but the sheer quantity and the fact I write code as part of my day job just made it feel too much like a chore, rather than a fun mental exercise. I did wonder how they'd look as a set of interview puzzles (obviously a subset, rather than all of them), but I'm not sure how you'd actually use them for that - I wouldn't want anyone to have to solve them in a live interview. So, in case it's not obvious, I'm not planning to engage in AoC again this year. But I'm continuing to persevere with Rust (though most of my work stuff is thankfully still Go).

31 July 2024

Jonathan McDowell: Using QEmu for UEFI/TPM testing

This is one of those posts that's more for my own reference than likely to be helpful for others. If you're unlucky it'll have some useful tips for you. If I'm lucky then I'll get a bunch of folk pointing out some optimisations. First, what I'm trying to achieve: a virtual machine environment where I can manually do tests on random kernels, and also various TPM related experiments. So I don't want something as fixed as a libvirt setup. That turns out to be possible, but it took a bunch of trial and error to get there. So I'm writing it down.

I generally do this on a Fedora based host system (FC40 at present, but this all worked with FC38 + FC39 too), and I run Debian 12 (bookworm) as the guest. At present I'm using qemu 8.2.2 and swtpm 0.9.0, both from the FC40 packages. One other issue I spent too long tracking down is that the version of grub 2.06 in bookworm does not manage to pass the TPMEventLog through to the guest kernel properly. The events get measured and the PCRs updated just fine, but /sys/kernel/security/tpm0/binary_bios_measurements doesn't even get created. Using either grub 2.06 from FC40, or the 2.12 backport in bookworm-backports, makes this work just fine.

Anyway, for reference, the following is the script I use to start the swtpm, and then qemu; a short sketch for sanity-checking the TPM from inside the guest follows it. The debugcon line can be dropped if you're not interested in OVMF debug logging. This needs the guest OS to be configured up for a serial console, but avoids the overhead of graphics emulation. As I said at the start, I'm open to any hints about other options I should be passing; as long as I get acceptable performance in the guest I care more about reducing host load than optimising for the guest.
#!/bin/sh
BASEDIR=/home/noodles/debian-qemu
if [ ! -S ${BASEDIR}/swtpm/swtpm-sock ]; then
    echo Starting swtpm:
    swtpm socket --tpmstate dir=${BASEDIR}/swtpm \
        --tpm2 \
        --ctrl type=unixio,path=${BASEDIR}/swtpm/swtpm-sock &
fi
echo Starting QEMU:
qemu-system-x86_64 -enable-kvm -m 2048 \
    -machine type=q35 \
    -smbios type=1,serial=N00DL35,uuid=fd225315-f72a-4d66-9b16-55363c6c938b \
    -drive if=pflash,format=qcow2,readonly=on,file=/usr/share/edk2/ovmf/OVMF_CODE_4M.qcow2 \
    -drive if=pflash,format=raw,file=${BASEDIR}/OVMF_VARS.fd \
    -global isa-debugcon.iobase=0x402 -debugcon file:${BASEDIR}/debian.ovmf.log \
    -device virtio-blk-pci,drive=drive0,id=virblk0 \
    -drive file=${BASEDIR}/debian-12-efi.qcow2,if=none,id=drive0,discard=on \
    -net nic,model=virtio -net user \
    -chardev socket,id=chrtpm,path=${BASEDIR}/swtpm/swtpm-sock \
    -tpmdev emulator,id=tpm0,chardev=chrtpm \
    -device tpm-tis,tpmdev=tpm0 \
    -display none \
    -nographic \
    -boot menu=on
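Once the guest is up, a quick way to confirm the emulated TPM is visible and the event log made it through (a sketch; assumes the tpm2-tools package is installed in the guest):

# Confirm the TPM device nodes exist
ls -l /dev/tpm0 /dev/tpmrm0
# Read a couple of PCRs (tpm2-tools)
tpm2_pcrread sha256:0,7
# The event log grub should have passed through
wc -c /sys/kernel/security/tpm0/binary_bios_measurements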

27 June 2024

Jonathan McDowell: Sorting out backup internet #4: IPv6

The final piece of my 5G (well, 4G) based backup internet connection I needed to sort out was IPv6. While Three do support IPv6 in their network they only seem to enable it for certain devices, and the MC7010 is not one of those devices, even though it also supports IPv6. I use v6 a lot - over 50% of my external traffic, last time I looked. One suggested option was that I could drop the IPv6 Router Advertisements when the main external link went down, but I have a number of internal services that are only presented on v6 addresses, so I needed to ensure clients in the house continued to have access to those. As it happens I've used the Hurricane Electric IPv6 Tunnel Broker in the past, so my plan was re-instating that. The 5G link has a real external IPv4 address, and it's possible to update the endpoint using a simple HTTP GET. I added the following to my /etc/dhcp/dhclient-exit-hooks.d/modem-interface-route where we are dealing with an interface IP change:
# Update IPv6 tunnel with Hurricane Electric
curl --interface $interface 'https://username:password@ipv4.tunnelbroker.net/nic/update?hostname=1234'
I needed some additional configuration to bring things up, so /etc/network/interfaces got the following, configuring the 6in4 tunnel as well as the low preference default route, and source routing via the 5g table, similar to IPv4:
pre-up ip tunnel add he-ipv6 mode sit remote 216.66.80.26
pre-up ip link set he-ipv6 up
pre-up ip addr add 2001:db8:1234::2/64 dev he-ipv6
pre-up ip -6 rule add from 2001:db8:1234::/64 lookup 5g
pre-up ip -6 route add default dev he-ipv6 table 5g
pre-up ip -6 route add default dev he-ipv6 metric 1000
post-down ip tunnel del he-ipv6
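Once the interface is up, a quick check that v6 traffic actually flows over the tunnel (a sketch; the ping target is just a well-known public v6 resolver, not something from the original setup):

ip -6 addr show dev he-ipv6
ping -6 -c 3 -I he-ipv6 2001:4860:4860::8888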
We need to deal with IPv4 address changes for the tunnel endpoint, so modem-interface-route also got:
ip tunnel change he-ipv6 local $new_ip_address
/etc/nftables.conf had to be taught to accept the 6in4 packets from the tunnel in the input chain:
# Allow HE tunnel
iifname "sfp.31" ip protocol 41 ip saddr 216.66.80.26 accept
Finally, I had to engage in something I never thought I'd deal with: IPv6 NAT. HE provide a /48, and my FTTP ISP provides me with a /56, so this meant I could do a nice stateless 1:1 mapping:
table ip6 nat {
  chain postrouting {
    type nat hook postrouting priority 0
    oifname "he-ipv6" snat ip6 prefix to ip6 saddr map { 2001:db8:f00d::/56 : 2001:db8:666::/56 }
  }
}
This works. Mostly. The problem is that HE, not unreasonably, expect your IPv4 address to be pingable. And it turns out Three have some ranges that this works on, and some that it doesn't. Which means it's a bit hit and miss whether you can set up the tunnel. I spent a while trying to find an alternative free IPv6 tunnel provider with a UK endpoint. There's less call for them these days, so I didn't manage to find any that actually worked (or didn't have a similar pingable requirement). I did consider whether I wanted to end up with routes via a VM, as I described in the failover post, but looking at costings for VMs with providers who could actually give me an IPv6 range I decided the cost didn't make it worthwhile; the VM cost ended up being more than the backup SIM is costing monthly. Finally, it turns out happy eyeballs mostly means that when the 5G ends up on an IP that we can't set up the IPv6 tunnel on, things still mostly work. Browser usage fails over quickly and it's mostly my own SSH use that needs me to force IPv4. Purists will groan, but this turns out to be an acceptable trade-off for me, at present. Perhaps if I was seeing frequent failures the diverse routes approach via a VM would start to make sense, but for now I'm pretty happy with the configuration in terms of having a mostly automatic backup link take over when the main link goes down.

25 April 2024

Jonathan McDowell: Sorting out backup internet #3: failover

With local recursive DNS and a 5G modem in place, the next thing was to work on some sort of automatic failover when the primary FTTP connection failed. My wife works from home too and I sometimes travel, so I wanted to make sure things didn't require me to be around to kick them into switching the link in use.

First, let's talk about what I didn't do. One choice, to try and ensure as seamless a failover as possible, would be to get a VM somewhere out there. I'd then run Wireguard tunnels over both the FTTP + 5G links to the VM, and run some sort of routing protocol (RIP, OSPF?) over the links. Set preferences such that the FTTP is preferred, NAT v4 to the VM IP, and choose somewhere that gave me a v6 range I could just use directly. This has the advantage that I'm actively checking link quality to the outside world, rather than just to the next hop. It also means, if the failover detection is fast enough, that existing sessions stay up rather than needing to be re-established. The downsides are increased complexity, adding another point of potential failure (the VM + provider), the impact on connection quality (even with a decent endpoint it's an extra hop and latency), and finally the increased cost involved. I can cope with having to reconnect my SSH sessions in the event of a failure, and I'd rather be sure I can make full use of the FTTP connection, so I didn't go this route.

I chose to rely on local link failure detection to provide the signal for failover, and a set of policy routing on top of that to make things a bit more seamless. Local link failure turns out to be fairly easy. My FTTP is a PPPoE configuration, so in /etc/ppp/peers/aquiss I have:
lcp-echo-interval 1
lcp-echo-failure 5
lcp-echo-adaptive
Which gives me a failover of ~5s if the link goes down (a 1 second echo interval, with 5 missed echoes before the link is declared dead). I'm operating the 5G modem in bridge rather than router mode, which means I get the actual IP from the 5G network via DHCP. The DHCP lease the modem hands out is under a minute, and in the event of a network failure it only hands out a 192.168.254.x IP to talk to its web interface. As the 5G modem is the last resort path I choose not to do anything special with this, but the information is at least there if I need it. To allow both interfaces to be up and the FTTP to be preferred I'm simply using route metrics. For the PPP configuration that's:
defaultroute-metric 100
and for the 5G modem I have:
iface sfp.31 inet dhcp
    metric 1000
    vlan-raw-device sfp
There's a wrinkle in that pppd will not replace an existing default route, so I've created /etc/ppp/ip-up.d/default-route to ensure it's added:
#!/bin/bash
[ "$PPP_IFACE" = "pppoe-wan" ]   exit 0
# Ensure we add a default route; pppd will not do so if we have
# a lower pref route out the 5G modem
ip route add default dev pppoe-wan metric 100 || true
Additionally, in /etc/dhcp/dhclient.conf I've disabled asking for any server details (DNS, NTP, etc) - I have internal setups for the servers I want, and don't want to be trying to select things over the 5G link by default. However, what I do want is to be able to access the 5G modem web interface, and to explicitly route some traffic out that link (e.g. so I can add it to my smokeping tests). For that I need some source-based routing. First step, add a 5g table to /etc/iproute2/rt_tables:
16  5g
Then I ended up with the following in /etc/dhcp/dhclient-exit-hooks.d/modem-interface-route, which is more complex than I'd like but seems to do what I want:
#!/bin/sh
case "$reason" in
    BOUND|RENEW|REBIND|REBOOT)
        # Check if we've actually changed IP address
        if [ -z "$old_ip_address" ]  
           [ "$old_ip_address" != "$new_ip_address" ]  
           [ "$reason" = "BOUND" ]   [ "$reason" = "REBOOT" ]; then
            if [ ! -z "$old_ip_address" ]; then
                ip rule del from $old_ip_address lookup 5g
            fi
            ip rule add from $new_ip_address lookup 5g
            ip route add default dev sfp.31 table 5g || true
            ip route add 192.168.254.1 dev sfp.31 2>/dev/null || true
        fi
    ;;
    EXPIRE)
        if [ ! -z "$old_ip_address" ]; then
            ip rule del from $old_ip_address lookup 5g
        fi
    ;;
    *)
    ;;
esac
What does all that aim to do? We want to ensure traffic directed to the 5G WAN address goes out the 5G modem, so I can SSH into it even when the main link is up. So we add a rule directing traffic from that IP to hit the 5g routing table, and a default route in that table which uses the 5G link. There's no configuration for the FTTP connection in that table, so if the 5G link is down the traffic gets dropped, which is what we want. We also configure 192.168.254.1 to go out the link to the modem, as that's where the web interface lives. I also have a curl callout (curl --interface sfp.31 to ensure it goes out the 5G link) after the routes are configured to set dynamic DNS with Mythic Beasts, which helps with knowing where to connect back to. I seem to see IP address changes on the 5G link every couple of days at least. Additionally, I have an entry in the interfaces configuration carving out the top set of the netblock my smokeping server is in:
    up ip rule add from 192.0.2.224/27 lookup 5g
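A quick sanity check that the policy routing behaves as intended (a sketch; the final ping target is any external host, not something from the original setup):

ip rule show
ip route show table 5g
ping -c 3 -I 192.0.2.225 8.8.8.8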
My smokeping /etc/smokeping/config.d/Probes file then looks like:
*** Probes ***
+ FPing
binary = /usr/bin/fping
++ FPingNormal
++ FPing5G
sourceaddress = 192.0.2.225
+ FPing6
binary = /usr/bin/fping
which allows me to use probe = FPing5G for targets to test them over the 5G link; a minimal target stanza is sketched below. That mostly covers the functionality I want for a backup link. There's one piece that isn't quite solved, however: IPv6, which can wait for another post.
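For reference, a minimal target stanza using that probe might look like this (a sketch; the section name, menu/title and host are placeholders, and a real Targets file also carries its usual top-level defaults):

*** Targets ***
+ 5G
menu = 5G
title = Tests over the 5G link
++ gateway
probe = FPing5G
host = 192.0.2.1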

18 April 2024

Jonathan McDowell: Sorting out backup internet #2: 5G modem

Having setup recursive DNS it was time to actually sort out a backup internet connection. I live in a Virgin Media area, but I still haven't forgiven them for my terrible Virgin experiences when moving here. Plus it involves a bigger contractual commitment. There are no altnets locally (though I'm watching youfibre, who have already rolled out in a few Belfast exchanges), so I decided to go for a 5G modem. That gives some flexibility, and is a bit easier to get up and running.

I started by purchasing a ZTE MC7010. This had the advantage of being reasonably cheap off eBay, not having any wifi functionality I would just have to disable (it's going to plug into the same router the FTTP connection terminates on), being outdoor mountable should I decide to go that way, and, finally, being powered via PoE. For now this device sits on the window sill in my study, which is at the top of the house. I printed a table stand for it which mostly does the job (though not as well with a normal, rather than flat, network cable). The router lives downstairs, so I've extended a dedicated VLAN through the study switch, down to the core switch and out to the router. The PoE study switch can only do GigE, not 2.5Gb/s, but at present that's far from the limiting factor on the speed of the connection.

The device is 3 branded, and, as it happens, I've ended up with a 3 SIM in it. Up until recently my personal phone was with them, but they've kicked me off Go Roam, so I've moved. Going with 3 for the backup connection provides some slight extra measure of resiliency; we now have devices on all 4 major UK networks in the house. The SIM is a preloaded data only SIM good for a year; I don't expect to use all of the data allowance, but I didn't want to have to worry about unexpected excess charges.

Performance turns out to be disappointing; I end up locking the device to 4G, as the 5G signal is marginal - leaving it enabled results in constantly switching between 4G + 5G and significant extra latency. The smokeping graph below shows a brief period where I removed the 4G lock and allowed 5G:

[smokeping graph: 4G vs 5G]

(There's a handy zte.js script to allow doing this from the device web interface.) I get about 10Mb/s sustained downloads out of it. EE/Vodafone did not lead to significantly better results, so for now I'm accepting it is what it is. I tried relocating the device to another part of the house (a little tricky while still providing switch-based PoE, but I have an injector), without much improvement. Equally, pinning the 4G to certain bands provided a short term improvement (I got up to 40-50Mb/s sustained), but not reliably so.

[image: speedtest.net results]

This is disappointing, but if it turns out to be a problem I can look at mounting it externally. I also assume as 5G is gradually rolled out further things will naturally improve, but that might be wishful thinking on my part.

Rather than wait until my main link had a problem I decided to try a day working over the 5G connection. I spend a lot of my time either in browser based apps or accessing remote systems via SSH, so I'm reasonably sensitive to a jittery or otherwise flaky connection. I picked a day that I did not have any meetings planned, but as it happened I ended up with an adhoc video call arranged. I'm pleased to say that it all worked just fine; definitely noticeable as slower than the FTTP connection (to be expected), but all workable and even the video call was fine (at least from my end).
Looking at the traffic graph shows the expected ~10Mb/s peak (actually a little higher, and looking at the FTTP stats for previous days not out of keeping with what we see there), and you can just about see the ~3Mb/s symmetric use by the video call at 2pm:

[graph: 4G traffic during the work day]

The test run also helped iron out the fact that the content filter was still enabled on the SIM, but that was easily resolved. Up next, vaguely automatic failover.

11 April 2024

Jonathan McDowell: Sorting out backup internet #1: recursive DNS

I work from home these days, and my nearest office is over 100 miles away, 3 hours door to door if I travel by train (and, to be honest, probably not a lot faster given rush hour traffic if I drive). So I'm reliant on a functional internet connection in order to be able to work. I'm lucky to have access to Openreach FTTP, provided by Aquiss, but I worry about what happens if there's a cable cut somewhere or some other long lasting problem. Worst case I could tether to my work phone, or try to find some local coworking space to use while things get sorted, but I felt like arranging a backup option was a wise move.

Step 1 turned out to be sorting out recursive DNS. It's been many moons since I had to deal with running DNS in a production setting, and I've mostly done my best to avoid doing it at home too. dnsmasq has done a decent job at providing for my needs over the years, covering DHCP, DNS (+ tftp for my test device network). However I just let it forward to my ISP's nameservers, which means if that link goes down it'll no longer be able to resolve anything outside the house. One option would have been to point to a different recursive DNS server (Cloudflare's 1.1.1.1 or Google's Public DNS being the common choices), but I've no desire to share my lookup information with them. As another approach I could have done some sort of failover of resolv.conf when the primary network went down, but then I would have to get into moving files around based on networking status, and that felt a bit clunky.

So I decided to finally setup a proper local recursive DNS server, which is something I've kinda meant to do for a while but never had sufficient reason to look into. Last time I did this I did it with BIND 9, but there are more options these days, and I decided to go with unbound, which is primarily focused on recursive DNS. One extra wrinkle, pointed out by Lars, is that having dynamic name information from DHCP hosts is exceptionally convenient. I've kept dnsmasq as the local DHCP server, so I wanted to be able to forward local queries there.

I'm doing all of this on my RB5009, running Debian. Installing unbound was a simple matter of apt install unbound. I needed 2 pieces of configuration over the default: one to enable recursive serving for the house networks, and one to enable forwarding of queries for the local domain to dnsmasq. I originally had specified the wildcard address for listening, but this caused problems with the fact my router has many interfaces and would sometimes respond from a different address than the request had come in on.
/etc/unbound/unbound.conf.d/network-resolver.conf
server:
  interface: 192.0.2.1
  interface: 2001:db8:f00d::1
  access-control: 192.0.2.0/24 allow
  access-control: 2001:db8:f00d::/56 allow

/etc/unbound/unbound.conf.d/local-to-dnsmasq.conf
server:
  domain-insecure: "example.org"
  do-not-query-localhost: no
forward-zone:
  name: "example.org"
  forward-addr: 127.0.0.1@5353

I then had to configure dnsmasq to not listen on port 53 (so unbound could), respond to requests on the loopback interface (I have dnsmasq restricted to only explicitly listed interfaces), and to hand out unbound as the appropriate nameserver in DHCP requests - once dnsmasq is not listening on port 53 it no longer does this by default.
/etc/dnsmasq.d/behind-unbound
interface=lo
port=5353
dhcp-option=option6:dns-server,[2001:db8:f00d::1]
dhcp-option=option:dns-server,192.0.2.1
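
A quick dig against the router address checks both paths (a sketch; host.example.org stands in for a real local DHCP client name):

# External recursion, answered by unbound itself
dig @192.0.2.1 debian.org +short
# Local name, forwarded by unbound to dnsmasq on port 5353
dig @192.0.2.1 host.example.org +short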

With these minor changes in place I now have local recursive DNS being handled by unbound, without losing dynamic local DNS for DHCP hosts. As an added bonus I now get 10/10 on Test IPv6 - previously I was getting dinged on the ability for my DNS server to resolve purely IPv6 reachable addresses. Next step, actually sorting out a backup link.

7 January 2024

Jonathan McDowell: Free Software Activities for 2023

This year was hard from a personal and work point of view, which impacted the amount of Free Software bits I ended up doing - even when I had the time I often wasn't in the right head space to make progress on things. However writing this annual recap up has been a useful exercise, as I achieved more than I realised. For previous years see 2019, 2020, 2021 + 2022.

Conferences

The only Free Software related conference I made it to this year was DebConf23 in Kochi, India. Changes with projects at work meant I couldn't justify anything work related. This year I'm planning to make it to FOSDEM, and haven't made a decision on DebConf24 yet.

Debian

Most of my contributions to Free software continue to happen within Debian. I started the year working on retrogaming with Kodi on Debian. I got this to a much better state for bookworm, with it being possible to run the bsnes-mercury emulator under Kodi using RetroArch. There are a few other libretro backends available for RetroArch, but Kodi needs some extra controller mappings packaged up first. Plenty of uploads were involved, though some of this was aligning all the dependencies and generally cleaning things up in iterations.

I continued to work on a few packages within the Debian Electronics Packaging Team. OpenOCD produced a new release in time for the bookworm release, so I uploaded 0.12.0-1. There were a few minor sigrok cleanups - sigrok 0.3, libsigrokdecode 0.5.3-4 + libsigrok 0.5.2-4 / 0.5.2-5. While I didn't manage to get the work completed I did some renaming of the ESP8266 related packages - gcc-xtensa-lx106 (which saw a 13 upload pre-bookworm) has become gcc-xtensa (with 14) and binutils-xtensa-lx106 has become binutils-xtensa (with 6). Binary packages remain the same, but this is intended to allow for the generation of ESP32 compiler toolchains from the same source.

onak saw 0.6.3-1 uploaded to match the upstream release. I also uploaded libgpg-error 1.47-1 (though I can claim no credit for any of the work in preparing the package) to help move things forward on updating gnupg2 in Debian. I NMUed tpm2-pkcs11 1.9.0-0.1 to fix some minor issues pre-bookworm release; I use this package myself to store my SSH key within my laptop TPM, so I care about it being in a decent state. sg3-utils also saw a bit of love with 1.46-2 + 1.46-3 - I don't work in the storage space these days, but I'm still listed as an uploader, and there was an RC bug around the library package naming that I was qualified to fix and test pre-bookworm. Related to my retroarch work I sponsored uploads of mgba for Ryan Tandy: 0.10.0+dfsg-1, 0.10.0+dfsg-2, 0.10.1+dfsg-1, 0.10.2+dfsg-1, mgba 0.10.1+dfsg-1+deb12u1.

As part of the Data Protection Team I responded to various inbound queries to that team, both from project members and those external to the project. I continue to keep an eye on Debian New Members, even though I'm mostly inactive as an application manager - we generally seem to have enough available recently. Mostly my involvement is via Front Desk activities, helping out with queries to the team alias, and contributing to internal discussions as well as our panel at DebConf23. Finally the 3 month rotation for Debian Keyring continues to operate smoothly. I dealt with 2023.03.24, 2023.06.26, 2023.06.29, 2023.09.10, 2023.09.24 + 2023.12.24.

Linux

I had a few minor patches accepted to the kernel this year. A pair of safexcel cleanups (improved error logging for firmware load failure, and cleanup on load failure) came out of upgrading the kernel running on my RB5009. The rest were related to my work on repurposing my C.H.I.P. The AXP209 driver needed extending to support GPIO3 (with an associated DT schema update). That allowed Bluetooth to be enabled. Adding the AXP209 internal temperature ADC as an iio-hwmon node means it can be tracked using the normal sensor monitoring framework. And finally I added the pinmux settings for mmc2, which I use to support an external microSD slot on my C.H.I.P.

Personal projects

2023 saw another minor release of onak, 0.6.3, which resulted in a corresponding Debian upload (0.6.3-1). It has a couple of bug fixes (including a particularly annoying, if minor, one around systemd socket activation that felt very satisfying to get to the bottom of), but I still lack the time to do any of the major changes I would like to. I wrote listadmin3 to allow easy manipulation of moderation queues for Mailman3. It's basic, but it's drastically improved my timeliness on dealing with held messages.

26 October 2023

Jonathan McDowell: PSA: OpenPGP key updated in Debian keyring

This is a Public Service Announcement that my new OpenPGP key has now been updated in the active Debian keyring. I believe the only team that needs to be informed about this, to manually update their systems, is DSA, and I've filed an RT ticket to give them a heads up. Thanks to all the folk who signed my new key, both at the Debian UK BBQ, and DebConf.

12 October 2023

Jonathan McDowell: Installing Debian on the BananaPi M2 Zero

My previously mentioned C.H.I.P. repurposing has been partly successful; I've found a use for it (which I still need to write up), but unfortunately it's too useful, and the fact it's still a bit flaky has become a problem. I spent a while trying to isolate exactly what the problem is (I'm still seeing occasional hard hangs with no obvious debug output in the logs or on the serial console), then realised I should just buy one of the cheap ARM SBC boards currently available.

The C.H.I.P. is based on an Allwinner R8, which is a single ARM v7 core (an A8). So it's fairly low power by today's standards and it seemed pretty much any board would probably do. I considered a Pi 2 Zero, but couldn't be bothered trying to find one in stock at a reasonable price (I've had one on backorder from CPC since May 2022, and yes, I know other places have had them in stock since, but I don't need one enough to chase and I'm now mostly curious about whether it will ever ship). As the title of this post gives away, I settled on a Banana Pi BPI-M2 Zero, which is based on an Allwinner H3. That's a quad-core ARM v7 (an A7), so a bit more oomph than the C.H.I.P. All in all it set me back £25, including a set of heatsinks that form a case around it.

I started with the vendor provided Debian SD card image, which is based on Debian 9 (stretch) and so somewhat old. I was able to dist-upgrade my way through buster and bullseye, and end up on bookworm. I then discovered the bookworm 6.1 kernel worked just fine out of the box, and even included a suitable DTB. Which got me thinking about whether I could do a completely fresh Debian install with minimal tweaking.

First thing, a boot loader. The Allwinner chips are nice in that they'll boot off SD, so I just needed a suitable u-boot image. Rather than go with the vendor image I had a look at mainline and discovered it had support! So let's build a clean image:
noodles@buildhost:~$ mkdir ~/BPI
noodles@buildhost:~$ cd ~/BPI
noodles@buildhost:~/BPI$ ls
noodles@buildhost:~/BPI$ git clone https://source.denx.de/u-boot/u-boot.git
Cloning into 'u-boot'...
remote: Enumerating objects: 935825, done.
remote: Counting objects: 100% (5777/5777), done.
remote: Compressing objects: 100% (1967/1967), done.
remote: Total 935825 (delta 3799), reused 5716 (delta 3769), pack-reused 930048
Receiving objects: 100% (935825/935825), 186.15 MiB | 2.21 MiB/s, done.
Resolving deltas: 100% (785671/785671), done.
noodles@buildhost:~/BPI$ mkdir u-boot-build
noodles@buildhost:~/BPI$ cd u-boot
noodles@buildhost:~/BPI/u-boot$ git checkout v2023.07.02
...
HEAD is now at 83cdab8b2c Prepare v2023.07.02
noodles@buildhost:~/BPI/u-boot$ make O=../u-boot-build bananapi_m2_zero_defconfig
  HOSTCC  scripts/basic/fixdep
  GEN     Makefile
  HOSTCC  scripts/kconfig/conf.o
  YACC    scripts/kconfig/zconf.tab.c
  LEX     scripts/kconfig/zconf.lex.c
  HOSTCC  scripts/kconfig/zconf.tab.o
  HOSTLD  scripts/kconfig/conf
#
# configuration written to .config
#
make[1]: Leaving directory '/home/noodles/BPI/u-boot-build'
noodles@buildhost:~/BPI/u-boot$ cd ../u-boot-build/
noodles@buildhost:~/BPI/u-boot-build$ make CROSS_COMPILE=arm-linux-gnueabihf-
  GEN     Makefile
scripts/kconfig/conf  --syncconfig Kconfig
...
  LD      spl/u-boot-spl
  OBJCOPY spl/u-boot-spl-nodtb.bin
  COPY    spl/u-boot-spl.bin
  SYM     spl/u-boot-spl.sym
  MKIMAGE spl/sunxi-spl.bin
  MKIMAGE u-boot.img
  COPY    u-boot.dtb
  MKIMAGE u-boot-dtb.img
  BINMAN  .binman_stamp
  OFCHK   .config
noodles@buildhost:~/BPI/u-boot-build$ ls -l u-boot-sunxi-with-spl.bin
-rw-r--r-- 1 noodles noodles 494900 Aug  8 08:06 u-boot-sunxi-with-spl.bin
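In case it's useful, a rough sketch of the cross-build host setup the above assumes, all from bookworm main; treat the u-boot build dependency list as approximate rather than definitive:
noodles@buildhost:~$ sudo apt install crossbuild-essential-armhf bison flex swig python3-dev libssl-dev bc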
I had the advantage here of already having a host set up to cross-build armhf binaries (roughly the sketch above), but this was all done on a Debian bookworm host with packages from main. I've put my build up here in case it's useful to someone - everything else below can be done on a normal x86_64 host. Next I needed a Debian installer. I went for the netboot variant - although I was writing it to SD rather than TFTP booting, I wanted as much as possible to come over the network.
noodles@buildhost:~/BPI$ wget https://deb.debian.org/debian/dists/bookworm/main/installer-armhf/20230607%2Bdeb12u1/images/netboot/netboot.tar.gz
...
2023-08-08 10:15:03 (34.5 MB/s) - 'netboot.tar.gz' saved [37851404/37851404]
noodles@buildhost:~/BPI$ tar -axf netboot.tar.gz
Then I took a suitable microSD card and set it up with a 500M primary VFAT partition, leaving the rest for Linux proper. I could have got away with a smaller VFAT partition, but I'd initially thought I might need to put some more installation files on it.
noodles@buildhost:~/BPI$ sudo fdisk /dev/sdb
Welcome to fdisk (util-linux 2.38.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): o
Created a new DOS (MBR) disklabel with disk identifier 0x793729b3.
Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p):
Using default response p.
Partition number (1-4, default 1):
First sector (2048-60440575, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-60440575, default 60440575): +500M
Created a new partition 1 of type 'Linux' and of size 500 MiB.
Command (m for help): t
Selected partition 1
Hex code or alias (type L to list all): c
Changed type of partition 'Linux' to 'W95 FAT32 (LBA)'.
Command (m for help): n
Partition type
   p   primary (1 primary, 0 extended, 3 free)
   e   extended (container for logical partitions)
Select (default p):
Using default response p.
Partition number (2-4, default 2):
First sector (1026048-60440575, default 1026048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (534528-60440575, default 60440575):
Created a new partition 2 of type 'Linux' and of size 28.3 GiB.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
$ sudo mkfs -t vfat -n BPI-UBOOT /dev/sdb1
mkfs.fat 4.2 (2021-01-31)
The bootloader image gets written 8k into the SD card (our first partition starts at sector 2048, i.e. 1M into the device, so there's plenty of space here):
noodles@buildhost:~/BPI$ sudo dd if=u-boot-build/u-boot-sunxi-with-spl.bin of=/dev/sdb bs=1024 seek=8
483+1 records in
483+1 records out
494900 bytes (495 kB, 483 KiB) copied, 0.0282234 s, 17.5 MB/s
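As a quick sanity check on that layout: 494900 bytes is roughly 483 KiB, so written starting at the 8 KiB mark the bootloader ends around 491 KiB, comfortably below the 1024 KiB (sector 2048) point where the first partition starts.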
Copy the Debian installer files onto the VFAT partition:
noodles@buildhost:~/BPI$ cp -r debian-installer/ /media/noodles/BPI-UBOOT/
Unmount the SD from the build host, pop it into the M2 Zero, boot it up while connected to the serial console, hit a key to stop autoboot and tell it to boot the installer:
U-Boot SPL 2023.07.02 (Aug 08 2023 - 09:05:44 +0100)
DRAM: 512 MiB
Trying to boot from MMC1
U-Boot 2023.07.02 (Aug 08 2023 - 09:05:44 +0100) Allwinner Technology
CPU:   Allwinner H3 (SUN8I 1680)
Model: Banana Pi BPI-M2-Zero
DRAM:  512 MiB
Core:  60 devices, 17 uclasses, devicetree: separate
WDT:   Not starting watchdog@1c20ca0
MMC:   mmc@1c0f000: 0, mmc@1c10000: 1
Loading Environment from FAT... Unable to read "uboot.env" from mmc0:1...
In:    serial
Out:   serial
Err:   serial
Net:   No ethernet found.
Hit any key to stop autoboot:  0
=> setenv dibase /debian-installer/armhf
=> fatload mmc 0:1 ${kernel_addr_r} ${dibase}/vmlinuz
5333504 bytes read in 225 ms (22.6 MiB/s)
=> setenv bootargs "console=ttyS0,115200n8"
=> fatload mmc 0:1 ${fdt_addr_r} ${dibase}/dtbs/sun8i-h2-plus-bananapi-m2-zero.dtb
25254 bytes read in 7 ms (3.4 MiB/s)
=> fdt addr ${fdt_addr_r} 0x40000
Working FDT set to 43000000
=> fatload mmc 0:1 ${ramdisk_addr_r} ${dibase}/initrd.gz
31693887 bytes read in 1312 ms (23 MiB/s)
=> bootz ${kernel_addr_r} ${ramdisk_addr_r}:${filesize} ${fdt_addr_r}
Kernel image @ 0x42000000 [ 0x000000 - 0x516200 ]
## Flattened Device Tree blob at 43000000
   Booting using the fdt blob at 0x43000000
Working FDT set to 43000000
   Loading Ramdisk to 481c6000, end 49fffc3f ... OK
   Loading Device Tree to 48183000, end 481c5fff ... OK
Working FDT set to 48183000
Starting kernel ...
At this point the installer runs and you can do a normal install. Well, except the wifi wasn't detected, I think because the netinst images don't include firmware. I spent a bit of time trying to figure out how to include it, but ultimately ended up installing over a USB ethernet dongle, which Just Worked and was less faff. Installing firmware-brcm80211 once installation completed allowed the built-in wifi to work fine. After install you need to configure u-boot to boot without intervention. At the u-boot prompt (i.e. after hitting a key to stop autoboot):
=> setenv bootargs "console=ttyS0,115200n8 root=LABEL=BPI-ROOT ro"
=> setenv bootcmd 'ext4load mmc 0:2 ${fdt_addr_r} /boot/sun8i-h2-plus-bananapi-m2-zero.dtb ; fdt addr ${fdt_addr_r} 0x40000 ; ext4load mmc 0:2 ${kernel_addr_r} /boot/vmlinuz ; ext4load mmc 0:2 ${ramdisk_addr_r} /boot/initrd.img ; bootz ${kernel_addr_r} ${ramdisk_addr_r}:${filesize} ${fdt_addr_r}'
=> saveenv
Saving Environment to FAT... OK
=> reset
This is assuming you have /boot on partition 2 on the SD - I left the first partition as VFAT (that's where the u-boot environment will be saved) and just used all of the rest as a single ext4 partition. I did have to do an e2label /dev/sdb2 BPI-ROOT to label / appropriately; otherwise I occasionally saw the SD card appear as mmc1 for Linux (I'm guessing due to asynchronous boot order with the wifi). You should now find the device boots without intervention.
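For completeness, here's roughly what that ends up looking like from the installed system's side - the device name and fstab line here are illustrative rather than copied from my install:
noodles@bpi:~$ sudo e2label /dev/mmcblk0p2
BPI-ROOT
noodles@bpi:~$ grep BPI-ROOT /etc/fstab
LABEL=BPI-ROOT  /  ext4  errors=remount-ro  0  1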

27 September 2023

Jonathan McDowell: onak 0.6.3 released

Yesterday I tagged a new version of onak, my OpenPGP compatible keyserver. I'd spent a bit of time during DebConf doing some minor cleanups, in particular an annoying systemd socket activation issue I'd been seeing. That turned out to be due to completely failing to compile in the systemd support, even when it was detected. There was also a signature verification issue with certain Ed25519 signatures (thanks Antoine Beaupré for making me dig into that one), along with various code cleanups. I also worked on Stateless OpenPGP CLI support, which is something I talked about when I released 0.6.2. It isn't something that's suitable for release, but it is sufficient to allow running the OpenPGP interoperability test suite verification tests, which I'm pleased to say all now pass. For the next release I'm hoping the OpenPGP crypto refresh process will have completed, which at the very least will mean adding support for v6 packet types and fingerprints. The PostgreSQL DB backend could also use some love, and I might see if performance with SQLite3 has improved any. Anyway. Available locally or via GitHub.
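To give a flavour of the Stateless OpenPGP CLI, verification looks something like this (file names invented; onak's sop shim only implements enough of the interface to run the verification tests):
$ sop verify message.sig signer.cert < message.txt
On success that prints one line per valid signature - a UTC timestamp plus the signing key and primary key fingerprints - which is what the interoperability test suite checks for.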
0.6.3 - 26th September 2023
  • Fix systemd detection + socket activation
  • Add CMake checking for Berkeley DB
  • Minor improvements to keyd logging
  • Fix decoding of signature creation time
  • Relax version check on parsing signature + key packets
  • Improve HTML escaping
  • Handle failed database initialisation more gracefully
  • Fix bug with EdDSA signatures with top 8+ bits unset

21 September 2023

Jonathan McDowell: DebConf23 Writeup

[DebConf2023 logo]

(I wrote this up for an internal work post, but I figure it's worth sharing more publicly too.)

I spent last week at DebConf23, this year's instance of the annual Debian conference, which was held in Kochi, India. As usual, DebConf provides a good reason to see a new part of the world; I've been going since 2004 (Porto Alegre, Brazil), and while I've missed a few (Mexico, Bosnia, and Switzerland) I've still managed to make it to instances on 5 continents. This has absolutely nothing to do with work, so I went on my own time + dime, but I figured a brief write-up might prove of interest.

I first installed Debian back in 1999 as a machine that was being co-located to operate as a web server / email host. I was attracted by the promise of easy online upgrades (or, at least, upgrades that could be performed without the need to be physically present at the machine, even if they naturally required a reboot at some point). It has mostly delivered on this over the years, and I've never found a compelling reason to move away. I became a Debian Developer in 2000. As a massively distributed volunteer project, DebConf provides an opportunity to find out what's happening in other areas of the project, catch up with teammates, and generally feel more involved and energised to work on Debian stuff. Also, by this point in time, a lot of Debian folk are good friends and it's always nice to catch up with them.

On that point, I felt that this year the hallway track was not quite the same as usual. For a number of reasons (COVID, climate change, travel time, we're all getting older) I think fewer core teams are achieving critical mass at DebConf - I was the only member physically present from 2 teams I'm involved in, and I'd have appreciated the opportunity to sit down with both of them for some in-person discussions. It also means it's harder to use DebConf as a venue for advancing major changes; previously, having all the decision makers in the same space for a week has meant it's possible to iron out the major discussion points, smoothing remote implementation after the conference. I'm told the mini DebConfs are where it's at for these sorts of meetings now, so perhaps I'll try to attend at least one of those next year.

Of course, I also went to a bunch of talks. I have differing levels of comment about each of them, but I've written up some brief notes below about the ones I remember something about. The comment was made that we perhaps had a lower level of deep technical talks, which is perhaps true, but I still think there were a number of high level technical talks that served to pique one's interest about the topic. Finally, this DebConf was the first I'm aware of that was accompanied by tragedy; as part of the day trip Abraham Raji, a project member and member of the local team, was involved in a fatal accident.

Talks (videos not yet up for all, but should appear for most)
  • Opening Ceremony
    Not much to say here; welcome to DebConf!
  • Continuous Key-Signing Party introduction
    I ended up running this, as Gunnar couldn't make it. Debian makes heavy use of the OpenPGP web of trust (no mass ability to send out Yubikeys + perform appropriate levels of identity verification), so making sure we're appropriately cross-signed, and linked to local conference organisers, is a dull but important part of the conference. We use a modified keysigning approach where identity verification + fingerprint confirmation happens over the course of the conference, so this session was just to explain how that works and confirm we were all working from the same fingerprint list.
  • State of Stateless - A Talk about Immutability and Reproducibility in Debian
    Stateless OSes seem to be gaining popularity, so I went along to this to see if there was anything of note. It was interesting, but nothing earth shattering - very high level.
  • What's missing so that Debian is finally reproducible?
    Reproducible builds are something I've been keeping an eye on for a long time, and I continue to be impressed by the work folks are putting into this - both for Debian, and other projects. From a security standpoint reproducible builds provide confidence against trojaned builds, and from a developer standpoint knowing you can build reproducibly helps with not having to keep a whole bunch of binary artefacts around.
  • Hello from keyring-maint
    In the distant past the process of getting your OpenPGP key into the Debian keyring (which is used to authenticate uploads + votes, amongst other things) was a clunky process that was often stalled. This hasn't been the case for at least the past 10 years, but there's still a residual piece of project memory that thinks keyring is a blocker. So as a team we say hi and talk about the fact we do monthly updates and generally are fairly responsive these days.
  • A declarative approach to Linux networking with Netplan
    Debian's /etc/network/interfaces is a fairly basic (if powerful) mechanism for configuring network interfaces. NetworkManager is a better bet for dynamic hosts (i.e. clients), and systemd-networkd seems to be a good choice for servers (I'm gradually moving machines over to it). Netplan tries to provide a unified mechanism for configuring both with a single configuration language. A noble aim, but I don't see a lot of benefit for anything I use - my NetworkManager hosts are highly dynamic (so no need to push shared config) and systemd-networkd (or /etc/network/interfaces) works just fine on the other hosts. I'm told Netplan has more use with more complicated setups, e.g. when Open vSwitch is involved.
  • Quick peek at ZFS, A too good to be true file system and volume manager.
    People who use ZFS rave about it. I'm naturally suspicious of any file system that doesn't come as part of my mainline kernel. But, as a longtime cautious mdraid+lvm+ext4 user, I appreciate that there have been advances in the file system space that maybe I should look at, and I've been trying out btrfs on more machines over the past couple of years. I can't deny ZFS has a bunch of interesting features, but nothing I need/want that I can't get from an mdraid+lvm+btrfs stack (in particular data checksumming + reflinks for dedupe were strong reasons to move to btrfs over ext4).
  • Bits from the DPL
    Exactly what it says on the tin; some bits from the DPL.
  • Adulting
    Enrico is always worth hearing; Adulting was no exception. Main takeaway is that we need to avoid trying to run the project on martyrs and instead make sure we build a sustainable project. I've been trying really hard to accept I just don't have time to take on additional responsibilities, no matter how interesting or relevant they might seem, so this resonated.
  • My life in git, after subversion, after CVS.
    Putting all of your home directory in revision control. I've never made this leap; I've got some Ansible playbooks that push out my core pieces of configuration, which is held in git, but I don't actually check this out directly on hosts I have accounts on. Interesting, but not for me.
  • EU Legislation BoF - Cyber Resilience Act, Product Liability Directive and CSAM Regulation
    The CRA seems to be a piece of ill-informed legislation that I'm going to have to find time to read properly. Discussion was a bit more alarmist than I personally feel is warranted, but it was a short session, had a bunch of folk in it, and even when I removed my mask it was hard to make myself understood.
  • What's new in the Linux kernel (and what's missing in Debian)
    An update from Ben about new kernel features. I'm paying less attention to such things these days, so it was nice to get a quick overview of it all.
  • Intro to SecureDrop, a sort-of Linux distro
    Actually based on Ubuntu, but lots of overlap with Debian as a result, and highly customised anyway. Notable, to me, for using OpenPGP as some of the backend crypto support. I managed to talk to Kunal separately about some of the pain points around that, which was an interesting discussion - they're trying to move from GnuPG to Sequoia, primarily because of the much easier integration and lack of requirement for the more complicated GnuPG features that sometimes get in the way.
  • The Docker(.io) ecosystem in Debian
    I hate Docker. I'm sure it's fine if you accept it wants to take over the host machine entirely, but when I've played around with it that's not been the case. This talk was more about the difficulty of trying to keep a fast moving upstream with lots of external dependencies properly up to date in a stable release. Vendoring the deps and trying to get a stable release exception seems like the least bad solution, but it's a problem that affects a growing number of projects.
  • Chiselled containers
    This was kind of interesting, but I think I missed the piece about why more granular packaging wasn't an option. The premise is that you can take an existing .deb and chisel it into smaller components, which then helps separate out dependencies rather than pulling in as much as the original .deb would. This was touted as being useful, in particular, for building targeted containers. Definitely appealing over custom-built userspaces for containers, but in an ideal world I think we'd want the information in the main packaging, and it becomes a lot of work.
  • Debian Contributors shake-up
    Debian Contributors is a great site for massaging your ego around contributions to Debian; it's also a useful point of reference from a data protection viewpoint in terms of information the project holds about contributors - everything is already public, but the Contributors website provides folk with an easy way to find their own information (with various configurable options about whether that's made public or not). Tássia is working on improving the various data feeds into the site, but realistically this is the responsibility of every Debian service owner.
  • New Member BOF
    I'm part of the teams that help get new folk into Debian - primarily as a member of the New Member Front Desk, but also as a mostly inactive Application Manager. It's been a while since we did one of these sessions, so the Front Desk/Debian Account Managers that were present did a panel session. Nothing earth shattering came out of it; like keyring-maint this is a team that has historically had problems, but is currently running smoothly.

6 September 2023

Jonathan McDowell: DebConf 23 Key Signing + setting up a new key

I've just finalised the OpenPGP key list for the DebConf 23 Keysigning party. This will follow the new style approach of being a continuous keysigning throughout the course of the conference, with an introduction session up front to confirm no one's fingerprint is corrupted and that we all calculated the same hash of the file. Participants will then verify each other's identities over the conference week, hopefully being able to build up a better level of verification than a one-shot key signing session. Those paying attention will note that my key details have changed this year; I am finally making a concerted effort to migrate to an elliptic curve based key. I managed to bootstrap it sufficiently at OMGWTFBBQ, but I'm keen to ensure it's well integrated into the web of trust, so please do come talk to me at DebConf so we can exchange fingerprints!
pub   ed25519 2023-08-19 [C] [expires: 2025-08-18]
      419F B4B6 567E 6EF7 DEAF  80A0 9026 108F B942 BEA4
uid           [ultimate] Jonathan McDowell <noodles@earth.li>
(I've no reason to suspect problems with my old key and will be making a graceful changeover in the Debian keyring at some point in October, after I've performed the September keyring update; that'll give things a couple of months to catch up before it's my turn to do an update again.)
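For anyone who hasn't done one of these before, the introduction session mentioned above boils down to checks along these lines (the file name is made up):
$ sha256sum dc23-ksp.txt
$ gpg --fingerprint noodles@earth.li
Everyone confirms they computed the same hash of the published participant list, and then fingerprint + identity verification happens in person over the course of the week.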

28 August 2023

Jonathan McDowell: OMGWTFBBQ 2023

[Photo: a person wearing an OMGWTFBBQ Catering t-shirt standing in front of a BBQ, various uncooked burgers visible.]

As is traditional for the UK August Bank Holiday weekend I made my way to Cambridge for the Debian UK BBQ. As was pointed out, we've been doing this for more than 20 years now, and it's always good to catch up with old friends and meet new folk. Thanks to Collabora, Codethink, and Andy for sponsoring a bunch of tasty refreshments. And, of course, thanks to Steve for hosting us all.

14 August 2023

Jonathan McDowell: listadmin3: An imperfect replacement for listadmin on Mailman 3

One of the annoyances I had when I upgraded from Buster to Bullseye (yes, I'm talking about an upgrade I did at the end of 2021) is that I ended up moving from Mailman 2 to Mailman 3. Which is fine, I guess, but it meant I could no longer use listadmin to deal with messages held for moderation. At the time I looked around, couldn't find anything, shrugged, and became incredibly bad at performing my list moderation duties. Last week I finally accepted what I should have done at least a year ago and wrote some hopefully-not-too-bad Python to web scrape the Mailman 3 admin interface. It then presents a list of subscribers and held messages that might need to be approved or discarded. It's heavily inspired by listadmin, but not a faithful copy (partly because it's been so long since I used it that I'm no longer familiar with its interface). Despite that I've called it listadmin3. It currently meets the bar of "extremely useful to me", so I've tagged v0.1. You can get it on GitHub. I'd be interested in knowing if it actually works for / is useful to anyone else (I suspect it won't be happy with interfaces configured to not be in English, but that should be solvable). Comment here or reply to my Fediverse announcement. Example usage, cribbed directly from the README:
$ listadmin3
fetching data for partypeople@example.org ... 200 messages
(1/200) 5303: omgitsspam@example.org / March 31, 2023, 6:39 a.m.:
  The message is not from a list member: TOP PICK
(a)ccept, (d)iscard, (b)ody, (h)eaders, (s)kip, (q)uit? q
Moving on...
fetching data for admins@example.org ... 1 subscription requests
(1/1) "The New Admin" <newadmin@example.org>
(a)ccept, (d)iscard, (r)eject, (s)kip, (q)uit? a
1 messages
(1/1) 6560: anastyspamer@example.org / Aug. 13, 2023, 3:15 p.m.:
  The message is not from a list member: Buy my stuff!
(a)ccept, (d)iscard, (b)ody, (h)eaders, (s)kip, (q)uit? d
0 to accept, 1 to discard, proceed? (y/n) y
fetching data for announce@example.org ... nothing in queue
$
There's Debian packaging in the repository (dpkg-buildpackage -uc -us -b will spit you out a .deb), but I'm holding off on filing an ITP + actually uploading until I know if it's useful enough for others. You only really need the listadmin3 file, and to ensure you have Python 3 + MechanicalSoup installed. (Yes, I still run mailing lists. Hell, I still run a Usenet server.)
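To give an idea of the approach, here's a minimal sketch of scraping a Mailman 3 (Postorius) held-messages page with MechanicalSoup. It's illustrative only, not the actual listadmin3 code, and the URLs and form field names are assumptions about a stock install:

import mechanicalsoup

browser = mechanicalsoup.StatefulBrowser()

# Log in via the web UI; field names assumed from a stock Postorius install
browser.open("https://lists.example.org/accounts/login/")
browser.select_form()  # first form on the page: the login form
browser["login"] = "moderator@example.org"
browser["password"] = "s3kr1t"
browser.submit_selected()

# Fetch the held-message queue for a list and print whatever rows we find
browser.open("https://lists.example.org/postorius/lists/"
             "partypeople.example.org/held_messages")
page = browser.get_current_page()
for row in page.select("table tr"):
    cells = [td.get_text(strip=True) for td in row.select("td")]
    if cells:
        print(cells)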

25 June 2023

Jonathan McDowell: Figuring out the right card for foreign currency transactions

While travel these days is much reduced I still end up in Dublin regularly enough (though less so now I'm not working directly with folk there), and have the occasional US trip. Given that I live and work in the UK, and thus get paid in GBP (£), this leads to the question of what to do about USD ($) and EUR (€) transactions. USD turns out to be easy; I still have a US account from when I lived there, and keeping it active has, so far, not proved to be a problem. EUR is trickier. In an ideal world I'd find a fee-free account that would allow normal Eurozone transfers and provide me with a suitable debit + credit card for use while travelling. I've found options with a fee, but I don't have enough Euro transactions to make it worthwhile.

A friend pointed me at the HSBC Global Money debit card, which claims no fees for currency conversions, and competitive rates. As I already have an account with HSBC it was easy to sign up to try it out. And, having recently been on a trip to CenterParcs, I had the perfect opportunity to compare it to my (and my wife's) existing cards to see how the rates played out. In particular I've always meant to figure out whether the cards that don't charge a loading rate end up with the same exchange rate as the cards that do have a separate per-transaction charge. So, armed with the HSBC Global Money card, a Monzo debit card, a Nationwide debit card, and a Co-operative debit card, we embarked on a week of eating out in the interests of data! The results will probably shock no one who's properly looked into this themselves. All the cards were on the VISA network, and they all ended up at around £1 = €1.15. According to Xe that was roughly the mid-market rate for the week I was away, so it seems like trusting VISA to do the currency conversion is a reasonable thing to do. The problem comes with the range of charges:
Card         Exchange rate   Charge %
Co-Op        1.15            2.65%
HSBC         1.15            0.00%
Monzo        1.15            0.00%
Nationwide   1.15            2.99%
This matches what Nationwide claims for its foreign transaction fees, though Co-Op claims 2.75% and the numbers looked more like 2.65% on the small transactions we incurred. So, the HSBC Global Money card is definitely a decent option. However Monzo is just as good, and has the advantage that it's a normal current account rather than something slightly different you have to funnel money into. Both are let down by the fact you need to use an app to check your balance etc. - Monzo seems to only have extremely basic web access to any of their accounts, and HSBC don't expose the Global Money account at all via web banking, only the phone app. Finally, what I really want is to be able to do this with a credit card, especially for things like online purchases or staying at hotels. My current credit card suffers from a per-transaction charge, but while looking up the Nationwide debit charge percentage to confirm it matched what I saw, I discovered that Nationwide credit cards do not charge a fee. So I think I'll be investigating that for my next trip.
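To make the difference concrete with a made-up transaction: a €46.00 bill at the £1 = €1.15 rate converts to £40.00 mid-market. Monzo and the HSBC Global Money card charge exactly that, while the Co-Op card's 2.65% adds £1.06 (total £41.06) and Nationwide's 2.99% adds £1.20 (total £41.20).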

24 May 2023

Jonathan McDowell: RIP Brenda McDowell

My mother died earlier this month. She'd been diagnosed with cancer back in February 2022 and had been through major surgery and a couple of rounds of chemotherapy, so it wasn't a complete surprise, even if it was faster at the end than expected. That doesn't make it easy, but I'm glad to be able to say that her immediate family were all with her at home at the end. I was touched by the number of people who turned up, both to the wake and the subsequent funeral ceremony. Mum had done a lot throughout her life and was settled in Newry, and it was nice to see how many folk wanted to pay their respects. It was also lovely to hear from some old school friends who had fond memories of her. There are many things I could say about her, but I don't feel that here is the place to do so. My father and brother did excellent jobs with the eulogies at the funeral. However, while I blog less about life things than I did in the past, I did not want it to go unmarked here. She was my Mum, I loved her, and I am sad she is gone.
