January was a slow month; I only did three uploads to Debian unstable:
xdg-desktop-portal-wlr updated to 0.8.1-1
swayimg updated to 4.7-1
usbguard updated to 1.1.4+ds-2, which closed #1122733
I was very happy to see the new
dfsg-new-queue and that there are more hands
now processing the NEW queue. I also finally got one of the packages accepted
that I uploaded after the Trixie release:
wayback, which I uploaded last August.
There has been another release since then; I'll try to upload that in the next
few days.
There was a bug report for carl
asking for Windows support. carl used the xdg
crate for looking up the XDG directories, but xdg does not support
Windows systems (and it seems this will not
change).
The reporter also provided a PR to replace the dependency with the
directories crate, which is more system
agnostic. I adapted the PR a bit, merged it, and released version
0.6.0 of carl.
At my dayjob I refactored
django-grouper.
django-grouper is a package we use to find duplicate objects in our data. Our
users often work with datasets of thousands of historical persons, places and
institutions and in projects that run over years and ingest data from multiple sources,
it happens that entries are created several times.
I wrote the initial app in 2024, but was never really happy about the approach
I used back then. It was based on this blog
post
that describes how to group spreadsheet text cells. It uses sklearn's
TfidfVectorizer
with a custom analyzer and the library
sparse_dot_topn for creating the
matrix. All in all, the module to calculate the clusters was 80 lines, and with
sparse_dot_topn it pulled in a rather niche Python library. I was pretty sure
that this functionality could also be implemented with basic sklearn
functionality, and it could: we are now using
DictVectorizer
because in a Django app we are working with objects that can be mapped to dicts
anyway. And for clustering the data, the app now uses the
DBSCAN
algorithm (with the Manhattan distance as metric). The module is now only half
the size and the whole app lost one dependency! I released those changes as
version
0.3.0 of the
app.
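The approach described above fits in a few lines. Here is a minimal sketch, assuming hypothetical records (the field names, eps value and min_samples are made up for illustration; the real app maps Django model instances to dicts):

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.cluster import DBSCAN

# Hypothetical records standing in for Django objects mapped to dicts.
records = [
    {"name": "Ada Lovelace", "place": "London"},
    {"name": "Ada Lovelace", "place": "London"},   # duplicate entry
    {"name": "Charles Babbage", "place": "London"},
]

# DictVectorizer one-hot encodes the string values, so identical dicts
# become identical vectors.
X = DictVectorizer(sparse=False).fit_transform(records)

# Records within eps of each other (Manhattan distance) land in one cluster;
# min_samples=1 makes every record a core point, so nothing is labelled noise.
labels = DBSCAN(eps=0.5, min_samples=1, metric="manhattan").fit_predict(X)
print(labels)  # the two duplicate records share a cluster label
```

Because the string values are one-hot encoded, two records that differ in a single field are at Manhattan distance 2, so a small eps cleanly separates exact duplicates from distinct entries.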
At the end of January, together with friends, I went to Brussels to attend
FOSDEM. We took the night train, but there were a couple of
broken-down trains, so the ride took 26 hours instead of one night. It is a good
thing we had a one-day buffer and FOSDEM only started on Saturday. As usual
there were too many talks to attend, so I'll have to watch some of the
recordings in the next few weeks.
Some examples of talks I found interesting so far:
Posted on February 3, 2026
Tags: madeof:atoms, madeof:bits
Today I had a day off. Some of it went great. Some less so.
I woke up, went out to pay our tribute to NotOurCat, and it was snowing!
yay! And I had a day off, so if it had snowed enough that shovelling was
needed, I had time to do it (it didn't, it started to rain soon
afterwards, but still, YAY snow!).
Then I had breakfast, with the fruit rye bread I had baked yesterday,
and I treated myself to some of the strong Irish tea I have left,
instead of the milder ones I want to finish before buying more of the
Irish.
And then, I bought myself a fancy new expensive fountain pen. One that
costs 16 €! More than three times as much as my usual ones! I hope it
will work as well, but I'm quite confident it should. I'll find out when
it arrives from Germany (together with a few ink samples that will
result in a future blog post with some SCIENCE).
I decided to try and use bank transfers instead of my Visa debit card
when buying from online shops that give the option to do so: it's a tiny
bit more effort, but it means I'm paying 0.25 € to my bank1
rather than the seller having to pay some unknown amount to a US-based
payment provider. Unluckily, the fountain pen website offered a huge
number of payment methods, but not bank transfers. Sigh.
And then, I could start working a bit on the connecting wires for the
LED strips for our living room: I soldered two pieces, six wires each
(it's one RGB strip, 4 pins, and a warm white one requiring two more),
then did a bit of testing, including writing some MicroPython code to add
a test mode that lights up each colour in sequence, and the morning was
almost gone. For some reason this project, as simple as it is, is taking
forever. But it is showing progress.
There was a break, when the postman delivered a package of chemicals2 for a future project or two. There will be blog posts!
After lunch I spent some time finishing eyelets on the outfit I wanted
to wear this evening, as I had not been able to finish it during FOSDEM.
This one will result in two blog posts!
Meanwhile, in the morning I didn't remember the name of the program I
used to load software on MicroPython boards such as the one that will
control the LED strips (that's Thonny), and while searching for it in
the documentation, I found that there is also a command line program I
can use, mpremote, and that's a much better fit for my preferences!
I mentioned it in an XMPP room full of nerds, and one of them mentioned
that he could try it on his Inkplate, when he had time, and I was
nerd-sniped into trying it on mine, which had been sitting unused
showing the temperatures in our old house on the last day it spent there
and needs to be updated for the sensors in the new house.
And that led to the writing of some notes on how to set it up from the
command line
(good), and to the opening of one upstream issue
(bad), because I have an old model, and the board-specific library isn't
working. At all.
And that's when I realized that it was 17:00, I still had to bake the
bread I had been working on since yesterday evening (ciabatta, one of my
favourites, but it needs almost one hour in the oven), the outfit I
wanted to wear in the evening was still not wearable, the table needed
cleaning, and some panicking was due. Thankfully, my mother was cooking
dinner, so I didn't have to do that too.
I turned the oven on, sewed the shoulder seams of the bodice while
spraying water on the bread every 5 minutes, and then while it was
cooking on its own, started to attach a closure to the skirt, decided
that a safety pin was a perfectly reasonable closure for the first day
an outfit is worn, took care of the table, took care of the bread, used
some twine to close the bodice, because I still haven't worked out what
to use for laces, realized my bodkin is still misplaced, used a long
and sharp and big needle meant for sewing mattresses instead of a
bodkin, managed not to stab myself, and less than half an hour late we
could have dinner.
There was bread, there was Swedish crispbread, there were spreads (tuna,
and beans), and vegetables, and then there was the cake that caused my
mother to panic when she added her last honey to the milk and it curdled
(my SO and I tried it, it had no odd taste, we decided it could be used)
and it was good, although I had to get a second slice just to be 100%
sure of it.
And now I'm exhausted, and I've only done half of the things I had
planned to do, but I'd still say I've had quite a good day.
Banca Etica, so one that avoids any investment in weapons and
a number of other problematic things.
I'm going to FOSDEM 2026!
I'm presenting in the Containers dev room. My talk is Java Memory Management
in Containers
and it's scheduled as the first talk on the first day. I'm the warm-up act!
The Java devroom has been a stalwart at FOSDEM since 2004 (sometimes in other
forms), but sadly there's no Java devroom this year. There's a story about
that, but it's not mine to tell.
Please recommend to me any interesting talks! Here are a few that caught my eye:
Debian/related:
Given we've entered a new year, it's time for my annual recap of my Free Software activities for the previous calendar year. For previous years see 2019, 2020, 2021, 2022, 2023 + 2024.
Conferences
My first conference of the year was FOSDEM. I'd submitted a talk proposal about system attestation in production environments for the attestation devroom, but they had a lot of good submissions and mine was a bit more "this is how we do it" rather than "here's some neat Free Software that does it". I'm still trying to work out how to make some of the bits we do more open, but the problem is a lot of the neat stuff is about taking internal knowledge about what should be running and making sure that's the case, and what you end up with if you abstract that is a toolkit that still needs a lot of work to get something useful.
I had more luck at DebConf25, where I gave a talk (Don't fear the TPM) trying to explain how TPMs could be useful in a Debian context. Naturally the comments section descended into a discussion about UEFI Secure Boot, which is a separate, if related, thing. DebConf also featured the usual catch-up with fellow team members, hanging out with folk I hadn't seen in ages, and generally feeling a bit more invigorated about Debian.
Other conferences I considered, but couldn't justify, were All Systems Go! and the Linux Plumbers Conference. I've no doubt both would have had a bunch of interesting and relevant talks + discussions, but not enough this year.
I'm going to have to miss FOSDEM this year, due to travel later in the month, and I'm uncertain if I'm going to make DebConf (for a variety of reasons). That means I don't have a Free Software conference planned for 2026. Ironically FOSSY moving away from Portland makes it a less appealing option (I have Portland friends it would be good to visit). Other than potential Debian MiniConfs, is there anything else European I should consider?
Debian
I continue to try and keep RetroArch in shape, with 1.22.2+dfsg-1 (and, shortly after, 1.22.2+dfsg-2 - git-buildpackage in trixie seems more strict about Build-Depends existing in the outside environment, and I keep forgetting I need Build-Depends-Arch and Build-Depends-Indep to be pretty much the same with a minimal Build-Depends that just has enough for the clean target) getting uploaded in December, and 1.20.0+dfsg-1, 1.20+dfsg-2 + 1.20+dfsg-3 all being uploaded earlier in the year. retroarch-assets had 1.20.0+dfsg-1 uploaded back in April. I need to find some time to get 1.22.0 packaged. libretro-snes9x got updated to 1.63+dfsg-1.
sdcc saw 4.5.0+dfsg-1, 4.5.0+dfsg-2, 4.5.0+dfsg-3 (I love major GCC upgrades) and 4.5.0-dfsg-4 uploads. There's an outstanding bug around a LaTeX error building the manual, but this turns out to be a bug in the 2.5 RC for LyX. Huge credit to Tobias Quathamer for engaging with this, and Pavel Sanda + Jürgen Spitzmüller from the LyX upstream for figuring out the issue + a fix.
Pulseview saw 0.4.2-4 uploaded to fix issues with the GCC 15 + CMake upgrades. I should probably chase the sigrok upstream about new releases; I think there are a bunch of devices that have gained support in git without seeing a tagged release yet.
I did an Electronics Team upload for gputils 1.5.2-2 to fix compilation with GCC 15.
While I don't do a lot with storage devices these days (if I can help it), I still pay a little bit of attention to sg3-utils. That resulted in 1.48-2 and 1.48-3 uploads in 2025.
libcli got a 1.10.7-3 upload to deal with the libcrypt-dev split out.
Finally I got more up-to-date versions of libtorrent (0.15.7-1) and rtorrent (also 0.15.7-1) uploaded to experimental. There's a ppc64el build failure in libtorrent, but having asked on debian-powerpc this looks like a flaky test/code, and I should probably go ahead and upload to unstable.
I sponsored some uploads for Michel Lind - the initial uploads of plymouth-theme-hot-dog, and the separated out pykdumpfile package.
Recognising the fact I wasn't contributing in a useful fashion to the Data Protection Team, I set about trying to resign in an orderly fashion - see Andreas' call for volunteers that went out in the last week. Shout out to Enrico for pointing out in the past that we should gracefully step down from things we're not actually managing to do, to avoid the perception it's all fine and no one else needs to step up. It took me too long to act on it.
The Debian keyring team continues to operate smoothly, maintaining our monthly release cadence with a 3 month rotation that ensures all team members stay familiar with the process and that their setups are still operational (especially important after Debian releases). I handled the 2025.03.23, 2025.06.24, 2025.06.27, 2025.09.18, 2025.12.08 + 2025.12.26 pushes.
Linux
TPM related fixes were the theme of my kernel contributions in 2025, all within a work context. Some were just cleanups, but several fixed real issues that were causing us problems. I've also tried to be more proactive about reviewing diffs in the TPM subsystem; it feels like a useful way to contribute, as well as making me pay more active attention to what's going on there.
Personal projects
I did some work on onak, my OpenPGP keyserver. That resulted in a 0.6.4 release, mainly driven by fixes for building with more recent CMake + GCC versions in Debian. I've got a set of changes that should add RFC9580 (v6) support, but there are not a lot of test keys out there at present for making sure I'm handling things properly. Equally there's a plan to remove Berkeley DB from Debian, which I'm completely down with, but that means I need a new primary backend. I've got a draft of LMDB support to replace that, but I need to go back and confirm I've got all the important bits implemented before publishing it and committing to a DB layout. I'd also like to add sqlite support as an option, but that needs some thought about trying to take proper advantage of its features, rather than just treating it as a key-value store.
(I know everyone likes to hate on OpenPGP these days, but I continue to be interested by the whole web-of-trust piece of it, which nothing else I'm aware of offers.)
That about wraps up 2025. Nothing particularly earth-shaking in there, more a case of continuing to tread water on the various things I'm involved in. I highly doubt 2026 will be much different, but I think that's ok. I scratch my own itches, and if that helps out other folk too then that's lovely, but it's not the primary goal.
Same as last year, this is a summary of what I've been up to throughout the year.
See also the recaps/retrospectives published by my friends (antiz, jvoisin, orhun).
Uploaded 467 packages to Arch Linux
Most of them being reproducible, meaning it can be proven that I didn't abuse my position as the person compiling the binaries
35 of them are signal-desktop
29 of them are metasploit
Made 53 uploads to Debian
All of them being related to my work in the debian-rust team, which I've been a part of since 2018
Also applied for Debian Developer status (with 4 Debian Developers advocating for me)
Made 14 commits in Alpine Linux aports
13 of them being package releases
Made 2 commits in NixOS nixpkgs
Also joined their GitHub org
Made 4 commits in homebrew-core
With special focus on polishing the Rust development experience for the RP2040 microcontroller
Lost Onion, my cat of 13 years, to inoperable cancer. He was with me throughout my entire open source journey (sometimes being credited as co-author) and looked after me for my entire adult life. You won't be forgotten.
Developed 6 hand-held games with embedded Rust, most of them being birthday gifts for people close to me
embedded-graphics-colorcast a library I developed so I can keep using embedded-mono-img on ST7789/ILI9486 screens - I used tinybmp in one project but it was fairly slow
ch32v003-demo to demo and document the low-end ch32v003 RISC-V microcontroller, with devboards that are commonly sold for 0.50-0.70 on AliExpress (it's cute, but it lacks the required 5.1k resistors on the USB-C configuration pins that tell the host to provide 5V, so it won't work with many USB-C chargers, which is quite annoying)
embedded-savegame an atomic/transactional savegame library, with powerfail-safety and wear-leveling, optimized for flash and EEPROM storage
djb2 a very lightweight non-cryptographic checksum algorithm that replaced my use of CRC32 in the embedded-savegame library, to make it more suitable for the ch32v003
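The djb2 hash itself is tiny, which is what makes it attractive for a low-end part like the ch32v003. The crate above is Rust and its exact API isn't shown here, so as illustration only, here is the classic additive djb2 sketched in Python:

```python
def djb2(data: bytes) -> int:
    """Bernstein's djb2: h = h * 33 + byte, truncated to 32 bits."""
    h = 5381
    for byte in data:
        h = (h * 33 + byte) & 0xFFFFFFFF
    return h

# A single-byte change flips the checksum, which is all a savegame
# integrity check needs; note djb2 is not cryptographic.
print(hex(djb2(b"savegame")), hex(djb2(b"savegame!")))
```

On a microcontroller the multiply-by-33 compiles down to a shift and an add, so unlike table-driven CRC32 it needs neither a lookup table in flash nor bit-by-bit loops.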
This was the first year I attended Kernel
Recipes, and I have nothing to say but how
much I enjoyed it and how grateful I am for the opportunity to talk more about
kworkflow to very experienced kernel developers. What
I like most about Kernel Recipes is its intimate format, with only one track
and many moments to get closer to experts and people that you commonly only
talk to online during the whole year.
In the beginning of this year, I gave the talk Don't let your motivation go,
save time with kworkflow at
FOSDEM,
introducing kworkflow to a more diverse audience, with different levels of
involvement in the Linux kernel development.
At this year's Kernel Recipes I presented
the second talk of the first day: Kworkflow - mix & match kernel recipes end-to-end.
The Kernel Recipes audience is a bit different from FOSDEM's, mostly
long-term kernel developers, so I decided to go straight to the point. I
showed kworkflow being part of the daily life of a typical kernel developer
from the local setup to install a custom kernel in different target machines to
the point of sending and applying patches to/from the mailing list. In short, I
showed how to mix and match kernel workflow recipes end-to-end.
As I went a bit fast when showing some features during my presentation, in this
blog post I explain each slide from my speaker notes. You can see a summary of
this presentation in the Kernel Recipes Live Blog Day 1: morning.
Introduction
Hi, I'm Melissa Wen from Igalia. As we have already started sharing kernel recipes
and even more are coming in the next three days, in this presentation I'll talk
about kworkflow: a cookbook to mix & match kernel recipes end-to-end.
This is my first time attending Kernel Recipes, so lemme introduce myself
briefly.
As I said, I work for Igalia, mostly on kernel GPU drivers in the DRM
subsystem.
In the past, I co-maintained VKMS and the v3d driver. Nowadays I focus on the
AMD display driver, mostly for the Steam Deck.
Besides code, I contribute to the Linux kernel by mentoring several newcomers
in Outreachy, Google Summer of Code and Igalia Coding Experience. Also, by
documenting and tooling the kernel.
And what's this cookbook called kworkflow?
Kworkflow (kw)
Kworkflow is a tool created by Rodrigo Siqueira, my colleague at Igalia. It's a
single platform that combines software and tools to:
optimize your kernel development workflow;
reduce time spent in repetitive tasks;
standardize best practices;
ensure that deployment data flows smoothly and reliably between different
kernel workflows;
It's mostly done by volunteers, kernel developers using their spare time. Its
features cover real use cases according to kernel developer needs.
Basically, it mixes and matches the daily life of a typical kernel developer
with kernel workflow recipes and some secret sauces.
First recipe: A good GPU driver for my AMD laptop
So, it's time to start the first recipe: a good GPU driver for my AMD laptop.
Before starting any recipe we need to check the necessary ingredients and
tools. So, let's check what you have at home.
With kworkflow, you can use:
kw device: to get information about the target machine, such as CPU model,
kernel version, distribution, and GPU model
kw remote: to set the address of this machine for remote access
kw config: you can configure kw with kw config. With this command you can
basically select the tools, flags and preferences that kw will use to build
and deploy a custom kernel on a target machine. You can also define recipients
for your patches when sending them using kw send-patch. I'll explain more about
each feature later in this presentation.
kw kernel-config-manager (or just kw k): to fetch the kernel .config file
from a given machine, store multiple .config files, list and retrieve them
according to your needs.
Now, with all ingredients and tools selected and well portioned, follow the
right steps to prepare your custom kernel!
First step: Mix ingredients with kw build or just kw b
kw b and its options wrap many routines of compiling a custom kernel.
You can run kw b -i to check the name and kernel version and the number
of modules that will be compiled and kw b --menu to change kernel
configurations.
You can also pre-configure compiling preferences in kw config regarding
kernel building. For example, target architecture, the name of the
generated kernel image, if you need to cross-compile this kernel for a
different system and which tool to use for it, setting different warning
levels, compiling with CFlags, etc.
Then you can just run kw b to compile the custom kernel for a target
machine.
Second step: Bake it with kw deploy or just kw d
After compiling the custom kernel, we want to install it in the target machine.
Check the name of the custom kernel built: 6.17.0-rc6. With kw s (SSH),
access the target machine and see that it's running the kernel from the Debian
distribution, 6.16.7+deb14-amd64.
As with building settings, you can also pre-configure some deployment settings,
such as compression type, path to device tree binaries, target machine (remote,
local, vm), if you want to reboot the target machine just after deploying your
custom kernel, and if you want to boot in the custom kernel when restarting the
system after deployment.
If you didn't pre-configure some options, you can still customize via a command
option; for example, kw d --reboot will reboot the system after deployment,
even if I didn't set this in my preferences.
By just running kw d --reboot, I have installed the kernel on a given target
machine and rebooted it. So when accessing the system again I can see it was
booted into my custom kernel.
Third step: Time to taste with kw debug
kw debug wraps many tools for validating a kernel on a target machine. We
can log basic dmesg info, but also track events and use ftrace.
With kw debug --dmesg --history we can grab the full dmesg log from a
remote machine; if you use the --follow option, you will monitor dmesg
output. You can also run a command with kw debug --dmesg --cmd="<my
command>" and collect just the dmesg output related to this specific execution
period.
In the example, I'll just unload the amdgpu driver. I use kw drm
--gui-off to drop the graphical interface and release the amdgpu driver for
unloading. Then I run kw debug --dmesg --cmd="modprobe -r amdgpu" to unload
the amdgpu driver, but it fails and I can't unload it.
Cooking Problems
Oh no! That custom kernel isn't tasting good. Don't worry: as in many recipe
preparations, we can search on the internet to find suggestions on how to make
it tasty, alternative ingredients, and other flavours according to your
taste.
With kw patch-hub you can search the lore kernel mailing list for possible
patches that can fix your kernel issue. You can navigate the mailing lists,
check series, bookmark them if you find them relevant, and apply them to your
local kernel tree, creating a different branch for tasting... oops, for testing. In
this example, I'm opening the amd-gfx mailing list, where I can find
contributions related to the AMD GPU driver, bookmark and/or just apply the
series to my work tree, and with kw bd I can compile & install the custom kernel
with this possible bug fix in one shot.
As I changed my kw config to reboot after deployment, I just need to wait for
the system to boot to try unloading the amdgpu driver again with kw debug
--dmesg --cmd="modprobe -r amdgpu". From the dmesg output retrieved by kw for
this command, the driver was unloaded, the problem is fixed by this series, and
the kernel tastes good now.
If I'm satisfied with the solution, I can even use kw patch-hub to access the
bookmarked series and mark the checkbox that will reply to the patch thread
with a Reviewed-by tag for me.
Second Recipe: Raspberry Pi 4 with Upstream Kernel
As in all recipes, we need ingredients and tools, but with kworkflow you can
get everything set up as quickly as when changing sets in a TV show. We can use kw env
to change to a different environment with all kw and kernel configuration set
and also with the latest compiled kernel cached.
I was preparing the first recipe for an x86 AMD laptop, and with kw env --use
RPI_64 I use the same worktree but move to a different kernel workflow, now
for the Raspberry Pi 4 (64 bits). The previously compiled kernel 6.17.0-rc6-mainline+
is there with 1266 modules, not the 6.17.0-rc6 kernel with 285 modules that I
just built & deployed. kw build settings are also different: now I'm targeting
the arm64 architecture with a kernel cross-compiled using the aarch64-linux-gnu-
cross-compilation tool, and my kernel image is called kernel8 now.
If you didn't plan for this recipe in advance, don't worry. You can create a
new environment with kw env --create RPI_64_V2 and run kw init --template
to start preparing your kernel recipe with the mirepoix ready.
I mean, with the basic ingredients already cut
I mean, with the kw configuration set from a template.
And you can use kw remote to set the IP address of your target machine and
kw kernel-config-manager to fetch/retrieve the .config file from your target
machine. So just run kw bd to compile and install an upstream kernel for the
Raspberry Pi 4.
Third Recipe: The Mainline Kernel Ringing on my Steam Deck (Live Demo)
Let's show you how easy it is to build, install and test a custom kernel for the Steam
Deck with Kworkflow. It's a live demo, but I also recorded it because I know
the risks I'm exposed to and something can go very wrong just because of
reasons :)
Report: how the live demo went
For this live demo, I took my OLED Steam Deck to the stage. I explained that,
if I boot a mainline kernel on this device, there is no audio. So I turned it on
and booted the mainline kernel I'd installed beforehand. It was clear that
there was no typical Steam Deck startup audio when the system was loaded.
As I started the demo in the kw environment for Raspberry Pi 4, I first moved
to another environment previously used for Steam Deck. In this STEAMDECK
environment, the mainline kernel was already compiled and cached, and all
settings for accessing the target machine, compiling and installing a custom
kernel were retrieved automatically.
My live demo followed these steps:
With kw env --use STEAMDECK, switch to a kworkflow environment for Steam
Deck kernel development.
With kw b -i, show that kw will compile and install a kernel with 285
modules, named 6.17.0-rc6-mainline-for-deck.
Run kw config to show that, in this environment, the kw configuration changes
to the x86 architecture, without cross-compilation.
Run kw device to display information about the Steam Deck device, i.e. the
target machine. It also proves that the remote access - user and IP - for
this Steam Deck was already configured when using the STEAMDECK environment, as
expected.
Using git am, as usual, apply a hot fix on top of the mainline kernel.
This hot fix makes the audio play again on Steam Deck.
With kw b, build the kernel with the audio change. It will be fast because
we are only compiling the affected files, since everything was previously
done and cached. The compiled kernel, kw configuration and kernel configuration
are retrieved by just moving to the STEAMDECK environment.
Run kw d --force --reboot to deploy the new custom kernel to the target
machine. The --force option enables us to install the mainline kernel even
if mkinitcpio complains about missing support for downstream packages when
generating the initramfs. The --reboot option makes the Steam Deck reboot
automatically, just after the deployment completes.
After finishing deployment, the Steam Deck will reboot into the new custom
kernel version and make a clear, resonant or vibrating sound. [Hopefully]
Finally, I showed the audience that, if I wanted to send this patch
upstream, I just needed to run kw send-patch and kw would automatically add
subsystem maintainers, reviewers and mailing lists for the affected files as
recipients, and send the patch for the upstream community's assessment. As I
didn't want to create unnecessary noise, I just did a dry run with kw
send-patch -s --simulate to explain how it looks.
What else can kworkflow already mix & match?
In this presentation, I showed that kworkflow supports different kernel
development workflows, i.e., multiple distributions, different bootloaders and
architectures, different target machines, different debugging tools, and that it
automates best practices for your kernel development routines, from development
environment setup and verifying a custom kernel on bare metal to sending
contributions upstream following the contributions-by-e-mail structure. I
exemplified it with three different target machines: my ordinary x86 AMD laptop
with Debian, a Raspberry Pi 4 with arm64 Raspbian (cross-compilation) and the
Steam Deck with SteamOS (x86 Arch-based OS). Besides those distributions,
Kworkflow also supports Ubuntu, Fedora and PopOS.
Now it's your turn: do you have any secret recipes to share? Please share
them with us via kworkflow.
I sometimes use Vagrant to deploy my VMs, and recently when I tried to deploy one for Trixie, I could not see one available. I checked the official Debian images on Vagrant Cloud at https://portal.cloud.hashicorp.com/vagrant/discover/debian and could not find an image for trixie there. I also looked at other cloud image sources like Docker Hub, where I could see an image for Trixie. So I looked into how I can generate a Vagrant image for Debian locally.
This will install some dependency packages, and will ask for the sudo password if it needs to install something not already installed. Let's call make help:
$ make help
To run this makefile, run:
make <DIST>-<CLOUD>-<ARCH>
WHERE <DIST> is bullseye, buster, stretch, sid or testing
And <CLOUD> is azure, ec2, gce, generic, genericcloud, nocloud, vagrant, vagrantcontrib
And <ARCH> is amd64, arm64, ppc64el
Set DESTDIR= to write images to given directory.
$ make trixie-vagrant-amd64
umask 022; \
./bin/debian-cloud-images build \
trixie vagrant amd64 \
--build-id vagrant-cloud-images-master \
--build-type official
usage: debian-cloud-images build
debian-cloud-images build: error: argument RELEASE: invalid value: trixie
make: *** [Makefile:22: trixie-vagrant-amd64] Error 2
As you can see, trixie is not even among the available options, and it is not building either. Before trying to update the codebase, I looked at the pending MRs on Salsa and found Michael Ablassmeier's pending merge request at https://salsa.debian.org/cloud-team/debian-vagrant-images/-/merge_requests/18. So let me test that commit and see if I can build trixie locally from Michael's MR.
$ make help
To run this makefile, run:
make <DIST>-<CLOUD>-<ARCH>
WHERE <DIST> is bullseye, buster, stretch, sid or testing
And <CLOUD> is azure, ec2, gce, generic, genericcloud, nocloud, vagrant, vagrantcontrib
And <ARCH> is amd64, arm64, ppc64el
Set DESTDIR= to write images to given directory.
$ make trixie-vagrant-amd64
umask 022; \
./bin/debian-cloud-images build \
trixie vagrant amd64 \
--build-id vagrant-cloud-images-master \
--build-type official
2025-09-17 00:36:25,919 INFO Adding class DEBIAN
2025-09-17 00:36:25,919 INFO Adding class CLOUD
2025-09-17 00:36:25,919 INFO Adding class TRIXIE
2025-09-17 00:36:25,920 INFO Adding class VAGRANT
2025-09-17 00:36:25,920 INFO Adding class AMD64
2025-09-17 00:36:25,920 INFO Adding class LINUX_IMAGE_BASE
2025-09-17 00:36:25,920 INFO Adding class GRUB_PC
2025-09-17 00:36:25,920 INFO Adding class LAST
2025-09-17 00:36:25,921 INFO Running FAI: sudo env PYTHONPATH=/home/rajudev/dev/salsa/michael/debian-vagrant-images/src/debian_cloud_images/build/../.. CLOUD_BUILD_DATA=/home/rajudev/dev/salsa/michael/debian-vagrant-images/src/debian_cloud_images/data CLOUD_BUILD_INFO= "type": "official", "release": "trixie", "release_id": "13", "release_baseid": "13", "vendor": "vagrant", "arch": "amd64", "build_id": "vagrant-cloud-images-master", "version": "20250917-1" CLOUD_BUILD_NAME=debian-trixie-vagrant-amd64-official-20250917-1 CLOUD_BUILD_OUTPUT_DIR=/home/rajudev/dev/salsa/michael/debian-vagrant-images CLOUD_RELEASE_ID=vagrant CLOUD_RELEASE_VERSION=20250917-1 fai-diskimage --verbose --hostname debian --class DEBIAN,CLOUD,TRIXIE,VAGRANT,AMD64,LINUX_IMAGE_BASE,GRUB_PC,LAST --size 100G --cspace /home/rajudev/dev/salsa/michael/debian-vagrant-images/src/debian_cloud_images/build/fai_config debian-trixie-vagrant-amd64-official-20250917-1.raw
..... continued
Although we can now build the images, we just don't see an option for it in the help text, not even for bookworm. Only the text in the Makefile is outdated; I can build a trixie Vagrant box now. Thanks to Michael for the fix.
In my post yesterday, ARM is great, ARM is terrible (and so is RISC-V), I described my desire to find ARM hardware with AES instructions to support full-disk encryption, and the poor state of the OS ecosystem around the newer ARM boards.
I was anticipating buying either a newer ARM SBC or an x86 mini PC of some sort.
More-efficient AES alternatives
Always one to think, "what if I didn't have to actually buy something?", I decided to research whether it was possible to use encryption algorithms that are more performant on the Raspberry Pi 4 I already have.
The answer was yes. From cryptsetup benchmark:
root@mccoy:~# cryptsetup benchmark --cipher=xchacha12,aes-adiantum-plain64
# Tests are approximate using memory only (no storage IO).
# Algorithm Key Encryption Decryption
xchacha12,aes-adiantum 256b 159.7 MiB/s 160.0 MiB/s
xchacha20,aes-adiantum 256b 116.7 MiB/s 169.1 MiB/s
aes-xts 256b 52.5 MiB/s 52.6 MiB/s
With best-case reads from my SD card at 45 MB/s (measured with dd if=/dev/mmcblk0 of=/dev/null bs=1048576 status=progress), either of the ChaCha-based algorithms will be fast enough. Great, I thought; now I can just solve this problem without spending a dollar.
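For completeness, here is a hedged sketch of how one might actually format a partition with the faster cipher from the benchmark above. The device path, the LUKS2 type, and the 4K sector size are my illustrative choices, not from this post, and the command is only printed unless DEV is set:

```shell
# Dry-run sketch (assumptions: device path, LUKS2, 4K sectors are illustrative).
# Formats $DEV with the Adiantum construction that benchmarked fastest above.
DEV="${DEV:-}"   # e.g. /dev/mmcblk0p2; left empty so nothing is touched
CMD="cryptsetup luksFormat --type luks2 --cipher xchacha12,aes-adiantum-plain64 --key-size 256 --sector-size 4096 $DEV"
if [ -n "$DEV" ]; then
  sudo $CMD      # destructive: wipes the partition
else
  echo "would run: $CMD"
fi
```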
But not so fast.
Serial terminals vs. serial consoles
My primary use case for this device is to drive my actual old DEC vt510 terminal. I have long been able to do that by running a getty for my FTDI-based USB-to-serial converter on /dev/ttyUSB0. This gets me a login prompt, and I can do whatever I need from there.
This does not get me a serial console, however. The serial console would show kernel messages and could be used to interact with the pre-multiuser stages of the system, that is, everything before the login prompt. You can use it to access an emergency shell for repair, etc.
Although I have long booted that kernel with console=tty0 console=ttyUSB0,57600, the serial console has never worked, but I'd never bothered investigating because the text terminal was sufficient.
You might be seeing where this is going: to have root on an encrypted LUKS volume, you have to enter the decryption password in the pre-multiuser environment (which happens to be on the initramfs).
So I started looking. First, I extracted the initrd with cpio and noticed that the ftdi_sio and usbserial modules weren't present. I added them to /etc/initramfs-tools/modules and rebooted; no better.
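For reference, the addition amounts to two module names in /etc/initramfs-tools/modules (one per line), followed by regenerating the initramfs with update-initramfs -u:

```
# /etc/initramfs-tools/modules
usbserial
ftdi_sio
```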
So I found the kernel's serial console guide, which explicitly notes "To use a serial port as console you need to compile the support into your kernel". Well, I have no desire to custom-build a kernel on a Raspberry Pi with MicroSD storage every time a new kernel comes out.
I thought: well, I don't strictly need the kernel to know about the console on /dev/ttyUSB0 for this; I just need the password prompt, which comes from userspace, to know about it.
So I looked at the initramfs code, and wouldn't you know it, it uses /dev/console. Looking at /proc/consoles on that system, indeed it doesn't show ttyUSB0. So even though it is possible to load the USB serial driver in the initramfs, there is no way to make the initramfs use it, because it only uses whatever the kernel recognizes as a console, and the kernel won't recognize this. So there is no way to use a USB-to-serial adapter to enter a password for an encrypted root filesystem.
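The check behind that observation is just reading /proc/consoles; a small guarded sketch (the fallback message is mine):

```shell
# Show which consoles the kernel actually registered; ttyUSB0 never appears
# here, which is why the initramfs prompt cannot reach a USB serial adapter.
if [ -r /proc/consoles ]; then
  consoles="$(cat /proc/consoles)"
else
  consoles="(/proc/consoles not available on this system)"
fi
printf '%s\n' "$consoles"
```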
Drat.
The on-board UARTs?
I can hear you now: "The Pi already has on-board serial support! Why not use that?"
Ah yes, the reason I don't want to use that is that it is difficult to use, particularly if you want RTS/CTS hardware flow control (or DTR/DSR on these old terminals, but that's another story, and I built a custom cable to map it to RTS/CTS anyhow).
Since you asked, I'll take you down this unpleasant path.
The GPIO typically has only 2 pins for serial communication: 8 and 10, for TX and RX, respectively.
But dive in and you get into a confusing maze of UARTs. The "mini UART", the one we are mostly familiar with on the Pi, does not support hardware flow control. The PL011 does. So the natural question is: how do we switch to the PL011, and what pins does it use? Great questions, and the answer is undocumented, at least for the Pi 4.
According to that page, for the Pi 4, the primary UART is UART1, UART1 is the mini UART, the secondary UART is not normally present on the GPIO connector and might be used by Bluetooth anyway, and there is no documented pin for RTS/CTS anyhow (let alone some of the other lines modems use). There are supposed to be /dev/ttyAMA* devices, but I don't have those. There's an enable_uart kernel parameter, which does things like stop the mini UART from changing baud rates every time the VPU changes clock frequency (I am not making this up!), but it doesn't seem to control the PL011 UART selection. This page has a program to do it, and map some GPIO pins to RTS/CTS, in theory.
Even if you get all that working, you still have the problem that the Pi UARTs (all of them, of every type) are 3.3V while RS-232 is 5V, so unless you get a level converter, you will fry your Pi the moment you connect it to something useful. So you're probably looking at some soldering and such just to build a cable that will work with an iffy stack.
So, I could probably make it work given enough time, but I don't have that time to spare for weird Pi serial problems, so I have always used USB converters when I need serial from a Pi.
Conclusion
I bought a fanless x86 micro PC with a N100 chip and all the ports I might want: a couple of DB-9 serial ports, some Ethernet ports, HDMI and VGA ports, and built-in wifi. Done.
I'm something of a filesystem geek, I guess. I first wrote about ZFS on Linux 14 years ago, and even before I used ZFS, I had used ext2/3/4, jfs, reiserfs, xfs, and no doubt some others.
I've also used btrfs. I last posted about it in 2014, when I noted it had some advantages over ZFS, but also some drawbacks, including a lot of kernel panics.
Since that comparison, ZFS has gained trim support and btrfs has stabilized. The btrfs status page gives you an accurate idea of what is good to use on btrfs.
Background: Moving towards ZFS and btrfs
I have been trying to move everything away from ext4 and onto either ZFS or btrfs. There are several reasons for that:
The checksums for every block help detect potential silent data corruption
Instant snapshots make consistent backups of live systems a lot easier, and without the hassle and wasted space of LVM snapshots
Transparent compression and dedup can save a lot of space in storage-constrained environments
For any machine with at least 32GB of RAM (plus my backup server, which has only 8GB), I run ZFS. While it lacks some of the flexibility of btrfs, it has polish. zfs list -o space shows useful space accounting. zvols can back VMs. With my project simplesnap, I can easily send hourly backups with ZFS, and I choose to send them over NNCP in most cases.
I have a few VMs in the cloud (running Debian, of course) that I use to host things like this blog, my website, my gopher site, the quux NNCP public relay, and various other things.
In these environments, storage space can be expensive. For that matter, so can RAM. ZFS is RAM-hungry, so that rules it out. I've been running btrfs in those environments for a few years now, and it's worked out well. I do async dedup, lzo or zstd compression depending on the needs, and the occasional balance and defrag.
Filesystems on the Raspberry Pi
I run Debian trixie on all my Raspberry Pis, not Raspbian or Raspberry Pi OS, for a number of reasons. My 8-yr-old uses a Raspberry Pi 400 as her primary computer and loves it! She doesn't do web browsing, but plays Tuxpaint, some old DOS games like Math Blaster via dosbox, and uses Thunderbird for a locked-down email account.
But it was SLOW. Just really, glacially, slow, especially for Thunderbird.
My first step to address that was to get a faster MicroSD card to hold the OS. That was a dramatic improvement. It's still slow, but a lot faster.
Then, I thought, maybe I could use btrfs with LZO compression to reduce the amount of I/O and speed things up further? Analysis showed things were mostly slow due to I/O, not CPU, constraints.
The conversion
Rather than use the btrfs in-place conversion from ext4, I opted to dar it up (like tar), run mkfs.btrfs on the SD card, then unpack the archive back onto it. Easy enough, right?
Well, not so fast. The MicroSD card is 128GB, and the entire filesystem is 6.2GB. But after unpacking 100MB onto it, I got an out of space error.
btrfs has this notion of block groups. By default, each block group is dedicated to either data or metadata. btrfs fi df and btrfs fi usage will show you details about the block groups.
btrfs allocates block groups greedily (the ssd_spread mount option I use may have exacerbated this). What happened was that it allocated almost the entire drive to data block groups, trying to spread the data across it. It so happened that dar archived some larger files first (maybe /boot), so btrfs was allocating data and metadata block groups assuming few large files. But then it started unpacking one of the directories in /usr with lots of small files (maybe /usr/share/locale). It quickly filled up the metadata block group, and since the entire SD card had been allocated to different block groups, I got ENOSPC.
Deleting a few files and running btrfs balance resolved it; now it allocated 1GB to metadata, which was plenty. I re-ran the dar extract and now everything was fine. See more details on btrfs balance and block groups.
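A sketch of that kind of recovery, with my assumptions flagged: the mount point is a placeholder, and the -dusage=10 filter (only rebalance data block groups that are at most 10% used, so the balance finishes quickly) is an illustrative value. The command is only printed unless MNT is set:

```shell
# Dry-run sketch: reclaim nearly-empty data block groups so metadata can grow.
MNT="${MNT:-}"   # e.g. / on the Pi; left empty so nothing is touched
CMD="btrfs balance start -dusage=10 $MNT"
if [ -n "$MNT" ]; then
  sudo $CMD
else
  echo "would run: $CMD"
fi
```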
This was the only btrfs problem I encountered.
Benchmarks
I timed two things prior to switching to btrfs: how long it takes to boot (measured from the moment I turn on the power until the moment the XFCE login box is displayed), and how long it takes to start Thunderbird.
After switching to btrfs with LZO compression, somewhat to my surprise, both measures were exactly the same!
Why might this be?
It turns out that SD cards are understood to be pathologically bad with random read performance. Boot and Thunderbird both are likely doing a lot of small random reads, not large streaming reads. Therefore, it may be that even though I have reduced the total I/O needed, the impact is unsubstantial because the real bottleneck is the seeks across the disk.
Still, I gain the better backup support and silent data corruption prevention, so I kept btrfs.
SSD mount options and MicroSD endurance
btrfs has several mount options specifically relevant to SSDs. Aside from the obvious trim support, they are ssd and ssd_spread. The documentation on this is vague and my attempts to learn more about it found a lot of information that was outdated or unsubstantiated folklore.
Some reports suggest that older SSDs will benefit from ssd_spread, but that it may have no effect or even a harmful effect on newer ones, and can at times cause fragmentation or write amplification. I could find nothing to back this up, though. And it seems particularly difficult to figure out what kind of wear leveling SSD firmware does. MicroSD firmware is likely to be on the less-advanced side, but still, I have no idea what it might do. In any case, with btrfs not updating blocks in-place, it should be better than ext4 in the most naive case (no wear leveling at all) but may have somewhat more write traffic for the pathological worst case (frequent updates of small portions of large files).
One anecdotal report I read (and can't find anymore) was from a person who had set up a sort of torture test for SD cards; it reported that ext4 lasted a few weeks or months before the MicroSDs failed, while btrfs lasted years.
If you are looking for a MicroSD card, by the way, The Great MicroSD Card Survey is a nice place to start.
For longevity: I mount all my filesystems with noatime already, so I continue to recommend that. You can also consider limiting the log size in /etc/systemd/journald.conf and running a daily fstrim (batched trims may be better supported than continuous live trims across filesystems).
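As an illustration (the 64M cap is my example value, not a recommendation from this post), the journal limit is a one-line setting, and the daily trim can come from systemd's stock fstrim.timer via systemctl enable fstrim.timer:

```
# /etc/systemd/journald.conf (or a drop-in under journald.conf.d/)
[Journal]
SystemMaxUse=64M
```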
Conclusion
I've been pretty pleased with btrfs. The concerns I have today relate to block groups and maintenance (periodic balance and maybe a periodic defrag). I'm not sure I'd be ready to say "put btrfs on the computer you send to someone who isn't Linux-savvy", because the chances of running into issues are higher than with ext4. Still, for people with some tech savvy, btrfs can improve reliability and performance in other ways.
Posted on August 9, 2025
Tags: madeof:atoms
I've collected some more Standard Compliant stickers.
Some went on my laptop, of course, but some were selected for another
tool I use relatively often: more pattern weights like the ones I
blogged about in February.
And of course the sources:
I couldn't find the sources for the blue Mobian sticker, only for other variants.
I have enough washers to make two more weights, and even more stickers, but the printer is currently not in use, so I guess they will happen a few months or so from now.
Another short status update of what happened on my side last month.
Notable might be the Cell Broadcast support for Qualcomm SoCs; the rest is smaller fixes and QoL improvements.
phosh
2025 was my first year at FOSDEM, and I can say it was an incredible experience
where I met many colleagues from Igalia who live around
the world, and also many friends from the Linux display stack who are part of
my daily work and contributions to DRM/KMS. In addition, I met new faces and
recognized others with whom I had interacted on some online forums, and we had good, long conversations.
During FOSDEM 2025 I had the opportunity to present
about kworkflow in the kernel devroom. Kworkflow is a
set of tools that help kernel developers with their routine tasks and it is the
tool I use for my development tasks. In short, every contribution I make to the
Linux kernel is assisted by kworkflow.
The goal of my presentation was to spread the word about kworkflow. I aimed to
show how the suite consolidates good practices and recommendations of the
kernel workflow in short commands. These commands are easy to configure and memorize for your current work setup, or for your multiple setups.
For me, Kworkflow is a tool that accommodates the needs of different agents in
the Linux kernel community. Active developers and maintainers are the main
target audience for kworkflow, but it is also inviting for users and user-space
developers who just want to report a problem and validate a solution without
needing to know every detail of the kernel development workflow.
Something I didn't emphasize during the presentation, but would like to correct here, is that the main author and developer of kworkflow is my colleague at Igalia, Rodrigo Siqueira. Being honest, my contributions are mostly requesting and validating new features, fixing bugs, and sharing scripts to increase feature coverage.
So, the video and slide deck of my FOSDEM presentation are available for
download
here.
And, as usual, you will find in this blog post the script of this presentation
and more detailed explanation of the demo presented there.
Kworkflow at FOSDEM 2025: Speaker Notes and Demo
Hi, I'm Melissa, a GPU kernel driver developer at Igalia, and today I'll be giving a very inclusive talk: don't let your motivation go, save time with kworkflow.
So, you're a kernel developer, or you want to be a kernel developer, or you don't want to be a kernel developer. But you're all united by a single need: you need to validate a custom kernel with just one change, and you need to verify that it fixes or improves something in the kernel.
And that's a given change for a given distribution, or for a given device, or for a given subsystem...
Look to this diagram and try to figure out the number of subsystems and related
work trees you can handle in the kernel.
So, whether you are a kernel developer or not, at some point you may come
across this type of situation:
There is a userspace developer who wants to report a kernel issue and says:
Oh, there is a problem in your driver that can only be reproduced by running this specific distribution.
And the kernel developer asks:
Oh, have you checked if this issue is still present in the latest kernel version of this branch?
But the userspace developer has never compiled and installed a custom kernel
before. So they have to read a lot of tutorials and kernel documentation to
create a kernel compilation and deployment script. Finally, the reporter manages to compile and deploy a custom kernel and reports:
Sorry for the delay, this is the first time I have installed a custom kernel.
I am not sure if I did it right, but the issue is still present in the kernel
of the branch you pointed out.
And then, the kernel developer needs to reproduce this issue on their side, but they have never worked with this distribution, so they just create a new script, essentially the same script the reporter created.
What's the problem with this situation? The problem is that you keep creating new scripts!
Every time you change distribution, architecture, hardware, or project, even within the same company, the development setup may change, and you create another script for your new kernel development workflow!
You know, you have a lot of babies; you have a collection of "my precious scripts", like Sméagol (Lord of the Rings) with the precious ring.
Instead of creating and accumulating scripts, save yourself time with
kworkflow. Here is a typical script that many of you may have. This is a
Raspberry Pi 4 script and contains everything you need to memorize to compile
and deploy a kernel on your Raspberry Pi 4.
With kworkflow, you only need to memorize two commands, and those commands are not specific to the Raspberry Pi. They are the same commands for different architectures, kernel configurations and target devices.
What is kworkflow?
Kworkflow is a collection of tools and software combined to:
Optimize Linux kernel development workflow.
Reduce time spent on repetitive tasks, since we are spending our lives
compiling kernels.
Standardize best practices.
Ensure reliable data exchange across the kernel workflow. For example: two people describe the same setup but are not seeing the same thing; kworkflow can ensure both actually have the same kernel, modules and options enabled.
I don't know if you will get this analogy, but kworkflow is for me a megazord of scripts. You are combining all of your scripts to create a very powerful tool.
What are the main features of kworkflow?
There are many, but these are the most important for me:
Build & deploy custom kernels across devices & distros.
Handle cross-compilation seamlessly.
Manage multiple architectures, settings and target devices in the same work tree.
Organize kernel configuration files.
Facilitate remote debugging & code inspection.
Standardize Linux kernel patch submission guidelines. You don't need to double-check the documentation, nor does Greg need to tell you that you are not following Linux kernel guidelines.
Upcoming: an interface to bookmark, apply, and add reviewed-by tags to patches from mailing lists (lore.kernel.org).
This is the list of commands you can run with kworkflow.
The first subset is to configure your tool for various situations you may face
in your daily tasks.
We have some tools to manage and interact with target machines.
# Manage and interact with target machines
kw ssh (s) - SSH support
kw remote (r) - Manage machines available via ssh
kw vm - QEMU support
To inspect and debug a kernel.
# Inspect and debug
kw device - Show basic hardware information
kw explore (e) - Explore string patterns in the work tree and git logs
kw debug - Linux kernel debug utilities
kw drm - Set of commands to work with DRM drivers
To automate best practices for patch submission, like code style, maintainers, and the correct list of recipients and mailing lists for a change, to ensure we are sending the patch to those interested in it.
# Automate best practices for patch submission
kw codestyle (c) - Check code style
kw maintainers (m) - Get maintainers/mailing list
kw send-patch - Send patches via email
And the last one, the upcoming patch hub.
# Upcoming
kw patch-hub - Interact with patches (lore.kernel.org)
How can you save time with Kworkflow?
So how can you save time building and deploying a custom kernel?
First, you need a .config file.
Without kworkflow: You may be manually extracting and managing .config files from different targets, saving them with different suffixes to link each kernel to its target device or distribution (or some other descriptive suffix to help identify which is which), or even copying and pasting from somewhere.
With kworkflow: you can use the kernel-config-manager command, or simply
kw k, to store, describe and retrieve a specific .config file very easily,
according to your current needs.
Then you want to build the kernel:
Without kworkflow: You are probably now memorizing a combination of
commands and options.
With kworkflow: you just need kw b (kw build) to build the kernel with the correct settings for cross-compilation, compilation warnings, cflags, etc. It also shows some information about the kernel, like the number of modules.
Finally, to deploy the kernel in a target machine.
Without kworkflow: You might be doing things like: SSH connecting to the
remote machine, copying and removing files according to distributions and
architecture, and manually updating the bootloader for the target distribution.
With kworkflow: you just need kw d, which does a lot of things for you, like: deploying the kernel, preparing the target machine for the new installation, listing available kernels and uninstalling them, creating a tarball, rebooting the machine after deploying the kernel, etc.
You can also save time on debugging kernels locally or remotely.
Without kworkflow: you do ssh, manual setup and trace enablement, copy&paste of logs.
With kworkflow: more straightforward access to debug utilities: events, trace, dmesg.
You can save time on managing multiple kernel images in the same work tree.
Without kworkflow: you may be cloning the same repository multiple times so you don't lose compiled files when changing kernel configuration or compilation options, and manually managing build and deployment scripts.
With kworkflow: you can use kw env to isolate multiple contexts in the
same worktree as environments, so you can keep different configurations in
the same worktree and switch between them easily without losing anything from
the last time you worked in a specific context.
Finally, you can save time when submitting kernel patches. In kworkflow, you
can find everything you need to wrap your changes in patch format and submit
them to the right list of recipients, those who can review, comment on, and
accept your changes.
This is a demo that the lead developer of the kw patch-hub feature sent me.
With this feature, you will be able to check out a series on a specific mailing
list, bookmark those patches in the kernel for validation, and when you are
satisfied with the proposed changes, you can automatically submit a reviewed-by
for that whole series to the mailing list.
Demo
Now a demo of how to use kw environment to deal with different devices,
architectures and distributions in the same work tree without losing compiled
files, build and deploy settings, .config file, remote access configuration and
other settings specific for those three devices that I have.
Setup
Three devices:
laptop (debian x86 intel local)
SteamDeck (steamos x86 amd remote)
RaspberryPi 4 (raspbian arm64 broadcom remote)
Goal: To validate a change on DRM/VKMS using a single kernel tree.
Kworkflow commands:
kw env
kw d
kw bd
kw device
kw debug
kw drm
Demo script
In the same terminal and worktree.
First target device: Laptop (debian x86 intel local)
$ kw env --list # list environments available in this work tree
$ kw env --use LOCAL # select the environment of local machine (laptop) to use: loading pre-compiled files, kernel and kworkflow settings.
$ kw device # show device information
$ sudo modinfo vkms # show VKMS module information before applying kernel changes.
$ <open VKMS file and change module info>
$ kw bd # compile and install kernel with the given change
$ sudo modinfo vkms # show VKMS module information after kernel changes.
$ git checkout -- drivers
Second target device: RaspberryPi 4 (raspbian arm64 broadcom remote)
$ kw env --use RPI_64 # move to the environment for a different target device.
$ kw device # show device information and kernel image name
$ kw drm --gui-off-after-reboot # set the system to not load graphical layer after reboot
$ kw b # build the kernel with the VKMS change
$ kw d --reboot # deploy the custom kernel in a Raspberry Pi 4 with Raspbian 64, and reboot
$ kw s # connect with the target machine via ssh and check the kernel image name
$ exit
Third target device: SteamDeck (steamos x86 amd remote)
$ kw env --use STEAMDECK # move to the environment for a different target device
$ kw device # show device information
$ kw debug --dmesg --follow --history --cmd="modprobe vkms" # run a command and show the related dmesg output
$ kw debug --dmesg --follow --history --cmd="modprobe -r vkms" # run a command and show the related dmesg output
$ <add a printk with a random msg to appear on dmesg log>
$ kw bd # deploy and install custom kernel to the target device
$ kw debug --dmesg --follow --history --cmd="modprobe vkms" # run a command and show the related dmesg output after build and deploy the kernel change
Q&A
Most of the questions raised at the end of the presentation were actually
suggestions and additions of new features to kworkflow.
The first participant, who is also a kernel maintainer, asked about two features: (1) automating the retrieval of patches from patchwork (or lore) and triggering the process of building, deploying and validating them using the existing workflow, and (2) bisecting support. Both are very interesting features. The first fits well in the patch-hub subproject, which is under development, and I had actually made a similar request a couple of weeks before the talk. The second is an already existing request in the kworkflow GitHub project.
Another request was to use kexec and avoid rebooting the kernel for testing.
Reviewing my presentation, I realized I wasn't very clear that kworkflow doesn't support kexec. As I replied, what it does is install the modules, and you can load/unload them for validation; but for built-in parts, you need to reboot the kernel.
Another two questions: one about Android Debug Bridge (ADB) support instead of SSH, and another about supporting alternative ways of booting when the custom kernel ends up broken but you only have one kernel image there. Kworkflow doesn't manage this yet, but I agree it would be a very useful feature for embedded devices. On the Raspberry Pi 4, kworkflow mitigates the issue by preserving the distro kernel image and using the config.txt file to set a custom kernel for booting. There is no ADB support either, and as I don't currently see KW users working with Android, I don't think we will have this support any time soon, unless we find new volunteers and increase the pool of contributors.
The last two questions were about the status of the b4 integration, which is under development, and about other debugging features that the tool doesn't support yet.
Finally, when Andrea and I were swapping turns on the stage, he suggested adding support for virtme-ng to kworkflow. So I opened an issue to track this feature request in the project's GitHub.
With all these questions and requests, I could see the general need for a tool that integrates the variety of kernel developer workflows, as proposed by kworkflow. Also, there are still many cases to be covered by kworkflow.
Despite the high demand, this is a completely voluntary project, and it is unlikely that we will be able to meet all these needs given our limited resources. We will keep trying our best in the hope that we can increase the pool of users and contributors too.
Dear Debian community,
this is bits from DPL for March (sorry for the delay, I was waiting
for some additional input).
Conferences
In March, I attended two conferences, each with a distinct motivation.
I joined FOSSASIA to address the imbalance in geographical developer
representation. Encouraging more developers from Asia to contribute to
Free Software is an important goal for me, and FOSSASIA provided a
valuable opportunity to work towards this.
I also attended Chemnitzer Linux-Tage, a conference I have been part of
for over 20 years. To me, it remains a key gathering for the German Free
Software community: a place where contributors meet, collaborate, and
exchange ideas.
I have a remark about submitting an event proposal to both FOSDEM and
FOSSASIA:
Cross distribution experience exchange
As Debian Project Leader, I have often reflected on how other Free
Software distributions address challenges we all face. I am interested
in discussing how we can learn from each other to improve our work and
better serve our users. Recognizing my limited understanding of other
distributions, I aim to bridge this gap through open knowledge exchange.
My hope is to foster a constructive dialogue that benefits the
broader Free Software ecosystem. Representatives of other distributions
are encouraged to participate in this BoF whether as contributors or
official co-speakers. My intention is not to drive the discussion from a
Debian-centric perspective but to ensure that all distributions have an
equal voice in the conversation.
This event proposal was part of my commitment from my 2024 DPL platform,
specifically under the section "Reaching Out to Learn". Had it been
accepted, I would have also attended FOSDEM. However, both FOSDEM and
FOSSASIA rejected the proposal.
In hindsight, reaching out to other distribution contributors beforehand
might have improved its chances. I may take this approach in the future
if a similar opportunity arises. That said, rejecting an
interdistribution discussion without any feedback is, in my view, a
missed opportunity for collaboration.
FOSSASIA Summit
The 14th FOSSASIA Summit took place in Bangkok. As a leading
open-source technology conference in Asia, it brings together
developers, startups, and tech enthusiasts to collaborate on projects in
AI, cloud computing, IoT, and more.
With a strong focus on open innovation, the event features hands-on
workshops, keynote speeches, and community-driven discussions,
emphasizing open-source software, hardware, and digital freedom. It
fosters a diverse, inclusive environment and highlights Asia's growing
role in the global FOSS ecosystem.
I presented a talk on Debian as a Global Project and led a
packaging workshop. Additionally, to further support attendees
interested in packaging, I hosted an extra self-organized workshop at a
hacker café, initiated by participants eager to deepen their skills.
There was another Debian-related talk, given by Ananthu, titled
"The Herculean Task of OS Maintenance - The Debian Way!"
To further my goal of increasing diversity within Debian, particularly
by encouraging more non-male contributors, I actively engaged with
attendees, seeking opportunities to involve new people in the project.
Whether through discussions, mentoring, or hands-on sessions, I aimed to
make Debian more approachable for those who might not yet see themselves
as contributors. I was fortunate to have the support of Debian
enthusiasts from India and China, who ran the Debian booth and helped
create a welcoming environment for these conversations. Strengthening
diversity in Free Software is a collective effort, and I hope these
interactions will inspire more people to get involved.
Chemnitzer Linuxtage
The Chemnitzer Linux-Tage (CLT) is one of Germany's largest and
longest-running community-driven Linux and open-source conferences, held
annually in Chemnitz since 2000. It has been my favorite conference in
Germany, and I have tried to attend every year.
Focusing on Free Software, Linux, and digital sovereignty, CLT offers a
mix of expert talks, workshops, and exhibitions, attracting hobbyists,
professionals, and businesses alike. With a strong grassroots ethos, it
emphasizes hands-on learning, privacy, and open-source advocacy while
fostering a welcoming environment for both newcomers and experienced
Linux users.
Despite my appreciation for the diverse and high-quality talks at CLT,
my main focus was on connecting with people who share the goal of
attracting more newcomers to Debian. Engaging with both longtime
contributors and potential new participants remains one of the most
valuable aspects of the event for me.
I was fortunate to be joined by Debian enthusiasts staffing the Debian
booth, where I found myself among both experienced booth volunteers who
have attended many previous CLT events and young newcomers. This was
particularly reassuring, as I certainly can't answer every detailed
question at the booth. I greatly appreciate the knowledgeable people who
represent Debian at this event and help make it more accessible to
visitors.
As a small point of comparison (while FOSSASIA and CLT are fundamentally
different events), the gender ratio stood out: FOSSASIA had a noticeably
higher proportion of women compared to Chemnitz. This contrast
highlighted the ongoing need to foster more diversity within Free
Software communities in Europe.
At CLT, I gave a talk titled "Tausend Freiwillige, ein Ziel" (Thousand
Volunteers, One Goal), which was video recorded. It took
place in the grand auditorium and attracted a mix of long-term
contributors and newcomers, making for an engaging and rewarding
experience.
Kind regards
Andreas.
This year I was at FOSDEM 2025, and it was the fifth edition in a row that I participated in person (the previous ones were in 2019, 2020, 2023 and 2024). The event took place on February 1st and 2nd, as always at the ULB campus in Brussels.
We arrived on Friday at lunchtime and went straight to the hotel to drop off our bags. This time we stayed at the Ibis in the city center, very close to the hustle and bustle. The price was good, and the location made it easy for us to go out in the city center and come back late at night. We found a Japanese restaurant near the hotel, and it was definitely worth having lunch there because of the all-you-can-eat price. After taking a nap, we went out for a walk. Since January 31st is the last day of the winter sales in the city, the streets in the city center were crowded, there were lots of people in the stores, and the prices were discounted. We concluded that if we have the opportunity to go to Brussels again at this time of year, it would be better to wait and buy clothes for cold weather there.
Unlike in 2023 and 2024, the FOSDEM organization did not approve my request for the Translations DevRoom, so my goal was to participate in the event and collaborate at the Debian booth. And, as I always do, I volunteered to operate the broadcast camera in the main auditorium on both days, for two hours each day.
The Debian booth:
Me in the auditorium helping with the broadcast:
Two weeks before the event, the organization put out a call for people interested in requesting a room for their community's BoF (Birds of a Feather). I requested a room for Debian and it was approved :-)
It was great to see that people were really interested in participating in the BoF, and the room was packed! As the host of the discussions, I tried to leave the space open for anyone who wanted to talk about any subject related to Debian. We started with a talk from the MiniDebConf25 organizers; the event will take place this year in France. Then other topics followed, with people talking, asking and answering questions, etc. It was worth organizing this BoF. Who knows, maybe the idea will live on in 2026.
Carlos (a.k.a. Charles), Athos, Maíra and Melissa gave talks at FOSDEM, and Kanashiro was one of the organizers of the Distributions DevRoom.
During the two days of the event, it didn't rain or get too cold. The days were sunny (and people celebrated the weather in Brussels). But I have to admit that it would have been nice to see snow like I did in 2019. Unlike last year, this time I felt more motivated to stay at the event the whole time.
I would like to give my special thanks to Andreas Tille, the current Debian Leader, who approved my request for flight tickets so that I could join FOSDEM 2025. As always, this help was essential in making my trip to Brussels possible.
And once again Jandira was with me on this adventure. On Monday we went for a walk around Brussels and we also traveled to visit Bruges again. The visit to this city is really worth it because walking through the historic streets is like going back in time. This time we even took a boat trip through the canals, which was really cool.
Welcome to the second report in 2025 from the Reproducible Builds project. Our monthly reports outline what we've been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. As usual, however, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website.
Table of contents:
Reproducible Builds at FOSDEM 2025
Similar to last year's event, there was considerable activity regarding Reproducible Builds at FOSDEM 2025, held on 1st and 2nd February this year in Brussels, Belgium. We count at least four talks related to reproducible builds. (You can also read our news report from last year's event, in which Holger Levsen presented in the main track.)
Jelle van der Waa, Holger Levsen and kpcyrd presented in the Distributions track on A Tale of several distros joining forces for a common goal. In this talk, three developers from two different Linux distributions (Arch Linux and Debian), discuss this goal which is, of course, reproducible builds. The presenters discuss both what is shared and different between the two efforts, touching on the history and future challenges alike. The slides of this talk are available to view, as is the full video (30m02s). The talk was also discussed on Hacker News.
Zbigniew Jędrzejewski-Szmek presented in the ever-popular Python track on Rewriting .pyc files for fun and reproducibility, i.e. the bytecode files generated by Python in order to speed up module imports: It's been known for a while that those are not reproducible: on different architectures, the bytecode for exactly the same sources ends up slightly different. The slides of this talk are available, as is the full video (28m32s).
In the Nix and NixOS track, Julien Malka presented on the Saturday, asking How reproducible is NixOS: We know that the NixOS ISO image is very close to being perfectly reproducible thanks to reproducible.nixos.org, but there doesn't exist any monitoring of Nixpkgs as a whole. In this talk I'll present the findings of a project that evaluated the reproducibility of Nixpkgs as a whole by mass rebuilding packages from revisions between 2017 and 2023 and comparing the results with the NixOS cache. Unfortunately, no video of the talk is available, but there is a blog post and an article on the results.
Lastly, Simon Tournier presented in the Open Research track on the confluence of GNU Guix and Software Heritage: Source Code Archiving to the Rescue of Reproducible Deployment. Simon's talk describes the design and implementation we came up with, and reports on the archival coverage for package source code with data collected over five years. It opens to some remaining challenges toward a better open and reproducible research. The slides for the talk are available, as is the full video (23m17s).
Reproducible Builds at PyCascades 2025
Vagrant Cascadian presented at this year's PyCascades conference, which was held on February 8th and 9th in Portland, OR, USA. PyCascades is a regional instance of PyCon held in the Pacific Northwest. Vagrant's talk, entitled Re-Py-Ducible Builds, caught the audience's attention with the following abstract:
Crank your Python best practices up to 11 with Reproducible Builds! This talk will explore Reproducible Builds by highlighting issues identified in Python projects, from the simple to the seemingly inscrutable. Reproducible Builds is basically the crazy idea that when you build something, and you build it again, you get the exact same thing; or, even more important, if someone else builds it, they get the exact same thing too.
reproduce.debian.net updates
The last few months have seen the introduction of reproduce.debian.net. Announced first at the recent Debian MiniDebConf in Toulouse, reproduce.debian.net is an instance of rebuilderd operated by the Reproducible Builds project.
Powering this work is rebuilderd, our server which monitors the official package repositories of Linux distributions and attempts to reproduce the observed results there. This month, however, Holger Levsen:
Split packages that are not specific to any architecture away from amd64.reproducible.debian.net service into a new all.reproducible.debian.net page.
Increased the number of riscv64 nodes to a total of 4, and added a new amd64 node thanks to our (now 10-year) sponsor, IONOS.
Uploaded the devscripts package, incorporating changes from Jochen Sprickerhof to the debrebuild script, specifically to fix the handling of the Rules-Requires-Root header in Debian source packages.
Uploaded a number of Rust dependencies of rebuilderd (rust-libbz2-rs-sys, rust-actix-web, rust-actix-server, rust-actix-http, rust-actix-web-codegen and rust-time-tz) after they were prepared by kpcyrd:
Jochen Sprickerhof also updated the sbuild package to:
Obey requests from the user/developer for a different temporary directory.
Use the root/superuser for some values of Rules-Requires-Root.
Don't pass --root-owner-group to old versions of dpkg.
Upstream patches
The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:
go (clear GOROOT for func ldShared when -trimpath is used)
Distribution work
There has been the usual work in various distributions this month, such as:
In Debian, 17 reviews of Debian packages were added, 6 were updated and 8 were removed this month, adding to our knowledge about identified issues.
Fedora developers Davide Cavalca and Zbigniew Jędrzejewski-Szmek gave a talk on Reproducible Builds in Fedora (PDF), touching on SRPM-specific issues as well as the current status and future plans.
Thanks to an investment from the Sovereign Tech Agency, the FreeBSD project's work on unprivileged and reproducible builds continued this month. Notable fixes include:
The Yocto Project has been struggling to upgrade to the latest Go and Rust releases due to reproducibility problems in the newer versions. Hongxu Jia tracked down the issue with Go which meant that the project could upgrade from the 1.22 series to 1.24, with the fix being submitted upstream for review (see above). For Rust, however, the project was significantly behind, but has made recent progress after finally identifying the blocking reproducibility issues. At time of writing, the project is at Rust version 1.82, with patches under review for 1.83 and 1.84 and fixes being discussed with the Rust developers. The project hopes to improve the tests for reproducibility in the Rust project itself in order to try and avoid future regressions.
Yocto continues to maintain its ability to binary reproduce all of the recipes in OpenEmbedded-Core, regardless of the build host distribution or the current build path.
Finally, Douglas DeMaio published an article on the openSUSE blog on announcing that the Reproducible-openSUSE (RBOS) Project Hits [Significant] Milestone. In particular:
The Reproducible-openSUSE (RBOS) project, which is a proof-of-concept fork of openSUSE, has reached a significant milestone after demonstrating a usable Linux distribution can be built with 100% bit-identical packages.
diffoscope & strip-nondeterminism
diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 288 and 289 to Debian:
Add asar to DIFFOSCOPE_FAIL_TESTS_ON_MISSING_TOOLS in order to address Debian bug #1095057. []
Catch a CalledProcessError when calling html2text. []
Additionally, Vagrant Cascadian updated diffoscope in GNU Guix to version 287 [][] and 288 [][] as well as submitted a patch to update to 289 []. Vagrant also fixed an issue that was breaking reprotest on Guix [][].
strip-nondeterminism is our sister tool to remove specific non-deterministic results from a completed build. This month version 1.14.1-2 was uploaded to Debian unstable by Holger Levsen.
Website updates
There were a large number of improvements made to our website this month, including:
Holger Levsen clarified the name of a link to our old Wiki pages on the History page [] and added a number of new links to the Talks & Resources page [][].
James Addison updated the website's own README file to document a couple of additional dependencies [][], and also did more work on a future Getting Started guide page [][].
Reproducibility testing framework
The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In January, a number of changes were made by Holger Levsen, including:
Fix /etc/cron.d and /etc/logrotate.d permissions for Jenkins nodes. []
Add support for riscv64 architecture nodes. [][]
Grant Jochen Sprickerhof access to the o4 node. []
Disable the janitor-setup-worker. [][]
In addition:
kpcyrd fixed the /all/api/ API endpoints on reproduce.debian.net by altering the nginx configuration. []
James Addison updated reproduce.debian.net to display the so-called bad reasons hyperlink inline [] and merged the Categorized issues links into the Reproduced builds column [].
Jochen Sprickerhof also made some reproduce.debian.net-related changes, adding support for detecting a bug in the mmdebstrap package [] as well as updating some documentation [].
Roland Clobus continued their work on reproducible live images for Debian, making changes related to new clustering of jobs in openQA. []
And finally, both Holger Levsen [][][] and Vagrant Cascadian performed significant node maintenance. [][][][][]
If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. Alternatively, you can get in touch with us via:
In today's digital landscape, social media is more than just a communication tool; it is the primary medium for global discourse. Heads of state, corporate leaders and cultural influencers now broadcast their statements directly to the world, shaping public opinion in real time. However, the dominance of a few centralized platforms (X/Twitter, Facebook and YouTube) raises critical concerns about control, censorship and the monopolization of information. Those who control these networks effectively wield significant power over public discourse.
In response, a new wave of distributed social media platforms has emerged, each built on different decentralized protocols designed to provide greater autonomy, censorship resistance and user control. While Wikipedia maintains a comprehensive list of distributed social networking software and protocols, it does not cover recent blockchain-based systems, nor does it highlight which have the most potential for mainstream adoption.
This post explores the leading decentralized social media platforms and the protocols they are based on: Mastodon (ActivityPub), Bluesky (AT Protocol), Warpcast (Farcaster), Hey (Lens) and Primal (Nostr).
Comparison of architecture and mainstream adoption potential
1. Mastodon (ActivityPub)
Mastodon was created in 2016 by Eugen Rochko, a German software developer who sought to provide a decentralized and user-controlled alternative to Twitter. It was built on the ActivityPub protocol, now standardized by W3C Social Web Working Group, to allow users to join independent servers while still communicating across the broader Mastodon network.
Mastodon operates on a federated model, where multiple independently run servers communicate via ActivityPub. Each server sets its own moderation policies, leading to a decentralized but fragmented experience. The servers can alternatively be called instances, relays or nodes, depending on what vocabulary a protocol has standardized on.
Identity: User identity is tied to the instance where they registered, represented as @username@instance.tld.
Storage: Data is stored on individual instances, which federate messages to other instances based on their configurations.
Cost: Free to use, but relies on instance operators willing to run the servers.
Servers communicate across different platforms by publishing activities to their followers or forwarding activities between servers. Standard HTTPS is used between servers for communication, and the messages use JSON-LD for data representation. The WebFinger protocol is used for user discovery. There is however no neat way for home server discovery yet. This means that if you are browsing e.g. Fosstodon and want to follow a user and press Follow, a dialog will pop up asking you to enter your own home server (e.g. mastodon.social) to redirect you there for actually executing the Follow action on with your account.
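The user-discovery step above can be sketched as a small helper that turns a fediverse handle into the WebFinger URL a server would query. This is a minimal illustration, not Mastodon's actual code; the handle used below is just an example:

```rust
/// Build the WebFinger discovery URL for a handle like
/// "@user@instance.tld". Returns None if the handle has no
/// host part, since there would be nothing to query.
fn webfinger_url(handle: &str) -> Option<String> {
    // Accept handles with or without the leading '@'.
    let rest = handle.strip_prefix('@').unwrap_or(handle);
    let (user, host) = rest.split_once('@')?;
    Some(format!(
        "https://{host}/.well-known/webfinger?resource=acct:{user}@{host}"
    ))
}

fn main() {
    // A GET on this URL returns a JSON Resource Descriptor (JRD)
    // describing the account, including its ActivityPub actor URL.
    println!("{}", webfinger_url("@Gargron@mastodon.social").unwrap());
}
```

The actual response parsing (JRD/JSON-LD) is omitted here; the point is only that discovery is a plain HTTPS request against a well-known path on the user's home server.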
Mastodon is open source under the AGPL at github.com/mastodon/mastodon. Anyone can operate their own instance; it just requires running your own server, some skills to maintain a Ruby on Rails app with a PostgreSQL database backend, and a basic understanding of the protocol to configure federation with other ActivityPub instances.
Popularity: Already established, but will it grow more?
Mastodon has seen steady growth, especially after Twitter's acquisition in 2022, with some estimates stating it peaked at 10 million users across thousands of instances. However, its fragmented user experience and the complexity of choosing instances have hindered mainstream adoption. Still, it remains the most established decentralized alternative to Twitter.
Note that Donald Trump s Truth Social is based on the Mastodon software but does not federate with the ActivityPub network.
The ActivityPub protocol is the most widely used of its kind. One of the other most popular services is the Lemmy link sharing service, similar to Reddit. The larger ecosystem of ActivityPub is called Fediverse, and estimates put the total active user count around 6 million.
2. Bluesky (AT Protocol)
Interestingly, Bluesky was conceived within Twitter in 2019 by Twitter founder Jack Dorsey. After being incubated as a Twitter-funded project, it spun off as an independent Public Benefit LLC in February 2022 and launched its public beta in February 2023.
Bluesky runs on top of the Authenticated Transfer (AT) Protocol published at https://github.com/bluesky-social/atproto. The protocol enables portable identities and data ownership, meaning users can migrate between platforms while keeping their identity and content intact. In practice, however, there is only one popular server at the moment, which is Bluesky itself.
Identity: Usernames are domain-based (e.g., @user.bsky.social).
Storage: Content is theoretically federated among various servers.
Cost: Free to use, but relies on instance operators willing to run the servers.
Popularity: Hybrid approach may have business benefits?
Bluesky reported over 3 million users by 2024, probably getting traction due to its Twitter-like interface and Jack Dorsey s involvement. Its hybrid approach decentralized identity with centralized components could make it a strong candidate for mainstream adoption, assuming it can scale effectively.
3. Warpcast (Farcaster Network)
Farcaster was launched in 2021 by Dan Romero and Varun Srinivasan, both former executives of the crypto exchange Coinbase, to create a decentralized but user-friendly social network. Built on the Ethereum blockchain, it could potentially offer a very attack-resistant communication medium.
However, in my own testing, Farcaster does not seem to fully leverage what Ethereum could offer. First of all, there is no diversity in programs implementing the protocol as at the moment there is only Warpcast. In Warpcast the signup requires an initial 5 USD fee that is not payable in ETH, and users need to create a new wallet address on the Ethereum layer 2 network Base instead of simply reusing their existing Ethereum wallet address or ENS name.
Despite this, I can understand why Farcaster may have decided to start out like this. Having a single client program may be the best strategy initially. Matthew Hodgson, one of the founders of the decentralized chat protocol Matrix, shared in his FOSDEM 2025 talk that he slightly regrets focusing too much on developing the protocol instead of making sure the app to use it is attractive to end users. So it may be sensible to ensure Warpcast gets popular first, before attempting to make the Farcaster protocol widely used.
As a protocol Farcaster s hybrid approach makes it more scalable than fully on-chain networks, giving it a higher chance of mainstream adoption if it integrates seamlessly with broader Web3 ecosystems.
Identity: ENS (Ethereum Name Service) domains are used as usernames.
Storage: Messages are stored in off-chain hubs, while identity is on-chain.
Cost: Users must pay gas fees for some operations but reading and posting messages is mostly free.
Popularity: Decentralized social media + decentralized payments a winning combo?
Ethereum founder Vitalik Buterin (warpcast.com/vbuterin) and many core developers are active on the platform. Warpcast, the main client for Farcaster, has seen increasing adoption, especially among Ethereum developers and Web3 enthusiasts. I too have a profile at warpcast.com/ottok. However, the numbers are still very low and far from reaching the network effects needed to really take off.
Blockchain-based social media networks, particularly those built on Ethereum, are compelling because they leverage existing user wallets and persistent identities while enabling native payment functionality. When combined with decentralized content funding through micropayments, these blockchain-backed social networks could offer unique advantages that centralized platforms may find difficult to replicate, being decentralized both as a technical network and in a funding mechanism.
4. Hey.xyz (Lens Network)
The Lens Protocol was developed by decentralized finance (DeFi) team Aave and launched in May 2022 to provide a user-owned social media network. While initially built on Polygon, it has since launched its own Layer 2 network called the Lens Network in February 2024. Lens is currently the main competitor to Farcaster.
Lens stores profile ownership and references on-chain, while content is stored on IPFS/Arweave, enabling composability with DeFi and NFTs.
Identity: Profile ownership is tied to NFTs on the Polygon blockchain.
Storage: Content is on-chain and integrates with IPFS/Arweave (like NFTs).
Cost: Users must pay gas fees for some operations but reading and posting messages is mostly free.
Popularity: Probably not as a social media site, but maybe as a protocol?
The social media side of Lens is mainly the Hey.xyz website, which seems to have fewer users than Warpcast, and is even further away from reaching critical mass for network effects. The Lens protocol however has a lot of advanced features and it may gain adoption as the building block for many Web3 apps.
5. Primal.net (Nostr Network)
Nostr (Notes and Other Stuff Transmitted by Relays) was conceptualized in 2020 by an anonymous developer known as fiatjaf. One of the primary design tenets was to be a censorship-resistant protocol, and it is popular among Bitcoin enthusiasts, with Jack Dorsey being one of its public supporters. Unlike the Farcaster and Lens protocols, Nostr is not blockchain-based but just a network of relay servers for message distribution. It does, however, use public key cryptography for identities, similar to how wallets work in crypto.
Popularity: If Jack Dorsey and Bitcoiners promote it enough?
Primal.net as a web app is pretty solid, but it does not stand out much. While Jack Dorsey has shown support by donating $1.5 million to the protocol development in December 2021, its success likely depends on broader adoption by the Bitcoin community.
Will any of these replace X/Twitter?
As usage patterns vary, the statistics are not fully comparable, but this snapshot of the situation in March 2025 gives a decent overview.
Mastodon and Bluesky have already reached millions of users, while Lens and Farcaster are growing within crypto communities. It is however clear that none of these are anywhere close to how popular X/Twitter is. In particular, Mastodon had a huge influx of users in the fall of 2022 when Twitter was acquired, but to challenge the incumbents the growth would need to significantly accelerate. We can all accelerate this development by embracing decentralized social media now alongside existing dominant platforms.
Who knows, given the right circumstances, maybe X.com leadership decides to change the operating model and start federating content to break out from the walled garden model. The likelihood of such a development would increase if decentralized networks become popular and the incumbents feel they need to participate to avoid losing out.
Past and future
The idea of decentralized social media is not new. One early pioneer, identi.ca, launched in 2008, only two years after Twitter, using the OStatus protocol to promote decentralization. A few years later it evolved into pump.io with the ActivityPump protocol, and it also forked into GNU Social, which continued with OStatus. I remember when these happened, and in 2010 Diaspora also launched with fairly large publicity. Surprisingly, both of these still operate (I can still post on both identi.ca and diasp.org), but the activity fizzled out years ago. The protocol, however, partially survived and evolved into ActivityPub, which is now the backbone of the Fediverse.
The evolution of decentralized social media over the next decade will likely parallel developments in democracy, freedom of speech and public discourse. While the early 2010s emphasized maximum independence and freedom, the late 2010s saw growing support for content moderation to combat misinformation. The AI era introduces new challenges, potentially requiring proof-of-humanity verification for content authenticity.
Key factors that will determine success:
User experience and ease of onboarding
Network effects and critical mass of users
Integration with existing web3 infrastructure
Balance between decentralization and usability
Sustainable economic models for infrastructure
This is clearly an area of development worth monitoring closely, as the next few years may determine which protocol becomes the de facto standard for decentralized social communication.
I can't remember exactly the joke I was making at the time in my
work's Slack instance (I'm sure it wasn't particularly
funny, though; and not even worth re-reading the thread to work out), but it
wound up with me writing a UEFI binary for the punchline. Not to spoil the
ending, but it worked: no pesky kernel, no messing around with userland. I
guess the only part of this you really need to know for the setup here is that
it was a Severance joke,
which is some fantastic TV. If you haven't seen it, this post will seem perhaps
weirder than it actually is. I promise I haven't joined any new cults. For
those who have seen it, the payoff to my joke is that I wanted my machine to
boot directly to an image of
Kier Eagan.
As for how to do it: I figured I'd give the uefi
crate a shot, and see how it is to use,
since this is a low-stakes way of trying it out. In general, this isn't the
sort of thing I'd usually post about, except this wound up being easier and
way cleaner than I thought it would be. That alone is worth sharing, in the
hopes someone comes across this in the future and feels like they, too, can
write something fun targeting the UEFI.
First things first: gotta create a Rust project (I'll leave that part to you,
depending on your life choices), and add the uefi crate to your
Cargo.toml. You can either use cargo add or add a line like this by hand:
uefi = { version = "0.33", features = ["panic_handler", "alloc", "global_allocator"] }
We also need to teach cargo how to go about building for the UEFI target,
so we need to create a rust-toolchain.toml with one (or both) of the UEFI
targets we're interested in:
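A rust-toolchain.toml along these lines would do it. The channel choice here is an assumption; the two target triples are the Tier-2 UEFI targets rustc ships:

```toml
[toolchain]
channel = "stable"
targets = ["x86_64-unknown-uefi", "aarch64-unknown-uefi"]
```

With this in place, `cargo build --target x86_64-unknown-uefi` produces a PE binary you can drop onto an EFI system partition or feed to qemu with OVMF.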
Unfortunately, I wasn't able to use the
image crate,
since it won't build against the uefi target. This looks like it's
because rustc had no way to compile the required floating point operations
within the image crate without hardware floating point instructions
specifically. Rust tends to punt a lot of that to libm usually, so this isn't
entirely shocking given we're no_std for a non-hardfloat target.
So-called softening requires a software floating point implementation that
the compiler can use to polyfill (feels weird to use the term polyfill here,
but I guess it's spiritually right?) the lack of hardware floating point
operations, which Rust hasn't implemented for this target yet. As a result, I
changed tactics and figured I'd use ImageMagick to pre-compute the pixels
from a jpg, rather than doing it at runtime. A bit of a bummer, since I need
to do more out-of-band pre-processing and hardcoding, and updating the image
kinda sucks as a result, but it's entirely manageable.
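The pre-computation can be done with two ImageMagick invocations; this is a sketch, and the target resolution and filenames are assumptions rather than the exact command from the original:

```shell
# Resize toward the display resolution while preserving aspect ratio
# (so the result may come out shorter or narrower than requested).
convert kier.jpg -resize 1280x720 kier.full.jpg

# Dump the pixels as a flat array of 8-bit RGBA values, 4 bytes each,
# using ImageMagick's raw "rgba:" output coder.
convert kier.full.jpg -depth 8 rgba:kier.bin
```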
This will take our input file (kier.jpg), resize it to get as close to the
desired resolution as possible while maintaining aspect ratio, then convert it
from a jpg to a flat array of 4-byte RGBA pixels. Critically, it's also
important to remember that the size of the kier.full.jpg file may not actually
be the requested size, since the resize will not change the aspect ratio; be sure to
make a careful note of the resulting size of the kier.full.jpg file.
The last step with the image is to compile it into our Rust binary, since we
don't want to struggle with trying to read this off disk, which is thankfully
real easy to do.
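Embedding the bytes comes down to include_bytes! plus a few constants. A sketch, where the path and dimensions are placeholders for whatever your conversion actually produced:

```rust
// Raw RGBA bytes produced by the ImageMagick conversion step.
// The filename and dimensions below are hypothetical; substitute
// the actual output size of your kier.full.jpg.
const KIER: &[u8] = include_bytes!("../kier.bin");
const KIER_WIDTH: usize = 1280;
const KIER_HEIGHT: usize = 641;
const KIER_PIXEL_SIZE: usize = 4;
```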
Remember to use the width and height from the final kier.full.jpg file as the
values for KIER_WIDTH and KIER_HEIGHT. KIER_PIXEL_SIZE is 4, since we
have 4-byte-wide values for each pixel as a result of our conversion step into
RGBA. We'll only use RGB, and if we ever drop the alpha channel, we can drop
that down to 3. I don't entirely know why I kept alpha around, but I figured it
was fine. My kier.full.jpg image winds up shorter than the requested height
(which is also qemu's default resolution for me), which means we'll get a
semi-annoying black band under the image when we go to run it, but it'll
work.
Anyway, now that we have our image as bytes, we can get down to work and
write the rest of the code to handle moving bytes around from in-memory
as a flat block of pixels, and request that they be displayed using the
UEFI GOP. We'll just need to hack up a container
for the image pixels and teach it how to blit to the display.
/// RGB Image to move around. This isn't the same as an
/// `image::RgbImage`, but we can associate the size of
/// the image along with the flat buffer of pixels.
struct RgbImage {
    /// Size of the image as a tuple, as the
    /// (width, height)
    size: (usize, usize),
    /// raw pixels we'll send to the display.
    inner: Vec<BltPixel>,
}

impl RgbImage {
    /// Create a new `RgbImage`.
    fn new(width: usize, height: usize) -> Self {
        RgbImage {
            size: (width, height),
            inner: vec![BltPixel::new(0, 0, 0); width * height],
        }
    }

    /// Take our pixels and request that the UEFI GOP
    /// display them for us.
    fn write(&self, gop: &mut GraphicsOutput) -> Result {
        gop.blt(BltOp::BufferToVideo {
            buffer: &self.inner,
            src: BltRegion::Full,
            dest: (0, 0),
            dims: self.size,
        })
    }
}

impl Index<(usize, usize)> for RgbImage {
    type Output = BltPixel;

    fn index(&self, idx: (usize, usize)) -> &BltPixel {
        let (x, y) = idx;
        &self.inner[y * self.size.0 + x]
    }
}

impl IndexMut<(usize, usize)> for RgbImage {
    fn index_mut(&mut self, idx: (usize, usize)) -> &mut BltPixel {
        let (x, y) = idx;
        &mut self.inner[y * self.size.0 + x]
    }
}
We also need to do some basic setup to get a handle to the UEFI
GOP via the uefi crate (using
uefi::boot::get_handle_for_protocol
and
uefi::boot::open_protocol_exclusive
for the GraphicsOutput
protocol), so that we have the object we need to pass to RgbImage in order
for it to write the pixels to the display. The only trick here is that the
display on the booted system can really be any resolution, so we need to do
some capping to ensure that we don't write more pixels than the display can
handle. Writing fewer than the display's maximum seems fine, though.
fn praise() -> Result {
    let gop_handle = boot::get_handle_for_protocol::<GraphicsOutput>()?;
    let mut gop = boot::open_protocol_exclusive::<GraphicsOutput>(gop_handle)?;

    // Get the (width, height) that is the minimum of
    // our image and the display we're using.
    let (width, height) = gop.current_mode_info().resolution();
    let (width, height) = (width.min(KIER_WIDTH), height.min(KIER_HEIGHT));

    let mut buffer = RgbImage::new(width, height);
    for y in 0..height {
        for x in 0..width {
            let idx_r = ((y * KIER_WIDTH) + x) * KIER_PIXEL_SIZE;
            let pixel = &mut buffer[(x, y)];
            pixel.red = KIER[idx_r];
            pixel.green = KIER[idx_r + 1];
            pixel.blue = KIER[idx_r + 2];
        }
    }
    buffer.write(&mut gop)?;
    Ok(())
}
Not so bad! A bit tedious; we could solve some of this by turning
KIER into an RgbImage at compile-time using some clever Cow and
const tricks, and implement blitting a sub-image of the image, but this
will do for now. This is a joke, after all, let's not go nuts. All that's
left with our code is for us to write our main function and try and boot
the thing!
#[entry]
fn main() -> Status {
    uefi::helpers::init().unwrap();
    praise().unwrap();
    boot::stall(100_000_000);
    Status::SUCCESS
}
If you're following along at home and so interested, the final source is over at
gist.github.com.
We can go ahead and build it using cargo (as is our tradition) by targeting
the UEFI platform.
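A sketch of that build; the x86_64-unknown-uefi target is tier 2, so it has to
be added first, and the output binary name follows your crate name:

```shell
rustup target add x86_64-unknown-uefi
cargo build --release --target x86_64-unknown-uefi
# Output lands under target/x86_64-unknown-uefi/release/
```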
Testing the UEFI Blob
While I can definitely get my machine to boot these blobs to test, I figured
I'd save myself some time by using QEMU to test without a full boot.
If you've not done this sort of thing before, we'll need two packages,
qemu and ovmf. It's a bit different than most invocations of qemu you
may see out there, so I figured it'd be worth writing this down, too.
$ doas apt install qemu-system-x86 ovmf
qemu has a nice feature where it'll create us an EFI partition as a drive and
attach it to the VM off a local directory, so let's construct an EFI
partition file structure, and drop our binary into the conventional location.
If you haven't done this before, and are only interested in running this in a
VM, don't worry too much about it; a lot of it is convention and this layout
should work for you.
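A sketch of that layout; on removable media, firmware looks for
\EFI\BOOT\BOOTX64.EFI (FAT is case-insensitive, so lowercase is fine), and
the source path below is an assumption based on the target triple:

```shell
# Conventional removable-media layout the firmware will search.
mkdir -p esp/efi/boot
# Adjust the source path to match your crate name and build profile.
BIN=target/x86_64-unknown-uefi/release/boot2kier.efi
if [ -f "$BIN" ]; then cp "$BIN" esp/efi/boot/bootx64.efi; fi
```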
With all this in place, we can kick off qemu, booting it in UEFI mode using
the ovmf firmware, attaching our EFI partition directory as a drive to
our VM to boot off of.
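A sketch of that invocation; the OVMF firmware path matches Debian's ovmf
package, so adjust it for your distro:

```shell
# Boot with the OVMF UEFI firmware; qemu exposes the local esp/
# directory to the guest as a FAT-formatted drive.
qemu-system-x86_64 \
    -m 512M \
    -bios /usr/share/ovmf/OVMF.fd \
    -drive format=raw,file=fat:rw:esp
```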
If all goes well, soon you'll be met with the all-knowing gaze of the
Chosen One, Kier Eagan. The thing that really impressed me about all
this is that this program worked first try; it all went so boringly
normal. Truly, kudos to the uefi crate maintainers, it's incredibly
well done.
Booting a live system
Sure, we could stop here, but anyone can open up an app window and see a
picture of Kier Eagan, so I knew I needed to finish the job and boot a real
machine up with this. In order to do that, we need to format a USB stick.
BE SURE /dev/sda IS CORRECT IF YOU'RE COPY AND PASTING. All my drives
are NVMe, so BE CAREFUL; if you use SATA, it may very well be your
hard drive! Please do not destroy your computer over this.
$ doas fdisk /dev/sda
Welcome to fdisk (util-linux 2.40.4).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): n
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-4014079, default 2048):
Last sector, +/-sectors or +/-size K,M,G,T,P (2048-4014079, default 4014079):
Created a new partition 1 of type 'Linux' and of size 1.9 GiB.
Command (m for help): t
Selected partition 1
Hex code or alias (type L to list all): ef
Changed type of partition 'Linux' to 'EFI (FAT-12/16/32)'.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
Once that looks good (depending on your flavor of udev you may or
may not need to unplug and replug your USB stick), we can go ahead
and format our new EFI partition (BE CAREFUL THAT /dev/sda IS YOUR
USB STICK) and write our EFI directory to it.
$ doas mkfs.fat /dev/sda1
$ doas mount /dev/sda1 /mnt
$ cp -r esp/efi /mnt
$ find /mnt
/mnt
/mnt/efi
/mnt/efi/boot
/mnt/efi/boot/bootx64.efi
Of course, naturally, devotion to Kier shouldn't mean backdooring your system.
Disabling Secure Boot runs counter to the Core Principles, such as Probity, and
not doing this would surely run counter to Verve, Wit and Vision. This bit does
require that you've taken the step to enroll a
MOK and know how
to use it; right about now is when we can use sbsign to sign our UEFI binary
we want to boot from, to continue enforcing Secure Boot. The details of how
this command should be run specifically are likely something you'll need to work
out depending on how you've decided to manage your MOK.
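A sketch of that signing step; the key and certificate filenames are
assumptions, so use whatever your MOK setup actually produced:

```shell
# Sign the UEFI binary with the enrolled MOK (PEM-format key and cert).
sbsign --key MOK.priv --cert MOK.pem \
    --output KIER.efi boot2kier.efi
```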
I figured I'd leave a signed copy of boot2kier at
/boot/efi/EFI/BOOT/KIER.efi on my Dell XPS 13; with Secure Boot enabled
and enforcing, it just took a matter of going into my BIOS to add the right
boot option, which was no sweat. I'm sure there is a way to do it using
efibootmgr, but I wasn't smart enough to do that quickly. I let 'er rip,
and it booted up and worked great!
It was a bit hard to get a video of my laptop, though, but lucky for me, I
have a Minisforum Z83-F sitting around (which, until a few weeks ago, was running
the annual http server to control my christmas tree),
so I grabbed it out of the christmas bin, wired it up to a video capture
card I have sitting around, and figured I'd grab a video of me booting a
physical device off the boot2kier USB stick.
Attentive readers will notice the image of Kier is smaller than the qemu-booted
system, which just means our real machine has a larger GOP display
resolution than qemu, which makes sense! We could write some fancy resize code
(sounds annoying), center the image (can't be assed, but should be the easy way
out here) or resize the original image (pretty hardware-specific workaround).
Additionally, you can make out the image being written to the display before us
(the Minisforum logo) behind Kier, which is really cool stuff. If we were real
fancy we could write blank pixels to the display before blitting Kier, but,
again, I don't think I care to do that much work.
But now I must away
If I wanted to keep this joke going, I'd likely try and find a copy of the
original
video when Helly 100%s her file
and boot into that, or maybe play a terrible midi PC speaker rendition of
Kier, Chosen One, Kier after
rendering the image. I, unfortunately, don't have any friends involved with
production (yet?), so I reckon all that's out for now. I'll likely stop playing
with this; the joke was done, and I'm only writing this post because of how
great everything was along the way.
All in all, this reminds me so much of building a homebrew kernel to boot a
system into, but, like, good, though, and it's a nice reminder of both how
fun this stuff can be, and how far we've come. UEFI protocols are light-years
better than how we did it in the dark ages, and the tooling for this is SO
much more mature. Booting a custom UEFI binary is miles ahead of trying to
boot your own kernel, and I can't believe how good the uefi crate is
specifically.
Praise Kier! Kudos to everyone involved in making this so delightful.
I'm going to FOSDEM 2025!
As usual, I'll be in the Java Devroom for most of that day, which this
time around is Saturday.
Please recommend me any talks!
This is my shortlist so far:
Posted on February 4, 2025
Tags: madeof:atoms, madeof:bits
A while ago I may have accidentally bought a ring of 12 RGB LEDs; I soldered
temporary leads on it, connected it to a CircuitPython-supported board
and played around for a while.
Then we had a couple of friends come over to remote FOSDEM together, and
I had talked with one of them about WS2812 / NeoPixels, so I brought
them to the living room, in case there was a chance to show them in
sort-of-use.
Then I was dealing with playing the various streams as we moved from one
room to the next, which led to me being called "video team", which led
to me wearing a video team shirt (from an old DebConf, not FOSDEM, but
still video team), which led to somebody asking me whether I also had
the sheet with the countdown to the end of the talk, and the answer was
sort-of-yes (I should have the ones we used to use for our Linux Day),
but not handy.
But I had a thing with twelve things in a clock-like circle.
A bit of fiddling on the CircuitPython REPL resulted, if I remember
correctly, in something like:
import board
import neopixel
import time

num_pixels = 12

pixels = neopixel.NeoPixel(board.GP0, num_pixels)
pixels.brightness = 0.1

def end(min):
    pixels.fill((0, 0, 0))
    for i in range(12):
        pixels[i] = (127 + 10 * i, 8 * (12 - i), 0)
        pixels[i-1] = (0, 0, 0)
        time.sleep(min * 5)  # min * 60 / 12
Now, I wasn't very consistent in running end, especially since I
wasn't sure whether I wanted to run it at the beginning of the talk with
the full duration or just in the last 5-10 minutes depending on the
length of the slot, but I've had at least one person agree that the
general idea has potential, so I'm taking these notes to be able to work
on it in the future.
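The color ramp itself can be checked away from the hardware on any Python
interpreter; this sketch (the function name is mine, not from the original
snippet) just reproduces the per-step tuples:

```python
# Host-side sketch of the countdown ramp, no NeoPixel hardware needed.
# Red climbs and green fades as the remaining time shrinks.
def countdown_colors(num_pixels=12):
    return [(127 + 10 * i, 8 * (num_pixels - i), 0) for i in range(num_pixels)]

colors = countdown_colors()
print(colors[0])   # first step: amber-ish
print(colors[-1])  # last step: nearly pure red
```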
One thing that needs to be fixed is the fact that with the ring just
attached with temporary wires and left on the table it isn't clear which
LED is number 0, so it will need a bit of a case or something, but
that's something that can be dealt with before the next FOSDEM.
And I should probably add some input interface, so that it is
self-contained and not tethered to a computer and run from the REPL.
(And then I may also have a vague idea for putting that ring into some
wearable thing: good thing that I actually bought two :D )