Melissa Wen: Kworkflow at Kernel Recipes 2025
This was the first year I attended Kernel Recipes and I have nothing but good
things to say about it: I enjoyed it a lot and I'm grateful for the opportunity
to talk about kworkflow to very experienced kernel developers. What I like most
about Kernel Recipes is its intimate format, with only one track and many
moments to get closer to experts and to the people you usually only talk to
online during the year.
At the beginning of this year, I gave the talk Don't let your motivation go,
save time with kworkflow at
FOSDEM,
introducing kworkflow to a more diverse audience, with different levels of
involvement in Linux kernel development.
At this year's Kernel Recipes I presented
the second talk of the first day: Kworkflow - mix & match kernel recipes end-to-end.
The Kernel Recipes audience is a bit different from FOSDEM's, consisting mostly
of long-term kernel developers, so I decided to get straight to the point. I
showed kworkflow as part of the daily life of a typical kernel developer, from
the local setup, to installing a custom kernel on different target machines, to
sending and applying patches to/from the mailing list. In short, I showed how
to mix and match kernel workflow recipes end-to-end.
As I went a bit fast when showing some features during my presentation, in this
blog post I explain each slide based on my speaker notes. You can also find a
summary of this presentation in the Kernel Recipes Live Blog Day 1: morning.
Introduction
Hi, I'm Melissa Wen from Igalia. As we have already started sharing kernel
recipes, and even more are coming in the next three days, in this presentation
I'll talk about kworkflow: a cookbook to mix & match kernel recipes end-to-end.
This is my first time attending Kernel Recipes, so let me introduce myself
briefly.
- As I said, I work for Igalia, mostly on kernel GPU drivers in the DRM
subsystem.
- In the past, I co-maintained VKMS and the v3d driver. Nowadays I focus on the
AMD display driver, mostly for the Steam Deck.
- Besides code, I contribute to the Linux kernel by mentoring several newcomers
through Outreachy, Google Summer of Code and the Igalia Coding Experience, and
also by working on kernel documentation and tooling.
And what's this cookbook called kworkflow?
Kworkflow (kw)
Kworkflow is a tool created by Rodrigo Siqueira, my colleague at Igalia. It's a
single platform that combines software and tools to:
- optimize your kernel development workflow;
- reduce time spent on repetitive tasks;
- standardize best practices;
- ensure that deployment data flows smoothly and reliably between different
kernel workflows.
It's mostly developed by volunteers: kernel developers using their spare time.
Its features cover real use cases driven by kernel developers' needs.
Basically, it mixes and matches the daily life of a typical kernel developer
with kernel workflow recipes, plus some secret sauces.
First recipe: A good GPU driver for my AMD laptop
So, it's time to start the first recipe: a good GPU driver for my AMD laptop.
Before starting any recipe, we need to check the necessary ingredients and
tools. So, let's check what you have at home.
With kworkflow, you can use the following commands (a short sketch of this
inventory step follows the list):
- kw device: to get information about the target machine, such as CPU model,
kernel version, distribution and GPU model;
- kw remote: to set the address of the target machine for remote access;
- kw config: to configure kw. With this command you can basically select the
tools, flags and preferences that kw will use to build and deploy a custom
kernel on a target machine. You can also define the recipients of your patches
when sending them with kw send-patch. I'll explain more about each feature
later in this presentation;
- kw kernel-config-manager (or just kw k): to fetch the kernel .config file
from a given machine, store multiple .config files, and list and retrieve them
according to your needs.
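A minimal sketch of that inventory step; the exact arguments for registering a
remote and for fetching/storing .config files are omitted here, so check each
command's --help for the real syntax:

    # Inspect the target machine: CPU model, kernel version, distro, GPU model
    $ kw device

    # Register the target machine's address for remote access
    $ kw remote

    # Review the tools, flags and preferences kw will use to build and deploy
    $ kw config

    # Fetch, store, list and retrieve kernel .config files
    $ kw k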
Now, with all ingredients and tools selected and well portioned, follow the
right steps to prepare your custom kernel!
First step: Mix ingredients with kw build or just kw b
kw b and its options wrap many routines of compiling a custom kernel (a short
sketch follows this list).
- You can run kw b -i to check the name, the kernel version and the number of
modules that will be compiled, and kw b --menu to change kernel
configurations.
- You can also pre-configure building preferences in kw config: for example,
the target architecture, the name of the generated kernel image, whether you
need to cross-compile the kernel for a different system and which tool to use
for it, different warning levels, compiling with custom CFLAGS, etc.
- Then you can just run kw b to compile the custom kernel for a target
machine.
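Put together, this step can be replayed as the short session below; every
command here appears earlier in this section:

    # Check the kernel name/version and the number of modules to be compiled
    $ kw b -i

    # Adjust kernel configuration options through the menu interface
    $ kw b --menu

    # Compile the custom kernel using the pre-configured preferences
    $ kw b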
Second step: Bake it with kw deploy or just kw d
After compiling the custom kernel, we want to install it on the target machine.
Check the name of the custom kernel just built, 6.17.0-rc6, then access the
target machine through kw's SSH support and see that it's still running the
Debian distribution kernel, 6.16.7+deb14-amd64.
As with the building settings, you can also pre-configure some deployment
settings, such as the compression type, the path to device tree binaries, the
target machine (remote, local, vm), whether to reboot the target machine right
after deploying your custom kernel, and whether to boot into the custom kernel
when restarting the system after deployment.
If you didn't pre-configure some options, you can still customize them as
command options. For example, kw d --reboot will reboot the system after
deployment, even if I didn't set this in my preferences.
Just by running kw d --reboot, I installed the kernel on a given target machine
and rebooted it, so when accessing the system again I can see it booted into my
custom kernel.
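As a tiny sketch of this step, assuming the deployment preferences were already
saved with kw config:

    # Install the custom kernel on the configured target machine and reboot
    # it right away, regardless of the saved reboot preference
    $ kw d --reboot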
Third step: Time to taste with kw debug
kw debug wraps many tools for validating a kernel on a target machine. We can
collect basic dmesg logs, but also track events and use ftrace. A combined
sketch follows this list.
- With kw debug --dmesg --history, we can grab the full dmesg log from a
remote machine; with the --follow option, you can monitor dmesg output
continuously. You can also run a command with kw debug --dmesg --cmd="<my
command>" and collect only the dmesg output related to this specific execution
period.
- In the example, I'll just unload the amdgpu driver. I use kw drm --gui-off
to drop the graphical interface and release amdgpu for unloading. Then I run
kw debug --dmesg --cmd="modprobe -r amdgpu" to unload the amdgpu driver, but
it fails and I couldn't unload it.
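The same debugging flow, as commands (the modprobe call is the one from the
talk's example):

    # Grab the full dmesg log from the remote machine
    $ kw debug --dmesg --history

    # Keep monitoring dmesg output as it arrives
    $ kw debug --dmesg --follow

    # Drop the graphical interface so amdgpu can be released for unloading
    $ kw drm --gui-off

    # Run a command and collect only the dmesg output from its execution
    $ kw debug --dmesg --cmd="modprobe -r amdgpu"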
Cooking Problems
Oh no! That custom kernel isn't tasting good. Don't worry: as with many recipe
preparations, we can search the internet for suggestions on how to make it
tasty, for alternative ingredients, and for other flavours according to your
taste.
With kw patch-hub, you can search the lore kernel mailing list archives for
patches that may fix your kernel issue. You can navigate the mailing lists,
check series, bookmark a series if you find it relevant, and apply it to your
local kernel tree, creating a different branch for tasting... oops, for
testing. In this example, I'm opening the amd-gfx mailing list, where I can
find contributions related to the AMD GPU driver, bookmark and/or just apply
the series to my work tree, and with kw bd I can compile & install the custom
kernel with this possible bug fix in one shot.
As I changed my kw config to reboot after deployment, I just need to wait for
the system to boot to try unloading the amdgpu driver again with kw debug
--dmesg --cmd="modprobe -r amdgpu". From the dmesg output retrieved by kw for
this command, the driver was unloaded: the problem is fixed by this series and
the kernel tastes good now.
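Condensed, the retry loop looks like this, assuming the candidate series from
kw patch-hub was already applied to the work tree:

    # Compile and install the patched kernel in one shot
    $ kw bd

    # After the automatic reboot, try unloading the driver again
    $ kw debug --dmesg --cmd="modprobe -r amdgpu"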
If I'm satisfied with the solution, I can even use kw patch-hub to access the
bookmarked series and mark the checkbox that will reply to the patch thread
with a Reviewed-by tag for me.
Second Recipe: Raspberry Pi 4 with Upstream Kernel
As with all recipes, we need ingredients and tools, but with kworkflow you can
get everything set up as quickly as a scene change in a TV show. We can use
kw env to switch to a different environment with all the kw and kernel
configuration set, and with the latest compiled kernel cached.
I was preparing the first recipe for an x86 AMD laptop; with kw env --use
RPI_64, I keep the same work tree but move to a different kernel workflow, now
for a Raspberry Pi 4 (64 bits). The previously compiled kernel,
6.17.0-rc6-mainline+, is there with 1266 modules, not the 6.17.0-rc6 kernel
with 285 modules that I just built and deployed. The kw build settings are also
different: now I'm targeting the arm64 architecture, cross-compiling with the
aarch64-linux-gnu- toolchain, and my kernel image is now called kernel8.
If you didn't plan for this recipe in advance, don't worry. You can create a
new environment with kw env --create RPI_64_V2 and run kw init --template
to start preparing your kernel recipe with the mirepoix ready.
I mean, with the basic ingredients already cut.
I mean, with the kw configuration set from a template.
Then you can use kw remote to set the IP address of your target machine and
kw kernel-config-manager to fetch the .config file from it. So just run kw bd
to compile and install an upstream kernel for the Raspberry Pi 4.
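A sketch of both paths, with the environment names from the talk (as before,
the flagless kw remote and kw kernel-config-manager invocations are
placeholders for their real arguments):

    # Reuse an existing environment for the Raspberry Pi 4 workflow
    $ kw env --use RPI_64

    # ...or create a fresh one and bootstrap it from a template
    $ kw env --create RPI_64_V2
    $ kw init --template

    # Point kw at the board and fetch its .config file
    $ kw remote
    $ kw kernel-config-manager

    # Compile and install the upstream kernel in one shot
    $ kw bd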
Third Recipe: The Mainline Kernel Ringing on my Steam Deck (Live Demo)
Let me show you how easy it is to build, install and test a custom kernel for
the Steam Deck with kworkflow. It's a live demo, but I also recorded it
beforehand, because I know the risks I'm exposed to and something can go very
wrong just because of reasons :)
Report: how the live demo went
For this live demo, I took my OLED Steam Deck to the stage. I explained that,
if I boot a mainline kernel on this device, there is no audio. So I turned it
on and booted the mainline kernel I had installed beforehand. It was clear that
there was no typical Steam Deck startup sound when the system loaded.
As I started the demo in the kw environment for the Raspberry Pi 4, I first
moved to another environment previously used for the Steam Deck. In this
STEAMDECK environment, the mainline kernel was already compiled and cached, and
all settings for accessing the target machine and for compiling and installing
a custom kernel were retrieved automatically.
My live demo followed these steps (condensed into a command sequence after the
list):
- With kw env --use STEAMDECK, switch to a kworkflow environment for Steam
Deck kernel development.
- With kw b -i, show that kw will compile and install a kernel with 285
modules, named 6.17.0-rc6-mainline-for-deck.
- Run kw config to show that, in this environment, the kw configuration
changes to the x86 architecture, without cross-compilation.
- Run kw device to display information about the Steam Deck device, i.e. the
target machine. It also proves that the remote access (user and IP) for this
Steam Deck was already configured when switching to the STEAMDECK environment,
as expected.
- Using git am, as usual, apply a hot fix on top of the mainline kernel. This
hot fix makes audio play again on the Steam Deck.
- With kw b, build the kernel with the audio change. It will be fast because
we are only compiling the affected files: everything else was previously done
and cached, and the compiled kernel, the kw configuration and the kernel
configuration are retrieved by just moving to the STEAMDECK environment.
- Run kw d --force --reboot to deploy the new custom kernel to the target
machine. The --force option enables us to install the mainline kernel even if
mkinitcpio complains about missing support for downstream packages when
generating the initramfs. The --reboot option reboots the Steam Deck
automatically, right after the deployment completes.
- After the deployment finishes, the Steam Deck reboots into the new custom
kernel version and makes a clear, resonant sound. [Hopefully]
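For reference, the demo boils down to this command sequence (the patch file
name is hypothetical):

    # Switch to the Steam Deck environment: cached kernel and settings
    $ kw env --use STEAMDECK

    # Sanity checks: what will be built, with which settings, for which device
    $ kw b -i
    $ kw config
    $ kw device

    # Apply the audio hot fix on top of the mainline kernel
    $ git am 0001-fix-steam-deck-audio.patch

    # Incremental rebuild, then deploy and reboot the Deck
    $ kw b
    $ kw d --force --reboot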
Finally, I showed the audience that, if I wanted to send this patch upstream, I
would just need to run kw send-patch, and kw would automatically add the
subsystem maintainers, reviewers and mailing lists for the affected files as
recipients, and send the patch for the upstream community's assessment. As I
didn't want to create unnecessary noise, I just did a dry run with kw
send-patch -s --simulate to explain how it looks.
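In command form (only the dry run was executed on stage):

    # Dry run: preview the recipients and the mails without sending anything
    $ kw send-patch -s --simulate

    # The real submission would add maintainers, reviewers and lists for the
    # affected files as recipients and mail the patch upstream
    $ kw send-patch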
What else can kworkflow already mix & match?
In this presentation, I showed that kworkflow supports different kernel
development workflows: multiple distributions, different bootloaders and
architectures, different target machines, and different debugging tools. It
automates the best practices of your kernel development routine, from setting
up the development environment and verifying a custom kernel on bare metal to
sending contributions upstream following the kernel's e-mail-based contribution
process. I exemplified it with three different target machines: my ordinary x86
AMD laptop with Debian, a Raspberry Pi 4 with arm64 Raspbian
(cross-compilation), and the Steam Deck with SteamOS (an x86 Arch-based OS).
Besides those distributions, kworkflow also supports Ubuntu, Fedora and
Pop!_OS.
Now it's your turn: do you have any secret recipes to share? Please share them
with us via kworkflow.
Useful links