Search Results: "enrico"

31 August 2022

Raphaël Hertzog: Freexian's report about Debian Long Term Support, July 2022

A Debian LTS logo
Like each month, have a look at the work funded by Freexian's Debian LTS offering. Debian project funding No major updates on running projects.
Two 1, 2 projects are in the pipeline now.
The Tryton project is in a review phase. The Gradle project is still a work in progress. In July, we put aside 2389 EUR to fund Debian projects. We're looking forward to receiving more projects from various Debian teams! Learn more about the rationale behind this initiative in this article. Debian LTS contributors In July, 14 contributors have been paid to work on Debian LTS, their reports are available: Evolution of the situation In July, we have released 3 DLAs. July was the period when Debian Stretch had already reached ELTS status, but Debian Buster was still in the hands of the security team. Many members of the LTS team used this time to update internal infrastructure and documentation and to close some internal tickets. Now we are ready to take the next release into our hands: Buster! Thanks to our sponsors Sponsors that joined recently are in bold.

26 July 2022

Raphaël Hertzog: Freexian's report about Debian Long Term Support, June 2022

A Debian LTS logo
Like each month, have a look at the work funded by Freexian's Debian LTS offering. Debian project funding No major updates on running projects.
Two 1, 2 projects are in the pipeline now.
The Tryton project is in a review phase. The Gradle project is still a work in progress. In June, we put aside 2254 EUR to fund Debian projects. We're looking forward to receiving more projects from various Debian teams! Learn more about the rationale behind this initiative in this article. Debian LTS contributors In June, 15 contributors have been paid to work on Debian LTS, their reports are available: Evolution of the situation In June we released 27 DLAs.

This is a special month, where we have two releases (stretch and jessie) in ELTS and NO release in LTS. Buster is still handled by the security team and will probably be handed over to the LTS team at the beginning of August. During this month we are updating the infrastructure and documentation and improving our internal processes to switch to a new release.
Many developers have just returned from DebConf22, held in Prizren, Kosovo! Many (E)LTS members could meet face-to-face and discuss technical and social topics! An LTS BoF also took place, where the project was introduced (link to video).
Thanks to our sponsors Sponsors that joined recently are in bold. We are pleased to welcome Alter Way, whose support of Debian is publicly acknowledged at the highest level; see this French quote of Alter Way's CEO.

20 July 2022

Enrico Zini: Deconstruction of the DAM hat

Further reading Talk notes Intro Debian Account Managers Responsibility for official membership What DAM is not Unexpected responsibilities DAM warnings DAM warnings? House rules Interpreting house rules Governance by bullying How about the Community Team? How about DAM? How about the DPL? Concentrating responsibility Empowering developers What needs to happen

23 June 2022

Raphaël Hertzog: Freexian's report about Debian Long Term Support, May 2022

A Debian LTS logo
Like each month, have a look at the work funded by Freexian's Debian LTS offering. Debian project funding Two [1, 2] projects are in the pipeline now. The Tryton project is in a final phase. The Gradle project is fighting with technical difficulties. In May, we put aside 2233 EUR to fund Debian projects. We're looking forward to receiving more projects from various Debian teams! Learn more about the rationale behind this initiative in this article. Debian LTS contributors In May, 14 contributors have been paid to work on Debian LTS, their reports are available: Evolution of the situation In May we released 49 DLAs. The security tracker currently lists 71 packages with a known CVE and the dla-needed.txt file has 65 packages needing an update. The number of paid contributors increased significantly, and we are pleased to welcome our latest team members: Andreas Rönnquist, Dominik George, Enrico Zini and Stefano Rivera. It is worth pointing out that we are getting close to the end of the LTS period for Debian 9. After June 30th, no new security updates will be made available on We are preparing to take over Debian 10 Buster for the next two years and to make this process as smooth as possible. But Freexian and its team of paid Debian contributors will continue to maintain Debian 9 going forward for the customers of the Extended LTS offer. If you have Debian 9 servers to keep secure, it's time to subscribe! You might not have noticed, but Freexian formalized a mission statement where we explain that our purpose is to help improve Debian. For this, we want to fund work time for the Debian developers that recently joined Freexian as collaborators. The Extended LTS and the PHP LTS offers are built following a model that will help us achieve this if we manage to have enough customers for those offers. So consider subscribing: you help your organization but you also help Debian! Thanks to our sponsors Sponsors that joined recently are in bold.

9 June 2022

Enrico Zini: Updating cbqt for bullseye

Back in 2017 I did some work to set up a cross-building toolchain for Qt Creator that takes advantage of Debian's packaging for the whole dependency ecosystem. It ended with cbqt, a little script that sets up a chroot to hold cross-build-dependencies, to avoid conflicting with packages in the host system, and sets up a qmake alternative to make use of them. Today I'm dusting off that work to ensure it works on Debian bullseye. Resetting Qt Creator To make things reproducible, I wanted to reset Qt Creator's configuration. Besides purging and reinstalling the package, one needs to manually remove: Updating cbqt Easy start, change the distribution for the chroot:
-DIST_CODENAME = "stretch"
+DIST_CODENAME = "bullseye"
Adding LIBDIR Something else does not work:
Test$ qmake-armhf -makefile
Info: creating stash file  /Test/.qmake.stash
Test$ make
/usr/bin/arm-linux-gnueabihf-g++ -Wl,-O1 -Wl,-rpath-link, /armhf/lib/arm-linux-gnueabihf -Wl,-rpath-link, /armhf/usr/lib/arm-linux-gnueabihf -Wl,-rpath-link, /armhf/usr/lib/ -o Test main.o mainwindow.o moc_mainwindow.o    /armhf/usr/lib/arm-linux-gnueabihf/  /armhf/usr/lib/arm-linux-gnueabihf/  /armhf/usr/lib/arm-linux-gnueabihf/ -lGLESv2 -lpthread
/usr/lib/gcc-cross/arm-linux-gnueabihf/10/../../../../arm-linux-gnueabihf/bin/ld: cannot find -lGLESv2
collect2: error: ld returned 1 exit status
make: *** [Makefile:146: Test] Error 1
I figured that now I also need to set QMAKE_LIBDIR and not just QMAKE_RPATHLINKDIR:
--- a/cbqt
+++ b/cbqt
@@ -241,18 +241,21 @@ include(../common/linux.conf)
+QMAKE_LIBDIR += {chroot.abspath}/lib/arm-linux-gnueabihf
+QMAKE_LIBDIR += {chroot.abspath}/usr/lib/arm-linux-gnueabihf
+QMAKE_LIBDIR += {chroot.abspath}/usr/lib/
 QMAKE_RPATHLINKDIR += {chroot.abspath}/lib/arm-linux-gnueabihf
 QMAKE_RPATHLINKDIR += {chroot.abspath}/usr/lib/arm-linux-gnueabihf
 QMAKE_RPATHLINKDIR += {chroot.abspath}/usr/lib/
Now it links again:
Test$ qmake-armhf -makefile
Test$ make
/usr/bin/arm-linux-gnueabihf-g++ -Wl,-O1 -Wl,-rpath-link, /armhf/lib/arm-linux-gnueabihf -Wl,-rpath-link, /armhf/usr/lib/arm-linux-gnueabihf -Wl,-rpath-link, /armhf/usr/lib/ -o Test main.o mainwindow.o moc_mainwindow.o   -L /armhf/lib/arm-linux-gnueabihf -L /armhf/usr/lib/arm-linux-gnueabihf -L /armhf/usr/lib/  /armhf/usr/lib/arm-linux-gnueabihf/  /armhf/usr/lib/arm-linux-gnueabihf/  /armhf/usr/lib/arm-linux-gnueabihf/ -lGLESv2 -lpthread
Making it work in Qt Creator Time to try it in Qt Creator, and sadly it fails:
 /armhf/usr/lib/arm-linux-gnueabihf/qt5/mkspecs/features/toolchain.prf:76: Variable QMAKE_CXX.COMPILER_MACROS is not defined.
QMAKE_CXX.COMPILER_MACROS is not defined I traced it to this bit in armhf/usr/lib/arm-linux-gnueabihf/qt5/mkspecs/features/toolchain.prf (nonrelevant bits deleted):
isEmpty($${target_prefix}.COMPILER_MACROS) {
    } else: gcc|ghs {
        vars = $$qtVariablesFromGCC($$QMAKE_CXX)
    }
    for (v, vars) {
        $${target_prefix}.COMPILER_MACROS += $$v
    }
    cache($${target_prefix}.COMPILER_MACROS, set stash)
}
It turns out that qmake is not able to realise that the compiler is gcc, so vars does not get set, nothing is set in COMPILER_MACROS, and qmake fails. Reproducing it on the command line When run manually, however, qmake-armhf worked, so it would be good to know how Qt Creator is actually running qmake. Since it frustratingly does not show what commands it runs, I'll have to strace it:
strace -e trace=execve --string-limit=123456 -o qtcreator.trace -f qtcreator
And there it is:
$ grep qmake- qtcreator.trace
1015841 execve("/usr/local/bin/qmake-armhf", ["/usr/local/bin/qmake-armhf", "-query"], 0x56096e923040 /* 54 vars */) = 0
1015865 execve("/usr/local/bin/qmake-armhf", ["/usr/local/bin/qmake-armhf", " /Test/", "-spec", "arm-linux-gnueabihf", "CONFIG+=debug", "CONFIG+=qml_debug"], 0x7f5cb4023e20 /* 55 vars */) = 0
I run the command manually and indeed I reproduce the problem:
$ /usr/local/bin/qmake-armhf -spec arm-linux-gnueabihf CONFIG+=debug CONFIG+=qml_debug
 /armhf/usr/lib/arm-linux-gnueabihf/qt5/mkspecs/features/toolchain.prf:76: Variable QMAKE_CXX.COMPILER_MACROS is not defined.
I try removing options until I find the one that breaks it and... now it's always broken! Even manually running qmake-armhf, like I did earlier, stopped working:
$ rm .qmake.stash
$ qmake-armhf -makefile
 /armhf/usr/lib/arm-linux-gnueabihf/qt5/mkspecs/features/toolchain.prf:76: Variable QMAKE_CXX.COMPILER_MACROS is not defined.
Debugging toolchain.prf I tried purging and reinstalling qtcreator, and recreating the chroot, but qmake-armhf stayed broken. I'll let that be, and try to debug toolchain.prf. By grepping for gcc in the mkspecs directory, I managed to figure out that: Sadly, I failed to find reference documentation for QMAKE_COMPILER's syntax and behaviour. I also failed to find why qmake-armhf worked earlier, and I am failing to restore the system to a situation where it works again. Maybe I dreamt that it worked? Did I have some manual change lying around from some previous fiddling with things? Anyway, at least now I have the fix:
--- a/cbqt
+++ b/cbqt
@@ -248,7 +248,7 @@ QMAKE_RPATHLINKDIR += {chroot.abspath}/lib/arm-linux-gnueabihf
 QMAKE_RPATHLINKDIR += {chroot.abspath}/usr/lib/arm-linux-gnueabihf
 QMAKE_RPATHLINKDIR += {chroot.abspath}/usr/lib/
-QMAKE_COMPILER          = {chroot.arch_triplet}-gcc
+QMAKE_COMPILER          = gcc {chroot.arch_triplet}-gcc
 QMAKE_CC                = /usr/bin/{chroot.arch_triplet}-gcc
Fixing a compiler mismatch warning In setting up the kit, Qt Creator also complained that the compiler from qmake did not match the one configured in the kit. That was easy to fix, by pointing at the host system cross-compiler in qmake.conf:
 QMAKE_COMPILER          = {chroot.arch_triplet}-gcc
-QMAKE_CC                = {chroot.arch_triplet}-gcc
+QMAKE_CC                = /usr/bin/{chroot.arch_triplet}-gcc
 QMAKE_LINK_C            = $$QMAKE_CC
-QMAKE_CXX               = {chroot.arch_triplet}-g++
+QMAKE_CXX               = /usr/bin/{chroot.arch_triplet}-g++
 QMAKE_LINK              = $$QMAKE_CXX
 QMAKE_LINK              = $$QMAKE_CXX
Updated setup instructions Create an armhf environment:
sudo ./cbqt ./armhf --create --verbose
Create a qmake wrapper that builds with this environment:
sudo ./cbqt ./armhf --qmake -o /usr/local/bin/qmake-armhf
Install the build-dependencies that you need:
# Note: :arch is added automatically to package names if no arch is explicitly specified
sudo ./cbqt ./armhf --install libqt5svg5-dev libmosquittopp-dev qtwebengine5-dev
Build with qmake Use qmake-armhf instead of qmake and it works perfectly:
qmake-armhf -makefile
Set up Qt Creator Configure a new Kit in Qt Creator:
  1. Tools/Options, then Kits, then Add
  2. Name: armhf (or anything you like)
  3. In the Qt Versions tab, click Add then set the path of the new Qt to /usr/local/bin/qmake-armhf. Click Apply.
  4. Back in the Kits, select the Qt version you just created in the Qt version field
  5. In Compilers, select the ARM versions of GCC. If they do not appear, install crossbuild-essential-armhf, then in the Compilers tab click Re-detect and then Apply to make them available for selection
  6. Dismiss the dialog with "OK": the new kit is ready
Now you can choose the default kit to build and run locally, and the armhf kit for remote cross-development. I tried looking at sdktool to automate this step, and it requires a nontrivial amount of work to do it reliably, so these manual instructions will have to do. Credits This has been done as part of my work with Truelite.

18 March 2022

Enrico Zini: Context-dependent logger in Python

This is a common logging pattern in Python, to have loggers related to module names:
import logging
log = logging.getLogger(__name__)
class Bill:
    def load_bill(self, filename: str):
        log.info("%s: loading file", filename)
I often however find myself wanting to have loggers related to something context-dependent, like the kind of file that is being processed. For example, I'd like to log loading of bill loading when done by the expenses module, and not when done by the printing module. I came up with a little hack that keeps the same API as before, and allows to propagate a context dependent logger to the code called:
# Call this file
from __future__ import annotations
import contextlib
import contextvars
import logging
_log: contextvars.ContextVar[logging.Logger] = contextvars.ContextVar('log', default=logging.getLogger())
@contextlib.contextmanager
def logger(name: str):
    """Set a default logger for the duration of this context manager."""
    old = _log.set(logging.getLogger(name))
    try:
        yield
    finally:
        _log.reset(old)
def debug(*args, **kw):
    _log.get().debug(*args, **kw)
def info(*args, **kw):
    _log.get().info(*args, **kw)
def warning(*args, **kw):
    _log.get().warning(*args, **kw)
def error(*args, **kw):
    _log.get().error(*args, **kw)
And now I can do this:
from . import log
with log.logger("expenses"):
    bill = load_bill(filename)
# This code did not change!
class Bill:
    def load_bill(self, filename: str):
        log.info("%s: loading file", filename)
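To see the pattern end to end, here is the same idea collapsed into one self-contained sketch; the ListHandler and the single-file layout are illustration only, not part of the original code:

```python
import contextlib
import contextvars
import logging

_log: contextvars.ContextVar[logging.Logger] = contextvars.ContextVar(
    "log", default=logging.getLogger())

@contextlib.contextmanager
def logger(name: str):
    """Swap the context-local default logger, restoring it on exit."""
    old = _log.set(logging.getLogger(name))
    try:
        yield
    finally:
        _log.reset(old)

def info(*args, **kw):
    _log.get().info(*args, **kw)

# Capture records emitted through the "expenses" logger to show that
# called code picks up whatever logger the caller installed:
records = []

class ListHandler(logging.Handler):
    def emit(self, record):
        records.append((record.name, record.getMessage()))

logging.getLogger("expenses").addHandler(ListHandler())
logging.getLogger("expenses").setLevel(logging.INFO)

with logger("expenses"):
    info("%s: loading file", "bill.txt")

assert records == [("expenses", "bill.txt: loading file")]
```

The code inside the `with` block never names the logger it uses; the context variable carries it across calls, which is what makes the API identical to the plain module-level logging functions.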

3 March 2022

Enrico Zini: Migrating from procmail to sieve

Anarcat's "procmail considered harmful" post convinced me to get my act together and finally migrate my venerable procmail-based setup to sieve. My setup was nontrivial, so I migrated with an intermediate step in which sieve scripts would by default pipe everything to procmail, which allowed me to slowly move rules from procmailrc to sieve until nothing remained in procmailrc. Here's what I did. Literature review has a guide quite aligned with current Debian, and could be a starting point to get an idea of the work to do. is way more terse, but more aligned with my intentions. Reading the former helped me understand the latter. has the full Sieve syntax. has the list of Sieve features supported by Dovecot. has the reference on Dovecot's sieve implementation. is the hard-to-find full reference for the functions introduced by the extprograms plugin. Debugging tools: Backup of all mails processed One thing I did with procmail was to generate a monthly mailbox with all incoming email, with something like this:
BACKUP="/srv/backupts/test-`date +%Y-%m`.mbox"
I did not find an obvious way in sieve to create monthly mailboxes, so I redesigned that system using Postfix's always_bcc feature, piping everything to an archive user. I'll then recreate the monthly archiving using a chewmail script that I can simply run via cron. Configure dovecot
apt install dovecot-sieve dovecot-lmtpd
I added this to the local dovecot configuration:
service lmtp {
  unix_listener /var/spool/postfix/private/dovecot-lmtp {
    user = postfix
    group = postfix
    mode = 0666
  }
}

protocol lmtp {
  mail_plugins = $mail_plugins sieve
  sieve = file:~/.sieve;active=~/.dovecot.sieve
}
This makes Dovecot ready to receive mail from Postfix via a lmtp unix socket created in Postfix's private chroot. It also activates the sieve plugin, and uses ~/.sieve as a sieve script. The script can be a file or a directory; if it is a directory, ~/.dovecot.sieve will be a symlink pointing to the .sieve file to run. This is a feature I'm not yet using, but if one day I want to try enabling UIs to edit sieve scripts, that part is ready. Delegate to procmail To make sieve scripts that delegate to procmail, I enabled the sieve_extprograms plugin:
   sieve = file:~/.sieve;active=~/.dovecot.sieve
+  sieve_plugins = sieve_extprograms
+  sieve_extensions = +vnd.dovecot.pipe
+  sieve_pipe_bin_dir = /usr/local/lib/dovecot/sieve-pipe
+  sieve_trace_dir = ~/.sieve-trace
+  sieve_trace_level = matching
+  sieve_trace_debug = yes
and then created a script for it:
mkdir -p /usr/local/lib/dovecot/sieve-pipe/
(echo "#!/bin/sh"; echo "exec /usr/bin/procmail") > /usr/local/lib/dovecot/sieve-pipe/procmail
chmod 0755 /usr/local/lib/dovecot/sieve-pipe/procmail
And I can have a sieve script that delegates processing to procmail:
require "vnd.dovecot.pipe";
pipe "procmail";
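During the transition, the delegating script can be mixed with rules already migrated natively; here is a hedged sketch (the "lists" folder and the List-Id rule are made-up examples, not my actual rules):

```
require "vnd.dovecot.pipe";
require "fileinto";

# A rule already migrated to sieve:
if exists "List-Id" {
    fileinto "lists";
    stop;
}

# Everything else still goes through the old procmailrc:
pipe "procmail";
```

Each rule moved from procmailrc to the section above the `pipe` shrinks what procmail still handles, until the `pipe` line can be deleted.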
Activate the postfix side These changes switched local delivery over to Dovecot:
--- a/roles/mailserver/templates/dovecot.conf
+++ b/roles/mailserver/templates/dovecot.conf
@@ -25,6 +25,8 @@
+auth_username_format = %Ln
diff --git a/roles/mailserver/templates/ b/roles/mailserver/templates/
index d2c515a..d35537c 100644
--- a/roles/mailserver/templates/
+++ b/roles/mailserver/templates/
@@ -64,8 +64,7 @@ virtual_alias_domains =
-mailbox_command = procmail -a "$EXTENSION"
-mailbox_size_limit = 0
+mailbox_transport = lmtp:unix:private/dovecot-lmtp
Without auth_username_format = %Ln dovecot won't be able to understand usernames sent by postfix in my specific setup. Moving rules over to sieve This is mostly straightforward, with the luxury of being able to do it a bit at a time. The last tricky bit was how to call spamc from sieve, as in some situations I reduce system load by running the spamfilter only on a prefiltered selection of incoming emails. For this I enabled the filter directive in sieve:
   sieve = file:~/.sieve;active=~/.dovecot.sieve
   sieve_plugins = sieve_extprograms
-  sieve_extensions = +vnd.dovecot.pipe
+  sieve_extensions = +vnd.dovecot.pipe +vnd.dovecot.filter
   sieve_pipe_bin_dir = /usr/local/lib/dovecot/sieve-pipe
+  sieve_filter_bin_dir = /usr/local/lib/dovecot/sieve-filter
   sieve_trace_dir = ~/.sieve-trace
   sieve_trace_level = matching
   sieve_trace_debug = yes
Then I created a filter script:
mkdir -p /usr/local/lib/dovecot/sieve-filter
(echo "#!/bin/sh"; echo "exec /usr/bin/spamc") > /usr/local/lib/dovecot/sieve-filter/spamc
chmod 0755 /usr/local/lib/dovecot/sieve-filter/spamc
And now what was previously:
:0 fw
* ^X-Spam-Status: Yes
Can become:
require "vnd.dovecot.filter";
require "fileinto";
filter "spamc";
if header :contains "x-spam-level" "**************" {
} elsif header :matches "X-Spam-Status" "Yes,*" {
    fileinto "spam";
}
Updates Ansgar mentioned that it's possible to replicate the monthly mailbox using the variables and date extensions, with a hacky trick from the extensions' RFC:
require "date";
require "variables";
if currentdate :matches "month" "*" { set "month" "${1}"; }
if currentdate :matches "year" "*" { set "year" "${1}"; }
fileinto :create "${month}-${year}";

2 March 2022

Antoine Beaupré: procmail considered harmful

TL;DR: procmail is a security liability and has been abandoned upstream for the last two decades. If you are still using it, you should probably drop everything and at least remove its SUID flag. There are plenty of alternatives to choose from, and conversion is a one-time, acceptable trade-off.

Procmail is unmaintained procmail is unmaintained. The "Final release", according to Wikipedia, dates back to September 10, 2001 (3.22). That release has shipped in Debian ever since, all the way back to Debian 3.0 "woody", twenty years ago. Debian also ships 25 uploads on top of this, with 3.22-21 shipping the "3.23pre" release that has been rumored since at least November 2001, according to debian/changelog at least:
procmail (3.22-1) unstable; urgency=low
  * New upstream release, which uses the `standard' format for Maildir
    filenames and retries on name collision. It also contains some
    bug fixes from the 3.23pre snapshot dated 2001-09-13.
  * Removed `sendmail' from the Recommends field, since we already
    have `exim' (the default Debian MTA) and `mail-transport-agent'.
  * Removed suidmanager support. Conflicts: suidmanager (<< 0.50).
  * Added support for DEB_BUILD_OPTIONS in the source package.
  * README.Maildir: Do not use locking on the example recipe,
    since it's wrong to do so in this case.
 -- Santiago Vila <>  Wed, 21 Nov 2001 09:40:20 +0100
All Debian suites from buster onwards ship the 3.22-26 release, although the maintainer just pushed a 3.22-27 release to fix a seven-year-old null pointer dereference, after this article was drafted. Procmail is also shipped in all major distributions: Fedora and its derivatives, Debian derivatives, Gentoo, Arch, FreeBSD, OpenBSD. We all seem to be ignoring this problem. The upstream website ( has been down since about 2015, according to Debian bug #805864, with no change since. In effect, every distribution is currently maintaining its own fork of this dead program. Note that, after filing a bug to keep Debian from shipping procmail in a stable release again, I was told that the Debian maintainer is apparently in contact with the upstream. And, surprise! They still plan to release that fabled 3.23 release, which has now been in "pre-release" for all those twenty years. In fact, it turns out that 3.23 is considered released already, and that the procmail author actually pushed a 3.24 release, codenamed "Two decades of fixes". That amounts to 25 commits since 3.23pre, some of which address serious security issues, but none of which address the fundamental issues with the code base.

Procmail is insecure By default, procmail is installed SUID root:mail in Debian. There's no debconf or pre-seed setting that can change this. There have been two bug reports against the Debian package to make this configurable (298058, 264011), but both were closed saying that, basically, you should use dpkg-statoverride to change the permissions on the binary. So if anything, you should immediately run this command on any host that has procmail installed:
dpkg-statoverride --update --add root root 0755 /usr/bin/procmail
Note that this might break email delivery. It might also not work at all, thanks to usrmerge. Not sure. Yes, everything is on fire. This is fine. In my opinion, even assuming we keep procmail in Debian, that default should be reversed. It should be up to people installing procmail to assign it those dangerous permissions, after careful consideration of the risk involved. The last maintainer of procmail explicitly advised us (in that null pointer dereference bug) and other projects (e.g. OpenBSD, in [2]) to stop shipping it, back in 2014. Quote:
Executive summary: delete the procmail port; the code is not safe and should not be used as a basis for any further work.
I just read some of the code again this morning, after the original author claimed that procmail was active again. It's still littered with bizarre macros like:
#define bit_set(name,which,value) \
  (value?(name[bit_index(which)]|=bit_mask(which)):\
... from regexp.c, line 66 (yes, that's a custom regex engine). Or this one:
#define jj  (
It uses insecure functions like strcpy extensively. malloc() is thrown around gotos like it's 1984 all over again. (To be fair, it has been feeling like 1984 a lot lately, but that's another matter entirely.) That null pointer deref bug? It's fixed upstream now, in this commit merged a few hours ago, which I presume might be in response to my request to remove procmail from Debian. So while that's nice, this is just the tip of the iceberg. I speculate that one could easily find an exploitable crash in procmail just by running it through a fuzzer. But I don't need to speculate: procmail had, for years, serious security issues that could possibly lead to root privilege escalation, remotely exploitable if procmail is (as it is designed to be) exposed to the network. Maybe I'm overreacting. Maybe the procmail author will go through the code base and do a proper rewrite. But I don't think that's what is in the cards right now. What I expect will happen next is that people will start fuzzing procmail and throw an uncountable number of bug reports at it, which will get fixed in a trickle while never fixing the underlying, serious design flaws behind procmail.

Procmail has better alternatives The reason this is so frustrating is that there are plenty of modern alternatives to procmail which do not suffer from those problems. Alternatives to procmail(1) itself are typically part of mail servers. For example, Dovecot has its own LDA which implements the standard Sieve language (RFC 5228). (Interestingly, Sieve was published as RFC 3028 in 2001, before procmail was formally abandoned.) Courier also has "maildrop" which has its own filtering mechanism, and there is fdm (2007), which is a fetchmail and procmail replacement. Update: there's also mailprocessing, which is not an LDA, but processes an existing folder. It was, however, specifically designed to replace complex Procmail rules. But the procmail package, of course, doesn't just ship procmail; that would be too easy. It ships mailstat(1), which we could probably ignore because it only parses procmail log files. But more importantly, it also ships:
  • lockfile(1) - conditional semaphore-file creator
  • formail(1) - mail (re)formatter
lockfile(1) already has a somewhat acceptable replacement in the form of flock(1), part of util-linux (which is Essential, so installed on any normal Debian system). It might not be a direct drop-in replacement, but it should be close enough. formail(1) is similar: the courier maildrop package ships reformail(1) which is, presumably, a rewrite of formail. It's unclear if it's a drop-in replacement, but it should probably be possible to port uses of formail to it easily.
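For example, serializing two writers to the same file, the kind of job lockfile(1) typically did in procmail recipes, looks like this with flock(1); the paths here are made up for the demo:

```shell
# flock(1) takes a lock file (created if missing) and a command to run
# under the lock; concurrent invocations on the same lock file wait
# for each other, so the two appends below never interleave.
rm -f /tmp/demo.out
flock /tmp/demo.lock -c 'echo first >> /tmp/demo.out'
flock /tmp/demo.lock -c 'echo second >> /tmp/demo.out'
cat /tmp/demo.out
```

Unlike lockfile(1), flock(1) uses kernel advisory locks rather than lock files held on disk, so a crashed process never leaves a stale lock behind.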
Update: the maildrop package ships a SUID root binary (two, even). So if you want only reformail(1), you might want to disable that with:
dpkg-statoverride --update --add root root 0755 /usr/bin/lockmail.maildrop 
dpkg-statoverride --update --add root root 0755 /usr/bin/maildrop
It would be perhaps better to have reformail(1) as a separate package, see bug 1006903 for that discussion.
The real challenge is, of course, migrating those old .procmailrc recipes to Sieve (basically). I added a few examples in the appendix below. You might notice the Sieve examples are easier to read, which is a nice added bonus.

Conclusion There is really, absolutely, no reason to keep procmail in Debian, nor should it be used anywhere at this point. It's a great part of our computing history. May it be kept forever in our museums and historical archives, but not in Debian, and certainly not in an actual release. It's just a bomb waiting to go off. It is irresponsible for distributions to keep shipping obsolete and insecure software like this to unsuspecting users. Note that I am grateful to the author, I really am: I used procmail for decades and it served me well. But now, it's time to move on, not to bring it back from the dead.


Previous work It's really weird to have to write this blog post. Back in 2016, I rebuilt my mail setup at home and, to my horror, discovered that procmail had been abandoned for 15 years at that point, thanks to that LWN article from 2010. I would have thought that I was the only weirdo still running procmail after all those years, and felt kind of embarrassed to only "now" switch to the more modern (and, honestly, awesome) Sieve language. But no. Since then, Debian shipped three major releases (stretch, buster, and bullseye), all with the same vulnerable procmail release. Then, in early 2022, I found that, at work, we actually had procmail installed everywhere, possibly because userdir-ldap was using it for lockfile until 2019. I sent a patch to fix that and scrambled to get rid of procmail everywhere. That took about a day. But many other sites are now in that situation, possibly not imagining they have this glaring security hole in their infrastructure.

Procmail to Sieve recipes I'll collect a few Sieve equivalents to procmail recipes here. If you have any additions, do contact me. All Sieve examples below assume you drop the file in ~/.dovecot.sieve.

deliver mail to "plus" extension folder Say you want to deliver to the folder foo. You might write something like this in procmail:
EXTENSION=$1            # Need to rename it - ?? does not like $1 nor 1
* EXTENSION ?? [a-zA-Z0-9]+
That, in sieve language, would be:
require ["variables", "envelope", "fileinto", "subaddress"];
# wildcard +extension
if envelope :matches :detail "to" "*" {
  # Save the name in ${name}, in all lowercase
  set :lower "name" "${1}";
  fileinto "${name}";
}

Subject into folder This would file all mails with a Subject: line having FreshPorts in it into the freshports folder, and mails from mailing lists into the alternc folder:
## mailing list freshports
* ^Subject.*FreshPorts.*
## mailing list alternc
* ^List-Post.*mailto:.**
Equivalent Sieve:
if header :contains "subject" "FreshPorts" {
    fileinto "freshports";
} elsif header :contains "List-Id" "" {
    fileinto "alternc";
}

Mail sent to root to a reports folder This double rule:
* ^Subject: Cron
* ^From: .*root@
Would look something like this in Sieve:
if header :comparator "i;octet" :contains "Subject" "Cron" {
    if header :regex :comparator "i;octet" "From" ".*root@" {
        fileinto "rapports";
    }
}
Note that this is what the automated converter does (below). It's not very readable, but it works.

Bulk email I didn't have an equivalent of this in procmail, but that's something I did in Sieve:
if header :contains "Precedence" "bulk" {
    fileinto "bulk";
}

Any mailing list This is another rule I didn't have in procmail but I found handy and easy to do in Sieve:
if exists "List-Id" {
    fileinto "lists";
}

This or that I wouldn't remember how to do this in procmail either, but that's an easy one in Sieve:
if anyof (header :contains "from" "",
          header :contains ["to", "cc"] "") {
    fileinto "example";
}
You can even pile up a bunch of options together to have one big rule with multiple patterns:
if anyof (exists "X-Cron-Env",
          header :contains ["subject"] ["security run output",
                                        "monthly run output",
                                        "daily run output",
                                        "weekly run output",
                                        "Debian Package Updates",
                                        "Debian package update",
                                        "daily mail stats",
                                        "Anacron job",
                                        "changes report",
                                        "run output",
                                        "Undelivered mail",
                                        "Postfix SMTP server: errors from",
                                        "DenyHosts report",
                                        "Debian security status"],
          header :contains "Auto-Submitted" "auto-generated",
          envelope :contains "from" ["nagios@"]) {
    fileinto "rapports";
}

Automated script There is a script floating around, mentioned in the Dovecot documentation. It didn't work very well for me: I could use it for small things, but I mostly wrote the sieve file from scratch.

Progressive migration Enrico Zini has progressively migrated his procmail setup to Sieve in a clever way: he hooked procmail inside sieve so that he could deliver to the Dovecot LDA and progressively migrate rules one by one, without having a "flag day". See this explanatory blog post for the details, which also shows how to configure Dovecot as an LMTP server with Postfix.

Other examples The Dovecot sieve examples are numerous and also quite useful. At the time of writing, they include virus scanning and spam filtering, vacation auto-replies, includes, archival, and flags.

Harmful considered harmful I am aware that the "considered harmful" title has a long and controversial history, being considered harmful in itself (by some people who are obviously not afraid of contradictions). I have nevertheless deliberately chosen that title, partly to make sure this article gets maximum visibility, but more specifically because I have no doubt that procmail is, clearly, a bad idea at this moment in history.

Developing story I must also add that, incredibly, this story has changed while writing it. This article is derived from this bug I filed in Debian to, quite frankly, kick procmail out of Debian. But filing the bug had the interesting effect of pushing the upstream into action: as mentioned above, they have apparently made a new release and merged a bunch of patches in a new git repository. This doesn't change much of the above, at this moment. If anything significant comes out of this effort, I will try to update this article to reflect the situation. I am actually happy to retract the claims in this article if it turns out that procmail is a stellar example of defensive programming and survives fuzzing attacks. But at this moment, I'm pretty confident that will not happen, at least not in scope of the next Debian release cycle.

23 November 2021

Enrico Zini: Really lossy compression of JPEG

Suppose you have a tool that archives images, or scientific data, and it has a test suite. It would be good to collect sample files for the test suite, but they are often so big one can't really bloat the repository with them. But does the test suite need everything that is in those files? Not necessarily. For example, if one's testing code that reads EXIF metadata, one doesn't care about what is in the image. That technique works extremely well. I can take GRIB files that are several megabytes in size, zero out their data payload, and get nice 1Kb samples for the test suite. I've started to collect and organise the little hacks I use for this into a tool I called mktestsample:
$ mktestsample -v samples1/*
2021-11-23 20:16:32 INFO common samples1/cosmo_2d+0.grib: size went from 335168b to 120b
2021-11-23 20:16:32 INFO common samples1/grib2_ifs.arkimet: size went from 4993448b to 39393b
2021-11-23 20:16:32 INFO common samples1/polenta.jpg: size went from 3191475b to 94517b
2021-11-23 20:16:32 INFO common samples1/test-ifs.grib: size went from 1986469b to 4860b
Those are massive savings, but I'm not satisfied about those almost 94Kb of JPEG:
$ ls -la samples1/polenta.jpg
-rw-r--r-- 1 enrico enrico 94517 Nov 23 20:16 samples1/polenta.jpg
$ gzip samples1/polenta.jpg
$ ls -la samples1/polenta.jpg.gz
-rw-r--r-- 1 enrico enrico 745 Nov 23 20:16 samples1/polenta.jpg.gz
I believe I did all I could: completely blank out image data, set quality to zero, maximize subsampling, and tweak quantization to throw everything away. Still, the result is a 94Kb file that can be gzipped down to 745 bytes. Is there something I'm missing? I suppose JPEG is better at storing an image than at storing the lack of an image. I cannot really complain :) I can still commit compressed samples of large images to a git repository, taking very little data indeed. That's really nice!
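For the GRIB case at least, the numbers are easy to reproduce: a zeroed-out payload is exactly what DEFLATE excels at. Here is a small illustrative sketch (the blob is made up, not a real GRIB file):

```python
import gzip

# Hypothetical sample: a tiny header followed by a zeroed-out data payload,
# mimicking what a blanked test sample looks like on disk
sample = b"GRIB" + bytes(300_000)

compressed = gzip.compress(sample)
print(len(sample), "->", len(compressed))
```

The 300Kb of zeroes collapse to a few hundred bytes, which is why committing gzipped blanked samples to git costs almost nothing.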

8 November 2021

Enrico Zini: An educational debugging session

This morning we realised that a test case failed on Fedora 34 only (the link is in Italian) and we set to debugging. The initial analysis This is the initial reproducer:
$ PROJ_DEBUG=3 python test
test_recipe (tests.test_litota3.TestLITOTA3NordArkimetIFS) ... pj_open_lib(proj.db): call fopen(/lib64/../share/proj/proj.db) - succeeded
proj_create: Open of /lib64/../share/proj/proj.db failed
pj_open_lib(proj.db): call fopen(/lib64/../share/proj/proj.db) - succeeded
proj_create: no database context specified
Cannot instantiate source_crs
EXCEPTION in py_coast(): ProjP: cannot create crs to crs from [EPSG:4326] to [+proj=merc +lon_0=0 +k=1 +x_0=0 +y_0=0 +ellps=WGS84 +datum=WGS84 +over +units=m +no_defs]
Note that opening /lib64/../share/proj/proj.db sometimes succeeds, sometimes fails. It's some kind of Schrödinger path, which works or not depending on how you observe it:
# ls -lad /lib64
lrwxrwxrwx 1 1000 1000 9 Jan 26  2021 /lib64 -> usr/lib64
$ ls -la /lib64/../share/proj/proj.db
-rw-r--r-- 1 root root 8925184 Jan 28  2021 /lib64/../share/proj/proj.db
$ cd /lib64/../share/proj/
$ cd /lib64
$ cd ..
$ cd share
-bash: cd: share: No such file or directory
And indeed, stat(2) finds it, and sqlite doesn't (the file is a sqlite database):
$ stat /lib64/../share/proj/proj.db
  File: /lib64/../share/proj/proj.db
  Size: 8925184     Blocks: 17432      IO Block: 4096   regular file
Device: 33h/51d Inode: 56907       Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2021-11-08 14:09:12.334350779 +0100
Modify: 2021-01-28 05:38:11.000000000 +0100
Change: 2021-11-08 13:42:51.758874327 +0100
 Birth: 2021-11-08 13:42:51.710874051 +0100
$ sqlite3 /lib64/../share/proj/proj.db
Error: unable to open database "/lib64/../share/proj/proj.db": unable to open database file
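The Schrödinger behaviour comes from the two ways of resolving ..: the kernel follows the symlink first, while textual manipulation (which is what bash's logical cd does) strips path components first. A self-contained sketch of the difference, using a throwaway directory tree:

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as root:
    # Recreate the layout in miniature: lib64 -> usr/lib64, data under usr/share
    os.makedirs(os.path.join(root, "usr", "lib64"))
    os.makedirs(os.path.join(root, "usr", "share"))
    os.symlink("usr/lib64", os.path.join(root, "lib64"))

    path = os.path.join(root, "lib64", "..", "share")

    # The kernel resolves the symlink first: lib64 -> usr/lib64, then .. -> usr
    print(os.path.exists(path))                    # True

    # Textual collapsing strips "lib64/.." first and points at a missing dir
    print(os.path.exists(os.path.normpath(path)))  # False
```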
A minimal reproducer Later on we started stripping layers of code towards a minimal reproducer: here it is. It works or doesn't work depending on whether proj is linked explicitly, or via MagPlus:
$ cat
#include <magics/ProjP.h>
int main() {
    magics::ProjP p("EPSG:4326", "+proj=merc +lon_0=0 +k=1 +x_0=0 +y_0=0 +ellps=WGS84 +datum=WGS84 +over +units=m +no_defs");
    return 0;
}
$ g++ -o tc -I/usr/include/magics  -lMagPlus
$ ./tc
proj_create: Open of /lib64/../share/proj/proj.db failed
proj_create: no database context specified
terminate called after throwing an instance of 'magics::MagicsException'
  what():  ProjP: cannot create crs to crs from [EPSG:4326] to [+proj=merc +lon_0=0 +k=1 +x_0=0 +y_0=0 +ellps=WGS84 +datum=WGS84 +over +units=m +no_defs]
Aborted (core dumped)
$ g++ -o tc -I/usr/include/magics -lproj  -lMagPlus
$ ./tc
What is going on here? A difference between the two is the path used to link to the proj library:
$ ldd ./tc | grep proj
        => /lib64/ (0x00007fd4919fb000)
$ g++ -o tc -I/usr/include/magics -lMagPlus
$ ldd ./tc | grep proj
        => /lib64/../lib64/ (0x00007f6d1051b000)
Common sense screams that this should not matter, but we chased an intuition and found that one of the ways proj looks for its database is relative to its shared library. Indeed, gdb in hand, that dladdr call returns /lib64/../lib64/. From /lib64/../lib64/, proj strips two path components from the end, presumably to pass from something like /something/usr/lib/ to /something/usr. So, dladdr returns /lib64/../lib64/, which becomes /lib64/../, which becomes /lib64/../share/proj/proj.db, which exists on the file system and is used as a path to the database. But depending how you look at it, that path might or might not be valid: it passes the stat(2) check that stops the lookup for candidate paths, but sqlite is unable to open it. Why does the other path work? By linking in the other way, dladdr returns /lib64/, which becomes /share/proj/proj.db, which doesn't exist, which triggers a fallback to a PROJ_LIB constant defined at compile time, which is a path that works no matter how you look at it. Why that weird path with libMagPlus? To complete the picture, we found that the library is packaged with an rpath set, which is known to cause trouble:
# readelf -d /usr/lib64/ | grep rpath
 0x000000000000000f (RPATH)              Library rpath: [$ORIGIN/../lib64]
The workaround We found that one can set PROJ_LIB in the environment to override the normal proj database lookup. Building on that, we came up with a simple way to override it on Fedora 34 only:
    if distro is not None and distro.linux_distribution()[:2] == ("Fedora", "34") and "PROJ_LIB" not in os.environ:
        self.env_overrides["PROJ_LIB"] = "/usr/share/proj/"
This has been a most edifying and educational debugging session, with only the necessary modicum of curses and swearwords. Working in a team of excellent people really helps.

2 November 2021

Enrico Zini: help2man and subcommands

help2man is quite nice for autogenerating manpages from command line help, making sure that they stay up to date as command line options evolve. It works quite well, except for commands with subcommands, like Python programs that use argparse's add_subparser. So, here's a quick hack that calls help2man for each subcommand, and stitches everything together in a simple manpage.
import re
import shutil
import sys
import subprocess
import tempfile

# TODO: move to argparse
command = sys.argv[1]

# Use to get the program version
res =[sys.executable, "", "--version"], stdout=subprocess.PIPE, text=True, check=True)
version = res.stdout.strip()

# Call the main commandline help to get a list of subcommands
res =[sys.executable, command, "--help"], stdout=subprocess.PIPE, text=True, check=True)
subcommands = re.sub(r'^.+\{(.+)\}.+$', r'\1', res.stdout, flags=re.DOTALL).split(',')

# Generate a help2man --include file with an extra section for each subcommand
with tempfile.NamedTemporaryFile("wt") as tf:
    print("[>DESCRIPTION]", file=tf)

    for subcommand in subcommands:
        res =
                ["help2man", f"--name={command}", "--section=1",
                 "--no-info", "--version-string=dummy", f"./{command} {subcommand}"],
                stdout=subprocess.PIPE, text=True, check=True)
        subcommand_doc = re.sub(r'^.+\.SH DESCRIPTION', '', res.stdout, flags=re.DOTALL)
        print(".SH ", subcommand.upper(), " SUBCOMMAND", file=tf)
        tf.write(subcommand_doc)

    with open(f"{command}", "rt") as fd:
        shutil.copyfileobj(fd, tf)
    tf.flush()

    # Call help2man on the main command line help, with the extra include file
    # we just generated
            ["help2man", f"--include={}", f"--name={command}",
             "--section=1", "--no-info", f"--version-string={version}",
             "--output=arkimaps.1", "./arkimaps"],

22 October 2021

Enrico Zini: Scanning for imports in Python scripts

I had to package a nontrivial Python codebase, and I needed to put dependencies in I could do git grep -h import | sort -u, then review the output by hand, but I lacked the motivation for it. Much better to take a stab at solving the general problem. One fun part is scanning a directory tree, using ast to find import statements scattered around the code:
class Scanner:
    def __init__(self):
        self.names: Set[str] = set()

    def scan_dir(self, root: str):
        for dirpath, dirnames, filenames, dir_fd in os.fwalk(root):
            for fn in filenames:
                if fn.endswith(".py"):
                    with dirfd_open(fn, dir_fd=dir_fd) as fd:
                        self.scan_file(fd, os.path.join(dirpath, fn))
                st = os.stat(fn, dir_fd=dir_fd)
                if st.st_mode & (stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH):
                    with dirfd_open(fn, dir_fd=dir_fd) as fd:
                            lead = fd.readline()
                        except UnicodeDecodeError:
                        if re_python_shebang.match(lead):
                            self.scan_file(fd, os.path.join(dirpath, fn))

    def scan_file(self, fd: TextIO, pathname: str):"Reading file %s", pathname)
            tree = ast.parse(, pathname)
        except SyntaxError as e:
            log.warning("%s: file cannot be parsed", pathname, exc_info=e)

    def scan_tree(self, tree: ast.AST):
        for stm in tree.body:
            if isinstance(stm, ast.Import):
                for alias in stm.names:
                    if not isinstance(, str):
                        print("NAME", repr(, stm)
            elif isinstance(stm, ast.ImportFrom):
                if stm.module is not None:
            elif hasattr(stm, "body"):
Another fun part is grouping the imported module names by where in sys.path they have been found:
    scanner = Scanner()
    by_sys_path: Dict[str, List[str]] = collections.defaultdict(list)
    for name in sorted(scanner.names):
        spec = importlib.util.find_spec(name)
        if spec is None or spec.origin is None:
            for sp in sys.path:
                if spec.origin.startswith(sp):

    for sys_path, names in sorted(by_sys_path.items()):
        print(f"{sys_path or 'unidentified'}:")
        for name in names:
            print(f"  {name}")
An example. It's kind of nice how it can at least tell apart stdlib modules so one doesn't need to read through those:
$ ./scan-imports  /himblick
Maybe such a tool already exists and works much better than this? From a quick search I didn't find it, and it was fun to (re)invent it. Updates: Jakub Wilk pointed me to an old python-modules script that finds Debian dependencies. The AST scanning code should be refactored to use ast.NodeVisitor.

10 September 2021

Enrico Zini: A nightmare of confcalls and microphones

I had this nightmare where I had a very, very important confcall. I joined with Chrome. Chrome said Failed to access your microphone - Cannot use microphone for an unknown reason. Could not start audio source. I joined with Firefox. Firefox chose Monitor of Built-in Audio Analog Stereo as a microphone, and did not let me change it. Not in the browser, not in pavucontrol. I joined with the browser on my phone, and the webpage said This meeting needs to use your microphone and camera. Select *Allow* when your browser asks for permissions. But the question never came. I could hear people talking. I had very important things to say. I tried typing them in the chat window, but they weren't seeing it. The meeting ended. I was on the verge of tears.
Tell me, Mr. Anderson, what good is a phone call when you are unable to speak?
Since this nightmare happened for real, including the bit about tears in the end, let's see that it doesn't happen again. I should now have three working systems, which hopefully won't all break again all at the same time. Fixing Chrome I can reproduce this reliably, on Bullseye's standard Chromium 90.0.4430.212-1, just launched on an empty profile, no extensions. The webpage has camera and microphone allowed. Chrome doesn't show up in the recording tab of pulseaudio. Nothing on Chrome's stdout/stderr. JavaScript console has:
Logger.js:154 2021-09-10Txx:xx:xx.xxxZ [features/base/tracks] Failed to create local tracks
DOMException: Could not start audio source
I found the answer here:
I had the similar problem once with chromium. i could solve it by switching in preferences->microphone-> from "default" to "intern analog stereo".
Opening the little popup next to the microphone/mute button allows choosing other microphones, which work. Only "Same as system (Default)" does not work. Fixing Firefox I have firefox-esr 78.13.0esr-1~deb11u1. In Jitsi, microphone selection is disabled on the toolbar and in the settings menu. In pavucontrol, changing the recording device for Firefox has no effect. If for some reason the wrong microphone got chosen, those are not ways of fixing it. What I found works is to click on the camera permission icon, remove microphone permission, then reload the page. At that point Firefox will ask for permission again, and that microphone selection seems to work. Relevant bugs: on Jitsi and on Firefox. Since this is well known (once you find the relevant issues), I'd have appreciated Jitsi at least showing a link to an explanation of workarounds on Firefox, instead of just disabling microphone selection. Fixing Jitsi on the phone side I really don't want to preemptively give camera and microphone permissions to my phone browser. I noticed that there's the Jitsi app on F-Droid and much as I hate to use an app when a website would work, at least in this case it's a way to keep the permission sets separate, so I installed that. Fixing pavucontrol? I tried to find out why I can't change input device for FireFox on pavucontrol. I only managed to find an Ask Ubuntu question with no answer and a Unix StackExchange question with no answer.

20 July 2021

Enrico Zini: Run a webserver for a specific user *only*

I'm creating a program that uses the web browser for its user interface, and I'm reasonably sure I'm not the first person doing this. Normally such a problem would listen to a port on localhost, and tell the browser to connect to it. Bonus points for listening to a randomly allocated free port, so that one does not need to involve some amount of luck to get the program started. However, using a local port still means that any user on the local machine can connect to it, which is generally a security issue. A possible solution would be to use AF_UNIX Unix Domain Sockets, which are supported by various web servers, but as far as I understand not currently by browsers. I checked Firefox and Chrome, and they currently seem to fail to even acknowledge the use case. I'm reasonably sure I'm not the first person doing this, and yes, it's intended as an understatement. So, dear Lazyweb, is there a way to securely use a browser as a UI for a user's program, without exposing access to the backend to other users in the system? Access token in the URL Emanuele Di Giacomo suggests to add an access token to the URL that gets passed to the browser. This would work to protect access on localhost: even if the application cannot use HTTPS, other users cannot see packets that go through the local interface, so both the access token and the session cookie that one could send afterwards would be protected. Network namespaces I thought about isolating server and browser in a private network namespace with something like unshare(1), but it seems to require root. Johannes Schauer Marin Rodrigues wrote to correct that:
It's possible to unshare the network namespace by first unsharing the user namespace and thus becoming root which is possible without being root since #898446 got fixed. For example you can run this as the normal user: lxc-usernsexec -- lxc-unshare -s NETWORK -- ip addr If you don't want to depend on lxc, you can write a wrapper in Perl or Python. I have a Perl implementation of that in mmdebstrap.
Firewalling Martin Schuster wrote to suggest another option:
I had the same issue. My approach was "weird", but worked: block /outgoing/ connections to the port, unless the uid is correct. That might be counter-intuitive, but of course all connections /to/ localhost will be done /from/ localhost also. Something like:
iptables -A OUTPUT -p tcp -d localhost --dport 8123 -m owner --uid-owner joe -j ACCEPT
iptables -A OUTPUT -p tcp -d localhost --dport 8123 -j REJECT

30 June 2021

Enrico Zini: Systemd containers with unittest

This is part of a series of posts on ideas for an ansible-like provisioning system, implemented in Transilience. Unit testing some parts of Transilience, like the apt and systemd actions, or remote Mitogen connections, can really use a containerized system for testing. To have that, I reused my work on nspawn-runner to build a simple and very fast system of ephemeral containers, with minimal dependencies, based on systemd-nspawn and btrfs snapshots. Setup To be able to use systemd-nspawn --ephemeral, the chroots need to be btrfs subvolumes. If you are not running on a btrfs filesystem, you can create one to run the tests, even on a file:
fallocate -l 1.5G testfile
/usr/sbin/mkfs.btrfs testfile
sudo mount -o loop testfile test_chroots/
I created a script to setup the test environment, here is an extract:
mkdir -p test_chroots
cat << EOF > "test_chroots/CACHEDIR.TAG"
Signature: 8a477f597d28d172789f06886806bc55
# chroots used for testing transilience, can be regenerated with make-test-chroot
btrfs subvolume create test_chroots/buster
eatmydata debootstrap --variant=minbase --include=python3,dbus,systemd buster test_chroots/buster
CACHEDIR.TAG is a nice trick to tell backup software not to bother backing up the contents of this directory, since it can be easily regenerated. eatmydata is optional, and it speeds up debootstrap quite a bit. Running unittest with sudo Here's a simple helper to drop root as soon as possible, and regain it only when needed. Note that it needs $SUDO_UID and $SUDO_GID, that are set by sudo, to know which user to drop into:
from contextlib import contextmanager

class ProcessPrivs:
    """Drop root privileges and regain them only when needed"""
    def __init__(self):
        self.orig_uid, self.orig_euid, self.orig_suid = os.getresuid()
        self.orig_gid, self.orig_egid, self.orig_sgid = os.getresgid()

        if "SUDO_UID" not in os.environ:
            raise RuntimeError("Tests need to be run under sudo")

        self.user_uid = int(os.environ["SUDO_UID"])
        self.user_gid = int(os.environ["SUDO_GID"])

        self.dropped = False

    def drop(self):
        """Drop root privileges"""
        if self.dropped:
        os.setresgid(self.user_gid, self.user_gid, 0)
        os.setresuid(self.user_uid, self.user_uid, 0)
        self.dropped = True

    def regain(self):
        """Regain root privileges"""
        if not self.dropped:
        os.setresuid(self.orig_suid, self.orig_suid, self.user_uid)
        os.setresgid(self.orig_sgid, self.orig_sgid, self.user_gid)
        self.dropped = False

    def root(self):
        """Regain root privileges for the duration of this context manager"""
        if not self.dropped:
            yield

    def user(self):
        """Drop root privileges for the duration of this context manager"""
        if self.dropped:
            yield

privs = ProcessPrivs()
As soon as this module is loaded, root privileges are dropped, and can be regained for as little as possible using a handy context manager:
   with privs.root():["systemd-run", ...], check=True, capture_output=True)
Using the chroot from test cases The infrastructure to set up and spin down ephemeral machines is relatively simple, once one has worked out the nspawn incantations:
class Chroot:
    """Manage an ephemeral chroot"""
    running_chroots: Dict[str, "Chroot"] = {}

    def __init__(self, name: str, chroot_dir: Optional[str] = None): = name
        if chroot_dir is None:
            self.chroot_dir = self.get_chroot_dir(name)
            self.chroot_dir = chroot_dir
        self.machine_name = f"transilience-{uuid.uuid4()}"

    def start(self):
        Start nspawn on this given chroot.

        The systemd-nspawn command is run contained into its own unit using
        unit_config = [
            # (unit properties elided)
        cmd = ["systemd-run"]
        for c in unit_config:
            cmd.append(f"--property={c}")
            f"--directory={self.chroot_dir}",
            f"--machine={self.machine_name}",
            "--notify-ready=yes"))"%s: starting machine using image %s", self.machine_name, self.chroot_dir)
        log.debug("%s: running %s", self.machine_name, " ".join(shlex.quote(c) for c in cmd))
        with privs.root():
  , check=True, capture_output=True)
        log.debug("%s: started", self.machine_name)
        self.running_chroots[self.machine_name] = self

    def stop(self):
        """Stop the running ephemeral container"""
        cmd = ["machinectl", "terminate", self.machine_name]
        log.debug("%s: running %s", self.machine_name, " ".join(shlex.quote(c) for c in cmd))
        with privs.root():
  , check=True, capture_output=True)
        log.debug("%s: stopped", self.machine_name)
        del self.running_chroots[self.machine_name]

    def create(cls, chroot_name: str) -> "Chroot":
        """Start an ephemeral machine from the given master chroot"""
        res = cls(chroot_name)
        return res

    def get_chroot_dir(cls, chroot_name: str):
        """Locate a master chroot under test_chroots/"""
        chroot_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "test_chroots", chroot_name))
        if not os.path.isdir(chroot_dir):
            raise RuntimeError(f"{chroot_dir} does not exist or is not a chroot directory")
        return chroot_dir
# We need to use atexit, because unittest won't run
# tearDown/tearDownClass/tearDownModule methods in case of KeyboardInterrupt
# and we need to make sure to terminate the nspawn containers at exit
def cleanup():
    # Use a list to prevent changing running_chroots during iteration
    for chroot in list(Chroot.running_chroots.values()):

And here's a TestCase mixin that starts a containerized systems and opens a Mitogen connection to it:
class ChrootTestMixin:
    Mixin to run tests over a setns connection to an ephemeral systemd-nspawn
    container running one of the test chroots
    chroot_name = "buster"

    def setUpClass(cls):
        import mitogen
        from transilience.system import Mitogen = mitogen.master.Broker()
        cls.router = mitogen.master.Router(
        cls.chroot = Chroot.create(cls.chroot_name)
        with privs.root():
            cls.system = Mitogen(
          , "setns", kind="machinectl",
                    container=cls.chroot.machine_name, router=cls.router)

    def tearDownClass(cls):
sudo nose2-3
Spin up time for containers is pretty fast, and the tests drop root as soon as possible, and only regain it for as little as needed. Also, dependencies for all this are minimal and available on most systems, and the setup instructions seem pretty straightforward.

29 June 2021

Enrico Zini: Building a Transilience playbook in a zipapp

This is part of a series of posts on ideas for an ansible-like provisioning system, implemented in Transilience. Mitogen is a great library, but scarily complicated, and I've been wondering how hard it would be to make alternative connection methods for Transilience. Here's a wild idea: can I package a whole Transilience playbook, plus dependencies, in a zipapp, then send the zipapp to the machine to be provisioned, and run it locally? It turns out I can. Creating the zipapp This is somewhat hackish, but until I can rely on Python 3.9's improved importlib.resources module, I cannot think of a better way:
    def zipapp(self, target: str, interpreter=None):
        """Bundle this playbook into a self-contained zipapp"""
        import zipapp
        import jinja2
        import transilience
        if interpreter is None:
            interpreter = sys.executable
        if getattr(transilience.__loader__, "archive", None):
            # Recursively iterating module directories requires Python 3.9+
            raise NotImplementedError("Cannot currently create a zipapp from a zipapp")
        with tempfile.TemporaryDirectory() as workdir:
            # Copy transilience
            shutil.copytree(os.path.dirname(__file__), os.path.join(workdir, "transilience"))
            # Copy jinja2
            shutil.copytree(os.path.dirname(jinja2.__file__), os.path.join(workdir, "jinja2"))
            # Copy argv[0] as
            shutil.copy(sys.argv[0], os.path.join(workdir, ""))
            # Copy argv[0]/roles
            role_dir = os.path.join(os.path.dirname(sys.argv[0]), "roles")
            if os.path.isdir(role_dir):
                shutil.copytree(role_dir, os.path.join(workdir, "roles"))
            # Turn everything into a zipapp
            zipapp.create_archive(workdir, target, interpreter=interpreter, compressed=True)
Since the zipapp contains not just the playbook, the roles, and the roles' assets, but also Transilience and Jinja2, it can run on any system that has a Python 3.7+ interpreter, and nothing else! I added it to the standard set of playbook command line options, so any Transilience playbook can turn itself into a self-contained zipapp:
$ ./provision --help
usage: provision [-h] [-v] [--debug] [-C] [--local LOCAL]
                 [--ansible-to-python role   --ansible-to-ast role   --zipapp file.pyz]
  --zipapp file.pyz     bundle this playbook in a self-contained executable
                        python zipapp
Loading assets from the zipapp I had to create ZipFile varieties of some bits of infrastructure in Transilience, to load templates, files, and Ansible yaml files from zip files. You can see above a way to detect if a module is loaded from a zipfile: check if the module's __loader__ attribute has an archive attribute. Here's a Jinja2 template loader that looks into a zip:
class ZipLoader(jinja2.BaseLoader):
    def __init__(self, archive: zipfile.ZipFile, root: str):
        self.zipfile = archive
        self.root = root
    def get_source(self, environment: jinja2.Environment, template: str):
        path = os.path.join(self.root, template)
        with, "r") as fd:
            source ="utf-8")
        return source, None, lambda: True
I also created a FileAsset abstract interface to represent a local file, and had Role.lookup_file return an appropriate instance:
    def lookup_file(self, path: str) -> str:
        Resolve a pathname inside the place where the role assets are stored.
        Returns a pathname to the file.
        if self.role_assets_zipfile is not None:
            return ZipFileAsset(self.role_assets_zipfile, os.path.join(self.role_assets_root, path))
            return LocalFileAsset(os.path.join(self.role_assets_root, path))
An interesting side effect of having smarter local file accessors is that I can cache the contents of small files and transmit them to the remote host together with the other action parameters, saving a potential network round trip for each builtin.copy action that has a small source. The result The result is kind of fun:
$ time ./provision --zipapp test.pyz
real    0m0.203s
user    0m0.174s
sys 0m0.029s
$ time scp test.pyz root@test:
test.pyz                                                                                                         100%  528KB 388.9KB/s   00:01
real    0m1.576s
user    0m0.010s
sys 0m0.007s
And on the remote:
# time ./test.pyz --local=test
2021-06-29 18:05:41,546 test: [connected 0.000s]
2021-06-29 18:12:31,555 test: 88 total actions in 0.00ms: 87 unchanged, 0 changed, 1 skipped, 0 failed, 0 not executed.
real    0m0.979s
user    0m0.783s
sys 0m0.172s
Compare with a Mitogen run:
$ time PYTHONPATH=../transilience/ ./provision
2021-06-29 18:13:44 test: [connected 0.427s]
2021-06-29 18:13:46 test: 88 total actions in 2.50s: 87 unchanged, 0 changed, 1 skipped, 0 failed, 0 not executed.
real    0m2.697s
user    0m0.856s
sys 0m0.042s
From a single test run, not a good benchmark, it's 0.203 + 1.576 + 0.979 = 2.758s with the zipapp and 2.697s with Mitogen. Even if I've been lucky, it's a similar order of magnitude. What can I use this for? This was mostly a fun hack. It could however be the basis for a Fabric-based connector, or a clusterssh-based connector, or for bundling a Transilience playbook into an installation image, or to add a provisioning script to the boot partition of a Raspberry Pi. It looks like an interesting trick to have up one's sleeve. One could even build an Ansible-based connector(!) in which a simple Ansible playbook, with no facts gathering, is used to build the zipapp, push it to remote systems and run it. That would be the wackiest way of speeding up Ansible, ever! Next: using Systemd containers with unittest, for Transilience's test suite.

26 June 2021

Enrico Zini: Ansible conditionals in Transilience

This is part of a series of posts on ideas for an ansible-like provisioning system, implemented in Transilience. I thought a lot of what I managed to do so far with Transilience would be impossible, but then here I am. How about Ansible conditionals? Those must be impossible, right? Let's give it a try. A quick recon of Ansible sources Looking into Ansible's sources, "when" expressions are lists of strings AND-ed together. The expressions are Jinja2 expressions that Ansible pastes into a mini-template, renders, and checks the string that comes out. A quick recon of Jinja2 Jinja2 has a convenient function (jinja2.Environment.compile_expression) that compiles a template snippet into a Python function. It can also parse a template into an AST that can be inspected in various ways. Evaluating Ansible conditionals in Python Environment.compile_expression seems to really do precisely what we need for this, straight out of the box. There is an issue with the concept of "defined": for Ansible it seems to mean "the variable is present in the template context". In Transilience instead, all variables are fields in the Role dataclass, and can be None when not set. This means that we need to remove variables that are set to None before passing the parameters to the compiled Jinja2 expression:
class Conditional:
    """An Ansible conditional expression"""

    def __init__(self, engine: template.Engine, body: str):
        # Original unparsed expression
        self.body: str = body
        # Expression compiled to a callable
        self.expression: Callable = engine.env.compile_expression(body)

    def evaluate(self, ctx: Dict[str, Any]):
        ctx = {name: val for name, val in ctx.items() if val is not None}
        return self.expression(**ctx)
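Outside of Transilience, compile_expression can be exercised on its own; a minimal sketch using only jinja2:

```python
import jinja2

env = jinja2.Environment()
# Compile an Ansible-style conditional into a plain Python callable
expr = env.compile_expression("(is_test is defined and is_test) or debug is defined")

# Variables simply left out of the call are seen as undefined by the expression
result_set = expr(is_test=True)
result_unset = expr()
print(result_set, result_unset)
```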
Generating Python code Transilience does not only support running Ansible roles, but also converting them to Python code. I can keep this up by traversing the Jinja2 AST generating Python expressions. The code is straightforward enough that I can throw in a bit of pattern matching to make some expressions more idiomatic for Python:
class Conditional:
    def __init__(self, engine: template.Engine, body: str):
        parser = jinja2.parser.Parser(engine.env, body, state='variable')
        self.jinja2_ast: nodes.Node = parser.parse_expression()

    def get_python_code(self) -> str:
        return to_python_code(self.jinja2_ast)


def to_python_code(node: nodes.Node) -> str:
    if isinstance(node, nodes.Name):
        if node.ctx == "load":
            return f"self.{node.name}"
        else:
            raise NotImplementedError(f"jinja2 Name nodes with ctx={node.ctx!r} are not supported: {node!r}")
    elif isinstance(node, nodes.Test):
        if node.name == "defined":
            return f"{to_python_code(node.node)} is not None"
        elif node.name == "undefined":
            return f"{to_python_code(node.node)} is None"
        else:
            raise NotImplementedError(f"jinja2 Test nodes with name={node.name!r} are not supported: {node!r}")
    elif isinstance(node, nodes.Not):
        if isinstance(node.node, nodes.Test):
            # Special case: match well-known structures for more idiomatic Python
            if node.node.name == "defined":
                return f"{to_python_code(node.node.node)} is None"
            elif node.node.name == "undefined":
                return f"{to_python_code(node.node.node)} is not None"
        elif isinstance(node.node, nodes.Name):
            return f"not {to_python_code(node.node)}"
        return f"not ({to_python_code(node.node)})"
    elif isinstance(node, nodes.Or):
        return f"({to_python_code(node.left)} or {to_python_code(node.right)})"
    elif isinstance(node, nodes.And):
        return f"({to_python_code(node.left)} and {to_python_code(node.right)})"
    else:
        raise NotImplementedError(f"jinja2 {node.__class__} nodes are not supported: {node!r}")
Scanning for variables Lastly, I can implement scanning conditionals for variable references to add as fields to the Role dataclass:
class FindVars(jinja2.visitor.NodeVisitor):
    def __init__(self):
        self.found: Set[str] = set()

    def visit_Name(self, node):
        if node.ctx == "load":
            self.found.add(node.name)


class Conditional:
    ...
    def list_role_vars(self) -> Sequence[str]:
        fv = FindVars()
        fv.visit(self.jinja2_ast)
        return fv.found
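For comparison, Jinja2 itself ships a helper that does a similar scan on a parsed template, jinja2.meta.find_undeclared_variables; a quick sketch:

```python
import jinja2
import jinja2.meta

env = jinja2.Environment()
# Parse a template and list the variables it expects from the context
ast = env.parse("{% if is_test is defined and is_test %}yes{% endif %}{{ debug }}")
found = jinja2.meta.find_undeclared_variables(ast)
print(sorted(found))
```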
The result in action Take this simple Ansible task:
 - name: Example task
   file:
      state: touch
      path: /tmp/test
   when: (is_test is defined and is_test) or debug is defined
Run it through ./provision --ansible-to-python test and you get:
from __future__ import annotations
from typing import Any
from transilience import role
from transilience.actions import builtin, facts


class Role(role.Role):
    # Role variables used by templates
    debug: Any = None
    is_test: Any = None

    def all_facts_available(self):
        if ((self.is_test is not None and self.is_test)
                or self.debug is not None):
            self.add(
                builtin.file(path='/tmp/test', state='touch'),
                name='Example task')
Besides one harmless set of parentheses too many, what I wasn't sure would be possible is there, right there, staring at me with a mischievous grin. Next: Building a Transilience playbook in a zipapp.

25 June 2021

Enrico Zini: Parsing YAML

This is part of a series of posts on ideas for an Ansible-like provisioning system, implemented in Transilience. The time has come for me to try and prototype whether it's possible to load some Transilience roles from Ansible's YAML instead of Python. The data models of Transilience and Ansible are not exactly the same. Some of the differences that come to mind: To simplify the work, I'll start from loading a single role out of Ansible, not an entire playbook. TL;DR: scroll to the bottom of the post for the conclusion! Loading tasks The first problem of loading an Ansible task is to figure out which of the keys is the module name. I have so far failed to find precise reference documentation about what keywords are used to define a task, so I'm going by guesswork, and if needed a look at Ansible's sources. My first attempt goes by excluding all known non-module keywords:
        candidates = []
        for key in task_info.keys():
            if key in ("name", "args", "notify"):
                continue
            candidates.append(key)

        if len(candidates) != 1:
            raise RoleNotLoadedError(f"could not find a known module in task {task_info!r}")

        modname = candidates[0]
        if modname.startswith("ansible.builtin."):
            name = modname[16:]
        else:
            name = modname
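As a self-contained sketch of this detection logic (the keyword list here is abbreviated, and ValueError is a stand-in for RoleNotLoadedError):

```python
# Known task keywords that are not module names; trimmed for the example
KNOWN_KEYWORDS = ("name", "args", "notify", "when")

def find_module_name(task_info: dict) -> str:
    # The module name is whatever key is left once task keywords are excluded
    candidates = [key for key in task_info if key not in KNOWN_KEYWORDS]
    if len(candidates) != 1:
        raise ValueError(f"could not find a known module in task {task_info!r}")
    modname = candidates[0]
    # Normalize fully qualified builtin names to their short form
    if modname.startswith("ansible.builtin."):
        return modname[len("ansible.builtin."):]
    return modname

name = find_module_name({"name": "Example task", "ansible.builtin.file": {"path": "/tmp/test"}})
print(name)
```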
This means that Ansible keywords like when or with will break the parsing, and that's fine since they are not supported yet. args seems to carry arguments to the module when the module's main argument is not a dict, as may happen at least with the command module. Task parameters One can do all sorts of chaotic things to pass parameters to Ansible tasks: for example, string lists can be lists of strings or strings with comma-separated lists, they can be preprocessed via Jinja2 templating, and they can be complex data structures that might contain strings that need Jinja2 preprocessing. I ended up mapping the behaviours I encountered in an AST-like class hierarchy which includes recursive complex structures. Variables Variables look hard: Ansible has a big free messy cauldron of global variables, and Transilience needs a predefined list of per-role variables. However, variables are mainly used inside Jinja2 templates, and Jinja2 can parse them into an Abstract Syntax Tree and has useful methods to examine its AST. Using that, I managed with reasonable effort to scan an Ansible role and generate a list of all the variables it uses! I can then use that list, filter out facts-specific names like ansible_domain, and use it to add variable definitions to the Transilience roles. That is exciting! Handlers Before loading tasks, I load handlers as one-action roles, and index them by name. When an Ansible task notifies a handler, I can then look up by name the roles I generated in the earlier pass, and I have all that I need. Parsed Abstract Syntax Tree Most of the results of all this parsing started looking like an AST, so I changed the rest of the prototype to generate an AST. This means that, for a well defined subset of Ansible's YAML, there now exists a tool that is able to parse it into an AST and reason with it. Transilience's playbooks gained an --ansible-to-ast option to parse an Ansible role and dump the resulting AST as JSON:
$ ./provision --help
usage: provision [-h] [-v] [--debug] [-C] [--ansible-to-python role]
                 [--ansible-to-ast role]
Provision my VPS
optional arguments:
  -C, --check           do not perform changes, but check if changes would be
  --ansible-to-ast role
                        print the AST of the given Ansible role as understood
                        by Transilience
The result is extremely verbose, since every parameter is itself a node in the tree, but I find it interesting. Here is, for example, a node for an Ansible task which has a templated parameter:
      "node": "task",
      "action": "builtin.blockinfile",
          "node": "parameter",
          "type": "scalar",
          "value": "/etc/aliases"
          "node": "parameter",
          "type": "template_string",
          "value": "root:  postmaster \n % for name, dest in aliases.items() % \n name :  dest \n % endfor % \n"
        "name": "configure /etc/aliases",
        "blockinfile":  ,
        "notify": "reread /etc/aliases"
      "notify": [
Here's a node for an Ansible template task converted to Transilience's model:
      "node": "task",
      "action": "builtin.copy",
          "node": "parameter",
          "type": "scalar",
          "value": "/etc/dovecot/local.conf"
          "node": "parameter",
          "type": "template_path",
          "value": "dovecot.conf"
        "name": "configure dovecot",
        "template":  ,
        "notify": "restart dovecot"
      "notify": [
Executing The first iteration of prototype code for executing parsed Ansible roles is a little exercise in closures and dynamically generated types:
    def get_role_class(self) -> Type[Role]:
        # If we have handlers, instantiate role classes for them
        handler_classes = {}
        for name, ansible_role in self.handlers.items():
            handler_classes[name] = ansible_role.get_role_class()

        # Create all the functions to start actions in the role
        start_funcs = []
        for task in self.tasks:
            start_funcs.append(task.get_start_func(handlers=handler_classes))

        # Function that calls all the 'Action start' functions
        def role_main(self):
            for func in start_funcs:
                func(self)

        if self.uses_facts:
            role_cls = type(self.name, (Role,), {
                "start": lambda host: None,
                "all_facts_available": role_main
            })
            role_cls = dataclass(role_cls)
            role_cls = with_facts(facts.Platform)(role_cls)
        else:
            role_cls = type(self.name, (Role,), {
                "start": role_main
            })
            role_cls = dataclass(role_cls)

        return role_cls
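The three-argument form of type() used above is plain Python: it builds a class at runtime from a name, a tuple of bases, and a namespace dict, and the result can be passed through dataclass() like any other class. A minimal standalone demonstration, with names made up for the example:

```python
from dataclasses import dataclass

@dataclass
class Base:
    name: str = "base"

def make_greeter(cls_name: str):
    # Build a class at runtime, then decorate it like a normal dataclass
    def greet(self):
        return f"hello from {self.name}"
    new_cls = type(cls_name, (Base,), {"greet": greet})
    return dataclass(new_cls)

Greeter = make_greeter("Greeter")
g = Greeter(name="dynamic")
print(g.greet())
```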
Now that the parsed Ansible role is a proper AST, I'm considering redesigning that using a generic Role class that works as an AST interpreter. Generating Python I maintain a library that can turn an invoice into Python code, and I have a convenient AST. I can't not generate Python code out of an Ansible role!
$ ./provision --help
usage: provision [-h] [-v] [--debug] [-C] [--ansible-to-python role]
                 [--ansible-to-ast role]
Provision my VPS
optional arguments:
  --ansible-to-python role
                        print the given Ansible role as Transilience Python
  --ansible-to-ast role
                        print the AST of the given Ansible role as understood
                        by Transilience
And will you look at this annotated extract:
$ ./provision --ansible-to-python mailserver
from __future__ import annotations
from typing import Any
from transilience import role
from transilience.actions import builtin, facts
# Role classes generated from Ansible handlers!
class ReloadPostfix(role.Role):
    def start(self):
        self.add(
            builtin.systemd(unit='postfix', state='reloaded'),
            name='reload postfix')


class RestartDovecot(role.Role):
    def start(self):
        self.add(
            builtin.systemd(unit='dovecot', state='restarted'),
            name='restart dovecot')


# The role, including a standard set of facts
class Role(role.Role):
    # These are the variables used by Jinja2 template files and strings. I need
    # to use Any, since Ansible variables are not typed
    aliases: Any = None
    myhostname: Any = None
    postmaster: Any = None
    virtual_domains: Any = None

    def all_facts_available(self):
        # A Jinja2 string inside a string list!
        self.add(
            builtin.command(argv=[
                    'certbot', 'certonly', '-d',
                    self.render_string('mail.{{ansible_domain}}'), '-n',
                ], creates=self.render_string(
                    '/etc/letsencrypt/live/mail.{{ansible_domain}}/fullchain.pem')),
            name='obtain mail.* letsencrypt certificate')

        # A converted template task!
        self.add(
            ...,
            name='configure dovecot',
            # Notify referring to the corresponding Role class!
            notify=RestartDovecot)

        # Referencing a variable collected from a fact!
        self.add(
            builtin.copy(dest='/etc/mailname', content=self.ansible_domain),
            name='configure /etc/mailname',
            ...
Conclusion Transilience can load a (growing) subset of Ansible syntax, one role at a time. The role loader in Transilience now looks for YAML when it does not find a Python module, and runs it pipelined and fast! There is code to generate Python code from an Ansible role: you can take an Ansible role, convert it to Python, and then work on it to add more complex logic, or clean it up for adding it to a library of reusable roles! Next: Ansible conditionals

23 June 2021

Enrico Zini: Transilience check mode

This is part of a series of posts on ideas for an ansible-like provisioning system, implemented in Transilience. I added check mode to Transilience, to do everything except perform changes, like Ansible does:
$ ./provision --help
usage: provision [-h] [-v] [--debug] [-C] [--to-python role]
Provision my VPS
optional arguments:
  -h, --help        show this help message and exit
  -v, --verbose     verbose output
  --debug           verbose output
  -C, --check       do not perform changes, but check if changes would be    NEW!
                    needed                                                   NEW!
It was quite straightforward to add a new field to the base Action class, and tweak the implementations not to perform changes if it is True:
# Shortcut function to annotate dataclass fields with documentation metadata
def doc(default: Any, doc: str, **kw):
    return field(default=default, metadata={"doc": doc})


@dataclass
class Action:
    check: bool = doc(False, "when True, check if the action would perform changes, but do nothing")
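A minimal sketch of how an action implementation can honour such a flag (a toy action, not Transilience's actual file action):

```python
from dataclasses import dataclass
import os
import tempfile

@dataclass
class Touch:
    # A toy action: create an empty file unless check mode is active
    path: str
    check: bool = False
    changed: bool = False

    def run(self):
        if os.path.exists(self.path):
            return
        # A change would be needed: record it, but only act outside check mode
        self.changed = True
        if self.check:
            return
        with open(self.path, "w"):
            pass

tmp = tempfile.mkdtemp()
target = os.path.join(tmp, "test")

a = Touch(path=target, check=True)
a.run()
print(a.changed, os.path.exists(target))

b = Touch(path=target)
b.run()
print(b.changed, os.path.exists(target))
```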
Like with Ansible, check mode takes about the same time as a normal run which does not perform changes. Unlike Ansible, with Transilience this is actually pretty fast! ;) Next step: parsing YAML!

18 June 2021

Enrico Zini: Playbooks, host vars, group vars

This is part of a series of posts on ideas for an ansible-like provisioning system, implemented in Transilience. Host variables Ansible allows specifying per-host variables, and I like that. Let's try to model a host as a dataclass:
@dataclass
class Host:
    """A host to be provisioned."""
    name: str
    type: str = "Mitogen"
    args: Dict[str, Any] = field(default_factory=dict)

    def _make_system(self) -> System:
        cls = getattr(transilience.system, self.type)
        return cls(self.name, **self.args)
This should have enough information to create a connection to the host, and can be subclassed to add host-specific dataclass fields. Host variables can then be provided as default constructor arguments when instantiating Roles:
    # Add host/group variables to role constructor args
    host_fields = {f.name: f for f in fields(host)}
    for field in fields(role_cls):
        if field.name in host_fields:
            role_kwargs.setdefault(field.name, getattr(host, field.name))
    role = role_cls(**role_kwargs)
Group variables It looks like I can model groups and group variables by using dataclasses as mixins:
@dataclass
class Webserver:
    server_name: str = ""


@dataclass
class Srv1(Webserver):
    ...
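A small self-contained sketch of this mixin approach, with made-up hosts, to show that group membership reduces to an isinstance test:

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str = ""

@dataclass
class Webserver:
    # Group variables live on the mixin as ordinary dataclass fields
    server_name: str = "www.example.org"

@dataclass
class Srv1(Host, Webserver):
    name: str = "srv1"

@dataclass
class Srv2(Host):
    name: str = "srv2"

hosts = [Srv1(), Srv2()]
# "All members of the webserver group" is just an isinstance filter
webservers = [h for h in hosts if isinstance(h, Webserver)]
print([h.name for h in webservers])
```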
Doing things like filtering all hosts that are members of a given group can be done with a simple isinstance or issubclass test. Playbooks So far Transilience executes on one host at a time, while Ansible can execute on a whole host inventory. Since most of the work of running a playbook is I/O bound, we can parallelize hosts using threads, without worrying too much about the performance impact of the GIL. Let's introduce a Playbook class as the main entry point for a playbook:
class Playbook:
    def setup_logging(self):
        ...

    def make_argparser(self):
        description = inspect.getdoc(self)
        if not description:
            description = "Provision systems"

        parser = argparse.ArgumentParser(description=description)
        parser.add_argument("-v", "--verbose", action="store_true",
                            help="verbose output")
        parser.add_argument("--debug", action="store_true",
                            help="verbose output")
        return parser

    def hosts(self) -> Sequence[Host]:
        """
        Generate a sequence with all the systems on which the playbook needs to run
        """
        return ()

    def start(self, runner: Runner):
        """
        Start the playbook on the given runner.

        This method is called once for each system returned by hosts()
        """
        raise NotImplementedError(f"{self.__class__.__name__}.start is not implemented")

    def main(self):
        parser = self.make_argparser()
        self.args = parser.parse_args()
        self.setup_logging()

        # Start all the runners in separate threads
        threads = []
        for host in self.hosts():
            runner = Runner(host)
            self.start(runner)
            t = threading.Thread(target=runner.main)
            threads.append(t)
            t.start()

        # Wait for all threads to complete
        for t in threads:
            t.join()
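Taken in isolation, the thread fan-out in main() is the standard start/join pattern:

```python
import threading

results = []
lock = threading.Lock()

def provision(hostname: str):
    # Stand-in for Runner.main(): each host is handled in its own thread
    with lock:
        results.append(hostname)

threads = []
for hostname in ("srv1", "srv2", "srv3"):
    t = threading.Thread(target=provision, args=(hostname,))
    threads.append(t)
    t.start()

# Wait for all hosts to finish before reporting
for t in threads:
    t.join()

print(sorted(results))
```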
And an actual playbook will now look like something like this:
from dataclasses import dataclass
import sys

from transilience import Playbook, Host


@dataclass
class MyServer(Host):
    srv_root: str = "/srv"
    site_admin: str = ""


class VPS(Playbook):
    """
    Provision my VPS
    """

    def hosts(self):
        yield MyServer(name="server", args={
            "method": "ssh",
            "hostname": "",
            "username": "root",
        })

    def start(self, runner):
        runner.add_role("mailserver", aliases=...)


if __name__ == "__main__":
    sys.exit(VPS().main())
It looks quite straightforward to me, works on any number of hosts, and has a proper command line interface:
$ ./provision --help
usage: provision [-h] [-v] [--debug]
Provision my VPS
optional arguments:
  -h, --help     show this help message and exit
  -v, --verbose  verbose output
  --debug        verbose output
Next step: check mode!