Search Results: "mf"

12 July 2025

Reproducible Builds: Reproducible Builds in June 2025

Welcome to the 6th report from the Reproducible Builds project in 2025. Our monthly reports outline what we've been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. If you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website. In this report:
  1. Reproducible Builds at FOSSY 2025
  2. Distribution work
  3. diffoscope
  4. OSS Rebuild updates
  5. Website updates
  6. Upstream patches
  7. Reproducibility testing framework

Reproducible Builds at FOSSY 2025 On Saturday 2nd August, Vagrant Cascadian and Chris Lamb will be presenting at this year's FOSSY 2025. Their talk, titled "Never Mind the Checkboxes, Here's Reproducible Builds!", is being introduced as follows:
There are numerous policy compliance and regulatory processes being developed that target software development, but do they solve actual problems? Does it improve the quality of software? Do Software Bill of Materials (SBOMs) actually give you the information necessary to verify how a given software artifact was built? What is the goal of all these compliance checklists anyway? Or, more importantly, what should the goals be? If a software object is signed, who should be trusted to sign it, and can they be trusted forever?
The talk will introduce the audience to Reproducible Builds as a set of best practices that allow users and developers to verify that software artifacts were built from the source code, but that also allow auditing for license compliance, provide security benefits, and remove the need to trust arbitrary software vendors. Hosted by the Software Freedom Conservancy and taking place in Portland, Oregon, USA, FOSSY aims to be a community-focused event: "Whether you are a long time contributing member of a free software project, a recent graduate of a coding bootcamp or university, or just have an interest in the possibilities that free and open source software bring, FOSSY will have something for you". More information on the event is available on the FOSSY 2025 website, including the full programme schedule. Vagrant and Chris will also be staffing a table this year, where they will be available to answer any questions about Reproducible Builds and discuss collaborations with other projects.

Distribution work In Debian this month:
  • Holger Levsen has discovered that it is now possible to bootstrap a minimal Debian trixie using 100% reproducible packages. This result can itself be reproduced, using the debian-repro-status tool and mmdebstrap's support for hooks:
      $ mmdebstrap --variant=apt --include=debian-repro-status \
           --chrooted-customize-hook=debian-repro-status \
           trixie /dev/null 2>&1 | grep "Your system has"
       INFO  debian-repro-status > Your system has 100.00% been reproduced.
    
  • On our mailing list this month, Helmut Grohne wrote an extensive message raising an issue related to Uploads with conflicting buildinfo filenames:
    Having several .buildinfo files for the same architecture is something that we plausibly want to have eventually. Imagine running two sets of buildds and assembling a single upload containing buildinfo files from both buildds in the same upload. In a similar vein, as a developer I may want to supply several .buildinfo files with my source upload (e.g. for multiple architectures). Doing any of this is incompatible with current incoming processing and with reprepro.
  • 5 reviews of Debian packages were added, 4 were updated and 8 were removed this month, adding to our ever-growing knowledge about identified issues.

In GNU Guix, Timothee Mathieu reported that a long-standing issue with the reproducibility of shell containers across different host operating systems has been solved. In their message, Timothee mentions:
I discovered that pytorch (and maybe other dependencies) has a reproducibility problem of order 1e-5 when on AVX512 compared to AVX2. I first tried to solve the problem by disabling AVX512 at the level of pytorch, but it did not work. The dev of pytorch said that it may be because some components dispatch computation to MKL-DNN, I tried to disable AVX512 on MKL, and still the results were not reproducible, I also tried to deactivate in openmpi without success. I finally concluded that there was a problem with AVX512 somewhere in the dependencies graph but I gave up identifying where, as this seems very complicated.

The IzzyOnDroid Android APK repository made more progress in June. Not only have they just passed 48% reproducibility coverage, but Ben also started making their reproducible builds more visible by offering rbtlog shields, a kind of badge that has been quickly picked up by many developers who are proud to present their applications' reproducibility status.
Lastly, in openSUSE news, Bernhard M. Wiedemann posted another monthly update for their work there.

diffoscope diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 298, 299 and 300 to Debian:
  • Add python3-defusedxml to the Build-Depends in order to include it in the Docker image. [ ]
  • Handle the RPM format s HEADERSIGNATURES and HEADERIMMUTABLE as a special-case to avoid unnecessarily large diffs. Thanks to Daniel Duan for the report and suggestion. [ ][ ]
  • Update copyright years. [ ]
In addition, @puer-robustus fixed a regression introduced in an earlier commit which resulted in some differences being lost. [ ][ ] Lastly, Vagrant Cascadian updated diffoscope in GNU Guix to version 299 [ ][ ] and 300 [ ][ ].
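As a quick illustration of diffoscope in use (a minimal sketch; the package paths are our own), comparing two builds of the same package writes a structured report of any differences and exits non-zero if the artifacts differ:
$ # compare two builds of the same package, saving a text report
$ diffoscope --text diff.txt build1/hello_1.0_amd64.deb build2/hello_1.0_amd64.deb
$ # exit status 0 means no differences were found; 1 means differences exist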

OSS Rebuild updates OSS Rebuild has added a new network analyzer that provides transparent HTTP(S) interception during builds, capturing all network traffic to monitor external dependencies and identify suspicious behavior, even in unmodified maintainer-controlled build processes. The text-based user interface now features automated failure clustering that can group similar rebuild failures and provides natural language failure summaries, making it easier to identify and understand patterns across large numbers of build failures. OSS Rebuild has also improved the local development experience with a unified interface for build execution strategies, allowing for more extensible environment setup for build execution. The team also designed a new website and logo.

Website updates Once again, there were a number of improvements made to our website this month including:
  • Arnaud Brousseau added Stage, a new Linux distribution, to our Tools page.
  • Chris Lamb improved the docker instructions on the diffoscope website. [ ]


Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In June, a number of changes were made by Holger Levsen, including:
  • reproduce.debian.net-related:
    • Installed and deployed rebuilderd version 0.24 from Debian unstable in order to make use of the new compression feature added by Jarl Gullberg for the database. This resulted in a massive decrease in the size of the SQLite databases:
      • 79G → 2.8G (all)
      • 84G → 3.2G (amd64)
      • 75G → 2.9G (arm64)
      • 45G → 2.1G (armel)
      • 48G → 2.2G (armhf)
      • 73G → 2.8G (i386)
      • 72G → 2.7G (ppc64el)
      • 45G → 2.1G (riscv64)
      for a combined saving of 521G → 20.8G. This naturally reduces the requirements to run an independent rebuilderd instance and will permit us to add more Debian suites as well.
    • During migration to the latest version of rebuilderd, make sure several services are not started. [ ]
    • Actually run rebuilderd from /usr/bin. [ ]
    • Raise the temperature thresholds for NVMe devices on some riscv64 nodes so that harmless high readings are ignored. [ ][ ]
    • Use a 64KB kernel page size on the ppc64el architecture (see #1106757). [ ]
    • Improve the ordering of some "failed to reproduce" statistics. [ ]
    • Detect a number of potential causes of build failures within the statistics. [ ][ ]
    • Add support for manually scheduling builds for the "any" architecture. [ ]
  • Misc:
    • Update the Codethink nodes as there are now many kernels installed. [ ][ ]
    • Install linux-sysctl-defaults on Debian trixie systems as we need ping functionality. [ ]
    • Limit the fs.nr_open kernel tunable. [ ]
    • Stop submitting results to the deprecated buildinfo.debian.net service. [ ][ ]
In addition, Jochen Sprickerhof greatly improved the statistics and the logging functionality, including adapting to the new database format of rebuilderd version 0.24.0 [ ] and temporarily increasing the maximum log size in order to debug a nettlesome build [ ]. Jochen also dropped the CPUSchedulingPolicy=idle systemd flag on the workers. [ ]

Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:

10 July 2025

Tianon Gravi: Yubi Whati? (YubiKeys, ECDSA, and X.509)

Off-and-on over the last several weeks, I've been spending time trying to learn/understand YubiKeys better, especially from the perspective of ECDSA and signing. I had a good mental model for how "slots" work (canonically referenced by their hexadecimal names such as 9C), but found that it had a gap related to "objects"; while closing that, I was annoyed that the main reference table for this gap lives primarily in either a PDF or inside several implementations, so I figured I should create the reference I want to see in the world, but that it would also be useful to write down some of my understanding for my own (and maybe others') future reference.

So, to that end, I'm going to start with a bit of background information, with the heavy caveat that this only applies to "PIV" ("FIPS 201") usage of YubiKeys, and that I only actually care about ECDSA, although I've been reassured that it's the same for at least RSA (anything outside this is firmly Here Be Not Tianon; "gl hf dd"). (Incidentally, learning all this helped me actually appreciate the simplicity of cloud-based KMS solutions, which was an unexpected side effect.)

At a really high level, ECDSA is like many other (asymmetric) cryptographic solutions: you've got a public key and a private key, the private key can be used to "sign" data (tiny amounts of data, in fact, like P-256 can only reasonably sign 256 bits of data, which is where cryptographic hashes like SHA256 come in as secure analogues for larger data in small bit sizes), and the public key can then be used to verify that the data was indeed signed by the private key, and only someone with the private key could've done so. There's some complex math and RNGs involved, but none of that's actually relevant to this post, so find that information elsewhere.

Unfortunately, this is where things go off the rails: PIV is X.509 ("x509") heavy, and there's no X.509 in the naïve view of my use case. In a YubiKey (or any other PIV-signing-supporting smart card? do they actually have competitors in this specific niche?), a given "slot" can hold one single private key. There are ~24 slots which can hold a private key and be used for signing, although "Slot 9c" is officially designated as the "Digital Signature" slot and is encouraged for signing purposes.

One of the biggest gotchas is that with pure-PIV (and older YubiKey firmware) the public key for a given slot is only available at the time the key is generated, and the whole point of the device in the first place is that the private key is never, ever available from it (all cryptographic operations happen inside the device), so if you don't save that public key when you first ask the device to generate a private key in a particular slot, the public key is lost forever (asterisk).
$ # generate a new ECDSA P-256 key in "slot 9c" ("Digital Signature")
$ # WARNING: THIS WILL GLEEFULLY WIPE SLOT 9C WITHOUT PROMPTING
$ yubico-piv-tool --slot 9c --algorithm ECCP256 --action generate
-----BEGIN PUBLIC KEY-----
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEtGoWRGyjjUlJFXpu8BL6Rnx8jjKR
5+Mzl2Vepgor+k7N9q7ppOtSMWefjFVR0SEPmXqXINNsCi6LpLtNEigIRg==
-----END PUBLIC KEY-----
Successfully generated a new private key.
$ # this is the only time/place we (officially) get this public key
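To make the "save it or lose it" point concrete: had we redirected that PEM block to a file (say 9c-public.pem), later verification of device-made signatures needs nothing but OpenSSL. A minimal sketch, assuming a detached ECDSA signature (data.sig) over data.bin produced by the key in slot 9c, e.g. via a PKCS#11 module:
$ # assumes the PEM block printed above was saved as 9c-public.pem
$ openssl dgst -sha256 -verify 9c-public.pem -signature data.sig data.bin
Verified OK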
With that background, now let's get to the second aspect of "slots" and how X.509 fits. For every aforementioned slot, there is a corresponding "object" (read: place to store arbitrary data) which corresponds only by convention. For all these "key" slots, the (again, by convention) corresponding "object" is explicitly supposed to be an X.509 certificate (see also the PDF reference linked above). It turns out this is a useful and topical place to store that public key we need to keep handy! It's also an interesting place to shove additional details about what the key in a given slot is being used for, if that's your thing. Converting the raw public key into a (likely self-signed) X.509 certificate is an exercise for the reader, but if you want to follow the conventions, you need some way to convert a given "slot" to the corresponding "object", and that is the lookup table I wish existed in more forms. So, without further ado, here is the anti-climax:
Slot Object Description
0x9A 0x5FC105 X.509 Certificate for PIV Authentication
0x9E 0x5FC101 X.509 Certificate for Card Authentication
0x9C 0x5FC10A X.509 Certificate for Digital Signature
0x9D 0x5FC10B X.509 Certificate for Key Management
0x82 0x5FC10D Retired X.509 Certificate for Key Management 1
0x83 0x5FC10E Retired X.509 Certificate for Key Management 2
0x84 0x5FC10F Retired X.509 Certificate for Key Management 3
0x85 0x5FC110 Retired X.509 Certificate for Key Management 4
0x86 0x5FC111 Retired X.509 Certificate for Key Management 5
0x87 0x5FC112 Retired X.509 Certificate for Key Management 6
0x88 0x5FC113 Retired X.509 Certificate for Key Management 7
0x89 0x5FC114 Retired X.509 Certificate for Key Management 8
0x8A 0x5FC115 Retired X.509 Certificate for Key Management 9
0x8B 0x5FC116 Retired X.509 Certificate for Key Management 10
0x8C 0x5FC117 Retired X.509 Certificate for Key Management 11
0x8D 0x5FC118 Retired X.509 Certificate for Key Management 12
0x8E 0x5FC119 Retired X.509 Certificate for Key Management 13
0x8F 0x5FC11A Retired X.509 Certificate for Key Management 14
0x90 0x5FC11B Retired X.509 Certificate for Key Management 15
0x91 0x5FC11C Retired X.509 Certificate for Key Management 16
0x92 0x5FC11D Retired X.509 Certificate for Key Management 17
0x93 0x5FC11E Retired X.509 Certificate for Key Management 18
0x94 0x5FC11F Retired X.509 Certificate for Key Management 19
0x95 0x5FC120 Retired X.509 Certificate for Key Management 20
See also "piv-objects.json" for a machine-readable copy of this data. (Major thanks to paultag and jon gzip johnson for helping me learn and generally putting up with me, but especially dealing with my live-stream-of-thoughts while I stumble through the dark.)
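Putting the table to use: assuming a certificate was previously stored in the object paired with slot 9c (tools like yubico-piv-tool and ykman write to the corresponding object ID for you when importing a certificate), here is a minimal sketch of reading it back and recovering the public key:
$ # read the certificate back out of the object conventionally paired with 9c...
$ yubico-piv-tool --slot 9c --action read-certificate > 9c-cert.pem
$ # ...and extract the public key from it
$ openssl x509 -in 9c-cert.pem -noout -pubkey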

1 July 2025

Ben Hutchings: FOSS activity in June 2025

24 June 2025

Matthew Garrett: Why is there no consistent single signon API flow?

Single signon is a pretty vital part of modern enterprise security. You have users who need access to a bewildering array of services, and you want to be able to avoid the fallout of one of those services being compromised and your users having to change their passwords everywhere (because they're clearly going to be using the same password everywhere), or you want to be able to enforce some reasonable MFA policy without needing to configure it in 300 different places, or you want to be able to disable all user access in one place when someone leaves the company, or, well, all of the above. There's any number of providers for this, ranging from it being integrated with a more general app service platform (eg, Microsoft or Google) to a third party vendor (Okta, Ping, any number of bizarre companies). And, in general, they'll offer a straightforward mechanism to either issue OIDC tokens or manage SAML login flows, requiring users to present whatever set of authentication mechanisms you've configured.

This is largely optimised for web authentication, which doesn't seem like a huge deal - if I'm logging into Workday then being bounced to another site for auth seems entirely reasonable. The problem is when you're trying to gate access to a non-web app, at which point consistency in login flow is usually achieved by spawning a browser and somehow managing submitting the result back to the remote server. And this makes some degree of sense - browsers are where webauthn token support tends to live, and it also ensures the user always has the same experience.

But it works poorly for CLI-based setups. There are basically two options - you can use the device code authorisation flow, where you perform authentication on what is nominally a separate machine to the one requesting it (but in this case is actually the same) and as a result end up with a straightforward mechanism to have your users socially engineered into giving Johnny Badman a valid auth token despite webauthn nominally being unphishable (as described years ago), or you reduce that risk somewhat by spawning a local server and POSTing the token back to it - which works locally but doesn't work well if you're dealing with trying to auth on a remote device. The user experience for both scenarios sucks, and it reduces a bunch of the worthwhile security properties that modern MFA supposedly gives us.
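For reference, the device code flow mentioned above is the OAuth 2.0 device authorization grant (RFC 8628). Sketched with curl against a hypothetical provider (the hostname, endpoint paths and client ID are invented):
$ # step 1: request a device code
$ curl -s https://idp.example.com/oauth2/device/code \
    -d client_id=cli-app -d scope=openid
{"device_code":"GmRh...","user_code":"ABCD-EFGH",
 "verification_uri":"https://idp.example.com/activate","interval":5}
$ # step 2: the user opens verification_uri in any browser and enters user_code;
$ # nothing binds that browser session to the machine that requested the code,
$ # which is exactly the social engineering hole described above
$ # step 3: poll the token endpoint until the user completes authentication
$ curl -s https://idp.example.com/oauth2/token \
    -d grant_type=urn:ietf:params:oauth:grant-type:device_code \
    -d device_code=GmRh... -d client_id=cli-app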

There's a third approach, which is in some ways the obviously good approach and in other ways is obviously a screaming nightmare. All the browser is doing is sending a bunch of requests to a remote service and handling the response locally. Why don't we just do the same? Okta, for instance, has an API for auth. We just need to submit the username and password to that and see what answer comes back. This is great until you enable any kind of MFA, at which point the additional authz step is something that's only supported via the browser. And basically everyone else is the same.

Of course, when we say "That's only supported via the browser", the browser is still just running some code of some form and we can figure out what it's doing and do the same. Which is how you end up scraping constants out of Javascript embedded in the API response in order to submit that data back in the appropriate way. This is all possible but it's incredibly annoying and fragile - the contract with the identity provider is that a browser is pointed at a URL, not that any of the internal implementation remains consistent.

I've done this. I've implemented code to scrape an identity provider's auth responses to extract the webauthn challenges and feed those to a local security token without using a browser. I've also written support for forwarding those challenges over the SSH agent protocol to make this work with remote systems that aren't running a GUI. This week I'm working on doing the same again, because every identity provider does all of this differently.

There's no fundamental reason all of this needs to be custom. It could be a straightforward "POST username and password, receive list of UUIDs describing MFA mechanisms, define how those MFA mechanisms work". That even gives space for custom auth factors (I'm looking at you, Okta Fastpass). But instead I'm left scraping JSON blobs out of Javascript and hoping nobody renames a field, even though I only care about extremely standard MFA mechanisms that shouldn't differ across different identity providers.
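To illustrate, here is a sketch of what such a standardized contract could look like; everything in it (endpoints, field names, mechanism identifiers) is hypothetical:
$ # step 1: primary authentication returns the available MFA mechanisms
$ curl -s https://idp.example.com/v1/authn -d username=user -d password=...
{"status":"mfa_required","mechanisms":[
  {"id":"7e3f-...","type":"webauthn"},
  {"id":"a91c-...","type":"totp"}]}
$ # step 2: complete the chosen mechanism according to a published,
$ # provider-independent definition of how each mechanism type works
$ curl -s https://idp.example.com/v1/authn/mfa \
    -d mechanism=7e3f-... -d assertion=...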

Someone, please, write a spec for this. Please don't make it be me.


19 June 2025

Debian Outreach Team: GSoC 2025 Introduction: Make Debian for Raspberry Pi Build Again

Hello everyone! I am Kurva Prashanth, interested in the low-level workings of system software, CPUs/SoCs and hardware design. I was introduced to Open Hardware and Embedded Linux while studying electronics and embedded systems as part of robotics coursework. Initially, I did not pay much attention to it and quickly moved on. However, a short talk on Liberating SBCs using Debian by Yuvraj at MiniDebConf India, 2021 caught my interest. The talk focused on Open Hardware platforms such as Olimex and BeagleBone Black, as well as the Debian distributions tailored for these ARM-based single-board computers, and it intrigued me to delve deeper into the realm of Open Hardware and Embedded Linux. These days I'm trying to improve my abilities to contribute to Debian and Linux kernel development. Before finding out about the Google Summer of Code project, I had already started my journey with Debian. I extensively used Debian system build tools (debootstrap, sbuild, dpkg-buildpackage, qemu-debootstrap) for building a Debian image for the Bela Cape, a real-time OS for music making, to achieve extremely fast audio and sensor processing times. In 2023, I had the opportunity to attend DebConf23 in Kochi, India, thanks to Nilesh Patra (@nilesh). I met Hector Oron (@zumbi) over dinner at DebConf23, and it was nice talking about his contributions/work at Debian on the armhf port and Debian system administration. That conversation got me interested in knowing more about Debian ARM and the installer, and I found it fascinating that EmDebian was once an external project bringing Debian to embedded systems while now Debian itself can be run on many embedded systems. Also, during DebCamp I was introduced to PGP/GPG keys and the web of trust by Carlos Henrique Lima Melara (@charles), and I learned how to use and generate GPG keys. After DebConf23 I tried Debian packaging, and I miserably failed to get sponsorship for a Python library I packaged. I came across the Debian project for this year's Google Summer of Code and found the project titled Make Debian for Raspberry Pi Build Again quite interesting, so I applied. Gladly, on May 8th, I received an acceptance e-mail from GSoC. I got excited that I'll spend the summer working on something that I like doing. I am thrilled to be part of this project and super excited for the summer of '25. I'm looking forward to working on what I most like, new connections and learning opportunities. So, let me talk a bit more about my project. I will be working on making Debian for Raspberry Pi SBCs build again under the guidance of Gunnar Wolf (@gwolf). In this post, I will describe the project I will be working on.

Why make Debian for Raspberry Pi build again? There is an available set of images for running Debian on Raspberry Pi computers (all models below the 5 series)! However, the maintainer severely lacks the time to take care of them; he called for help for somebody to adopt them, but has not been successful. The image generation scripts might have bitrotted a bit, but they are mostly all done. And there is a lot of interest and use still in having the images freshly generated and decently tested! This GSoC project is about getting the Raspberry Pi Debian images site (https://raspi.debian.net/) working reliably, making daily-built images automatic again, ideally making the setup easily deployable to project machines, and migrating the existing hosting infrastructure to Debian.

How much does it differ from the Debian build process? While the goal is to stay as close as possible to the Debian build process, Raspberry Pi boards require some necessary platform-specific changes, primarily in the early boot sequence and firmware handling. Unlike typical Debian systems, Raspberry Pi boards depend on a non-standard bootloader and use non-free firmware (raspi-firmware), introducing some hardware-specific differences in the initialization process. These differences are largely confined to the early boot and hardware initialization stages. Once the system boots, the userspace remains closely aligned with a typical Debian install, using Debian packages. The current modifications are required due to the non-free firmware. However, several areas merit review:
  1. Boot flow: Transitioning to a U-Boot based boot process (as used in Debian installer images for many other SBCs) would reduce divergence and better align with Debian Installer.
  2. Current scripts/workarounds: Some existing hacks may now be redundant with recent upstream support and could be removed.
  3. Board-specific images: Shift to architecture-specific base images with runtime detection could simplify builds and reduce duplication.
Debian is already building SD card images for a wide range of SBCs (e.g., BeagleBone, BananaPi, OLinuXino, Cubieboard, etc.) under installer-arm64/images/u-boot and installer-armhf/images/u-boot; a similar approach for Raspberry Pi could improve maintainability and consistency with Debian's broader SBC support.

Quoted from Mail Discussion Thread with Mentor (Gunnar Wolf)
"One direction we wanted to explore was whether we should still be building one image per family, or whether we could instead switch to one image per architecture (armel, armhf, arm64). There were some details to iron out as RPi3 and RPi4 were quite different, but I think it will be similar to the differences between the RPi 0 and 1, which are handled at first-boot time. To understand what differs between families, take a look at Cyril Brulebois generate-recipe (in the repo), which is a great improvement over the ugly mess I had before he contributed it"
In this project, I intend to build one image per architecture (armel, armhf, arm64) rather than continuing with the current model of building one image per board. This change simplifies image management, reduces redundancy, and leverages dynamic configuration at boot time to support all supported boards within each architecture. By using U-Boot and flash-kernel, we can detect the board type and configure kernel parameters, DTBs, and firmware during the first boot, reducing duplication across images and simplifying the maintenance burden; we can also generalize image creation while still supporting board-specific behavior at runtime. This method aligns with existing practices in the DebianInstaller team, matches Debian's long-term maintainability goals, and better leverages upstream capabilities, ensuring a consistent and scalable boot experience.

To streamline and standardize the process of building bootable Debian images for Raspberry Pi devices, I proposed a new workflow that leverages the U-Boot and flash-kernel Debian packages. This provides a clean, maintainable, and reproducible way to generate images for armel, armhf and arm64 boards. The workflow is built around vmdb2, a lightweight, declarative tool designed to automate the creation of disk images. A typical vmdb2 recipe defines the disk layout, base system installation (via debootstrap), architecture-specific packages, and any custom post-install hooks; the image includes U-Boot (the u-boot-rpi package), flash-kernel, and a suitable Debian kernel package like linux-image-arm64 or linux-image-armmp.

U-Boot serves as the platform's bootloader and is responsible for loading the kernel and initramfs. Unlike Raspberry Pi's non-free firmware/proprietary bootloader, U-Boot provides an open and scriptable interface, allowing us to follow a more standard Debian boot process. It can be configured to boot using either an extlinux.conf or a boot.scr script generated automatically by flash-kernel. The role of flash-kernel is to bridge Debian's kernel installation system with the specifics of embedded bootloaders like U-Boot. When installed, it automatically copies the kernel image, initrd, and device tree blobs (DTBs) to the /boot partition. It also generates the necessary boot.scr script if the board configuration demands it. To work correctly, flash-kernel requires that the target machine be identified via /etc/flash-kernel/machine, which must correspond to an entry in its internal machine database.

Once the vmdb2 build is complete, the resulting image will contain a fully configured bootable system with all necessary boot components correctly installed. The image can be flashed to an SD card and used to boot on the intended device without additional manual configuration. Because all key packages (U-Boot, kernel, flash-kernel) are managed through Debian's package system, kernel updates and boot script regeneration are handled automatically during system upgrades.
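As a quick way to see what flash-kernel can match against, here is a minimal sketch (run on an already-booted board; the model string shown is an example) of comparing the device-tree model with flash-kernel's machine database:
$ # the machine name flash-kernel needs matches the device-tree model string...
$ cat /proc/device-tree/model
Raspberry Pi 4 Model B Rev 1.4
$ # ...which must have a corresponding "Machine:" entry in the database
$ grep -A3 "Machine: Raspberry Pi 4" /usr/share/flash-kernel/db/all.db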

Current Workflow: Builds one image per family The current vmdb2 recipe uses the Raspberry Pi GPU bootloader provided via the raspi-firmware package. This is the traditional boot process followed by Raspberry Pi OS, and it's tightly coupled with firmware files like bootcode.bin, start.elf, and fixup.dat. These files are installed to /boot/firmware, which is mounted from a FAT32 partition labeled RASPIFIRM. The device tree files (*.dtb) are manually copied from /usr/lib/linux-image-*-arm64/broadcom/ into this partition. The kernel is installed via the linux-image-arm64 package, and the boot arguments are injected by modifying /boot/firmware/cmdline.txt using sed commands. Booting depends on the root partition being labeled RASPIROOT, referenced through that file. There is no bootloader such as UEFI or U-Boot involved: the Raspberry Pi firmware directly loads the kernel, which is standard for Raspberry Pi boards.
- apt: install
  packages:
    ...
    - raspi-firmware  
The boot partition contents and kernel boot setup are tightly controlled via scripting in the recipe. Limitations of Current Workflow: While this setup works, it has several drawbacks:
  1. Proprietary and Raspberry Pi-specific: It relies on the closed-source GPU bootloader from the raspi-firmware package, which is tightly coupled to specific Raspberry Pi models.
  2. Manual DTB handling: Device tree files are manually copied and hardcoded, making upgrades or board-specific changes error-prone.
  3. Not easily extendable to future Raspberry Pi boards: Any change in bootloader behavior (as seen in the Raspberry Pi 5, which introduces a more flexible firmware boot process) would require significant rework.
  4. No UEFI or U-Boot: The current method bypasses the standard bootloader layers, making it inconsistent with other Debian ARM platforms and harder to maintain long-term.
As Raspberry Pi firmware and boot processes evolve, especially with the introduction of the Pi 5 and potentially the Pi 6, maintaining compatibility will require more flexibility, something best delivered by adopting U-Boot and flash-kernel.

New Workflow: Building Architecture-Specific Images with vmdb2, U-Boot, flash-kernel, and Debian Kernel This workflow outlines an improved approach to generating architecture-specific bootable Debian images using vmdb2, U-Boot, flash-kernel, and Debian kernels, moving away from Raspberry Pi's proprietary bootloader to a fully open-source boot process that improves maintainability, consistency, and cross-board support.

New Method: Shift to U-Boot + flash-kernel U-Boot (via Debian's u-boot-rpi package) and flash-kernel bring the image building process closer to how Debian officially boots ARM devices. flash-kernel integrates with the system's initramfs and kernel packages to install bootloaders, prepare boot.scr or extlinux.conf, and copy kernel/initrd/DTBs to /boot in a format that U-Boot expects. U-Boot will be used as a second-stage bootloader, loaded by the Raspberry Pi's built-in firmware. Once U-Boot is in place, it will read standard boot scripts (boot.scr) generated by flash-kernel, providing a Debian-compatible and board-flexible solution. Extending the YAML spec for the vmdb2 build with U-Boot and flash-kernel: the goal is to improve the existing vmdb2 YAML spec (https://salsa.debian.org/raspi-team/image-specs/raspi_master.yaml) to integrate U-Boot, flash-kernel, and the architecture-specific Debian kernel into the image build process. By incorporating u-boot-rpi and flash-kernel from Debian packages, alongside the standard initramfs-tools, we align the image closer to Debian best practices while supporting both armhf and arm64 architectures. Below are the key additions and adjustments needed in a vmdb2 YAML spec to support the workflow. First, install U-Boot, flash-kernel, initramfs-tools and the architecture-specific Debian kernel:
- apt: install
  packages:
    - u-boot-rpi
    - flash-kernel
    - initramfs-tools
    - linux-image-arm64 # or linux-image-armmp for armhf 
  tag: tag-root
Replace linux-image-arm64 with the correct kernel package for the specific target architecture. These packages should be added under the tag-root section in the YAML spec for the vmdb2 build recipe. This ensures that the necessary bootloader, kernel, and initramfs tools are included and properly configured in the image. Next, configure the Raspberry Pi firmware to load U-Boot by installing the U-Boot binary as kernel.img in /boot/firmware. (We could also download and build U-Boot from source, but Debian provides tested binaries.)
- shell: |
    cp /usr/lib/u-boot/rpi_4/u-boot.bin ${ROOT?}/boot/firmware/kernel.img
    echo "enable_uart=1" >> ${ROOT?}/boot/firmware/config.txt
  root-fs: tag-root
This makes the RPi firmware load u-boot.bin instead of the Linux kernel directly. Then set up flash-kernel for a Debian-style boot: flash-kernel integrates with initramfs-tools and writes a boot config suitable for U-Boot. We need to make sure /etc/flash-kernel/db contains an entry for the board (most Raspberry Pi boards are already supported in bookworm). Set up /etc/flash-kernel.conf with:
- create-file: /etc/flash-kernel.conf
  contents: |
    MACHINE="Raspberry Pi 4"
    BOOTPART="/dev/disk/by-label/RASPIFIRM"
    ROOTPART="/dev/disk/by-label/RASPIROOT"
  unless: rootfs_unpacked
This allows flash-kernel to write an extlinux.conf or boot.scr into /boot/firmware. Finally, clean up the proprietary/non-free firmware boot flow by removing the direct kernel loading files:
- shell: |
    rm -f ${ROOT?}/boot/firmware/vmlinuz*
    rm -f ${ROOT?}/boot/firmware/initrd.img*
    rm -f ${ROOT?}/boot/firmware/cmdline.txt
  root-fs: tag-root
Let U-Boot and flash-kernel manage the kernel/initrd and boot parameters instead. Boot flow after this change:
[SoC ROM] -> [start.elf] -> [U-Boot] -> [boot.scr] -> [Linux Kernel]
  1. This still depends on the Raspberry Pi firmware to start, but the firmware only loads U-Boot, not the Linux kernel.
  2. U-Boot gives you more flexibility (e.g., networking, boot menus, signed boot).
  3. Using flash-kernel ensures kernel updates are handled the Debian Installer way.
  4. Test with a serial console (enable_uart=1) in case HDMI doesn't show early boot logs.
Advantages of the New Workflow
  1. Open bootloader: Replaces the proprietary Raspberry Pi bootloader with upstream U-Boot.
  2. Debian-native tooling: Uses flash-kernel and initramfs-tools to manage boot configuration.
  3. Consistent across boards: Works for both armhf and arm64, unifying the image build process.
  4. Easier to support new boards: Like the Raspberry Pi 5 and future models.
This transition will standardize the image-building process somewhat, aligning it with upstream Debian Installer workflows.

vmdb2 configuration for arm64 using u-boot and flash-kernel NOTE: This is a baseline example and may require tuning.
# Raspberry Pi arm64 image using U-Boot and flash-kernel
steps:
  # ... (existing mkimg, partitions, mount, debootstrap, etc.) ...
  # Install U-Boot, flash-kernel, initramfs-tools and architecture-specific kernel
  - apt: install
    packages:
      - u-boot-rpi
      - flash-kernel
      - initramfs-tools
      - linux-image-arm64 # or linux-image-armmp for armhf
    tag: tag-root
  # Install U-Boot binary as kernel.img in firmware partition
  - shell: |
      cp /usr/lib/u-boot/rpi_arm64/u-boot.bin ${ROOT?}/boot/firmware/kernel.img
      echo "enable_uart=1" >> ${ROOT?}/boot/firmware/config.txt
    root-fs: tag-root
  # Configure flash-kernel for Raspberry Pi
  - create-file: /etc/flash-kernel.conf
    contents: |
      MACHINE="Generic Raspberry Pi ARM64"
      BOOTPART="/dev/disk/by-label/RASPIFIRM"
      ROOTPART="/dev/disk/by-label/RASPIROOT"
    unless: rootfs_unpacked
  # Remove direct kernel boot files from Raspberry Pi firmware
  - shell: |
      rm -f ${ROOT?}/boot/firmware/vmlinuz*
      rm -f ${ROOT?}/boot/firmware/initrd.img*
      rm -f ${ROOT?}/boot/firmware/cmdline.txt
    root-fs: tag-root
  # flash-kernel will manage boot scripts and extlinux.conf
  # Rest of image build continues...
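For completeness, a build from such a recipe could then be kicked off along the following lines. This is a minimal sketch: the file names are ours, and the options mirror how the existing raspi-team image-specs invoke vmdb2:
$ # build the image from the recipe above (root is needed for loop devices)
$ sudo vmdb2 --verbose --rootfs-tarball=raspi_arm64.cache.tar.gz \
    --output=raspi_arm64.img raspi_arm64.yaml --log=raspi_arm64.log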

Required Changes to Support Raspberry Pi Boards in Debian (flash-kernel + U-Boot)

Overview of Required Changes
Component Required Task
Debian U-Boot Package Add build target for rpi_arm64 in u-boot-rpi. Optionally deprecate legacy 32-bit targets.
Debian flash-kernel Package Add or verify entries in db/all.db for Pi 4, Pi 5, Zero 2W, CM4. Ensure boot script generation works via bootscr.uboot-generic.
Debian Kernel Ensure DTBs are installed at /usr/lib/linux-image-<version>/ and available for flash-kernel to reference.

flash-kernel

Already Supported Boards in flash-kernel Debian Package https://sources.debian.org/src/flash-kernel/3.109/db/all.db/#L1700
Model Arch DTB-Id
Raspberry Pi 1 A/B/B+, Rev2 armel bcm2835-*
Raspberry Pi CM1 armel bcm2835-rpi-cm1-io1.dtb
Raspberry Pi Zero/Zero W armel bcm2835-rpi-zero*.dtb
Raspberry Pi 2B armhf bcm2836-rpi-2-b.dtb
Raspberry Pi 3B/3B+ arm64 bcm2837-*
Raspberry Pi CM3 arm64 bcm2837-rpi-cm3-io3.dtb
Raspberry Pi 400 arm64 bcm2711-rpi-400.dtb

uboot

Already Supported Boards in Debian U-Boot Package https://salsa.debian.org/installer-team/flash-kernel/-/blob/master/db/all.db

arm64
Model Arch Upstream Defconfig Debian Target
Raspberry Pi 3B arm64 rpi_3_defconfig rpi_3
Raspberry Pi 4B arm64 rpi_4_defconfig rpi_4
Raspberry Pi 3B/3B+/CM3/CM3+/4B/CM4/400/5B/Zero 2W arm64 rpi_arm64_defconfig rpi_arm64

armhf
Model Arch Upstream Defconfig Debian Target
Raspberry Pi 2 armhf rpi_2_defconfig rpi_2
Raspberry Pi 3B (32-bit) armhf rpi_3_32b_defconfig rpi_3_32b
Raspberry Pi 4B (32-bit) armhf rpi_4_32b_defconfig rpi_4_32b

armel
Model Arch Upstream Defconfig Debian Target
Raspberry Pi armel rpi_defconfig rpi
Raspberry Pi 1/Zero armel rpi_0_w_defconfig rpi_0_w

These boards are already defined in debian/rules under the u-boot-rpi source package and generate usable U-Boot binaries for the corresponding Raspberry Pi models.

To-Do: Add Missing Board Support to U-Boot and flash-kernel in Debian Several Raspberry Pi models are missing from the Debian U-Boot and flash-kernel packages, even though upstream support exists and the DTBs ship in the Debian kernel; they lack entries in the flash-kernel database needed to enable bootloader installation and initrd handling.

Boards Not Yet Supported in flash-kernel Debian Package
Model Arch DTB-Id
Raspberry Pi 3A+ (32 & 64 bit) armhf, arm64 bcm2837-rpi-3-a-plus.dtb
Raspberry Pi 4B (32 & 64 bit) armhf, arm64 bcm2711-rpi-4-b.dtb
Raspberry Pi CM4 arm64 bcm2711-rpi-cm4-io.dtb
Raspberry Pi CM 4S arm64 -
Raspberry Pi Zero 2 W arm64 bcm2710-rpi-zero-2-w.dtb
Raspberry Pi 5 arm64 bcm2712-rpi-5-b.dtb
Raspberry Pi CM5 arm64 -
Raspberry Pi 500 arm64 -

Boards Not Yet Supported in Debian U-Boot Package
Model Arch Upstream defconfig(s)
Raspberry Pi 3A+/3B+ arm64 -, rpi_3_b_plus_defconfig
Raspberry Pi CM 4S arm64 -
Raspberry Pi 5 arm64 -
Raspberry Pi CM5 arm64 -
Raspberry Pi 500 arm64 -

So, what next? During the Community Bonding Period, I got hands-on with workflow improvements, set up test environments, and began reviewing Raspberry Pi support in Debian's U-Boot and flash-kernel packages. I provide weekly reports on the work done in the project logs; you can check the Community Bonding Period logs. My next steps include submitting patches to the u-boot and flash-kernel packages to ensure all missing Raspberry Pi entries are built and shipped, confirming the kernel DTB installation paths and making sure the necessary files are included for all Raspberry Pi variants, and finally validating the changes with test builds on Raspberry Pi hardware. In parallel, I'm organizing my tasks and setting up my environment to contribute more effectively. It's been exciting to explore how things work under the hood and to prepare for a summer of learning and contributing to this great community.

16 June 2025

Kentaro Hayashi: Fixing long standing font issue about Debian Graphical Installer

Introduction This is just a note about how the long-standing font issue in the Debian Graphical Installer was fixed, ready for the upcoming trixie release. Recently, this issue was resolved by Cyril Brulebois. Thanks!

What is the problem? Because of Han unification, wrong font typefaces are rendered by default when you choose the Japanese language in the graphical Debian installer.
"Wrong" glyph for Japanese
Most of the typefaces seem correct, but there are wrong typefaces (Simplified Chinese) used for widget rendering. This issue cannot be solved while continuing to use DroidSansFallback.ttf for Japanese. Thus, we need to switch to a font which contains Japanese typefaces to fix this issue. If you want to know how Han unification is harmful in this context, see:

What causes this problem? In short, fonts-android (DroidSansFallback.ttf) had been used for CJK, especially for Japanese. Since Debian 9 (stretch), fonts-android was adopted for CJK fonts by default. Thus this issue was not resolved throughout the Debian 9, 10, 11 and 12 release cycles!

What is the impact of this issue? Sadly, Japanese native speakers can recognize such an unexpectedly rendered "wrong" glyph, so it is not hard to continue the Debian installation process. But even if there is no problem with the installer's functionality, it gives a terrible user experience to newcomers. For example, how can you trust an installer which is full of typos? It is a similar situation for Japanese users.

How was the Debian Graphical Installer fixed? In short, the new fonts-motoya-l-cedar-udeb was bundled for Japanese, and the installer was changed to switch to that font via the gtk-set-font command. It was difficult to decide which font best balances file size and visibility: a Japanese font file typically occupies an extra few MB. Luckily, some space had been freed up in the installer, so it was not seen as a problem (I guess). As a bonus, we tried to investigate the possibility of a font compression mechanism for the installer, but it was regarded as too complicated and not suitable for the trixie release cycle.

Conclusion
  • The font issue was fixed in Debian Graphical Installer for Japanese
  • As it was only recently fixed, it is not officially shipped yet (NOTE: Debian Installer Trixie RC1 does not contain this fix). Try a daily build of the installer if you want.
This article was written with an Ultimate Hacking Keyboard 60 v2 with Rizer 60 (my new gear!).

8 June 2025

Colin Watson: Free software activity in May 2025

My Debian contributions this month were all sponsored by Freexian. Things were a bit quieter than usual, as for the most part I was sticking to things that seemed urgent for the upcoming trixie release. You can also support my work directly via Liberapay or GitHub Sponsors. OpenSSH After my appeal for help last month to debug intermittent sshd crashes, Michel Casabona helped me put together an environment where I could reproduce it, which allowed me to track it down to a root cause and fix it. (I also found a misuse of strlcpy affecting at least glibc-based systems in passing, though I think that was unrelated.) I worked with Daniel Kahn Gillmor to fix a regression in ssh-agent socket handling. I fixed a reproducibility bug depending on whether passwd is installed on the build system, which would have affected security updates during the lifetime of trixie. I backported openssh 1:10.0p1-5 to bookworm-backports. I issued bookworm and bullseye updates for CVE-2025-32728. groff I backported a fix for incorrect output when formatting multiple documents as PDF/PostScript at once. debmirror I added a simple autopkgtest. Python team I upgraded these packages to new upstream versions: In bookworm-backports, I updated these packages: I fixed problems building these packages reproducibly: I backported fixes for some security vulnerabilities to unstable (since we're in freeze now, it's not always appropriate to upgrade to new upstream versions): I fixed various other build/test failures: I added non-superficial autopkgtests to these packages: I packaged python-django-hashids and python-django-pgbulk, needed for new upstream versions of python-django-pgtrigger. I ported storm to Python 3.14. Science team I fixed a build failure in apertium-oci-fra.

6 June 2025

Reproducible Builds: Reproducible Builds in May 2025

Welcome to our 5th report from the Reproducible Builds project in 2025! Our monthly reports outline what we've been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. If you are interested in contributing to the Reproducible Builds project, please do visit the Contribute page on our website. In this report:
  1. Security audit of Reproducible Builds tools published
  2. When good pseudorandom numbers go bad
  3. Academic articles
  4. Distribution work
  5. diffoscope and disorderfs
  6. Website updates
  7. Reproducibility testing framework
  8. Upstream patches

Security audit of Reproducible Builds tools published The Open Technology Fund's (OTF) security partner Security Research Labs recently conducted an audit of some specific parts of tools developed by Reproducible Builds. This form of security audit, sometimes called a whitebox audit, is a form of testing in which auditors have complete knowledge of the item being tested. The auditors assessed the various codebases for resilience against hacking, with key areas including differential report formats in diffoscope, common client web attacks, command injection, privilege management, hidden modifications in the build process and attack vectors that might enable denials of service. The audit focused on three core Reproducible Builds tools: diffoscope, a Python application that unpacks archives of files and directories and transforms their binary formats into human-readable form in order to compare them; strip-nondeterminism, a Perl program that improves reproducibility by stripping out non-deterministic information such as timestamps or other elements introduced during packaging; and reprotest, a Python application that builds source code multiple times in various environments in order to test reproducibility. OTF's announcement contains more of an overview of the audit, and the full 24-page report is available in PDF form as well.

When good pseudorandom numbers go bad Danielle Navarro published an interesting and amusing article on their blog on When good pseudorandom numbers go bad. Danielle sets the stage as follows:
[Colleagues] approached me to talk about a reproducibility issue they'd been having with some R code. They'd been running simulations that rely on generating samples from a multivariate normal distribution, and despite doing the prudent thing and using set.seed() to control the state of the random number generator (RNG), the results were not computationally reproducible. The same code, executed on different machines, would produce different random numbers. The numbers weren't just a little bit different in the way that we've all wearily learned to expect when you try to force computers to do mathematics. They were painfully, brutally, catastrophically, irreproducibly different. Somewhere, somehow, something broke.
Thanks to David Wheeler for posting about this article on our mailing list.
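The class of check involved is easy to sketch from the shell; with a fixed seed, the same numbers should come back byte-identical on every machine (output shown from one machine, assuming R is installed):
$ # same seed, same RNG state: these three numbers should never vary by host
$ Rscript -e 'set.seed(42); print(rnorm(3))'
[1]  1.3709584 -0.5646982  0.3631284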

Academic articles There were two scholarly articles published this month that related to reproducibility: Daniel Hugenroth and Alastair R. Beresford of the University of Cambridge in the United Kingdom, and Mario Lins and René Mayrhofer of Johannes Kepler University in Linz, Austria, published an article titled Attestable builds: compiling verifiable binaries on untrusted systems using trusted execution environments. In their paper, they:
present attestable builds, a new paradigm to provide strong source-to-binary correspondence in software artifacts. We tackle the challenge of opaque build pipelines that disconnect the trust between source code, which can be understood and audited, and the final binary artifact, which is difficult to inspect. Our system uses modern trusted execution environments (TEEs) and sandboxed build containers to provide strong guarantees that a given artifact was correctly built from a specific source code snapshot. As such it complements existing approaches like reproducible builds which typically require time-intensive modifications to existing build configurations and dependencies, and require independent parties to continuously build and verify artifacts.
The authors compare attestable builds with reproducible builds by noting that an attestable build "requires only minimal changes to an existing project, and offers nearly instantaneous verification of the correspondence between a given binary and the source code and build pipeline used to construct it", and proceed by determining that "the overhead (42 seconds start-up latency and 14% increase in build duration) is small in comparison" to the overall build time.
Timo Pohl, Pavel Novák, Marc Ohm and Michael Meier have published a paper called Towards Reproducibility for Software Packages in Scripting Language Ecosystems. The authors note that past research into Reproducible Builds has focused primarily on compiled languages and their ecosystems, with a further emphasis on Linux distribution packages:
However, the popular scripting language ecosystems potentially face unique issues given the systematic difference in distributed artifacts. This Systemization of Knowledge (SoK) [paper] provides an overview of existing research, aiming to highlight future directions, as well as chances to transfer existing knowledge from compiled language ecosystems. To that end, we work out key aspects in current research, systematize identified challenges for software reproducibility, and map them between the ecosystems.
Ultimately, the three authors find that the literature is "sparse", focusing on few individual problems and ecosystems, and therefore identify space for more critical research.

Distribution work In Debian this month:
Hans-Christoph Steiner of the F-Droid catalogue of open source applications for the Android platform published a blog post on Making reproducible builds visible. Noting that "Reproducible builds are essential in order to have trustworthy software", Hans also mentions that "F-Droid has been delivering reproducible builds since 2015". However:
There is now a Reproducibility Status link for each app on f-droid.org, listed on every app's page. Our verification server shows a positive or negative indicator based on its build results, where a positive result means our rebuilder reproduced the same APK file and a negative one means it did not. The IzzyOnDroid repository has developed a more elaborate system of badges which displays one for each rebuilder. Additionally, there is a sketch of a five-level graph to represent some aspects about which processes were run.
Hans compares the approach with projects such as Arch Linux and Debian that provide developer-facing tools to give feedback about reproducible builds, but do not display information about reproducible builds in the user-facing interfaces like the package management GUIs.
Arnout Engelen of the NixOS project has been working on reproducing the minimal installation ISO image. This month, Arnout has successfully reproduced the build of the minimal image for the 25.05 release without relying on the binary cache. Work on also reproducing the graphical installer image is ongoing.
In openSUSE news, Bernhard M. Wiedemann posted another monthly update for their work there.
Lastly, in Fedora news, Jelle van der Waa opened issues tracking reproducibility issues in Haskell documentation, Qt6 recording the host kernel, and R packages recording the current date. The R packages can be made reproducible with packaging changes in Fedora.

diffoscope & disorderfs diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 295, 296 and 297 to Debian:
  • Don't rely on the zipdetails --walk argument being available, and only add that argument on newer versions after we test for it. [ ]
  • Review and merge support for NuGet packages from Omair Majid. [ ]
  • Update copyright years. [ ]
  • Merge support for an lzma comparator from Will Hollywood. [ ][ ]
Chris also merged an impressive changeset from Siva Mahadevan to make disorderfs more portable, especially on FreeBSD. disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out reproducibility issues [ ]. This was then uploaded to Debian as version 0.6.0-1. Lastly, Vagrant Cascadian updated diffoscope in GNU Guix to version 296 [ ][ ] and 297 [ ][ ], and disorderfs to version 0.6.0 [ ][ ].

Website updates Once again, there were a number of improvements made to our website this month including:

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. However, Holger Levsen posted to our mailing list this month in order to bring a wider awareness to funding issues faced by the Oregon State University (OSU) Open Source Lab (OSL). As mentioned in OSL's public post, "recent changes in university funding makes our current funding model no longer sustainable [and that] unless we secure $250,000 in committed funds, the OSL will shut down later this year". As Holger notes in his post to our mailing list, the Reproducible Builds project relies on hardware nodes hosted there. Nevertheless, Lance Albertson of OSL posted an update on the funding situation later in the month with broadly positive news.
Separate to this, there were various changes to the Jenkins setup this month, which is used as the backend driver for both tests.reproducible-builds.org and reproduce.debian.net, including:
  • Migrating the central jenkins.debian.net server from AMD Opteron to Intel Haswell CPUs. Thanks to IONOS for hosting this server since 2012.
  • After testing it for almost ten years, the i386 architecture has been dropped from tests.reproducible-builds.org. This is because, with the upcoming release of Debian trixie, i386 is no longer supported as a regular architecture: there will be no official kernel and no Debian installer for i386 systems. As a result, a large number of nodes hosted by Infomaniak have been retooled from i386 to amd64.
  • Another node, ionos17-amd64.debian.net, which is used for verifying packages for all.reproduce.debian.net (hosted by IONOS) has had its memory increased from 40 to 64GB, and the number of cores doubled to 32 as well. In addition, two nodes generously hosted by OSUOSL have had their memory doubled to 16GB.
  • Lastly, we have been granted access to more riscv64 architecture boards, so now we have seven such nodes, all with 16GB memory and 4 cores that are verifying packages for riscv64.reproduce.debian.net. Many thanks to PLCT Lab, ISCAS for providing those.

Outside of this, a number of smaller changes were also made by Holger Levsen:
  • reproduce.debian.net-related:
    • Only use two workers for the ppc64el architecture due to RAM size. [ ]
    • Monitor nginx_request and nginx_status with the Munin monitoring system. [ ][ ]
    • Detect various variants of network and memory errors. [ ][ ][ ][ ]
    • Add a prominent link to reproducible-builds.org. [ ]
    • Add a rebuilderd-cache-cleanup.service and run it daily via timer. [ ][ ][ ][ ][ ]
    • Be more verbose about what sources are being downloaded. [ ]
    • Correctly deal with packages with an epoch in their version [ ] and deal with binNMU versions with an epoch as well [ ][ ].
    • Document how to reschedule all other errors on all archs. [ ]
    • Misc documentation improvements. [ ][ ][ ][ ]
    • Include the $HOSTNAME variable in the rebuilderd logfiles. [ ]
    • Install the equivs package on all worker nodes. [ ][ ]
  • Jenkins nodes:
    • Permit the sudo tool to fix up permission issues. [ ][ ]
    • Document how to manage diskspace with OpenStack. [ ]
    • Ignore a number of spurious monitoring errors on riscv64, FreeBSD, etc. [ ][ ][ ][ ]
    • Install ntpsec-ntpdate (instead of ntpdate) as the former is available on Debian trixie and bookworm. [ ][ ]
    • Use the same SSH ControlPath for all nodes. [ ]
    • Make sure the munin user uses the same SSH config as the jenkins user. [ ]
  • tests.reproducible-builds.org-related:
    • Disable testing of the i386 architecture. [ ][ ][ ][ ][ ]
    • Document the current disk usage. [ ][ ]
    • Adjust some image placement now that we only test three architectures. [ ]
    • Keep track of build performance. [ ]
  • Misc:
    • Fix a (harmless) typo in the multiarch_versionskew script. [ ]
In addition, Jochen Sprickerhof made a series of changes related to reproduce.debian.net:
  • Add out of memory detection to the statistics page. [ ]
  • Reverse the sorting order on the statistics page. [ ][ ][ ][ ]
  • Improve the spacing between statistics groups. [ ]
  • Update a hard-coded debrebuild line number used in error message detection. [ ]
  • Support Debian unstable in the rebuilder-debian.sh script. [ ]
  • Rely on rebuildctl to sync only arch-specific packages. [ ][ ]

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. This month, we wrote a large number of such patches, including:

Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

1 June 2025

Ben Hutchings: FOSS activity in May 2025

27 May 2025

Russell Coker: Leaf ZE1

I've just got a second hand Nissan LEAF. It's not nearly as luxurious as the Genesis EV that I test drove [1]. It's also just over 5 years old so it's not as slick as the MG4 I test drove [2]. But the going rate for a LEAF of that age is $17,000 vs $35,000 or more for a new MG4 or $130,000+ for a Genesis. At this time the LEAF is the only EV in Australia that's available on the second hand market in quantity. Apparently the cheapest new EV in Australia is a Great Wall one which is $32,000 and which had a wait list last time I checked, so $17,000 is a decent price if you want an electric car and aren't interested in paying the price of a new car.

Starting the Car

One thing I don't like about most recent cars (petrol as well as electric) is that they needlessly break traditions of car design. Inserting a key and turning it clockwise to start a car is a long standing tradition that shouldn't be broken without a good reason. With the use of traditional keys you know that when a car has the key removed it can't be operated, there's no situation of the person with the key walking away and leaving the car driveable, and there's no possibility of the owner driving somewhere without the key and then being unable to start it. To start a LEAF you have to have the key fob device in range, hold down the brake pedal, and then press the power button. To turn on accessories you do the same but without holding down the brake pedal. They also have patterns of pushes: push twice to turn it on, push three times to turn it off. This is all a lot easier with a key where you can just rotate it as many clicks as needed.

The change of car design for the key means that no physical contact is needed to unlock the car. If someone stands by a car fiddling with the door lock it will get noticed, which deters certain types of crime. If a potential thief can sit in a nearby car to try attack methods and only walk to the target vehicle once it's unlocked, it makes the crime a lot easier. Even if the electronic key is as secure as a physical key, allowing attempts to unlock remotely weakens security. Reports on forums suggest that the electronic key is vulnerable to replay attacks. I guess I just have to hope that as car thieves typically get less than 10% of the value of a car, it's just not worth their effort to steal a $17,000 car. Unlocking doors remotely is a common feature that's been around for a while, but starting a car without a key being physically inserted is a new thing.

Other Features

The headlights turn on automatically when the car thinks that the level of ambient light warrants it. There is an option to override this to turn on lights but no option to force the lights to be off. So if you have your car in the on state while parked, the headlights will be on even if you are parked and listening to the radio.

The LEAF has a bunch of luxury features which seem a bit ridiculous, like seat warmers. It also has a heated steering wheel which has turned out to be a good option for me as I have problems with my hands getting cold. According to the My Nissan LEAF Forum the seat warmer uses a maximum of 50W per seat while the car heater uses a minimum of 250W [3]. So if there are one or two people in the car then significantly less power is used by just heating the seats, and keeping the car air cool also reduces window fog.

The Bluetooth audio support works well. I've done hands free calls and used it for playing music from my phone. This is the first car I've owned with Bluetooth support.
It also has line-in, which might have had some use in 2019 but is becoming increasingly useless as phones with Bluetooth become more popular. It has support for two devices connecting via Bluetooth at the same time, which could be handy if you wanted to watch movies on a laptop or tablet while waiting for someone.

The LEAF has some of the newer safety features: it tracks lane markers and notifies the driver via beeps and vibration if they stray from their lane. It also tries to read speed limit signs and display the last observed speed limit on the dash display. It also has a skid alert which in my experience goes off under hard acceleration when it's not skidding but doesn't go off if you lose grip when cornering. The features for detecting changing lanes when close to other cars and for emergency braking when another car is partly in the lane (even if moving out of the lane) don't seem well tuned for Australian driving; the common trend on Australian roads is lawful-evil, to use D&D terminology.

Range

My most recent driving was just over 2 hours with a distance of a bit over 100 km, which took the battery from 62% to 14%. So it looks like I can drive a bit over 200 km at an average speed of 50 km/h. I have been unable to find out the battery size for my car; my model will have either a 40 kWh or 62 kWh battery. Google results say it should be printed on the B pillar (it's not) and that it can be deduced from the VIN (it can't). I'm guessing that my car is the cheaper option, which is supposed to do 240 km when new, which means that a bit over 200 km at an average speed of 50 km/h when 6 years old is about what's expected. If it has the larger battery designed to do 340 km then doing 200 km in real use would be rather disappointing.

Assuming the battery is 40 kWh, that means it's 5 km/kWh, or 10 kW average for the duration. That means that the 250 W or so used by the car heater should only make about a 2% difference to range, which is something that a human won't usually notice (this arithmetic is sketched in code below, at the end of the Misc Notes section). If I was to drive to another state I'd definitely avoid using the heater or airconditioner, as an extra 4 km could really matter when trying to find a place to charge when you aren't familiar with the area. It's also widely reported that the LEAF is less efficient at highway speeds, which is an extra difficulty for that. It seems that the LEAF just isn't designed for interstate driving in Australia; it would be fine for driving between provinces of the Netherlands, as it's difficult to drive for 200 km without leaving that country. Driving 700 km to another city in a car with 200 km range would mean charging 3 times along the way; that's 2 hours of charging time when using fast chargers. This isn't a problem at all as the average household in Australia has 1.8 cars and battery electric vehicles only comprise 6.3% of the market. So if a household had a LEAF and a Prius they could just use the Prius for interstate driving. A recent Prius could drive from Melbourne to Canberra or Adelaide without refuelling on the way. If I was driving to another state a couple of times a year I could rent an old fashioned car to do that and still be saving money when compared to buying petrol all the time.

Running Cost

Currently I'm paying about $0.28 per kWh for electricity; it's reported that the efficiency of charging a LEAF is as low as 83%, with the best efficiency when fast charging.
I don't own the fast charge hardware and don't plan to install it, as that would require getting a replacement of the connection to my home from the street, a new switchboard, and other expenses. So I expect I'll be getting 83% efficiency when charging, which means 48 kWh for 200 km, or 96 kWh for the equivalent of a $110 tank of petrol. At $0.28/kWh it will cost $26 for the same amount of driving as $110 of petrol, as sketched below. I also anticipate saving money on service, as there's no need for engine oil changes and all the other maintenance of a petrol engine, and regenerative braking will reduce the incidence of brake pad replacement. I expect to save over $1100 per annum on using electricity instead of petrol even if I pay the full rate. But if I charge my car in the middle of the day when there is over supply and I don't get paid for feeding electricity from my solar panels into the grid (as is common nowadays), it could be almost free to charge the car and I could save about $1500 on fuel.

Comfort

Electric cars are much quieter than cars with petrol or Diesel engines, which is a major luxury feature. This car is also significantly newer than any other car I've driven much, so it has features like Bluetooth audio which weren't in other cars I've driven. When doing 100 km/h I can hear a lot of noise from the airflow; part of that would be due to the LEAF not having the extreme streamlining features that are associated with Teslas (such as retracting door handles) and part of that would be due to the car being older and the door seals not being as good as they were when new. It's still a very quiet car with a very smooth ride. It would be nice if they used the quality of seals and soundproofing that VW uses in the Passat, but I guess the car would be heavier and have a shorter range if they did that.

This car has less space for the driver than any other car I've driven (with the possible exception of a 1989 Ford Laser AKA Mazda 323). The front seats have less space than the Prius. Also the batteries seem to be under the front seats, so there's a bulge in the floor going slightly in front of the front seats when they are moved back, which gives less space for the front passenger to move their legs and less space for the driver when sitting in a parked car. There are a selection of electric cars from MG, BYD, and Great Wall that have more space in the front seats; if those cars were on the second hand market I might have made a different choice, but a second hand LEAF is the only option for a cheap electric car in Australia now. The heated steering wheel and heated seats took a bit of getting used to, but I have come to appreciate the steering wheel, and the heated seats are a good way of extending the range of the car.

Misc Notes

The LEAF is a fun car to drive and being quiet is a luxury feature; it's no different to other EVs in this regard. It isn't nearly as fast as a Tesla, but is faster than most cars actually drive on the road. When I was looking into buying a LEAF from one of the car sales sites I was looking at models less than 5 years old. But the ZE1 series went from 2017 to 2023, so there's probably not much difference between a 2019 model and a 2021 model, but there is a significant price difference. I didn't deliberately choose a 2019 car, it was what a relative was selling at a time when I needed a new car. But knowing what I know now I'd probably look at that age of LEAF if choosing from the car sales sites.
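To make the range and running-cost arithmetic above concrete, here is a minimal sketch in Python. It assumes the 40 kWh battery (which, as noted above, is unconfirmed); every other figure is as quoted in this post:

battery_kwh = 40              # assumed: the smaller of the two battery options
used_fraction = 0.62 - 0.14   # charge went from 62% down to 14%
distance_km = 100
hours = 2

energy_kwh = battery_kwh * used_fraction     # ~19.2 kWh used
km_per_kwh = distance_km / energy_kwh        # ~5.2 km/kWh
avg_kw = energy_kwh / hours                  # ~9.6 kW average draw
heater_share = 0.25 / avg_kw                 # 250 W heater: ~2.6% of the draw

tariff = 0.28                 # dollars per kWh
charge_efficiency = 0.83      # worst-case charging efficiency
wall_kwh_200km = (200 / 5) / charge_efficiency   # ~48 kWh, using the rounded 5 km/kWh
tank_cost = 2 * wall_kwh_200km * tariff          # ~96 kWh: the $110-tank equivalent

print(f"{km_per_kwh:.1f} km/kWh at {avg_kw:.1f} kW average")
print(f"heater share of consumption: {heater_share:.1%}")
print(f"electricity for a $110 tank's worth of driving: ${tank_cost:.2f}")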
Problems

When I turn the car off the side mirrors fold in, but when I turn it on they usually don't automatically unfold if I have anything connected to the cigarette lighter power port. This is a well known problem and documented on forums. This is something that Nissan really should have tested before release, because phone chargers that connect to the car cigarette lighter port have been common for at least 6 years before my car was manufactured and at least 4 years before the ZE1 model was released.

The built in USB port doesn't supply enough power to match the power use of a Galaxy Note 9 running Google Maps and playing music through Bluetooth. On its own this isn't a big deal, but combined with the mirror issue of using a charger in the cigarette lighter port it's a problem.

The cover over the charging ports doesn't seem to lock easily enough; I had it come open when doing 100 km/h on a freeway. This wasn't a big deal, but as the cover opens in a suicide-door manner, at a higher speed it could have broken off.

The word is that LEAF service in Australia is not done well. Why do you need regular service of an electric car anyway? For petrol and Diesel cars it's engine oil replacement that makes it necessary to have regular service. Surely you can just drive it until either the brakes squeak or the tires seem worn.

I have been having problems charging: sometimes it will charge from ~20% to 100% in under 24 hours, sometimes in 14+ hours it only gets to 30%.

Conclusion

This is a good car and the going price on them is low. I generally recommend them as long as you aren't really big and aren't too worried about the poor security. It's a fun car to drive even with a few annoying things like the mirrors not automatically extending on start. The older ones like this are cheap enough that they should be able to cover the entire purchase cost in 10 years by the savings from not buying petrol, even if you don't drive a lot. With a petrol car I use about 13 tanks of petrol a year, so my driving is about half the average for Australia. Some people could cover the purchase price of a second hand LEAF in under 5 years.

25 May 2025

Valhalla's Things: Honeycomb shirt

Posted on May 25, 2025
Tags: madeof:atoms, craft:sewing, FreeSoftWear, GNU Terry Pratchett
A woman wearing a purplish blue shirt with very wide sleeves, gathered at the cuffs and shoulder with honeycombing, and also a rectangle of honeycombing in the front between the neckline and just above the bust. The shirt is gathered at the waist with a wide belt, and an almost lilac towel hangs from the belt.

After cartridge pleating, the next fabric manipulation technique I wanted to try was smocking, of the honeycombing variety, on a shirt. My current go-to pattern for shirts is the 1880 menswear one I have on my website: I love the fact that most of the fabric is still cut as big rectangles, but the shaped yoke and armscyes make it significantly more comfortable than the earlier style where most of the shaping at the neck was done with gathers into a straight collar.

A woman wearing a shirt in the same fabric; this one has a slit in the front, is gathered into a tall rectangular collar and has dropped shoulders because it's cut from plain rectangles. The sleeves are still huge, and gathered into tall cuffs. It is worn belted (with the same wide white elastic belt used in the previous picture) and the woman is wearing a matching fabric mask, because the picture has been taken in 2021.

In my stash I had a cut of purple-blue hopefully-cotton fabric I had bought for a cheap price and used for my first attempt at an historically accurate pirate / vampire shirt, which has now become my official summer vaccine jab / blood test shirt (because it has the long sleeves I need, but they are pretty easy to roll up to give access to my arm). That shirt tends to get out of the washing machine pretty wearable even without ironing, which made me think it could be a good fabric for something that may be somewhat hard to iron (but also made me suspicious about the actual composition of the fabric, even if it feels nice enough even when worn in the summer).

A piece of fabric with many rows of honeycombing laid on top of the collar and yoke of the shirt; a metal snap peeks from behind the piece of honeycombed fabric. There are still basting lines for the armscyes.

Of course I wanted some honeycombing on the front, but I was afraid that the slit in the middle of it would interfere with the honeycombing and gape, so I decided to have the shirt open in a horizontal line at the yoke. I added instructions to the pattern page for how I changed the opening in the front: basically it involved finishing the front edge of the yoke, and sewing the honeycombed yoke to a piece of tape with snaps.

Another change from the pattern is that I used plain rectangles for the sleeves, and a square gusset, rather than the "new style" tapered sleeve, because I wanted to have more fabric to gather at the wrist. I did the side and sleeve seams with a hem + whipstitch method rather than a felled seam, which may have helped, but the sleeves went into the fitted armscyes with no issue. I think that if (yeah, right: when) I make another sleeve in this style I'll sew it into the side seam starting 2-3 cm lower than the place I've marked on the pattern for the original sleeve.

The back of the unbelted shirt: it has a fitted yoke, and then it is quite wide and unfitted, with the fabric gathered into the yoke with a row of honeycombing and some pleating on top.

I also used a row of honeycombing on the back and two on the upper part of the sleeves, instead of the gathering, and of course some rows to gather the cuffs.
The honeycombing on the back was a bit too far away from the edge, so it's a bit of an odd combination of honeycombing and pleating that I don't hate, but don't love either. It's on the back, so I don't mind. On the sleeves I've done the honeycombing closer to the edge and I've decided to sew the sleeve as if it were a cartridge pleated sleeve, and that worked better.

Because circumstances are still making access to my sewing machine more of a hassle than I'd want it to be, this was completely sewn by hand, and at a bit more than a month I have to admit that near the end it felt like it had taken forever. I'm not sure whether it was the actual sewing being slow, some interruptions that happened when I had little time to work on it, or the fact that I've just gone through a time when my brain kept throwing new projects at me, and I kept thinking of how to make those. Thanks, brain. Even when in a hurry to finish it, however, it was still enjoyable sewing, and I think I'll want to do more honeycombing in the future.

The same woman with arms wide to show the big sleeves and the shirt unbelted, showing that it is pretty wide also from the front, below the yoke and the honeycombing. The back can be seen to be about 10 cm longer than the front.

Anyway, it's done! And it's going straight into my daily garment rotation, because the weather is getting hot, and that means it's definitely shirt time.

22 May 2025

Simon Quigley: Bootstrapping and Bikeshedding

When you learn to write, one of the major pieces is writing an introduction. You could say quite a bit about the formal process and the way formal writing is typically structured. That being said, I'm not your typical writer. I didn't exactly plan to be in this spot, and honestly, it's just a hobby of mine. If you enjoy reading my posts, great. I appreciate it.

My entire point, from the very beginning of this week, has been to bootstrap my own platform, on my own two feet. As you all know, rumors were going around, and I felt as if I wasn't in that great of a position to go out and say everything that I've said, in one long post. I've needed to break it apart, and work through some of my own notes again. Prove myself, rather than asking for it to be handed to me. Keep people guessing on some elements, to let the people who have been doing wrong firmly prove themselves. At least twelve people at this point have sent me undeniable proof. I don't want to do anything with it. I want to move on, and I want to write about technology, and other things I actually enjoy writing about.

I really don't enjoy conflict, or writing about it, at all. I just know how to defend myself if I need to. If you think I actively want to make everyone mad for no good reason, you're honestly fooling yourself.

I already know that I'm not going to agree with everything I write in a few years, or maybe even weeks. But I'll hold myself to the same standard I hold everyone else. If you say something and a few years pass, I'm not going to assume you still have the same opinion. I'll give you the opportunity to correct it. I'd ask you to lend me the same courtesy.

I do this because I enjoy it, not out of anger, sadness, or anything like that. A number of people have approached me now simply asking why I'm doing what I'm doing. I planned to write this blog post at the very end all along. I just haven't revealed the plans before publishing. [I wrote the outline for this more than a day ago.]

My entire point here is to give the common person a voice. I know exactly what it's like to start from nothing, and work your way up to a comfortable spot. I didn't only do it once, I did it twice. This is my third time. I genuinely don't appreciate it when people silence other people's voices just because they don't like them. I wasn't raised that way, and that's not how I run my own projects. In fact, I'd say that if you silence opinions or values that don't exactly match yours, you're missing out on the variety of life.

You can lead a horse to water, but you can't make them drink. If you still think there are issues on my end, you're being misinformed. Plain and simple. I'm not going to spend any more time on it; I actually want to start what I've always wanted to do for years, just never had the opportunity to.

This is my last post for the week. I feel like I've bootstrapped enough where next week I can focus on another topic, as originally planned.

Thanks for reading. I appreciate your support. Talk to you on Monday. If you have topic suggestions, feel free to leave them in the comments. Feel free to leave your hate mail too, so I know where I'm at.

If you read Piaget, you know that silence is concerning with respect to socialization. I just took a look at the view count for the first time, and I'm already up to more than 2k views. In the first week. Without actually digging into the content I want to write about, yet. I don't mean to say that to brag, I'm just telling you for a fact that I'm not ranting into thin air.

I'm not even ranting at all. In fact, every single one of my posts has been written either calmly or with happiness, including this one. My entire point is this: debate me on the merits of the subject, not on me as a person. If you don't like me, that's okay. Go play elsewhere. But I'm still going to keep writing. Next week, different metaphors. We're bootstrapped now.

19 May 2025

Simon Quigley: Toolboxes and Hammers Be You

Everyone has a story. We all started from somewhere, and we're all going somewhere.

Ten years ago this summer, I first heard of Ubuntu. It took me time to learn how to properly pronounce the word, although I'm glad I learned that early on. I was less fortunate when it came to the pronunciation of the acronym for the Ubuntu Code of Conduct. I had spent time and time again breaking my computer, and I'd wanted to start fresh.

I've actually talked about this in an interview before, which you can find here (skip to 5:02-6:12 for my short explanation, I'm in orange). I've also done a few interviews over the years; one of the more recent ones is Ask Noah Show 377. Lastly, I did a few talks at SCaLE 21x (which the Ubuntu community donation funds helped me attend, thank you for that!).

My story is fairly simple to summarize, if you don't have the time to go through all the clips. I started in the Ubuntu project at 13 years old, as a middle school student living in Green Bay, WI. I'm now 23 years old, still living in Green Bay, but I became an Ubuntu Core Developer, Lubuntu's Release Manager, and worked up to a very great and comfortable spot.

So, Simon, what advice would you give to someone at 13 who wants to do the same thing? Here are a few tips:
* Don't be afraid to be yourself. If you put on a mask, it hinders your growth, and you'll end up paying for it later anyway.
* Find a mentor. Someone who is okay working with someone your age, and ideally someone who works well with people your age (quick shoutout to Aaron Prisk and Walter Lapchynski for always being awesome to me and other folks starting out in high school). This is probably the most important part.
* Ask questions. Tons of them. Ask questions until you're blue in the face. Ask questions until you get a headache so bad that your weekend needs to come early. Okay, maybe don't go that far, but at the very least, always stay curious.
* Own up to your mistakes. Even the most experienced people you know have made tons of mistakes. It's not about the mistake itself, it's about how you handle it and grow as a person.

Now, after ten years, I've seen many people come and go in Ubuntu. I was around for the transition from upstart to systemd. I was around for the transition from Unity to GNOME. I watched Kubuntu as a flavor recover from the arguments only a few years before I first started, only to jump in and help years later when the project started to trend downwards again.

I have deep love, respect, and admiration for Ubuntu and its community. I also have deep love, respect, and admiration for Canonical as a company. It's all valuable work. That being said, I need to recognize where my own limits are, and it's not what you'd think. This isn't some big burnout rant.

Some of you may have heard rumors about an argument between me and the Ubuntu Community Council. I refuse to go into the private details of that, but what I'll tell you is this: in retrospect, it was in good faith. The entire thing, from both my end and theirs, was to try to either help me as a person, or the entire community. If you think any part of this was bad faith from either side, you're fooling yourself. Plus, tons of great work and stories actually came out of this.

The Ubuntu Community Council really does care. And so does Mark Shuttleworth.

Now, I won't go into many specifics. If you want specifics, I'd direct you to the Ubuntu Community Council, who would be more than happy to answer any questions (actually, they'd probably stay silent. Nevermind.) That being said, I can't really talk about any of this without mentioning how great Mark has become.

Remember, I was around for a few different major changes within the project. I've heard and seen stories about Mark that actually match what Reddit says about him. But in 2025, from the bottom of my heart, I'm here to tell you that you're all wrong now.

See, Mark didn't just side with somebody and be done with it. He actually listened, and I could tell, he cares very very deeply. I really enjoyed reading ogra's recent blog post, you should seriously check it out. Of course, I'm only 23 years old, but I have to say, my experiences with Mark match that too.

Now, as for what happens from here. I'm taking a year off from Ubuntu. I talked this over with a wide variety of people, and I think it's the right decision. People who know me personally know that I'm not one to make a major decision like this without a very good reason to. Well, I'd like to share my reasons with you, because I think they'd help.

People who contribute time to open source find it to be very rewarding. Sometimes so rewarding, in fact, that no matter how many economics and finance books they read, they still haven't figured out how to balance that with a job that pays money. I'm sure everyone deeply involved in this space has had the urge to quit their job at least once or twice to pursue their passions.

Here's the other element too: I've had a handful of romantic relationships before, and they've never really panned out. I found the woman that I truly believe I'm going to marry. Is it going to be a rough road ahead of us? Absolutely, and to be totally honest, there is still a (small, at this point) chance it doesn't work out.

That being said, I remain optimistic. I'm not taking a year off because I'm in some kind of trouble. I haven't burned any bridge here except for one.

You know who you are. You need help. I'd be happy to reconnect with you once you realize that it's not okay to do what you did. An apology letter is all I want. I don't want Mutually Assured Destruction, I don't want to sit and battle on this for years on end. Seriously, dude, just back off. Please.

I hate having to take out the large hammer. But sometimes, you just have to do it. I've quite enjoyed Louis Rossmann's (very not-safe-for-work) videos on BwE.

I genuinely enjoy being nice to people. I want to see everyone be successful and happy, in that order (but with both being very important). I'm not perfect, I'm a 23-year-old who just happened to stumble into this space at the right time.

To this specific person only, I tell you: please, let me go take my year off in peace. I don't wish you harm, and I won't make anything public, including your name, if you just back off.

Whew. Okay. Time to be happy again.

Again, I want to see people succeed. That goes for anyone in Ubuntu, Lubuntu, Kubuntu, Canonical, you name it. I'm going to remain detached from Ubuntu for at least a year. If circumstances change, or if I feel the timing just isn't right, I'll wait longer. My point is, I'll be back; the when of it will just never be public before it happens.

In the meantime, you're welcome to reach out to me. It'll take me some time to bootstrap things, more than I originally thought, but I'm hoping it'll be quick. After all, I've had practice.

I'm also going to continue writing. About what? I don't know yet. But, I'll just keep writing. I want to share all of the useful tips I've learned over the years. If you actually liked this post, or if you've enjoyed my work in the Ubuntu project, please do subscribe to my personal blog, which will be here on Medium (unless someone can give me an open source alternative with a funding model). This being said, while I'd absolutely take any donations people would like to provide, at the end of the day, I don't do this for the money. I do this for the people just like me, out of love. So you, just like me, can make your dreams happen.

Don't give up, it'll come. Just be patient with yourself. As for me, I have business to attend to. What business is that, exactly? Read Walden, and you'll find out.

I wish you all well, even the person I called out. I sincerely hope you find what you're looking for in life. It takes time. Sometimes you have to listen to some music to pass the time, so I created a conceptual mixtape if you want to listen to some of the same music as me.

I'll do another blog post soon, don't worry. Be well. Much, much more to come.

10 May 2025

Taavi V n nen: Wikimedia Hackathon Istanbul 2025

It's that time of the year again: the Wikimedia Hackathon 2025 happened last weekend in Istanbul. This year was my third time attending what has quickly become one of my favourite events of the year, simply due to the concentration of friends and other like-minded nerds in a single location.1
Valerio, Lucas, me and a shark. Image by Chlod Alejandro is licensed under CC BY-SA 4.0.
This year I did a short presentation about the MediaWiki packages in Debian (slides), which is something I do but I suspect is fairly obscure to most people in the MediaWiki community. I was hoping to do some work on reproducibility of MediaWiki releases, but other interests (plus lack of people involved in the release process at the hackathon) meant that I didn't end up getting any work done on that (assuming this does not count). Other long-standing projects did end up getting some work done! MusikAnimal and I ended up fixing the Commons deletion notification bot, which had been broken for well over two years at that point (and was at some point in the hackathon plans for last year for both of us). Other projects that I made progress on include supporting multiple types of two-factor devices, and LibraryUpgrader, which gained support for rebasing and updating existing patches2. In addition to hacking, the other highlight of these events is the hallway track. Some of the crowd are people I've seen at previous events and/or interact very frequently with, but there are also significant parts of the community and the Foundation that I don't usually get to interact with outside of these events. (Although it still feels extremely weird to hear from various mostly-WMF people, with whom I hadn't spoken before, that they've heard various (usually positive) stories about me.) Unfortunately we did not end up having a Cuteness Association meetup this year, but we had an impromptu PGP key signing party, which is basically almost as good, right? However, I did continue a tradition from last year: I ended up nominating Chlod, a friend of mine, to receive +2 access to mediawiki/* during the hackathon. The request is due to be closed sometime tomorrow. (Usual disclosure: My travel was funded by the Wikimedia Foundation. Thank you! This is my personal blog and these are my own opinions.) Now that you've read this post, maybe check out posts from others?

  1. Unfortunately you can never have absolutely everyone attending :(
  2. Amir, I still have not forgiven you about this.

7 May 2025

Jonathan Dowland: procmail versus exim filters

I've been using Procmail to filter mail for a long time. Reading Antoine's blog post procmail considered harmful, I felt motivated (and shamed) into migrating to something else. Luckily, Enrico has shared a detailed roadmap for moving to Sieve, in particular Dovecot's Sieve implementation (which provides "pipe" and "filter" extensions). My MTA is Exim, and for my first foray into this, I didn't want to change that1. Exim provides two filtering languages for users: an implementation of Sieve, and its own filter language.

Requirements

A good first step is to look at what I'm using Procmail for:
  1. I invoke external mail filters: processes which read the mail and emit a possibly altered mail (headers added, etc.). In particular, crm114 (which has worked remarkably well for me) to classify mail as spam or not, and dsafilter, to mark up Debian Security Advisories
  2. I file messages into different folders depending on the outcome of the above filters
  3. I drop mail ("killfile") from some sender addresses (persistent pests on mailing lists); and mails containing certain hosts in the References header (as an imperfect way of dropping mailing list threads which are replies to someone I've killfiled); and mail encoded in a character set for a language I can't read (Russian, Korean, etc.); and several other simple static rules
  4. I move mailing list mail into folders, semi-automatically (see list filtering)
  5. I strip "tagged" subjects for some mailing lists: i.e., incoming mail has subjects like "[cs-historic-committee] help moving several tons of IBM360", and I don't want the "[cs-historic-committee]" bit.
  6. I file a copy of some messages, the name of which is partly derived from the current calendar year
Exim Filters

I want to continue to do (1), which rules out Exim's implementation of Sieve, which does not support invoking external programs. Exim's own filter language has a pipe function that might do what I need, so let's look at how to achieve the above with Exim Filters.

autolists

Here's an autolist recipe for Debian's mailing lists, in Exim filter language (note that Exim requires a user filter file to begin with the "# Exim filter" marker line). Contrast with the Procmail in list filtering:
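# Exim filter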
if $header_list-id matches "(debian.*)\.lists\.debian\.org"
then
  save Maildir/l/$1/
  finish
endif
Hands down, the exim filter is nicer (although some of the rules on escape characters in exim filters, not demonstrated here, are byzantine).

killfile

An ideal chunk of configuration for kill-filing a list of addresses is light on boilerplate, and easy to add more addresses to in the future. This is the best I could come up with:
if foranyaddress "someone@example.org,\
                  another@example.net,\
                  especially-bad.example.com,\
                 "
   ($reply_address contains $thisaddress
    or $header_references contains $thisaddress)
then finish endif
I won't bother sharing the equivalent Procmail but it's pretty comparable: the exim filter is no great improvement. It would be lovely if the list of addresses could be stored elsewhere, such as a simple text file, one line per address, or even a database. Exim's own configuration language (distinct from this filter language) has some nice mechanisms for reading lists of things like addresses from files or databases. Sadly it seems the filter language lacks anything similar (though see the sketch at the end of this post for one way to work around that).

external filters

With Procmail, I pass the mail to an external program, and then read the output of that program back, as the new content of the mail, which continues to be filtered: subsequent filter rules inspect the headers to see what the outcome of the filter was (is it spam?) and to decide what to do accordingly. Crucially, we also check the return status of the filter, to handle the case when it fails. With Exim filters, we can use pipe to invoke an external program:
pipe "$home/mail/mailreaver.crm -u $home/mail/"
However, this is not a filter: the mail is sent to the external program, and the exim filter's job is complete. We can't write further filter rules to continue to process the mail: the external program would have to do that; and we have no way of handling errors. Here's Exim's documentation on what happens when the external command fails:
Most non-zero codes are treated by Exim as indicating a failure of the pipe. This is treated as a delivery failure, causing the message to be returned to its sender.
That is definitely not what I want: if the filter broke (even temporarily), Exim would seemingly generate a bounce to the sender address, which could be anything, and I wouldn't have a copy of the message. The documentation goes on to say that some shell return codes (defaulting to 73 and 75) cause Exim to treat it as a temporary error, spool the mail and retry later on. That's a much better behaviour for my use-case. Having said that, on the rare occasions I've broken the filter, the thing which made me notice most quickly was spam hitting my inbox, which my Procmail recipe achieves.

removing subject tagging

Here, Exim's filter language comes unstuck. There is no way to add or alter headers for a message in a user filter. Exim uses the same filter language for system-wide message filtering, and in that context, it has some extra functions: headers add <string>, headers remove <string>, but (for reasons I don't know) these are not available for user filters.

copy mail to archive folder

I can't see a way to derive a folder name from the calendar year.

next steps

Exim's Sieve implementation and its filter language are ruled out as Procmail replacements because they can't do at least two of the things I need to do. However, based on Enrico's write-up, it looks like Dovecot's Sieve implementation probably can. I was also recommended maildrop, which I might look at if Dovecot Sieve doesn't pan out.
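As an aside on the killfile wish above (storing the address list in a simple text file): one workaround is to regenerate the filter fragment from such a file whenever it changes. Here is a minimal sketch in Python; the file names killfile.txt and exim-killfile.out are hypothetical, and the output mirrors the foranyaddress recipe shown earlier:

#!/usr/bin/env python3
# Sketch: build the Exim killfile fragment from a one-address-per-line file.
from pathlib import Path

addresses = [
    line.strip()
    for line in Path("killfile.txt").read_text().splitlines()
    if line.strip() and not line.startswith("#")
]

# Join the addresses in the backslash-continuation style used above.
addr_list = ",\\\n                  ".join(addresses)

snippet = (
    f'if foranyaddress "{addr_list}"\n'
    '   ($reply_address contains $thisaddress\n'
    '    or $header_references contains $thisaddress)\n'
    'then finish endif\n'
)

# The fragment still has to be spliced into the real filter file by hand
# (or by whatever update mechanism you trust).
Path("exim-killfile.out").write_text(snippet)
print(snippet, end="")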

  1. I should revisit this requirement because I could probably reconfigure exim to run my spam classifier at the system level, obviating the need to do it in a user filter, and also raising the opportunity to do smtp-time rejection based on the outcome

5 May 2025

Ravi Dwivedi: A visit to Paris

After attending the 2024 LibreOffice conference in Luxembourg, I visited Paris in October 2024. If you are wondering whether I needed another visa to cross the border into France: I didn't! Further, they are both EU members, which means you don't need to go through customs either. Thus, crossing the Luxembourg-France border is no different from crossing Indian state borders, like going from Rajasthan to Uttar Pradesh. I took a TGV train from Luxembourg Central Station, which was within walking distance from my hostel. The train took only 2 hours and 20 minutes to cover the 300 km distance to Paris. It departed from Luxembourg at 10:00 AM and reached Paris at 12:20 PM. The ride was smooth and comfortable, arriving on time. It gave me an opportunity to see the countryside of France. I booked the train ticket online a couple of days prior through the Omio website.
A train standing on a platform TGV train I rode from Luxembourg to Paris
I planned the first day with my friend Joenio, whom I met upon arriving at Paris Gare de l'Est station, along with his wife Mari. We went to my hostel (which was within walking distance from the station) to store my luggage, but we were informed that we needed to wait for a couple of hours before I could check in. Consequently, we went to an Italian restaurant nearby for lunch, where I ordered pasta. My hostel was so unbelievably cheap by French standards (25 euros per night) that Joenio was shocked when he learned about it.
Pasta on a plate topped with Ricotta cheese Pasta I had in Paris
Walking in the city, I noticed it had separate cycling tracks and wide footpaths, just like Luxembourg. The traffic was also organized. For instance, there were traffic lights even for pedestrian crossings, unlike India, where crossing roads can be a nightmare. Car drivers stopping for pedestrians is a big improvement over what I am used to in India. The weather was also pleasant. It was a bit on the cooler side - around 15 degrees Celsius - and I had to wear a jacket.
A cycling track in Paris
After lunch, we returned to my hostel for my check-in at around 3 o'clock. Then, we went to the Luxembourg Museum (Musée du Luxembourg in French), as Joenio had booked tickets for an exhibition of paintings by the Brazilian painter Tarsila do Amaral. To reach it, we took a subway train from Gare du Nord station. The Paris subway charges 2.15 euros irrespective of the distance (or number of stations) traveled, as opposed to other metro systems I have used. We reached the museum at around 4 o'clock. I found the paintings beautiful, but I would have appreciated them much more if the descriptions had been in English.
A building with trees on the left and right side of it and sky in the background. People can be seen in front of the building. Luxembourg Museum
Afterward, we went to a beautiful garden just behind the museum. It served as a great spot to relax and take pictures. Following this, we walked to the Pantheon - a well-known attraction in the city. It is a church built a couple of centuries ago. It has a dome-shaped structure at the top, recognizable from far away.
A building with a garden in front of it and people sitting closer to us. Sky can be seen in the background. A shot of the park near the Luxembourg Museum
A building with a dome shaped structure on top. Closer to camera, roads can be seen. In the background is blue colored cloudy sky. Pantheon, one of the attractions of Paris.
Then we went to Notre Dame after having evening snacks and coffee at a nearby bakery. Notre Dame was just over a kilometer from the Pantheon, so we took a walk. We also crossed the beautiful Seine river. On the way, I sampled a crêpe, a signature dish of France. The shop was named Crêperie and had many varieties of crêpe. I took the one with eggs and Emmental cheese. It was savory and delicious.
Photo with Joenio and Mari
Notre Dame, another tourist attraction of Paris.
By the time we reached Notre Dame, it was 07:30 PM. I learned from Joenio that Notre Dame was closed and being renovated due to a fire a couple of years ago, so we just sat around and clicked photos. It is a Catholic cathedral built in French Gothic architecture (I read that on Wikipedia ;)). I also read on Wikipedia that it is located on an island named Île de la Cité, and I didn't even realize we were on an island. At night, we visited the most well-known attraction of Paris, the Eiffel Tower. We again took the subway, alighting at the Bir-Hakeim station, followed by a short walk. We reached the Eiffel Tower at 9 o'clock. It was lit bright yellow. There was not much to do there, so we just clicked photos and hung out. After that, I came back to my hostel.
The Eiffel Tower lit with bright yellow My photo with Eiffel Tower in the background
Next day, I roamed around the city, mostly on foot. France is known for its bakeries, so I checked out a couple of local bakeries. I had espresso a couple of times and sampled a croissant, a pain au chocolat and a lemon meringue tartlet.
Items at a bakery in Paris. From left to right: Chocolate Twist, Sugar briochette, Pain au Chocolat, Croissant with almonds, Croissant, Croissant with chocolate hazelnut filling.
Here are some random shots:
The Paris subway
Inside a Paris subway train
A random building and road in Paris
A shot near the Seine river
A view of the Seine river
On the third day, I had my flight to India. Thus, I checked out of the hostel early in the morning and took an RER train from Gare du Nord station to reach the airport. It cost 11.8 euros. I had heard some of my friends had bad experiences in France, so I had the impression that I would not feel welcomed. Furthermore, I had encountered language problems on my previous Europe trip, to Albania and Kosovo. Therefore, I learned a couple of French words, like how to say thank you and good morning, which went a long way. However, I didn't have bad experiences in Paris, except for one instance in which I asked my hostel's reception about my misplaced watch and the person at the reception asked me to be polite by being rude. She said, "Excuse me! You don't know how to say Good Morning?" Overall, I enjoyed my time in Paris and would like to thank Joenio and Mari for joining me. I would also like to thank Sophie, who gave me a map of Paris. Let's end this post here. I'll meet you in the next one! Credits: Thanks to contrapunctus for reviewing this post before publishing.

3 May 2025

Russ Allbery: Review: Paper Soldiers

Review: Paper Soldiers, by Saleha Mohsin
Publisher: Portfolio
Copyright: 2024
ISBN: 0-593-53912-5
Format: Kindle
Pages: 250
The subtitle of Paper Soldiers is "How the Weaponization of the Dollar Changed the World Order," which may give you the impression that this book is about US use of the dollar system for political purposes such as sanctions. Do not be fooled like I was; this subtitle is, at best, deceptive. Coverage of the weaponization of the dollar is superficial and limited to a few chapters. This book is, instead, a history of the strong dollar policy told via a collection of hagiographies of US Treasury Secretaries and written with all of the skeptical cynicism of a poleaxed fawn. There is going to be some grumbling about the state of journalism in this review. Per the author's note, Saleha Mohsin is the Bloomberg News beat reporter for the US Department of the Treasury. That is, sadly, exactly what this book reads like: routine beat reporting. Mohsin asked current and former Treasury officials what they were thinking at various points in history and then wrote down their answers without, so far as I can tell, considering any contradictory evidence or wondering whether they were telling the truth. Paper Soldiers does contain extensive notes (those plus the index fill about forty pages), so I guess you could do the cross-checking yourself, although apparently most of the interviews for this book were "on background" and are therefore unattributed. (Is this weird? I feel like this is weird.) Mohsin adds a bit of utterly conventional and uncritical economic framing and casts the whole project in the sort of slightly breathless and dramatized prose style that infests routine news stories in the US. I find this style of book unbelievably frustrating because it represents such a wasted opportunity. To me, the point of book-length journalism is precisely to not write in this style. When you're trying to crank out two or three articles a week covering current events, I understand why there isn't always space or time to go deep into background, skepticism, and contrary opinions. But when you expand that material into a book, surely the whole point is to take the time to do some real reporting. Dig into what people told you, see if they're lying, talk to the people who disagree with them, question the conventional assumptions, and show your work on the page so that the reader is smarter after finishing your book than they were before they started. International political economics is not a sequence of objective facts. It's a set of decisions made in pursuit of economic and political theories that are disputed and arguable, and I think you owe the reader some sense of the argument and, ideally, some defensible position on the merits that is more than a transcription of your interviews. This is... not that.
It's a power loop that the United States still enjoys today: trust in America's dollar (and its democratic government) allows for cheap debt financing, which buys health care built on the most advanced research and development and inventions like airplanes and the iPhone. All of this is propelled by free market innovation and the superpowered strength to keep the nation safe from foreign threats. That investment boosts the nation's economic, military, and technological prowess, making its economy (and the dollar) even more attractive.
Let me be precise about my criticism. I am not saying that every contention in the above excerpt is wrong. Some of them are probably correct; more of them are at least arguable. This book is strictly about the era after Bretton Woods, so using airplanes as an example invention is a bizarre choice, but sure, whatever, I get the point. My criticism is that paragraphs like this, as written in this book, are not introductions to deeper discussions that question or defend that model of economic and political power. They are simple assertions that stand entirely unsupported. Mohsin routinely writes paragraphs like the above as if they are self-evident, and then immediately moves on to the next anecdote about Treasury dollar policy. Take, for example, the role of the US dollar as the world's reserve currency, which roughly means that most international transactions are conducted in dollars and numerous countries and organizations around the world hold large deposits in dollars instead of in their native currency. The conventional wisdom holds that this is a great boon to the US economy, but there are also substantive critiques and questions about that conventional wisdom. You would never know that from this book; Mohsin asserts the conventional wisdom about reserve currencies without so much as a hint that anyone might disagree. For example, one common argument, repeated several times by Mohsin, is that the US can only get away with the amount of deficit spending and cheap borrowing that it does because the dollar is the world's reserve currency. Consider two other countries whose currencies are clearly not the international reserve currency: Japan and the United Kingdom. The current US debt to GDP ratio is about 125% and the current interest rate on US 10-year bonds is about 4.2%. The current Japanese debt to GDP ratio is about 260% and the current interest rate on Japanese 10-year bonds is about 1.2%. The current UK debt to GDP ratio is 160% and the current interest rate on UK 10-year bonds is 4.5%. Are you seeing the dramatic effects of the role of the dollar as reserve currency? Me either. Again, I am not saying that this is a decisive counter-argument. I am not an economist; I'm just some random guy on the Internet who finds macroeconomics interesting and reads a few newsletters. I know the Japanese bond market is unusual in ways I'm not accounting for. There may well be compelling arguments for why reserve currency status matters immensely for US borrowing capacity. My point is not that Mohsin is wrong; my point is that you have to convince me and she doesn't even try. Nowhere in this book is a serious effort to view conventional wisdom with skepticism or confront it with opposing arguments. Instead, this book is full of blithe assertions that happen to support the narrative the author was fed by a bunch of former Treasury officials and does not appear to question in any way. I want books like this to increase my understanding of the world. To do that, they need to show me multiple sides of debates and teach me how to evaluate evidence, not simply reinforce a superficial conventional wisdom. It doesn't help that whatever fact-checking process this book went through left some glaring errors. For example, on the Plaza Accord:
With their central banks working in concert, enough dollars were purchased on the open market to weaken the currency, making American goods more affordable for foreign buyers.
I don't know what happened after the Plaza Accord (I read books like this to find out!), but clearly it wasn't that. This is utter nonsense. Buying dollars on the open market would increase the value of the dollar, not weaken it; this is basic supply and demand that you learn in the first week of a college economics class. This is the type of error that makes me question all the other claims in the book that I can't easily check. Mohsin does offer a more credible explanation of the importance of a reserve currency late in the book, although it's not clear to me that she realizes it: The widespread use of the US dollar gives US government sanctions vast international reach, allowing the US to punish and coerce its enemies through the threat of denying them access to the international financial system. Now we're getting somewhere! This is a more believable argument than a small and possibly imaginary effect on government borrowing costs. It is clear why a bellicose US government, particularly one led by advocates of a unitary executive theory that elevates the US president to a status of near-emperor, want to turn the dollar into a weapon of international control. It's much less obvious how comfortable the rest of the world should be with that concentration of power. This would be a fascinating topic for a journalistic non-fiction book. Some reporter should dive deep into the mechanics of sanctions and ask serious questions about the moral, practical, and diplomatic consequences of this aggressive wielding of US power. One could give it a title like Paper Soldiers that reflected the use of banks and paper currency as foot soldiers enforcing imperious dictates on the rest of the world. Alas, apart from a brief section in which the US scared other countries away from questioning the dollar, Mohsin does not tug at this thread. Maybe someone should write that book someday. As you will have gathered by now, I think this is a bad book and I do not recommend that you read it. Its worst flaw is one that it shares with far too much mainstream US print and TV journalism: the utter credulity of the author. I have the old-fashioned belief that a journalist should be more than a transcriptionist for powerful people. They should be skeptical, they should assume public figures may be lying, they should look for ulterior motives, and they should try to bring the reader closer to some objective truths about the world, wherever they may lie. I have no solution for this degradation of journalism. I'm not even sure that it's a change. There were always reporters eager to transcribe the voice of power into the newspaper, and if we remember the history of journalism differently, that may be because we have elevated the rare exceptions and forgotten the average. But after watching too many journalists I once respected start parroting every piece of nonsense someone tells them, from NFTs to UFOs to the existential threat of AI, I've concluded that the least I can do as a reader is to stop rewarding reporters who cannot approach powerful subjects with skepticism, suspicion, and critical research. I failed in this case, but perhaps I can serve as a warning to others. Rating: 3 out of 10

2 May 2025

Daniel Lange: Compiling and installing the Gentoo Linux kernel on emerge without genkernel (part 2)

The first install of a Gentoo kernel needs to be somewhat manual if you want to optimize the kernel for the (virtual) system it boots on. In part 1 I laid out how to improve the subsequent emerges of sys-kernel/gentoo-sources with a small drop-in script to build the kernel as part of the ebuild. Since the end of last year Gentoo also supports a less manual way of emerging a kernel: the distribution kernel blends, i.e. sys-kernel/gentoo-kernel (built from source with Gentoo patches), sys-kernel/gentoo-kernel-bin (a pre-built binary kernel) and sys-kernel/vanilla-kernel (built from unpatched upstream sources). So a quick walk-through for the gentoo-kernel variant:

1. Set up the correct package USE flags

We do not want an initrd and we want our own config to be re-used, so:
echo "sys-kernel/gentoo-kernel -initramfs savedconfig" >> /etc/portage/package.use/gentoo-kernel
2. Preseed the saved config

The current kernel config needs to be saved as the initial savedconfig so it is found and applied for our emerge below:
mkdir -p /etc/portage/savedconfig/sys-kernel
cp -n "/usr/src/linux-$(uname -r)/.config" /etc/portage/savedconfig/sys-kernel/gentoo-kernel
3. Emerge the new kernel
emerge sys-kernel/gentoo-kernel
4. Update grub and reboot

Unfortunately this ebuild does not update grub, so we have to run grub-mkconfig manually. This can again be automated via a post_pkg_postinst() script; see step 7 below. But for now, let's do it manually:
grub-mkconfig -o /boot/grub/grub.cfg
# All fine? Time to reboot the machine:
reboot
5. (Optional) Prepare for the next kernel build

Run etc-update and merge the new kernel config entries into your savedconfig. The kernel should auto-build once new versions become available via portage. Again, the etc-update step can be automated if you feel that is sufficiently safe to do in your environment; see step 7 below for details.

6. (Optional) Remove the old kernel sources

If you want to switch from the method based on gentoo-sources to the gentoo-kernel one, you can remove the kernel sources:
emerge -C "=sys-kernel/gentoo-sources-5*"
Be sure to update the /usr/src/linux symlink to the new kernel sources directory from gentoo-kernel, e.g.:
rm /usr/src/linux; ln -s "/usr/src/linux-$(uname -r)" /usr/src/linux
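Alternatively, Gentoo's eselect tool can manage that symlink for you (standard eselect usage; substitute the entry number that the list command shows for the new sources):
eselect kernel list
eselect kernel set 1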
This may be a good time for a bit more house-keeping: clean up a bit in /usr/src/ to remove old build artefacts, in /boot/ to remove old kernels, and in /lib/modules/ to get rid of old kernel modules.

7. (Optional) Further automate the ebuild

In part 1 we automated the kernel compile, install and a bit more via a helper function for post_pkg_postinst(). We can do the same for what is (currently) missing from the gentoo-kernel ebuilds: create /etc/portage/env/sys-kernel/gentoo-kernel with the following:
post_pkg_postinst() {
	# Non-interactively merge the updated savedconfig (etc-update automode -5)
	etc-update --automode -5 /etc/portage/savedconfig/sys-kernel
	# Re-generate the grub config so the newly installed kernel is bootable
	grub-mkconfig -o /boot/grub/grub.cfg
}
The upside of gentoo-kernel over gentoo-sources is that you can put "config override files" in /etc/kernel/config.d/. That way you theoretically profit from config improvements made by the upstream developers. See the Gentoo distribution kernel documentation for a sample snippet. I am fine with savedconfig for now but it is nice that Gentoo provides the flexibility to support both approaches.
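As a rough illustration, such an override fragment is just standard kernel config lines layered on top of the distribution kernel's defaults; the file name and CONFIG symbols below are hypothetical examples, not taken from the article or the Gentoo documentation:
# /etc/kernel/config.d/99-local.config (hypothetical example)
# Plain kernel config assignments, merged over the dist-kernel defaults:
CONFIG_VIRTIO_NET=y
# CONFIG_DEBUG_INFO is not set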

Ben Hutchings: FOSS activity in April 2025

I also co-organised a Debian BSP (Bug-Squashing Party) last weekend, for which I will post a separate report later.

1 May 2025

Ian Jackson: Free Software, internal politics, and governance

There is a thread of opinion in some Free Software communities that we shouldn't be "doing politics", and instead should just focus on technology. But that's impossible. This approach is naive, harmful, and, ultimately, self-defeating, even on its own narrow terms.

Today I'm talking about small-p politics

In this article I'm using "politics" in the very wide sense: us humans managing our disagreements with each other. I'm not going to talk about culture wars, woke, racism, trans rights, and so on. I am not going to talk about how Free Software has always had explicitly political goals, or how it's impossible to be neutral because choosing not to take a stand is itself to take a stand. Those issues are all important and Free Software definitely must engage with them. Many of the points I make are applicable there too. But those are not my focus today. Today I'm talking in more general terms about politics, power, and governance.

Many people working together always entails politics

Computers are incredibly complicated nowadays. Making software is a joint enterprise. Even if an individual program has only a single maintainer, it fits into an ecosystem of other software, maintained by countless other developers. Larger projects can have thousands of maintainers and hundreds of thousands of contributors. Humans don't always agree about everything. This is natural. Indeed, it's healthy: to write the best code, we need a wide range of knowledge and experience. When we can't come to agreement, we need a way to deal with that: a way that lets us still make progress, but also leaves us able to work together afterwards. A way that feels OK for everyone. Providing a framework for disagreement is the job of a governance system. The rules say which people make which decisions, who must be consulted, how the decisions are made, and how, if at all, they can be reviewed. This is all politics.

Consensus is great but always requiring it is harmful

Ideally a discussion will converge to a synthesis that satisfies everyone, or at least a consensus. When consensus can't be achieved, we can hope for compromise: something everyone can live with. Compromise is achieved through negotiation. If every decision requires consensus, then the proponents of any wide-ranging improvement have an almost insurmountable hurdle: those who are favoured by the status quo and find it convenient can always object. So there will never be consensus for change. If there is any objection at all, no matter how ill-founded, the status quo will always win. This is where governance comes in.

Governance is like backups: we need to practice it

Governance processes are the backstop for when discussions, and then negotiations, fail, and people still don't see eye to eye. In a healthy community, everyone needs to know how the governance works and what the rules are. The participants need to accept the system's legitimacy. Everyone, including the losing side, must be prepared to accept and implement (or, at least, not obstruct) whatever the decision is, and hopefully live with it and stay around. That means we need to practice our governance processes. We can't just leave them for the day we have a huge and controversial decision to make. If we do that, then when it comes to the crunch we'll have toxic rows where no-one can agree the rules; where determined people bend the rules to fit their outcome; and where afterwards people feel like the whole thing was horrible and unfair.
So our decisionmaking bodies and roles need to be making decisions, as a matter of routine, and we need to get used to that. First-line decisionmaking bodies should be making decisions frequently. Last-line appeal mechanisms (large-scale votes, for example) are naturally going to be exercised more rarely, but they must happen, be seen as legitimate, and their outcomes must be implemented in full.

Governance should usually be routine and boring

When governance is working well it's quite boring. People offer their input, and are heard. Angles are debated, and concerns are addressed. If agreement still isn't reached, the committee, or elected leader, makes a decision. Hopefully everyone thinks the leadership is legitimate, and that it properly considered and heard their arguments, and made the decision for good reasons. Hopefully the losing side can still get their work done (and make their own computer work the way they want); so while they will be disappointed, they can live with the outcome. Many human institutions manage this most of the time. It does take some knowledge about principles of governance, and ideally some experience.

Governance means deciding, not just mediating

By "making decisions" I mean exercising their authority to rule on an actual disagreement: one that wasn't resolved by debate or negotiation. Governance processes by definition involve deciding, not just mediating. It's not governance if we're advising or cajoling: in that case, we're back to demanding consensus. Governance is necessary precisely when consensus is not achieved. If the governance systems are to mean anything, they must be able to (over)rule; that means (over)ruling must be normal and accepted. Otherwise, when we need to overrule, we'll find that we can't, because we lack the collective practice. To be legitimate (and seen as legitimate) decisions must usually be made based on the merits, not on participants' status, and not only on process questions.

On the autonomy of the programmer

Many programmers seem to find the very concept of governance, and binding decisionmaking, deeply uncomfortable. Ultimately, it means sometimes overruling someone's technical decision. As programmers and maintainers we naturally see how this erodes our autonomy. But we have all seen projects where the maintainers are unpleasant, obstinate, or destructive. We have all found this frustrating. Software is all interconnected, and one programmer's bad decisions can cause problems for many of the rest of us. We exasperate: "why won't they just do the right thing?". This is futile. People have never "just"-ed and they're not going to start "just"-ing now. So often the boot is on the other foot. More broadly, as software developers, we have a responsibility to our users, and a duty to write code that does good rather than ill in the world. We ought to be accountable. (And not just to capitalist bosses!) Governance mechanisms are the answer. (No, forking anything but the smallest project is very rarely a practical answer.)

Mitigate the consequences of decisions: retain flexibility

In software, it is often possible to soften the bad social effects of a controversial decision, by retaining flexibility. With a bit of extra work, we can often provide hooks, non-default configuration options, or plugin arrangements. If we can convert the question from "how will the software always behave" into merely "what should the default be", we can often save ourselves a lot of drama.
So it is often worth keeping even suboptimal or untidy features or options, if people want to use them and are willing to maintain them. There is a tradeoff here, of course. But Free Software projects often significantly under-value the social benefits of keeping everyone happy. Wrestling software, even crusty or buggy software, is a lot more fun than having unpleasant arguments.

But don't do decisionmaking like a corporation

Many programmers' experience of formal decisionmaking is from their boss at work. But corporations are often a very bad example. They typically don't have as much trouble actually making decisions, but the actual decisions are often terrible, and not just because corporations' goals are often bad. You get to be a decisionmaker in a corporation by spouting plausible nonsense, sounding confident, buttering up the even-more-vacuous people further up the chain, and sometimes by sabotaging your rivals. Corporate senior managers are hardly ever held accountable; typically the effects of their tenure are only properly felt well after they've left to mess up somewhere else. We should select our leaders more wisely, and base decisions on substance.

If you won't do politics, politics will do you

As a participant in a project, or a society, you can of course opt out of getting involved in politics. You can opt out of learning how to do politics generally, and opt out of understanding your project's governance structures. You can opt out of making judgements about disputed questions, and tell yourself "there's merit on both sides". You can hate politicians indiscriminately, and criticise anyone you see doing politics. If you do this, then you are abdicating your decisionmaking authority, to those who are the most effective manipulators, or the most committed to getting their way. You're tacitly supporting the existing power bases. You're ceding power to the best liars, to those with the least scruples, and to the people who are most motivated by dominance. This is precisely the opposite of what you wanted. If enough people won't do politics, and hate anyone who does, your discussion spaces will be reduced to a battleground of only the hardiest and the most toxic.

If you don't see the politics, it's still happening

If your governance systems don't work, then there is no effective redress against bad or even malicious decisions. Your roleholders and subteams are unaccountable power centres. Power radically distorts every human relationship, and it takes great strength of character for an unaccountable power centre not to eventually become an unaccountable toxic cabal. So if you have a reasonable-sized community, but don't see your formal governance systems working (people debating things, votes, leadership making explicit decisions), that doesn't mean everything is fine, and all the decisions are great, and there's no politics happening. It just means that most of your community have given up on the official process. It also probably means that some parts of your project have formed toxic and unaccountable cabals. Those who won't put up with that will leave. The same is true if the only governance actions that ever happen are massive drama. That means that only the most determined victim of a bad decision will even consider using such a process.

Conclusions

