Search Results: "error"

2 December 2024

Dirk Eddelbuettel: anytime 0.3.10 on CRAN: Multiple Enhancements

A new release of the anytime package arrived on CRAN today, the first in well over four years. The package is fairly feature-complete, and code and functionality remain mature and stable, of course. anytime is a very focused package aiming to do just one thing really well: to convert anything in integer, numeric, character, factor, or ordered input format to either POSIXct (when called as anytime) or Date objects (when called as anydate), and to do so without requiring a format string while also accommodating different formats in one input vector. See the anytime page, or the GitHub repo for a few examples, and the beautiful documentation site for all documentation. This release slowly matured over four years. It combines a number of strictly internal repository maintenance changes (such as changes to continuous integration) with small enhancements (adding for example some new formats, responding better to an error condition, treating logical input as an error) and a relaxation of the C++ compilation standard. While we once needed C++11, R itself is now quite proactive (the last two releases already defaulted to C++17, suitable compiler permitting), so we can now drop that constraint. The documentation site is new, as are some other small changes. The full list of changes follows below.
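But first, as a quick illustration of that "one thing" (my example, not from the release announcement; it assumes the package is installed), both entry points can be exercised from the command line:

$ Rscript -e 'library(anytime); print(anytime("2024-12-02 10:15:00")); print(anydate(20241202))'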

Changes in anytime version 0.3.10 (2024-12-02)
  • A new documentation site was added.
  • Continuous Integration now uses run.sh from r-ci with bspm
  • Logical input vectors are now recognised as an error (#121)
  • Additional dot-separated format '%Y.%m.%d' is supported
  • Other small updates were made throughout the package
  • No longer set a C++ compilation standard as the default choices by R are sufficient for the package
  • Switch Rcpp include file to Rcpp/Lightest
  • We recommend ~/.R/Makevars compiler flag options -Wno-ignored-attributes -Wno-nonnull -Wno-parentheses
  • The tinytest runner was simplified
  • NA values from conversion now trigger a warning

Courtesy of my CRANberries, there is also a diffstat report of changes relative to the previous release. The issue tracker off the GitHub repo can be used for questions and comments. More information about the package is at the package page, the GitHub repo and the documentation site. If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

30 November 2024

Enrico Zini: New laptop setup

My new Framework laptop (Framework Laptop 13 DIY Edition, AMD Ryzen 7040 Series) arrived, all the hardware works out of the box on Debian Stable, and I'm very happy indeed. This post has the notes of all the provisioning steps, so that I can replicate them again if needed.

Installing Debian 12 Debian 12's installer just worked, with Secure Boot enabled no less, which was nice. The only glitch was an argument with the guided partitioner, which was uncooperative: I have been hit before by a /boot partition that was too small, and I wanted 1G of EFI and 1G of boot, while the partitioner decided that 512MB were good enough. Frustratingly, there was no way of changing that, nor did I find how to get more than 1G of swap, as I wanted enough swap to fit RAM for hibernation. I let it install the way it pleased, then I booted into grml for a round of gparted. The tricky part of that was resizing the root btrfs filesystem, which is in an LV, which is in a VG, which is in a PV, which is in LUKS. Here's a cheatsheet. Shrink partitions: note that I used an increasing size because I don't trust that each tool has a way of representing sizes that aligns to the byte. I'd be happy to find out that they do, but didn't want to find out the hard way that they didn't. Resize with gparted: move and resize partitions at will. Shrinking first means it all takes a reasonable time, and you won't have to wait almost an hour for a terabyte-sized empty partition to be carefully moved around. Don't ask me why I know. Regrow partitions.

Setup gnome When I get a new laptop I have a tradition of trying to make it work with Gnome and Wayland, which normally ended up in frustration and a swift move to X11 and Xfce: I have a lot of long-time muscle memory involved in how I use a computer, and it needs to fit like prosthetics. I can learn to do a thing or two in a different way, but any papercut that makes me break flow and that I cannot fix will soon become a dealbreaker. This applies to Gnome as present in Debian Stable. General Gnome settings tips I can list all available settings with:
gsettings list-recursively
which is handy for grepping things like hotkeys. I can manually set a value with:
gsettings set <schema> <key> <value>
and I can reset it to its default with:
gsettings reset <schema> <key>
Some applications like Gnome Terminal use "relocatable schemas", and in those cases you also need to specify a path, which can be discovered using dconf-editor:
gsettings set <schema>:<path> <key> <value>
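For example, a Gnome Terminal profile setting could be changed along these lines (a sketch only; <profile-uuid> is a placeholder to be replaced with the profile path found in dconf-editor):
gsettings set org.gnome.Terminal.Legacy.Profile:/org/gnome/terminal/legacy/profiles:/:<profile-uuid>/ audible-bell false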
Install appindicators First things first: apt install gnome-shell-extension-appindicator, log out and in again: the Gnome Extension manager won't see the extension as available until you restart the whole session. I have no idea why that is so, and I have no idea why a notification area is not present in Gnome by default, but at least now I can get one. Fix font sizes across monitors My laptop screen and monitor have significantly different DPIs, so:
gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']"
And in Settings/Displays, set a reasonable scaling factor for each display. Disable Alt/Super as hotkey for the Overlay Seeing my whole screen reorganize and reshuffle every time I accidentally press Alt leaves me disoriented and seasick:
gsettings set org.gnome.mutter overlay-key ''
Focus-follows-mouse and Raise-or-lower My desktop is like my desk: messy and cluttered. I have lots of overlapping windows and I switch between them by moving the focus with the mouse, and when the visible part is not enough I have a handy hotkey mapped to raise-or-lower to bring forward what I need and send back what I don't need anymore. Thankfully Gnome can be configured that way, with some work (a sketch of the relevant keys follows the next command). This almost worked, but sometimes it didn't do what I wanted, like when I expected a window to come to the front but another window disappeared instead. I eventually figured out that by default Gnome delays focus changes by a perceivable amount, which is evidently too slow for the way I move around windows. The amount cannot be shortened, but it can be removed with:
gsettings set org.gnome.shell.overrides focus-change-on-pointer-rest false
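As a sketch of the kind of keys involved (my illustration; the exact values and binding the author used are not listed in this excerpt):
# switch focus by hovering the mouse over a window ("sloppy" focus)
gsettings set org.gnome.desktop.wm.preferences focus-mode 'sloppy'
# bind raise-or-lower to a key; the binding here is only an example
gsettings set org.gnome.desktop.wm.keybindings raise-or-lower "['<Super>r']"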
Mouse and keyboard shortcuts Gnome has lots of preconfigured sounds, shortcuts, animations and other distractions that I do not need. They also either interfere with key combinations I want to use in terminals, or cause accidental window moves or resizes that make me break flow, or otherwise provide sensory overstimulation that really does not work for me. It was a lot of work, and these are the steps I used to get rid of most of them. Disable Super+N combinations that accidentally launch a questionable choice of programs:
for i in $(seq 1 9); do gsettings set org.gnome.shell.keybindings switch-to-application-$i '[]'; done
Gnome-Shell settings: gnome-tweak-tool settings: Gnome Terminal settings: Thankfully 10 years ago I took notes on how to customize Gnome Terminal, and they're still mostly valid: Other hotkeys that got in my way and that I had to disable the hard way:
for n in $(seq 1 12); do gsettings set org.gnome.mutter.wayland.keybindings switch-to-session-$n '[]'; done
gsettings set org.gnome.desktop.wm.keybindings move-to-workspace-down '[]'
gsettings set org.gnome.desktop.wm.keybindings move-to-workspace-up '[]'
gsettings set org.gnome.desktop.wm.keybindings panel-main-menu '[]'
gsettings set org.gnome.desktop.interface menubar-accel '[]'
Note that even after removing F10 from being bound to menubar-accel, and after leaving the only remaining gsettings binding to F10 as is:
$ gsettings list-recursively | grep F10
org.gnome.Terminal.Legacy.Keybindings switch-to-tab-10 '<Alt>F10'
I still cannot quit Midnight Commander using F10 in a terminal, as that moves the focus to the window title bar. This looks like a Gnome bug, and a very frustrating one for me. Appearance Gnome-Shell settings: gnome-tweak-tool settings: Gnome Terminal settings: Other decluttering and tweaks Gnome Shell Settings: Set a delay between screen blank and lock: when the screen goes blank, it is important for me to be able to say "nope, don't blank yet!", and maybe switch on caffeine mode during a presentation without needing to type my password in front of cameras. No UI for this, but at least gsettings has it:
gsettings set org.gnome.desktop.screensaver lock-delay 30
Extensions I enabled the Applications Menu extension, since it's impossible to find less famous applications in the Overview without knowing in advance how they're named in the desktop. This stole a precious hotkey, which I had to disable in gsettings:
gsettings set org.gnome.shell.extensions.apps-menu apps-menu-toggle-menu '[]'
I also enabled: I didn't go and look for Gnome Shell extensions outside what is packaged in Debian, as I'm very wary about running JavaScript code randomly downloaded from the internet with full access over my data and desktop interaction. I also took care of checking that the Gnome Shell Extensions web page complains about the missing "GNOME Shell integration" browser extension, because the web browser shouldn't be allowed to download random JavaScript from the internet and run it with full local access. Yuck. Run program dialog The default run program dialog is almost, but not quite, totally useless to me, as it does not provide completion, not even just for executable names in the path, and so it ends up being faster to open a new terminal window and type in there. It's possible, in Gnome Shell settings, to bind a custom command to a key. The resulting keybinding will now show up in gsettings, though it can be located in a more circuitous way by grepping first, and then looking up the resulting path in dconf-editor:
gsettings list-recursively | grep custom-key
org.gnome.settings-daemon.plugins.media-keys custom-keybindings ['/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/']
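To read such an entry back (or script it), the corresponding relocatable schema can be addressed with that path; a sketch, assuming the binding landed in the custom0 slot shown above:
gsettings get org.gnome.settings-daemon.plugins.media-keys.custom-keybinding:/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/ command
gsettings get org.gnome.settings-daemon.plugins.media-keys.custom-keybinding:/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/ binding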
I tried out several run dialogs present in Debian, with sad results, possibly due to most of them not being tested on wayland: Both gmrun and xfrun4 seem like workable options, with xfrun4 being customizable with convenient shortcut prefixes, so xfrun4 it is.

TODO I'll try to update these notes as I investigate.

Conclusion so far I now have something that seems to work for me. A few papercuts to figure out still, but they seem manageable. It all feels a lot harder than it should be: for something intended to be minimal, Gnome defaults feel horribly cluttered and noisy to me, continuously getting in the way of getting things done until tamed into being out of the way unless called for. It felt like a device that boots into flashy demo mode, which needs to be switched off before actual use. Thankfully it can be switched off, and now I have notes to do it again if needed. gsettings oddly feels to me like a better UI than the interactive settings managers: it's more comprehensive, more discoverable, more scriptable, and more stable across releases. Most of the Q&A I found on the internet with guidance given on the UI was obsolete, while when given with gsettings command lines it kept being relevant. I also have the feeling that these notes would be easier to understand and follow if given as gsettings invocations instead of descriptions of UI navigation paths. At some point I'll upgrade to Trixie and reevaluate things, and these notes will be a useful checklist for that. Fingers crossed that this time I'll manage to stay on Wayland. If not, I know that Xfce is still there for me, and I can trust it to be both helpful and good at not getting in the way of my work.

29 November 2024

Raju Devidas: Finding all sub domains of a main domain

Problem: I need to know all the subdomains of a main domain; e.g. example.com has a subdomain dev.example.com, and I also want to know the other subdomains. Solution: install the package called sublist3r, written by Ahmed Aboul-Ela
$ sudo apt install sublist3r
run the command
$ sublist3r -d example.com -o subdomains-example.com.txt
                 [sublist3r ASCII-art banner]
                 # Coded By Ahmed Aboul-Ela - @aboul3la
[-] Enumerating subdomains now for example.com
[-] Searching now in Baidu..
[-] Searching now in Yahoo..
[-] Searching now in Google..
[-] Searching now in Bing..
[-] Searching now in Ask..
[-] Searching now in Netcraft..
[-] Searching now in DNSdumpster..
[-] Searching now in Virustotal..
[-] Searching now in ThreatCrowd..
[-] Searching now in SSL Certificates..
[-] Searching now in PassiveDNS..
Process DNSdumpster-8:
Traceback (most recent call last):
  File "/usr/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3/dist-packages/sublist3r.py", line 269, in run
    domain_list = self.enumerate()
                  ^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/sublist3r.py", line 649, in enumerate
    token = self.get_csrftoken(resp)
            ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/sublist3r.py", line 644, in get_csrftoken
    token = csrf_regex.findall(resp)[0]
            ~~~~~~~~~~~~~~~~~~~~~~~~^^^
IndexError: list index out of range
[!] Error: Google probably now is blocking our requests
[~] Finished now the Google Enumeration ...
[!] Error: Virustotal probably now is blocking our requests
[-] Saving results to file: subdomains-example.com.txt
[-] Total Unique Subdomains Found: 7
AS207960 Test Intermediate - example.com
www.example.com
dev.example.com
m.example.com
products.example.com
support.example.com
m.testexample.com
We can see the subdomains listed at the end of the command output. Enjoy, have fun, drink water!
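As a small extra not in the original post, the output file can be sanity-checked by resolving each discovered name (this assumes the host utility from the dnsutils/bind9-host package is installed):
$ while read -r sub; do host "$sub"; done < subdomains-example.com.txt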

26 November 2024

Emanuele Rocca: Building Debian packages The Right Way

There is more than one way to do it, but it seems that The Right Way to build Debian packages today is using sbuild with the unshare backend. The most common backend before the rise of unshare was schroot.
The official Debian Build Daemons have recently transitioned to using sbuild with unshare, providing a strong motivation to consider making the switch. Additionally the new approach means: (1) no need to configure schroot, and (2) no need to run the build as root.
Here are my notes about moving to the new setup, for future reference and in case they may be useful to others.
First I installed the required packages:
apt install sbuild mmdebstrap uidmap
Then I created the following script to update my chroots every night:
#!/bin/bash
for arch in arm64 armhf armel; do
    HOME=/tmp mmdebstrap --quiet --arch=$arch --include=ca-certificates --variant=buildd unstable \
        ~/.cache/sbuild/unstable-$arch.tar http://127.0.0.1:3142/debian
done
In the script, I'm calling mmdebstrap with --quiet because I don't want to get any output on successful execution. The script is running in cron with email notifications, and I only want to get a message if something goes south. I'm setting HOME=/tmp for a similar reason: the unshare user does not have access to my actual home directory, and by default dpkg tries to use $HOME/.dpkg.cfg as the configuration file. By overriding HOME I avoid the following message to standard error:
dpkg: warning: failed to open configuration file '/home/ema/.dpkg.cfg' for reading: Permission denied
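For reference, this is roughly how such a script could be scheduled from cron (a sketch only; the script path is hypothetical, and cron mails any output, which is why the script stays quiet on success):
# append a nightly entry to the user crontab
( crontab -l; echo '30 3 * * * /usr/local/sbin/update-sbuild-chroots' ) | crontab -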
Then I added the following to my sbuild configuration file (~/.sbuildrc):
$chroot_mode = 'unshare';
$unshare_tmpdir_template = '/dev/shm/tmp.sbuild.XXXXXXXXXX';
The first option sets the sbuild backend to unshare, whereas unshare_tmpdir_template is needed on Bookworm to ensure that the build process runs in memory rather than on disk for performance reasons. Starting with Trixie, /tmp is by default a tmpfs, so the setting won't be needed anymore.
Packages for different architectures can now be built as follows:
# Tarball used: ~/.cache/sbuild/unstable-arm64.tar
$ sbuild --dist=unstable hello
# Tarball used: ~/.cache/sbuild/unstable-armhf.tar
$ sbuild --dist=unstable --arch=armhf hello
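By analogy (not shown in the post), the armel chroot created by the nightly script above would presumably be used the same way:
# Tarball used: ~/.cache/sbuild/unstable-armel.tar
$ sbuild --dist=unstable --arch=armel hello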
If you have any comments or suggestions about any of this, please let me know.

22 November 2024

Matthew Palmer: Your Release Process Sucks

For the past decade-plus, every piece of software I write has had one of two release processes. Software that gets deployed directly onto servers (websites, mostly, but also the infrastructure that runs Pwnedkeys, for example) is deployed with nothing more than git push prod main. I'll talk more about that some other day. Today is about the release process for everything else I maintain: Rust / Ruby libraries, standalone programs, and so forth. To release those, I use the following, extremely intricate process:
  1. Create an annotated git tag, where the name of the tag is the software version I'm releasing, and the annotation is the release notes for that version.
  2. Run git release in the repository.
  3. There is no step 3.
Yes, it absolutely is that simple. And if your release process is any more complicated than that, then you are suffering unnecessarily. But don't worry. I'm from the Internet, and I'm here to help.

Sidebar: annotated what-now?!? The annotated tag is one of git's best-kept secrets. They've been available in git for practically forever (I've been using them since at least 2014, which is practically forever in software development), yet almost everyone I mention them to has never heard of them. A "tag", in git parlance, is a repository-unique named label that points to a single commit (as identified by the commit's SHA1 hash). Annotating a tag is simply associating a block of free-form text with that tag. Creating an annotated tag is simple-sauce: git tag -a tagname will open up an editor window where you can enter your annotation, and git tag -a -m "some annotation" tagname will create the tag with the annotation "some annotation". Retrieving the annotation for a tag is straightforward, too: git show tagname will display the annotation along with all the other tag-related information. Now that we know all about annotated tags, let's talk about how to use them to make software releases freaking awesome.

Step 1: Create the Annotated Git Tag As I just mentioned, creating an annotated git tag is pretty simple: just add a -a (or --annotate, if you enjoy typing) to your git tag command, and WHAM! annotation achieved. Releases, though, typically have unique and ever-increasing version numbers, which we want to encode in the tag name. Rather than having to look at the existing tags and figure out the next version number ourselves, we can have software do the hard work for us. Enter: git-version-bump. This straightforward program takes one mandatory argument: major, minor, or patch, and bumps the corresponding version number component in line with Semantic Versioning principles. If you pass it -n, it opens an editor for you to enter the release notes, and when you save out, the tag is automagically created with the appropriate name. Because the program is called git-version-bump, you can call it as a git command: git version-bump. Also, because version-bump is long and unwieldy, I have it aliased to vb, with the following entry in my ~/.gitconfig:
[alias]
    vb = version-bump -n
Of course, you don't have to use git-version-bump if you don't want to (although why wouldn't you?). The important thing is that the only step you take to go from "here is our current codebase in main" to "everything as of this commit is version X.Y.Z of this software" is the creation of an annotated tag that records the version number being released, and the metadata that goes along with that release.
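As a rough sketch of the resulting workflow (the tag name here is made up; it assumes git-version-bump is installed and the vb alias above is in place):

git vb minor       # opens an editor for the release notes, then creates e.g. tag v1.3.0
git show v1.3.0    # review the annotation and the commit it points at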

Step 2: Run git release As I said earlier, I've been using this release process for over a decade now. So long, in fact, that when I started, GitHub Actions didn't exist, and so a lot of the things you'd delegate to a CI runner these days had to be done locally, or in a more ad-hoc manner on a server somewhere. This is why step 2 in the release process is "run git release". It's because historically, you can't do everything in a CI run. Nowadays, most of my repositories have this in the .git/config:
[alias]
    release = push --tags
Older repositories which, for one reason or another, haven't been updated to the new hawtness, have various other aliases defined, which run more specialised scripts (usually just rake release, for Ruby libraries), but they're slowly dying out. The reason why I still have this alias, though, is that it standardises the release process. Whether it's a Ruby gem, a Rust crate, a bunch of protobuf definitions, or whatever else, I run the same command to trigger a release going out. It means I don't have to think about how I do it for this project, because every project does it exactly the same way.

The Wiring Behind the Button
It wasn't the button that was the problem. It was the miles of wiring, the hundreds of miles of cables, the circuits, the relays, the machinery. The engine was a massive, sprawling, complex, mind-bending nightmare of levers and dials and buttons and switches. You couldn't just slap a button on the wall and expect it to work. But there should be a button. A big, fat button that you could press and everything would be fine again. Just press it, and everything would be back to normal.
  • Red Dwarf: Better Than Life
Once you've accepted that your release process should be as simple as creating an annotated tag and running one command, you do need to consider what happens afterwards. These days, with the near-universal availability of CI runners that can do anything you need in an isolated, reproducible environment, the work required to go from annotated tag to release artifacts can be scripted up and left to do its thing. What that looks like, of course, will probably vary greatly depending on what you're releasing. I can't really give universally-applicable guidance, since I don't know your situation. All I can do is provide some of my open source work as inspirational examples. For starters, let's look at a simple Rust crate I've written, called strong-box. It's a straightforward crate that provides ergonomic and secure cryptographic functionality inspired by the likes of NaCl. As it's just a crate, its release script is very straightforward. Most of the complexity is working around Cargo's inelegant mandate that crate version numbers are specified in a TOML file. Apart from that, it's just a matter of building and uploading the crate. Easy! Slightly more complicated is action-validator. This is a Rust CLI tool which validates GitHub Actions and Workflows (how very meta) against a published JSON schema, to make sure you haven't got any syntax or structural errors. As not everyone has a Rust toolchain on their local box, the release process helpfully builds binaries for several common OSes and CPU architectures that people can download if they choose. The release process in this case is somewhat larger, but not particularly complicated. Almost half of it is actually scaffolding to build an experimental WASM/NPM build of the code, because someone seemed rather keen on that. Moving away from Rust, and stepping up the meta another notch, we can take a look at the release process for git-version-bump itself, my Ruby library and associated CLI tool which started me down the "Just Tag It Already" rabbit hole many years ago. In this case, since gemspecs are very amenable to programmatic definition, the release process is practically trivial. Remove the boilerplate and workarounds for GitHub Actions bugs, and you're left with about three lines of actual commands. These approaches can certainly scale to larger, more complicated processes. I've recently implemented annotated-tag-based releases in a proprietary software product that produces Debian/Ubuntu, RedHat, and Windows packages, as well as Docker images, and it takes all of the information it needs from the annotated tag. I'm confident that this approach will successfully serve them as they expand out to build AMIs, GCP machine images, and whatever else they need in their release processes in the future.

Objection, Your Honour! I can hear the howl of the "but, actually"s coming over the horizon even as I type. People have a lot of Big Feelings about why this release process won't work for them. Rather than overload this article with them, I've created a companion article that enumerates the objections I've come across, and answers them. I'm also available for consulting if you'd like a personalised, professional opinion on your specific circumstances.

DVD Bonus Feature: Pre-releases Unless you're addicted to surprises, it's good to get early feedback about new features and bugfixes before they make it into an official, general-purpose release. For this, you can't go past the pre-release. The major blocker to widespread use of pre-releases is that cutting a release is usually a pain in the behind. If you've got to edit changelogs, and modify version numbers in a dozen places, then you're entirely justified in thinking that cutting a pre-release for a customer to test that bugfix that only occurs in their environment is too much of a hassle. The thing is, once you've got releases building from annotated tags, making pre-releases on every push to main becomes practically trivial. This is mostly due to another fantastic and underused Git command: git describe. How git describe works is, basically, that it finds the most recent commit that has an associated annotated tag, and then generates a string that contains that tag's name, plus the number of commits between that tag and the current commit, with the current commit's hash included, as a bonus. That is, imagine that three commits ago, you created an annotated release tag named v4.2.0. If you run git describe now, it will print out v4.2.0-3-g04f5a6f (assuming that the current commit's SHA starts with 04f5a6f). You might be starting to see where this is going. With a bit of light massaging (essentially, removing the leading v and replacing the -s with .s), that string can be converted into a version number which, in most sane environments, is considered newer than the official 4.2.0 release, but will be superseded by the next actual release (say, 4.2.1 or 4.3.0). If you're already injecting version numbers into the release build process, injecting a slightly different version number is no work at all. Then, you can easily build release artifacts for every commit to main, and make them available somewhere they won't get in the way of the official releases. For example, in the proprietary product I mentioned previously, this involves uploading the Debian packages to a separate component (prerelease instead of main), so that users that want to opt in to the prerelease channel simply modify their sources.list to change main to prerelease. Management have been extremely pleased with the easy availability of pre-release packages; they've been gleefully installing them willy-nilly for testing purposes since I rolled them out. In fact, even while I've been writing this article, I was asked to add some debug logging to help track down a particularly pernicious bug. I added the few lines of code, committed, pushed, and went back to writing. A few minutes later (next week's job is to cut that in-process time by at least half), the person who asked for the extra logging ran apt update; apt upgrade, which installed the newly-built package, and was able to progress in their debugging adventure. Continuous Delivery: It's Not Just For Hipsters.
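As a rough sketch of that light massaging (my own illustration, not the author's actual script):

raw=$(git describe)        # e.g. v4.2.0-3-g04f5a6f
version=${raw#v}           # strip the leading v
version=${version//-/.}    # gives 4.2.0.3.g04f5a6f
echo "$version"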

+1, Informative Hopefully, this has spurred you to commit your immortal soul to the Church of the Annotated Tag. You may tithe by buying me a refreshing beverage. Alternately, if you're really keen to adopt more streamlined release management processes, I'm available for consulting engagements.

20 November 2024

Arnaud Rebillout: Installing an older Ansible version via pipx

Latest Ansible requires Python 3.8 on the remote hosts ... and therefore, hosts running Debian Buster are now unsupported. Monday, I updated the system on my laptop (Debian Sid), and I got the latest version of ansible-core, 2.18:
$ ansible --version | head -1
ansible [core 2.18.0]
To my surprise, Ansible started to fail with some remote hosts:
ansible-core requires a minimum of Python version 3.8. Current version: 3.7.3 (default, Mar 23 2024, 16:12:05) [GCC 8.3.0]
Yep, I do have to work with hosts running Debian Buster (aka. oldoldstable). While Buster is old, it's still out there, and it's still supported via Freexian's Extended LTS. How are we going to keep managing those machines? Obviously, we'll need an older version of Ansible. Pipx to the rescue TL;DR
pipx install --include-deps ansible==10.6.0
pipx inject ansible dnspython    # for community.general.dig
Installing Ansible via pipx Lately I discovered pipx and it's incredibly simple, so I thought I'd give it a try for this use-case. Reminder: pipx allows users to install Python applications in isolated environments. In other words, it doesn't make a mess with your system like pip does, and it doesn't require you to learn how to setup Python virtual environments by yourself. It doesn't ask for root privileges either, as it installs everything under ~/.local/. First thing to know: pipx install ansible won't cut it, it doesn't install the whole Ansible suite. Instead we need to use the --include-deps flag in order to install all the Ansible commands. The output should look something like that:
$ pipx install --include-deps ansible==10.6.0
  installed package ansible 10.6.0, installed using Python 3.12.7
  These apps are now globally available
    - ansible
    - ansible-community
    - ansible-config
    - ansible-connection
    - ansible-console
    - ansible-doc
    - ansible-galaxy
    - ansible-inventory
    - ansible-playbook
    - ansible-pull
    - ansible-test
    - ansible-vault
done!      
Note: at the moment 10.6.0 is the latest release of the 10.x branch, but make sure to check https://pypi.org/project/ansible/#history and install whatever is the latest on this branch. The 11.x branch doesn't work for us, as it's the branch that comes with ansible-core 2.18, and we don't want that. Next: do NOT run pipx ensurepath, even though pipx might suggest that. This is not needed. Instead, check your ~/.profile, it should contain these lines:
# set PATH so it includes user's private bin if it exists
if [ -d "$HOME/.local/bin" ] ; then
    PATH="$HOME/.local/bin:$PATH"
fi
Meaning: ~/.local/bin/ should already be in your path, unless it's the first time you installed a program via pipx and the directory ~/.local/bin/ was just created. If that's the case, you have to log out and log back in. Now, let's open a new terminal and check if we're good:
$ which ansible
/home/me/.local/bin/ansible
$ ansible --version | head -1
ansible [core 2.17.6]
Yep! And that's working already, I can use Ansible with Buster hosts again. What's cool is that we can run ansible to use this specific Ansible version, but we can also run /usr/bin/ansible to run the latest version that is installed via APT. Injecting Python dependencies needed by collections Quickly enough, I realized something odd, apparently the plugin community.general.dig didn't work anymore. After some research, I found a one-liner to test that:
# Works with APT-installed Ansible? Yes!
$ /usr/bin/ansible all -i localhost, -m debug -a msg="{{ lookup('dig', 'debian.org./A') }}"
localhost | SUCCESS => {
    "msg": "151.101.66.132,151.101.2.132,151.101.194.132,151.101.130.132"
}
# Works with pipx-installed Ansible? No!
$ ansible all -i localhost, -m debug -a msg="{{ lookup('dig', 'debian.org./A') }}"
localhost | FAILED! => {
  "msg": "An unhandled exception occurred while running the lookup plugin 'dig'.
  Error was a <class 'ansible.errors.AnsibleError'>, original message: The dig
  lookup requires the python 'dnspython' library and it is not installed."
}
The issue here is that we need python3-dnspython, which is installed on my system, but is not installed within the pipx virtual environment. It seems that the way to go is to inject the required dependencies in the venv, which is (again) super easy:
$ pipx inject ansible dnspython
  injected package dnspython into venv ansible
done!      
Problem fixed! Of course you'll have to iterate to install other missing dependencies, depending on which Ansible external plugins are used in your playbooks. Closing thoughts Hopefully there's nothing left to discover and I can get back to work! If there are more quirks and rough edges, drop me an email so that I can update this blog post. Let me also credit another useful blog post on the matter: https://unfriendlygrinch.info/posts/effortless-ansible-installation/

12 November 2024

Sven Hoexter: fluxcd: Validate flux-system Root Kustomization

Not entirely sure how people use fluxcd, but I guess most people have something like a flux-system flux kustomization as the root to add more flux kustomizations to their kubernetes cluster. Here all of that is living in a monorepo, and as we're all humans people figure out different ways to break it, which brings the reconciliation of the flux controllers down. Thus we set out to do some pre-flight validations. Note1: We do not use flux variable substitutions for those root kustomizations, so if you use those, you have to put additional work into the validation and pipe things through flux envsubst. First Iteration: Just Run kustomize Like Flux Would Do It With a folder structure where we have a cluster folder with subfolders per cluster, we just run a for loop over all of them:
for CLUSTER in ${CLUSTERS}; do
    pushd clusters/${CLUSTER}
    # validate if we can create and build a flux-system like kustomization file
    kustomize create --autodetect --recursive
    if ! kustomize build . -o /dev/null 2> error.log; then
        echo "Error building flux-system kustomization for cluster $ CLUSTER "
        cat error.log
    fi
    popd
done
Second Iteration: Make Sure Our Workload Subfolders Have a kustomization.yaml Next someone figured out that you can delete some yaml files from a workload subfolder, including the kustomization.yaml, but not all of them. That left behind a resource definition which lacked some other referenced objects, but was still happily included into the root kustomization by kustomize create and flux, which of course did not work. Thus we started to catch that as well in our growing for loop:
for CLUSTER in ${CLUSTERS}; do
    pushd clusters/${CLUSTER}
    # validate if we can create and build a flux-system like kustomization file
    kustomize create --autodetect --recursive
    if ! kustomize build . -o /dev/null 2> error.log; then
        echo "Error building flux-system kustomization for cluster $ CLUSTER "
        cat error.log
    fi
    # validate if we always have a kustomization file in folders with yaml files
    for CLFOLDER in $(find . -type d); do
        test -f ${CLFOLDER}/kustomization.yaml && continue
        test -f ${CLFOLDER}/kustomization.yml && continue
        if [[ $(find ${CLFOLDER} -maxdepth 1 \( -name '*.yaml' -o -name '*.yml' \) -type f | wc -l) != 0 ]]; then
            echo "Error Cluster ${CLUSTER} folder ${CLFOLDER} lacks a kustomization.yaml"
        fi
    done
    popd
done
Note2: I shortened those snippets to the core parts. In our case some things are a bit specific to how we implemented the execution of those checks in GitHub Actions workflows. Hope that's enough to convey the idea of what to check for.

11 November 2024

Vincent Bernat: Customize Caddy's plugins with Nix

Caddy is an open-source web server written in Go. It handles TLS certificates automatically and comes with a simple configuration syntax. Users can extend its functionality through plugins1 to add features like rate limiting, caching, and Docker integration. While Caddy is available in Nixpkgs, adding extra plugins is not simple.2 The compilation process needs Internet access, which Nix denies during build to ensure reproducibility. When trying to build the following derivation using xcaddy, a tool for building Caddy with plugins, it fails with this error: dial tcp: lookup proxy.golang.org on [::1]:53: connection refused.
{ pkgs }:
pkgs.stdenv.mkDerivation {
  name = "caddy-with-xcaddy";
  nativeBuildInputs = with pkgs; [ go xcaddy cacert ];
  unpackPhase = "true";
  buildPhase =
    ''
      xcaddy build --with github.com/caddy-dns/powerdns@v1.0.1
    '';
  installPhase = ''
    mkdir -p $out/bin
    cp caddy $out/bin
  '';
}
Fixed-output derivations are an exception to this rule and get network access during build. They need to specify their output hash. For example, the fetchurl function produces a fixed-output derivation:
{ stdenv, fetchurl }:
stdenv.mkDerivation rec {
  pname = "hello";
  version = "2.12.1";
  src = fetchurl {
    url = "mirror://gnu/hello/hello-${version}.tar.gz";
    hash = "sha256-jZkUKv2SV28wsM18tCqNxoCZmLxdYH2Idh9RLibH2yA=";
  };
}
To create a fixed-output derivation, you need to set the outputHash attribute. The example below shows how to output Caddy's source code, with some plugin enabled, as a fixed-output derivation using xcaddy and go mod vendor.
pkgs.stdenvNoCC.mkDerivation rec {
  pname = "caddy-src-with-xcaddy";
  version = "2.8.4";
  nativeBuildInputs = with pkgs; [ go xcaddy cacert ];
  unpackPhase = "true";
  buildPhase =
    ''
      export GOCACHE=$TMPDIR/go-cache
      export GOPATH="$TMPDIR/go"
      XCADDY_SKIP_BUILD=1 TMPDIR="$PWD" \
        xcaddy build v${version} --with github.com/caddy-dns/powerdns@v1.0.1
      (cd buildenv* && go mod vendor)
    '';
  installPhase = ''
    mv buildenv* $out
  '';
  outputHash = "sha256-F/jqR4iEsklJFycTjSaW8B/V3iTGqqGOzwYBUXxRKrc=";
  outputHashAlgo = "sha256";
  outputHashMode = "recursive";
}
With a fixed-output derivation, it is up to us to ensure the output is always the same.3 You can use this derivation to override the src attribute in pkgs.caddy:
pkgs.caddy.overrideAttrs (prev: {
  src = pkgs.stdenvNoCC.mkDerivation { /* ... */ };
  vendorHash = null;
  subPackages = [ "." ];
});
Check out the complete example in the GitHub repository. To integrate into a Flake, add github:vincentbernat/caddy-nix as an overlay:
{
  inputs = {
    nixpkgs.url = "nixpkgs";
    flake-utils.url = "github:numtide/flake-utils";
    caddy.url = "github:vincentbernat/caddy-nix";
  };
  outputs = { self, nixpkgs, flake-utils, caddy }:
    flake-utils.lib.eachDefaultSystem (system:
      let
        pkgs = import nixpkgs {
          inherit system;
          overlays = [ caddy.overlays.default ];
        };
      in
      {
        packages = {
          default = pkgs.caddy.withPlugins {
            plugins = [ "github.com/caddy-dns/powerdns@v1.0.1" ];
            hash = "sha256-F/jqR4iEsklJFycTjSaW8B/V3iTGqqGOzwYBUXxRKrc=";
          };
        };
      });
}

Update (2024-11) This flake won't work with Nixpkgs 24.05 or older because it relies on this commit to properly override the vendorHash attribute.


  1. This article uses the term plugins, though Caddy documentation also refers to them as modules since they are implemented as Go modules.
  2. This has been a feature request for quite some time. A proposed solution has been rejected. The one described in this article is a bit different and I have proposed it in another pull request.
  3. This is not perfect: if the source code produced by xcaddy changes, the hash would change and the build would fail.

10 November 2024

Dirk Eddelbuettel: inline 0.3.20: Mostly Maintenance

A new release of the inline package got to CRAN today, marking the first release in three and a half years. inline facilitates writing code in-line in simple string expressions or short files. The package was used quite extensively by Rcpp in the very early days before Rcpp Attributes arrived on the scene providing an even better alternative for its use cases. inline is still used by rstan and a number of other packages. This release was tickled by a change in r-devel just this week, and the corresponding "please fix or else" email I received this morning. R_NO_REMAP is now the default in r-devel, and while we had already converted most (old-style) calls into the API to use the now-mandatory Rf_ prefix, the package contained a few remaining cases in examples as well as one in code generation. The release also contains a helpful contributed PR making an error message a little clearer, plus several small and common maintenance changes around continuous integration, package layout and the repository. The NEWS extract follows and details the changes some more.

Changes in inline version 0.3.20 (2024-11-10)
  • Error message formatting is improved for compileCode (Alexis Derumigny in #25)
  • Switch to using Authors@R, other general packaging maintenance for continuous integration and repository
  • Use Rf_ in a handful of cases as R-devel now mandates it

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments etc. should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues). If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Reproducible Builds: Reproducible Builds in October 2024

Welcome to the October 2024 report from the Reproducible Builds project. Our reports attempt to outline what we've been up to over the past month, highlighting news items from elsewhere in tech where they are related. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website. Table of contents:
  1. Beyond bitwise equality for Reproducible Builds?
  2. Two Ways to Trustworthy at SeaGL 2024
  3. Number of cores affected Android compiler output
  4. On our mailing list
  5. diffoscope
  6. IzzyOnDroid passed 25% reproducible apps
  7. Distribution work
  8. Website updates
  9. Reproducibility testing framework
  10. Supply-chain security at Open Source Summit EU
  11. Upstream patches

Beyond bitwise equality for Reproducible Builds? Jens Dietrich and Tim White of Victoria University of Wellington, New Zealand, along with Behnaz Hassanshahi and Paddy Krishnan of Oracle Labs Australia, published a paper entitled "Levels of Binary Equivalence for the Comparison of Binaries from Alternative Builds":
The availability of multiple binaries built from the same sources creates new challenges and opportunities, and raises questions such as: "Does build A confirm the integrity of build B?" or "Can build A reveal a compromised build B?". To answer such questions requires a notion of equivalence between binaries. We demonstrate that the obvious approach based on bitwise equality has significant shortcomings in practice, and that there is value in opting for alternative notions. We conceptualise this by introducing levels of equivalence, inspired by clone detection types.
A PDF of the paper is freely available.

Two Ways to Trustworthy at SeaGL 2024 On Friday 8th November, Vagrant Cascadian will present a talk entitled "Two Ways to Trustworthy" at SeaGL in Seattle, WA. Founded in 2013, SeaGL is a free, grassroots technical summit dedicated to spreading awareness and knowledge about free source software, hardware and culture. Vagrant's talk:
[ ] delves into how two project[s] approaches fundamental security features through Reproducible Builds, Bootstrappable Builds, code auditability, etc. to improve trustworthiness, allowing independent verification; trustworthy projects require little to no trust. Exploring the challenges that each project faces due to very different technical architectures, but also contextually relevant social structure, adoption patterns, and organizational history should provide a good backdrop to understand how different approaches to security might evolve, with real-world merits and downsides.

Number of cores affected Android compiler output Fay Stegerman wrote that the cause of the Android toolchain bug from September's report that she reported to the Android issue tracker has been found and the bug has been fixed.
the D8 Java to DEX compiler (part of the Android toolchain) eliminated a redundant field load if running the class's static initialiser was known to be free of side effects, which ended up accidentally depending on the sharding of the input, which is dependent on the number of CPU cores used during the build.
To make it easier to understand the bug and the patch, Fay also made a small example to illustrate when and why the optimisation involved is valid.

On our mailing list On our mailing list this month:

diffoscope diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 279, 280, 281 and 282 to Debian:
  • Ignore errors when listing .ar archives (#1085257). [ ]
  • Don't try and test with systemd-ukify in the Debian stable distribution. [ ]
  • Drop Depends on the deprecated python3-pkg-resources (#1083362). [ ]
In addition, Jelle van der Waa added support for Unified Kernel Image (UKI) files. [ ][ ][ ] Furthermore, Vagrant Cascadian updated diffoscope in GNU Guix to version 282. [ ][ ]

IzzyOnDroid passed 25% reproducible apps The IzzyOnDroid project has reached a good milestone by reaching over 25% of the ~1,200 Android apps provided by their repository (of official APKs built by the original application developers) having been confirmed to be reproducible by a rebuilder.

Distribution work In Debian this month:
  • Holger Levsen uploaded devscripts version 2.24.2, including many changes to the debootsnap, debrebuild and reproducible-check scripts. This is the first time that debrebuild actually works (using sbuild's unshare backend). As part of this, Holger also fixed an issue in the reproducible-check script where a typo in the code led to incorrect results [ ]
  • Recently, a news entry was added to snapshot.debian.org's homepage, describing the recent changes that made the system stable again:
    The new server has no problems keeping up with importing the full archives on every update, as each run finishes comfortably in time before it's time to run again. [While] the new server is the one doing all the importing of updated archives, the HTTP interface is being served by both the new server and one of the VMs at LeaseWeb.
    The entry lists a number of specific updates surrounding the API endpoints and rate limiting.
  • Lastly, 12 reviews of Debian packages were added, 3 were updated and 18 were removed this month, adding to our knowledge about identified issues.
Elsewhere in distribution news, Zbigniew Jędrzejewski-Szmek performed another rebuild of Fedora 42 packages, with the headline result being that 91% of the packages are reproducible. Zbigniew also reported a reproducibility problem with QImage. Finally, in openSUSE, Bernhard M. Wiedemann published another report for that distribution.

Website updates There were an enormous number of improvements made to our website this month, including:
  • Alba Herrerias:
    • Improve consistency across distribution-specific guides. [ ]
    • Fix a number of links on the Contribute page. [ ]
  • Chris Lamb:
  • hulkoba
  • James Addison:
    • Huge and significant work on an (as-yet-unmerged) quickstart guide to be linked from the homepage [ ][ ][ ][ ][ ]
    • On the homepage, link directly to the Projects subpage. [ ]
    • Relocate dependency-drift notes to the Volatile inputs page. [ ]
  • Ninette Adhikari:
    • Add a brand new Success stories page that highlights the success stories of Reproducible Builds, showcasing real-world examples of projects shipping with verifiable, reproducible builds . [ ][ ][ ][ ][ ][ ]
  • Pol Dellaiera:
    • Update the website's README page for building the website under NixOS. [ ][ ][ ][ ][ ]
    • Add a new academic paper citation. [ ]
Lastly, Holger Levsen filed an extensive issue detailing a request to create an overview of recommendations and standards in relation to reproducible builds.

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In October, a number of changes were made by Holger Levsen, including:
  • Add a basic index.html for rebuilderd. [ ]
  • Update the nginx.conf configuration file for rebuilderd. [ ]
  • Document how to use a rescue system for Infomaniak's OpenStack cloud. [ ]
  • Update usage info for two particular nodes. [ ]
  • Fix up a version skew check to fix the name of the riscv64 architecture. [ ]
  • Update the rebuilderd-related TODO. [ ]
In addition, Mattia Rizzolo added a new IP address for the inos5 node [ ] and Vagrant Cascadian brought 4 virt nodes back online [ ].

Supply-chain security at Open Source Summit EU The Open Source Summit EU took place recently, and covered plenty of topics related to supply-chain security, including:

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

8 November 2024

Freexian Collaborators: Debian Contributions: October's report (by Anupa Ann Joseph)

Debian Contributions: 2024-10 Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

rebootstrap, by Helmut Grohne After significant changes earlier this year, the state of architecture cross bootstrap is normalizing again. More and more architectures manage to complete rebootstrap testing successfully again. Here are two examples of the kind of issues the bootstrap testing identifies. At some point, libpng1.6 would fail to cross build on musl architectures, failing to locate zlib, whereas it would succeed on other ones. Adding --debug-find to the cmake invocation eventually revealed that it would fail to search in /usr/lib/<triplet>, which is the default library path. This turned out to be a bug in cmake assuming that all linux systems use glibc. libpng1.6 also gained a baseline violation for powerpc and ppc64 by enabling the use of AltiVec there. The newt package would fail to cross build for many 32-bit architectures due to -Wincompatible-pointer-types, whereas it would succeed for armel and armhf. It turns out that this flag was turned into -Werror and it was compiling with a warning earlier. The actual problem is a difference in signedness between wchar_t and FriBidChar (aka uint32_t) and actually affects native building on i386.

Miscellaneous contributions
  • Helmut sent 35 patches for cross build failures.
  • Stefano Rivera uploaded the Python 3.13.0 final release.
  • Stefano continued to rebuild Python packages with C extensions using Python 3.13, to catch compatibility issues before the 3.13-add transition starts.
  • Stefano uploaded new versions of a handful of Python packages, including: dh-python, objgraph, python-mitogen, python-truststore, and python-virtualenv.
  • Stefano packaged a new release of mkdocs-macros-plugin, which required packaging a new Python package for Debian, python-super-collections (now in NEW review).
  • Stefano helped the mini-DebConf Online Brazil get video infrastructure up and running for the event. Unfortunately, Debian's online-DebConf setup has bitrotted over the last couple of years, and it eventually required new temporary Jitsi and Jibri instances.
  • Colin Watson fixed a number of autopkgtest failures to get ansible back into testing.
  • Colin fixed an ssh client failure in certain cases when using GSS-API key exchange, and added an integration test to ensure this doesn't regress in future.
  • Colin worked on the Python 3.13 transition, fixing problems related to it in 15 packages. This included upstream work in a number of packages (postgresfixture, python-asyncssh, python-wadllib).
  • Colin upgraded 41 Python packages to new upstream versions.
  • Carles improved po-debconf-manager: now it can create merge requests to Salsa automatically (created 17, new batch coming this month), imported almost all the packages with debconf translation templates whose VCS is Salsa (currently 449 imported), added statistics per package and language, improved command line interface options. Performed user support fixing different issues. Also prepared an abstract for the talk at MiniDebConf Toulouse.
  • Santiago Ruano Rincón continued the organization work for the DebConf 25 conference, to be held in Brest, France. Part of the work relates to the initial edits of the sponsoring brochure. Thanks to Benjamin Somers who finalized the French and English versions.
  • Raphaël forwarded a couple of zim and hamster bugs to the upstream developers, and tried to diagnose a delayed startup of gdm on his laptop (cf #1085633).
  • On behalf of the Debian Publicity Team, Anupa interviewed 7 women from the Debian community, old and new contributors. The interview was published in Bits from Debian.

2 November 2024

Russell Coker: More About the Yoga Gen3

Two months ago I bought a Thinkpad X1 Yoga Gen3 [1]. I'm still very happy with it; the screen is a great improvement over the FullHD screen on my previous Thinkpad. I have yet to discover what's the best resolution to have on a laptop if price isn't an issue, but it's at least 1440p for a 14" display, that's 210DPI. The latest Thinkpad X1 Yoga is the 7th gen and has up to 3840*2400 resolution on the internal display for 323DPI. Apple apparently uses the term Retina Display to mean something in the range of 250DPI to 300DPI, so my current laptop is below Retina while the most expensive new Thinkpads are above it. I did some tests on external displays and found that this Thinkpad, along with a Dell Latitude of the same form factor and about the same age, can only handle one 4K display on a Thunderbolt dock and one on HDMI. On Reddit u/Carlioso1234 pointed out this specs page which says it supports a maximum of 3 displays including the built in TFT [2]. The Thunderbolt/USB-C connection has a maximum resolution of 5120*2880 and the HDMI port has a maximum of 4K. The latest Yoga can support four displays total, which means 2*5K over Thunderbolt and one 4K over HDMI. It would be nice if someone made an 8000*2880 ultrawide display that looked like 2*5K displays when connected via Thunderbolt. It would also be nice if someone made a 32" 5K display; currently they all seem to be 27", and I've found that even for 4K resolution 32" is better than 27". With the typical configuration of Linux and the BIOS, the Yoga Gen3 will have its touch screen stop working after suspend. I have confirmed this for stylus use, but as the finger-touch functionality is broken I couldn't confirm that. On r/thinkpad u/p9k told me how to fix this problem [3]. I had to set the BIOS to "Win 10 Sleep" aka Hybrid sleep and then put the following in /etc/systemd/system/thinkpad-wakeup-config.service :
# https://www.reddit.com/r/thinkpad/comments/1blpy20/comment/kw7se2l/?context=3
[Unit]
Description=Workarounds for sleep wakeup source for Thinkpad X1 Yoga 3
After=sysinit.target
After=systemd-modules-load.service
[Service]
Type=oneshot
ExecStart=/bin/sh -c "echo 'enabled' > /sys/devices/platform/i8042/serio0/power/wakeup"
ExecStart=/bin/sh -c "echo 'enabled' > /sys/devices/platform/i8042/serio1/power/wakeup"
ExecStart=/bin/sh -c "echo 'LID' > /proc/acpi/wakeup"
[Install]
WantedBy=multi-user.target
Now it works fine, for stylus at least. I still get kernel error messages like the following which don't seem to cause problems:
wacom 0003:056A:5146.0005: wacom_idleprox_timeout: tool appears to be hung in-prox. forcing it out.
When it wasn't working I got the above but also kernel error messages like:
wacom 0003:056A:5146.0005: wacom_wac_queue_insert: kfifo has filled, starting to drop events
This change affected the way suspend etc. operate. Now when I connect the laptop to power it will leave suspend mode. I've configured KDE to suspend when the lid is closed and there's no monitor connected.

1 November 2024

Colin Watson: Free software activity in October 2024

Almost all of my Debian contributions this month were sponsored by Freexian. You can also support my work directly via Liberapay. Ansible: I noticed that Ansible had fallen out of Debian testing due to autopkgtest failures. This seemed like a problem worth fixing: in common with many other people, we use Ansible for configuration management at Freexian, and it probably wouldn't make our sysadmins too happy if they upgraded to trixie after its release and found that Ansible was gone. The problems here were really just slogging through test failures in both the ansible-core and ansible packages, but their test suites are large and take a while to run so this took some time. I was able to contribute a few small fixes to various upstreams in the process: This should now get back into testing tomorrow. OpenSSH: Martin-Éric Racine reported that ssh-audit didn't list the ext-info-s feature as being available in Debian's OpenSSH 9.2 packaging in bookworm, contrary to what OpenSSH upstream said on their specifications page at the time. I spent some time looking into this and realized that upstream was mistakenly saying that implementations of ext-info-c and ext-info-s were added at the same time, while in fact ext-info-s was added rather later. ssh-audit now has clearer output, and the OpenSSH maintainers have corrected their specifications page. I looked into a report of an ssh failure in certain cases when using GSS-API key exchange (which is a Debian patch). Once again, having integration tests was a huge win here: the affected scenario is quite a fiddly one, but I was able to set it up in the test, and thereby make sure it doesn't regress in future. It still took me a couple of hours to get all the details right, but in the past this sort of thing took me much longer with a much lower degree of confidence that the fix was correct. On upstream's advice, I cherry-picked some key exchange fixes needed for big-endian architectures. Python team: I packaged python-evalidate, needed for a new upstream version of buildbot. The Python 3.13 transition rolls on. I fixed problems related to it in htmlmin, humanfriendly, postgresfixture (contributed upstream), pylint, python-asyncssh (contributed upstream), python-oauthlib, python3-simpletal, quodlibet, zope.exceptions, and zope.interface. A trickier Python 3.13 issue involved the cgi module. Years ago I ported zope.publisher to the multipart module because cgi.FieldStorage was broken in some situations, and as a result I got a recommendation into Python's "dead batteries" PEP 594. Unfortunately there turns out to be a name conflict between multipart and python-multipart on PyPI; python-multipart upstream has been working to disentangle this, though we still need to work out what to do in Debian. All the same, I needed to fix python-wadllib and multipart seemed like the best fit; I contributed a port upstream and temporarily copied multipart into Debian's python-wadllib source package to allow its tests to pass. I'll come back and fix this properly once we sort out the multipart vs. python-multipart packaging (a short sketch of the multipart API appears at the end of this post). tzdata moved some timezone definitions to tzdata-legacy, which has broken a number of packages. I added tzdata-legacy build-dependencies to alembic and python-icalendar to deal with this in those packages, though there are still some other instances of this left. I tracked down an nltk regression that caused build failures in many other packages.
I fixed Rust crate versioning issues in pydantic-core, python-bcrypt, and python-maturin (mostly fixed by Peter Michael Green and Jelmer Vernooĳ, but it needed a little extra work). I fixed other build failures in entrypoints, mayavi2, python-pyvmomi (mostly fixed by Alexandre Detiste, but it needed a little extra work), and python-testing.postgresql (ditto). I fixed python3-simpletal to tolerate future versions of dh-python that will drop their dependency on python3-setuptools. I fixed broken symlinks in python-treq. I removed (build-)depends on python3-pkg-resources from alembic, autopep8, buildbot, celery, flufl.enum, flufl.lock, python-public, python-wadllib (contributed upstream), pyvisa, routes, vulture, and zodbpickle (contributed upstream). I upgraded astroid, asyncpg (fixing a Python 3.13 failure and a build failure), buildbot (noticing an upstream test bug in the process), dnsdiag, frozenlist, netmiko (fixing a Python 3.13 failure), psycopg3, pydantic-settings, pylint, python-asyncssh, python-bleach, python-btrees, python-cytoolz, python-django-pgtrigger, python-django-test-migrations, python-gssapi, python-icalendar, python-json-log-formatter, python-pgbouncer, python-pkginfo, python-plumbum, python-stdlib-list, python-tokenize-rt, python-treq (fixing a Python 3.13 failure), python-typeguard, python-webargs (fixing a build failure), pyupgrade, pyvisa, pyvisa-py (fixing a Python 3.13 failure), toolz, twisted, vulture, waitress (fixing CVE-2024-49768 and CVE-2024-49769), wtf-peewee, wtforms, zodbpickle, zope.exceptions, zope.interface, zope.proxy, zope.security, and zope.testrunner to new upstream versions. I tried to fix a regression in python-scruffy, but I need testing feedback. I requested removal of python-testing.mysqld.
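As an aside, here is a minimal, hedged sketch of the multipart API that the port mentioned above moved to; it is not taken from zope.publisher or python-wadllib, and the WSGI environ it builds is purely synthetic for illustration.
# Minimal sketch: parsing a form with the "multipart" module instead of the
# removed cgi.FieldStorage (Python 3.13 dropped the cgi module per PEP 594).
# The environ below is hand-built purely for illustration.
import io
from multipart import parse_form_data

body = (b"--boundary\r\n"
        b'Content-Disposition: form-data; name="greeting"\r\n\r\n'
        b"hello\r\n"
        b"--boundary--\r\n")
environ = {
    "REQUEST_METHOD": "POST",
    "CONTENT_TYPE": "multipart/form-data; boundary=boundary",
    "CONTENT_LENGTH": str(len(body)),
    "wsgi.input": io.BytesIO(body),
}
# parse_form_data returns two MultiDicts: plain form fields and uploaded files.
forms, files = parse_form_data(environ)
print(forms["greeting"])  # "hello"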

27 October 2024

Enrico Zini: Typing decorators for class members with optional arguments

This looks straightforward and is far from it. I expect tool support will improve in the future. Meanwhile, this blog post serves as a step by step explanation for what is going on in code that I'm about to push to my team.
Let's take this relatively straightforward Python code. It has a function printing an int, and a decorator that makes its argument optional, taking it from a global default if missing:
from unittest import mock
default = 42
def with_default(f):
    def wrapped(self, value=None):
        if value is None:
            value = default
        return f(self, value)
    return wrapped
class Fiddle:
    @with_default
    def print(self, value):
        print("Answer:", value)
fiddle = Fiddle()
fiddle.print(12)
fiddle.print()
def mocked(self, value=None):
    print("Mocked answer:", value)
with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    fiddle.print()
It works nicely as expected:
$ python3 test0.py
Answer: 12
Answer: 42
Mocked answer: 12
Mocked answer: None
It lacks functools.wraps and typing, though. Let's add them. Adding functools.wraps: after adding a simple @functools.wraps, mock unexpectedly stops working:
# python3 test1.py
Answer: 12
Answer: 42
Mocked answer: 12
Traceback (most recent call last):
  File "/home/enrico/lavori/freexian/tt/test1.py", line 42, in <module>
    fiddle.print()
  File "<string>", line 2, in print
  File "/usr/lib/python3.11/unittest/mock.py", line 186, in checksig
    sig.bind(*args, **kwargs)
  File "/usr/lib/python3.11/inspect.py", line 3211, in bind
    return self._bind(args, kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/inspect.py", line 3126, in _bind
    raise TypeError(msg) from None
TypeError: missing a required argument: 'value'
This is the new code, with explanations and a fix:
# Introduce functools
import functools
from unittest import mock
default = 42
def with_default(f):
    @functools.wraps(f)
    def wrapped(self, value=None):
        if value is None:
            value = default
        return f(self, value)
    # Fix:
    # del wrapped.__wrapped__
    return wrapped
class Fiddle:
    @with_default
    def print(self, value):
        assert value is not None
        print("Answer:", value)
fiddle = Fiddle()
fiddle.print(12)
fiddle.print()
def mocked(self, value=None):
    print("Mocked answer:", value)
with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    # mock's autospec uses inspect.signature, which follows __wrapped__ set
    # by functools.wraps, which points to a wrong signature: the idea that
    # value is optional is now lost
    fiddle.print()
Adding typing: for simplicity, from now on let's change Fiddle.print to match its wrapped signature:
      # Give up with making value not optional, to simplify things :(
      def print(self, value: int | None = None) -> None:
          assert value is not None
          print("Answer:", value)
Typing with ParamSpec
# Introduce typing, try with ParamSpec
import functools
from typing import TYPE_CHECKING, ParamSpec, Callable
from unittest import mock
default = 42
P = ParamSpec("P")
def with_default(f: Callable[P, None]) -> Callable[P, None]:
    # Using ParamSpec we forward arguments, but we cannot use them!
    @functools.wraps(f)
    def wrapped(self, value: int | None = None) -> None:
        if value is None:
            value = default
        return f(self, value)
    return wrapped
class Fiddle:
    @with_default
    def print(self, value: int | None = None) -> None:
        assert value is not None
        print("Answer:", value)
mypy complains inside the wrapper, because while we forward arguments we don't constrain them, so we can't be sure there is a value in there:
test2.py:17: error: Argument 2 has incompatible type "int"; expected "P.args"  [arg-type]
test2.py:19: error: Incompatible return value type (got "_Wrapped[P, None, [Any, int | None], None]", expected "Callable[P, None]")  [return-value]
test2.py:19: note: "_Wrapped[P, None, [Any, int | None], None].__call__" has type "Callable[[Arg(Any, 'self'), DefaultArg(int | None, 'value')], None]"
Typing with Callable We can use explicit Callable argument lists:
# Introduce typing, try with Callable
import functools
from typing import TYPE_CHECKING, Callable, TypeVar
from unittest import mock
default = 42
A = TypeVar("A")
# Callable cannot represent the fact that the argument is optional, so now mypy
# complains if we try to omit it
def with_default(f: Callable[[A, int | None], None]) -> Callable[[A, int | None], None]:
    @functools.wraps(f)
    def wrapped(self: A, value: int | None = None) -> None:
        if value is None:
            value = default
        return f(self, value)
    return wrapped
class Fiddle:
    @with_default
    def print(self, value: int | None = None) -> None:
        assert value is not None
        print("Answer:", value)
if TYPE_CHECKING:
    reveal_type(Fiddle.print)
fiddle = Fiddle()
fiddle.print(12)
# !! Too few arguments for "print" of "Fiddle"  [call-arg]
fiddle.print()
def mocked(self, value=None):
    print("Mocked answer:", value)
with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    fiddle.print()
Now mypy complains when we try to omit the optional argument, because Callable cannot represent optional arguments:
test3.py:32: note: Revealed type is "def (test3.Fiddle, Union[builtins.int, None])"
test3.py:37: error: Too few arguments for "print" of "Fiddle"  [call-arg]
test3.py:46: error: Too few arguments for "print" of "Fiddle"  [call-arg]
typing's documentation says:
Callable cannot express complex signatures such as functions that take a variadic number of arguments, overloaded functions, or functions that have keyword-only parameters. However, these signatures can be expressed by defining a Protocol class with a __call__() method:
Let's do that! Typing with Protocol, take 1
# Introduce typing, try with Protocol
import functools
from typing import TYPE_CHECKING, Protocol, TypeVar, Generic, cast
from unittest import mock
default = 42
A = TypeVar("A", contravariant=True)
class Printer(Protocol, Generic[A]):
    def __call__(_, self: A, value: int | None = None) -> None:
        ...
def with_default(f: Printer[A]) -> Printer[A]:
    @functools.wraps(f)
    def wrapped(self: A, value: int | None = None) -> None:
        if value is None:
            value = default
        return f(self, value)
    return cast(Printer, wrapped)
class Fiddle:
    # function has a __get__ method to generate bound versions of itself
    # the Printer protocol does not define it, so mypy is now unable to type
    # the bound method correctly
    @with_default
    def print(self, value: int | None = None) -> None:
        assert value is not None
        print("Answer:", value)
if TYPE_CHECKING:
    reveal_type(Fiddle.print)
fiddle = Fiddle()
# !! Argument 1 to "__call__" of "Printer" has incompatible type "int"; expected "Fiddle"
fiddle.print(12)
fiddle.print()
def mocked(self, value=None):
    print("Mocked answer:", value)
with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    fiddle.print()
New mypy complaints:
test4.py:41: error: Argument 1 to "__call__" of "Printer" has incompatible type "int"; expected "Fiddle"  [arg-type]
test4.py:42: error: Missing positional argument "self" in call to "__call__" of "Printer"  [call-arg]
test4.py:50: error: Argument 1 to "__call__" of "Printer" has incompatible type "int"; expected "Fiddle"  [arg-type]
test4.py:51: error: Missing positional argument "self" in call to "__call__" of "Printer"  [call-arg]
What happens with class methods is that the function object has a __get__ method that generates a bound version of itself. Our Printer protocol does not define it, so mypy is unable to type the bound method correctly (a standalone demo of this descriptor behaviour follows at the end of this post). Typing with Protocol, take 2: so... we add the function descriptor methods to our Protocol! A lot of this is taken from this discussion.
# Introduce typing, try with Protocol, harder!
import functools
from typing import TYPE_CHECKING, Protocol, TypeVar, Generic, cast, overload, Union
from unittest import mock
default = 42
A = TypeVar("A", contravariant=True)
# We now produce typing for the whole function descriptor protocol
#
# See https://github.com/python/typing/discussions/1040
class BoundPrinter(Protocol):
    """Protocol typing for bound printer methods."""
    def __call__(_, value: int | None = None) -> None:
        """Bound signature."""
class Printer(Protocol, Generic[A]):
    """Protocol typing for printer methods."""
    # noqa annotations are overrides for flake8 being confused, giving either D418:
    # Function/ Method decorated with @overload shouldn't contain a docstring
    # or D105:
    # Missing docstring in magic method
    #
    # F841 is for vulture being confused:
    #   unused variable 'objtype' (100% confidence)
    @overload
    def __get__(  # noqa: D105
        self, obj: A, objtype: type[A] | None = None  # noqa: F841
    ) -> BoundPrinter:
        ...
    @overload
    def __get__(  # noqa: D105
        self, obj: None, objtype: type[A] | None = None  # noqa: F841
    ) -> "Printer[A]":
        ...
    def __get__(
        self, obj: A | None, objtype: type[A] | None = None  # noqa: F841
    ) -> Union[BoundPrinter, "Printer[A]"]:
        """Implement function descriptor protocol for class methods."""
    def __call__(_, self: A, value: int | None = None) -> None:
        """Unbound signature."""
def with_default(f: Printer[A]) -> Printer[A]:
    @functools.wraps(f)
    def wrapped(self: A, value: int | None = None) -> None:
        if value is None:
            value = default
        return f(self, value)
    return cast(Printer, wrapped)
class Fiddle:
    # function has a __get__ method to generate bound versions of itself;
    # the Printer protocol now defines __get__ as well, so mypy can type the
    # bound method correctly
    @with_default
    def print(self, value: int | None = None) -> None:
        assert value is not None
        print("Answer:", value)
fiddle = Fiddle()
fiddle.print(12)
fiddle.print()
def mocked(self, value=None):
    print("Mocked answer:", value)
with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    fiddle.print()
It works! It's typed! And mypy is happy!
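As a postscript (not part of the original walkthrough), the descriptor behaviour that the Printer protocol mirrors can be seen in isolation with a plain function; this is a minimal standalone sketch.
# Plain functions are descriptors: attribute access on an instance calls
# __get__ on the function stored in the class dict, producing a bound method.
class C:
    def method(self):
        return f"bound to {self!r}"

obj = C()
raw = C.__dict__["method"]      # the raw function object, no binding yet
bound = raw.__get__(obj, C)     # what obj.method does under the hood
print(type(raw).__name__)       # function
print(type(bound).__name__)     # method
print(bound() == obj.method())  # True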

25 October 2024

Reproducible Builds (diffoscope): diffoscope 282 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 282. This version includes the following changes:
[ Chris Lamb ]
* Ignore errors when listing .ar archives. (Closes: #1085257)
* Update copyright years.
You can find out more by visiting the project homepage.

21 October 2024

Sven Hoexter: Terraform: Making Use of Precondition Checks

I'm in the unlucky position of having to deal with GitHub. Thus I've a terraform module in a project which deals with populating organization secrets in our GitHub organization, and assigning repositories access to those secrets. Since the GitHub terraform provider internally works mostly with repository IDs, not slugs (the human-readable organization/repo format), we have to do some mapping in between. In my case it looks like this:
#tfvars Input for Module
org_secrets = {
    "SECRET_A" = {
        repos = [
            "infra-foo",
            "infra-baz",
            "deployment-foobar",
        ]
    }
    "SECRET_B" = {
        repos = [
            "job-abc",
            "job-xyz",
        ]
    }
}
# Module Code
/*
Limitation: The GH search API which is queried returns at most 1000
results. Thus whenever we reach that limit this approach will no longer work.
The query is also intentionally limited to internal repositories right now.
*/
data "github_repositories" "repos"  
    query           = "org:myorg archived:false -is:public -is:private"
    include_repo_id = true
 
/*
The properties of the github_repositories.repos data source queried
above contain only lists. Thus we have to manually establish a mapping
between the repository names we need as a lookup key later on, and the
repository id we got in another list from the search query above.
*/
locals {
    # Assemble the set of repository names we need repo_ids for
    repos = toset(flatten([for v in var.org_secrets : v.repos]))
    # Walk through all names in the query result list and check
    # if they're also in our repo set. If yes add the repo name -> id
    # mapping to our resulting map
    repos_and_ids = {
        for i, v in data.github_repositories.repos.names : v => data.github_repositories.repos.repo_ids[i]
        if contains(local.repos, v)
    }
}
resource "github_actions_organization_secret" "org_secrets"  
    for_each        = var.org_secrets
    secret_name     = each.key
    visibility      = "selected"
    # the logic how the secret value is sourced is omitted here
    plaintext_value = data.xxx
    selected_repository_ids = [
        for r in each.value.repos : local.repos_and_ids[r]
        if can(local.repos_and_ids[r])
    ]
 
Now if we do something bad, such as deleting a repository and forgetting to remove it from the configuration for the module, we receive an error message saying that a (numeric) repository ID could not be found. That is pretty much useless for the average user, because you have to figure out which repository is still in the configuration list but got deleted recently. Luckily, since version 1.2 Terraform supports precondition checks, which we can use in an output block to report which repository is missing. What we need is the set of missing repositories and the validation condition:
locals {
    # Debug facility in combination with an output and precondition check.
    # Here we can report which repositories we still have in our configuration
    # but no longer get as a result from the data provider query
    missing_repos = setsubtract(local.repos, data.github_repositories.repos.names)
}
# Debug facility - If we can not find every repository in our
# search query result, report those repos as an error
output "missing_repos"  
    value = local.missing_repos
    precondition  
        condition     = length(local.missing_repos) == 0
        error_message = format("Repos in config missing from resultset: %v", local.missing_repos)
     
 
Now you only have to be aware that GitHub is GitHub, that the TF provider has open bugs and is not supported by GitHub, and that you will encounter inconsistent results. But it works, even when your terraform apply fails that way.

15 October 2024

Jonathan Dowland: Whisper (pipewire tool)

It's time to mint a new blog tag: I want to pour praise on some software I recently discovered. I'm not up to speed on Pipewire, the latest piece of Linux plumbing related to audio, nor on how it relates to the other bits (Pulseaudio, ALSA, JACK, what else?). I recently tried to plug something into the line-in port on my external audio interface, and wished to hear it on the machine. A simple task, you'd think. I'll refrain from writing about the stuff that didn't work well and focus on the thing that did: a little tool called Whisper, which is designed to let you listen to a microphone through your speakers.
Whisper's UI. Screenshot from upstream.
Whisper does a great job of hiding the complexity of what lies beneath and asking two questions: which microphone, and which speakers? In my case this alone was not quite enough, as I was presented with two identically-named "SB Live Extigy" "microphone" devices, but that's easily resolved with trial and error. More stuff like this please!

Iustin Pop: Optical media lifetime - one data point

Way back (more than 10 years ago) when I was doing DVD-based backups, I knew that normal DVDs/Blu-Rays are not a long-term archival solution, and that if I was serious about doing optical media backups, I needed to switch to M-Disc. I actually bought a (small) stack of M-Disc Blu-Rays, but never used them. I then switched to other backup solutions, and forgot about the whole topic. Until, this week, while sorting stuff, I happened upon a set of DVD backups from a range of years, and was very curious whether they are still readable after many years. And, to my surprise, there were no surprises! Went backward in time, and: I also found a stack of dual-layer DVD+R from 2012-2014, some for sure Verbatim, and some unmarked (they were intended to be printed on), but likely Verbatim as well. All worked just fine. Just that, even at ~8GiB per disk, backing up raw photo files took way too many disks, even in 2014. At this point I was happy that all 12+ DVDs I found, ranging from 10 to 14 years, are all good. Then I found a batch of 3 CDs! Here the results were mixed: I think the takeaway is that for all explicitly selected media - TDK, JVC and Verbatim - they hold for 10-20 years. Valid reads from summer 2003 are mind-boggling for me, for (IIRC) organic media - not sure about the TDK metallic substrate. And when you just pick whatever ("Creation"), well, the results are mixed. Note that in all this, it was about CDs and DVDs. I have no idea how Blu-Rays behave, since I don't think I ever wrote a Blu-Ray. In any case, this was surprising to me, and it makes me rethink my backup options a bit. Sizes from 25 to 100GB for Blu-Rays are reasonable for most critical data. And they're WORM, as opposed to most LTO media, which is re-writable (and to some small extent, prone to accidental wiping). Now, I should check those M-Discs to see if they can still be written to, after 10 years

13 October 2024

Andy Simpkins: The state of the art

A long time ago... A long time ago a computer was a woman (I think almost exclusively a woman, not a man) who was employed to do a lot of repetitive mathematics, typically for accounting and stock / order processing. Then along came Lyons, who deployed an artificial computer to perform the same task, only with fewer errors in less time. Modern day computing was born: we had entered the age of the Digital Computer. These computers were large and consumed huge amounts of power, but were precise, and gave repeatable, verifiable results. Over time the huge mainframe digital computers have shrunk in size, increased in performance, and consume far less power, so much so that they often didn't need the specialist CFC-based, refrigerated liquid cooling systems of their bigger mainframe counterparts, only requiring forced air flow, and occasionally just convection cooling. They shrank so far and became cheap enough that the Personal Computer came to be, replacing the mainframe and its time-shared resources with a machine per user. Desktop or even portable laptop computers were everywhere. We networked them together, so now we can share information around the office; a few computers were given the specialist task of being available all the time so we could share documents, or host databases. These servers were basically PCs designed to operate 24/7, usually more powerful than their desktop counterparts (or at least with faster storage and networking). Next we joined these networks together and the internet was born. The dream of a paperless office might actually become realised: we can now send email (and documents) from one organisation (or individual) to another via email. We can make our specialist computers' applications available outside just the office, and web servers / web apps come of age. Fast forward a few years and all of a sudden we need huge data-halls filled with rack-scale machines augmented with exotic GPUs and NPUs, again with refrigerated liquid cooling, all to do the same task that we were doing previously without the magical buzzword that has been named AI; because we all need another dot-com bubble or blockchain bandwagon to jump aboard. Our AI-enabled searches take slightly longer, consume magnitudes more power, and best of all the results we are given may or may not be correct. Progress: less precise answers, taking longer, consuming more power, without any verification, and often giving a different result if you repeat your question. AND we still need a personal computing device to access this wondrous thing. Remind me again why we are here? (Timelines and huge swathes of history simply ignored to make an attempted comic point; this is intended to make a point and not be scholarly work.)

10 October 2024

Freexian Collaborators: Debian Contributions: Packaging Pydantic v2, Reworking of glib2.0 for cross bootstrap, Python archive rebuilds and more! (by Anupa Ann Joseph)

Debian Contributions: 2024-09 Contributing to Debian is part of Freexian s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Pydantic v2, by Colin Watson Pydantic is a useful library for validating data in Python using type hints: Freexian uses it in a number of projects, including Debusine. Its Debian packaging had been stalled at 1.10.17 in testing for some time, partly due to needing to make sure everything else could cope with the breaking changes introduced in 2.x, but mostly due to needing to sort out packaging of its new Rust dependencies. Several other people (notably Alexandre Detiste, Andreas Tille, Drew Parsons, and Timo Röhling) had made some good progress on this, but nobody had quite got it over the line and it seemed a bit stuck. Colin upgraded a few Rust libraries to new upstream versions, packaged rust-jiter, and chased various failures in other packages. This eventually allowed getting current versions of both pydantic-core and pydantic into testing. It should now be much easier for us to stay up to date routinely.
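For readers unfamiliar with the library, here is a minimal sketch of the type-hint-driven validation pydantic v2 provides; the model below is purely illustrative and is not taken from Debusine or the Debian packaging work.
# Minimal pydantic v2 usage sketch (hypothetical model, for illustration only).
from pydantic import BaseModel, ValidationError

class Upload(BaseModel):
    package: str
    version: str
    binary: bool = False

# Values are validated and coerced against the type hints.
u = Upload(package="pydantic-core", version="2.0", binary="true")
print(u.binary)  # True: the string "true" is coerced to a bool

try:
    Upload(package="pydantic-core")  # missing required field "version"
except ValidationError as e:
    print(e.error_count(), "validation error")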

Reworking of glib2.0 for cross bootstrap, by Helmut Grohne Simon McVittie (not affiliated with Freexian) earlier restructured libglib2.0-dev such that it would absorb more functionality and in particular provide tools for working with .gir files. Those tools practically require being run for their host architecture (in practice this means running under qemu-user), which is at odds with the requirements of architecture cross bootstrap. The qemu requirement was expressed in package dependencies and also made people unhappy attempting to use libglib2.0-dev for i386 on amd64 without resorting to qemu. The use of qemu in architecture bootstrap is particularly problematic as it tends to not be ready at the time bootstrapping is needed. As a result, Simon proposed and implemented the introduction of a libgio-2.0-dev package providing a subset of libglib2.0-dev that does not require qemu. Packages should continue to use libglib2.0-dev in their Build-Depends unless involved in architecture bootstrap. Helmut reviewed and tested the implementation and integrated the necessary changes into rebootstrap. He also prepared a patch for libverto to use the new package and proposed adding forward compatibility to glib2.0. Helmut continued working on adding cross-exe-wrapper to architecture-properties and implemented autopkgtests, which were later improved by Simon. The cross-exe-wrapper package now provides a generic mechanism to run a program for a different architecture, using qemu only when needed. For instance, a dependency on cross-exe-wrapper:i386 provides an i686-linux-gnu-cross-exe-wrapper program that can be used to wrap an ELF executable for the i386 architecture. When installed on amd64 or i386 it will skip installing or running qemu, but for other architectures qemu will be used automatically. This facility can be used to support cross building with targeted use of qemu in cases where running host code is unavoidable, as is the case for GObject introspection. This concludes the joint work with Simon and Niels Thykier on glib2.0 and architecture-properties, resolving known architecture bootstrap regressions arising from the glib2.0 refactoring earlier this year.

Analyzing binary package metadata, by Helmut Grohne As Guillem Jover (not affiliated with Freexian) continues to work on adding metadata tracking to dpkg, the question arises how this affects existing packages. The dedup.debian.net infrastructure provides an easy playground to answer such questions, so Helmut gathered file metadata from all binary packages in unstable and performed an explorative analysis. Some results include: Guillem also performed a cursory analysis and reported other problem categories such as mismatching directory permissions for directories installed by multiple packages and thus gained a better understanding of what consistency checks dpkg can enforce.

Python archive rebuilds, by Stefano Rivera Last month Stefano started to write some tooling to do large-scale rebuilds in debusine, starting with finding packages that had already started to fail to build from source (FTBFS) due to the removal of setup.py test. This month, Stefano did some more rebuilds, starting with experimental versions of dh-python. During the Python 3.12 transition, we had added a dependency on python3-setuptools to dh-python, to ease the transition. Python 3.12 removed distutils from the stdlib, but many packages were still expecting it to be available. Setuptools contains a version of distutils, and dh-python was a convenient place to depend on setuptools for most package builds. This dependency was never meant to be permanent. A rebuild without it resulted in mass-filing about 340 bugs (and around 80 more by mistake). A new feature in Python 3.12 was to have unittest's test runner exit with a non-zero return code if no tests were run. We added this feature to be able to detect tests that are not being discovered by mistake. We are ignoring this failure, as we wouldn't want to suddenly cause hundreds of packages to fail to build if they have no tests. Stefano did a rebuild to see how many packages were affected, and found that around 1000 were. The Debian Python community has not come to a conclusion on how to move forward with this. As soon as Python 3.13 release candidate 2 was available, Stefano did a rebuild of the Python packages in the archive against it. This was a more complex rebuild than the others, as it had to be done in stages. Many packages need other Python packages at build time, typically to run tests. So transitions like this involve some manual bootstrapping, followed by several rounds of builds. Not all packages could be tested, as not all their dependencies support 3.13 yet. The result was around 100 bugs in packages that need work to support Python 3.13. Many other packages will need additional work to properly support Python 3.13, but being able to build (and run tests) is an important first step.
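As a small illustration of the distutils/setuptools situation described above (an assumption-laden sketch, not Stefano's actual tooling): on Python 3.12+ the stdlib no longer ships distutils, and importing it only works when setuptools is installed and provides its shim.
# Quick probe for the PEP 632 distutils removal on Python 3.12+.
# If setuptools is installed, its import hook makes "import distutils" resolve
# to setuptools' bundled copy; otherwise the import fails outright.
import sys

try:
    import distutils
    print("distutils provided by:", distutils.__file__)
except ModuleNotFoundError:
    print("no distutils on Python %d.%d;" % sys.version_info[:2],
          "build-depend on python3-setuptools explicitly if it is needed")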

Miscellaneous contributions
  • Carles prepared the update of python-pyaarlo package to a new upstream release.
  • Carles worked on updating python-ring-doorbell to a new upstream release. This is unfinished, pending the packaging of a new dependency, python3-firebase-messaging (RFP #1082958), and its dependency python3-http-ece (RFP #1083020).
  • Carles improved po-debconf-manager. The main new feature is that it can open Salsa merge requests. The aim is to have it working end to end for a lightning talk at MiniDebConf Toulouse (November) and to get feedback from the wider public on this proof of concept.
  • Carles helped one translator to use po-debconf-manager (added compatibility for bullseye, fixed other issues) and reviewed 17 package templates.
  • Colin upgraded the OpenSSH packaging to 9.9p1.
  • Colin upgraded the various YubiHSM packages to new upstream versions, enabled more tests, fixed yubihsm-shell build failures on some 32-bit architectures, made yubihsm-shell build reproducibly, and fixed yubihsm-connector to apply udev rules to existing devices when the package is installed. As usual, bookworm-backports is up to date with all these changes.
  • Colin fixed quite a bit of fallout from setuptools 72.0.0 removing setup.py test, backported a large upstream patch set to make buildbot work with SQLAlchemy 2.0, and upgraded 25 other Python packages to new upstream versions.
  • Enrico worked with Jakob Haufe to get him up to speed for managing sso.debian.org
  • Raphaël removed spam entries in the list of teams on tracker.debian.org (see #1080446), and he applied a few external contributions, fixing a rendering issue and replacing the DDPO link with a more useful alternative. He also gave feedback on a couple of merge requests that required more work. As part of the analysis of the underlying problem, he suggested to the ftpmasters (via #1083068) to auto-reject packages having the too-many-contacts lintian error, and he raised the severity of #1076048 to serious to actually have that 4-year-old bug fixed.
  • Raphaël uploaded zim and hamster-time-tracker to fix issues with Python 3.12 getting rid of setuptools. He also uploaded a new gnome-shell-extension-hamster to cope with the upcoming transition to GNOME 47.
  • Helmut sent seven patches and sponsored one upload for cross build failures.
  • Helmut uploaded a Nagios/Icinga plugin check-smart-attributes for monitoring the health of physical disks.
  • Helmut collaborated on sbuild reviewing and improving a MR for refactoring the unshare backend.
  • Helmut sent a patch fixing coinstallability of gcc-defaults.
  • Helmut continued to monitor the evolution of the /usr-move. With more and more key packages such as libvirt or fuse3 fixed, we're moving into the boring long tail of the transition.
  • Helmut proposed updating the meson buildsystem in debhelper to use env2mfile.
  • Helmut continued to update patches maintained in rebootstrap. Due to the work on glib2.0 above, rebootstrap moves a lot further, but still fails for any architecture.
  • Santiago reviewed some merge requests in Salsa CI, such as !478, proposed by Otto to extend the information about how to use additional runners in the pipeline, and !518, proposed by Ahmed to add support for Ubuntu images, which will help to test how some Debian packages, including the complex MariaDB, are built on Ubuntu. Santiago also prepared !545, which will make the reprotest job more consistent with the result seen on reproducible-builds.
  • Santiago worked on different tasks related to DebConf 25. Especially he drafted the fundraising brochure (which is almost ready).
  • Thorsten Alteholz uploaded package libcupsfilter to fix the autopkgtest and a dependency problem of this package. After package splix was abandoned by upstream and OpenPrinting.org adopted its maintenance, Thorsten uploaded their first release.
  • Anupa published posts on the Debian Administrators group in LinkedIn and moderated the group, one of the tasks of the Debian Publicity Team.
  • Anupa helped organize DebUtsav 2024. It had over 100 attendees, with hands-on sessions on making initial contributions to the Linux kernel, Debian packaging, submitting documentation to the Debian wiki, and assisting with Debian installations.
