
12 May 2025

Sergio Talens-Oliag: Playing with vCluster

After my previous posts related to Argo CD (one about argocd-autopilot and another with some usage examples) I started to look into Kluctl (I also plan to review Flux, but I'm more interested in the kluctl approach right now). While reading an entry on the project blog about Cluster API I somehow ended up on the vCluster site and decided to give it a try, as it can be a valid way of providing developers with on-demand clusters for debugging or running CI/CD tests before deploying things to shared clusters, or even of having multiple debugging virtual clusters on a local machine with only one of them running at any given time. In this post I will deploy a vcluster using the k3d_argocd kubernetes cluster (the one we created in the posts about argocd) as the host and will show how to:
  • use its ingress (in our case traefik) to access the API of the virtual one (removes the need of having to use the vcluster connect command to access it with kubectl),
  • publish the ingress objects deployed on the virtual cluster on the host ingress, and
  • use the sealed-secrets of the host cluster to manage the virtual cluster secrets.

Creating the virtual cluster

Installing the vcluster application

To create the virtual clusters we need the vcluster command; we can install it with arkade:
  arkade get vcluster

The vcluster.yaml file

To create the cluster we are going to use the following vcluster.yaml file (you can find the documentation about all its options here):
controlPlane:
  proxy:
    # Extra hostnames to sign the vCluster proxy certificate for
    extraSANs:
    - my-vcluster-api.lo.mixinet.net
exportKubeConfig:
  context: my-vcluster_k3d-argocd
  server: https://my-vcluster-api.lo.mixinet.net:8443
  secret:
    name: my-vcluster-kubeconfig
sync:
  toHost:
    ingresses:
      enabled: true
    serviceAccounts:
      enabled: true
  fromHost:
    ingressClasses:
      enabled: true
    nodes:
      enabled: true
      clearImageStatus: true
    secrets:
      enabled: true
      mappings:
        byName:
          # sync all Secrets from the 'my-vcluster-default' namespace to the
          # virtual "default" namespace.
          "my-vcluster-default/*": "default/*"
          # We could add other namespace mappings if needed, i.e.:
          # "my-vcluster-kube-system/*": "kube-system/*"
In the controlPlane section we've added the proxy.extraSANs entry to add an extra host name and make sure it is included in the cluster certificates if we use the API through an ingress. The exportKubeConfig section creates a kubeconfig secret on the virtual cluster namespace using the provided host name; the secret can be used by GitOps tools or we can dump it to a file to connect from our machine. In the sync section we enable the synchronization of Ingress objects and ServiceAccounts from the virtual to the host cluster:
  • We copy the ingress definitions to use the ingress server that runs on the host to make them work from the outside world.
  • The service account synchronization is not really needed, but we enable it because, if we ever test this configuration on EKS, it would be useful to be able to use IAM roles for the service accounts.
In the opposite direction (from the host to the virtual cluster) we synchronize:
  • The IngressClass objects, to be able to use the host ingress server(s).
  • The Nodes (we are not using the info right now, but it could be interesting if we want to have the real information of the nodes running pods of the virtual cluster).
  • The Secrets from the my-vcluster-default host namespace to the default namespace of the virtual cluster; that synchronization allows us to deploy SealedSecrets on the host that generate secrets that are copied automatically to the virtual one. Initially we only copy secrets for one namespace, but if the virtual cluster needs others we can add namespaces on the host and their mappings to the virtual one in the vcluster.yaml file.

Creating the virtual cluster

To create the virtual cluster we run the following command:
vcluster create my-vcluster --namespace my-vcluster --upgrade --connect=false \
  --values vcluster.yaml
It creates the virtual cluster in the my-vcluster namespace using the vcluster.yaml file shown before, without connecting to the cluster from our local machine (if we don't pass the --connect=false option the command adds an entry to our kubeconfig and launches a proxy to connect to the virtual cluster, which we don't plan to use).
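Once the command finishes we can do a quick sanity check from the host cluster, listing the virtual clusters known to the vcluster CLI and the pods created for its control plane:
  vcluster list
  kubectl get pods -n my-vcluster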

Adding an ingress TCP route to connect to the vcluster API

As explained before, we need to create an IngressRouteTCP object to be able to connect to the vcluster API; we use the following definition:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: my-vcluster-api
  namespace: my-vcluster
spec:
  entryPoints:
    - websecure
  routes:
    - match: HostSNI(`my-vcluster-api.lo.mixinet.net`)
      services:
        - name: my-vcluster
          port: 443
  tls:
    passthrough: true
Once we apply those changes the cluster API will be available on the https://my-vcluster-api.lo.mixinet.net:8443 URL using its own self-signed certificate (we have enabled TLS passthrough) that includes the hostname we use (we adjusted it in the vcluster.yaml file, as explained before).
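Before touching kubectl we can verify that the extra SAN made it into the certificate served through the TCP route (a quick check with openssl, assuming the ingress is listening on port 8443 as configured in the previous posts):
  openssl s_client -connect my-vcluster-api.lo.mixinet.net:8443 \
    -servername my-vcluster-api.lo.mixinet.net </dev/null 2>/dev/null \
    | openssl x509 -noout -ext subjectAltName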

Getting the kubeconfig for the vcluster

Once the vcluster is running we will have its kubeconfig available in the my-vcluster-kubeconfig secret in its namespace on the host cluster. To dump it to the ~/.kube/my-vcluster-config file we can do the following:
  kubectl get -n my-vcluster secret/my-vcluster-kubeconfig \
    --template="{{ .data.config }}" | base64 -d > ~/.kube/my-vcluster-config
Once available we can define the vkubectl alias to adjust the KUBECONFIG variable to access it:
alias vkubectl="KUBECONFIG=~/.kube/my-vcluster-config kubectl"
Or we can merge the configuration with the one pointed to by the KUBECONFIG variable and use kubectx or a similar tool to change the context (for our vcluster the context will be my-vcluster_k3d-argocd). If the KUBECONFIG variable is defined and only contains the path to a single file the merge can be done running the following:
KUBECONFIG="$KUBECONFIG:~/.kube/my-vcluster-config" kubectl config view \
  --flatten >"$KUBECONFIG.new"
mv "$KUBECONFIG.new" "$KUBECONFIG"
In the rest of this post we will use the vkubectl alias when connecting to the virtual cluster; for example, to check that it works we can run the cluster-info subcommand:
  vkubectl cluster-info
Kubernetes control plane is running at https://my-vcluster-api.lo.mixinet.net:8443
CoreDNS is running at https://my-vcluster-api.lo.mixinet.net:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Installing the dummyhttp application

To test the virtual cluster we are going to install the dummyhttp application using the following kustomization.yaml file:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - https://forgejo.mixinet.net/blogops/argocd-applications.git//dummyhttp/?ref=dummyhttp-v1.0.0
# Add the config map
configMapGenerator:
  - name: dummyhttp-configmap
    literals:
      - CM_VAR="Vcluster Test Value"
    behavior: create
    options:
      disableNameSuffixHash: true
patches:
  # Change the ingress host name
  - target:
      kind: Ingress
      name: dummyhttp
    patch: |-
      - op: replace
        path: /spec/rules/0/host
        value: vcluster-dummyhttp.lo.mixinet.net
  # Add reloader annotations -- it will only work if we install reloader on the
  # virtual cluster, as the one on the host cluster doesn't see the vcluster
  # deployment objects
  - target:
      kind: Deployment
      name: dummyhttp
    patch: |-
      - op: add
        path: /metadata/annotations
        value:
          reloader.stakater.com/auto: "true"
          reloader.stakater.com/rollout-strategy: "restart"
It is quite similar to the one we used on the Argo CD examples but uses a different DNS entry; to deploy it we run kustomize and vkubectl:
  kustomize build . | vkubectl apply -f -
configmap/dummyhttp-configmap created
service/dummyhttp created
deployment.apps/dummyhttp created
ingress.networking.k8s.io/dummyhttp created
We can check that everything worked using curl:
  curl -s https://vcluster-dummyhttp.lo.mixinet.net:8443/ | jq -cM .
{"c":"Vcluster Test Value","s":""}
The objects available on the vcluster now are:
  vkubectl get all,configmap,ingress
NAME                             READY   STATUS    RESTARTS   AGE
pod/dummyhttp-55569589bc-9zl7t   1/1     Running   0          24s
NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/dummyhttp    ClusterIP   10.43.51.39    <none>        80/TCP    24s
service/kubernetes   ClusterIP   10.43.153.12   <none>        443/TCP   14m

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/dummyhttp   1/1     1            1           24s
NAME                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/dummyhttp-55569589bc   1         1         1       24s
NAME                            DATA   AGE
configmap/dummyhttp-configmap   1      24s
configmap/kube-root-ca.crt      1      14m
NAME                                CLASS   HOSTS                             ADDRESS                          PORTS AGE
ingress.networking.k8s.io/dummyhttp traefik vcluster-dummyhttp.lo.mixinet.net 172.20.0.2,172.20.0.3,172.20.0.4 80    24s
While we have the following ones on the my-vcluster namespace of the host cluster:
  kubectl get all,configmap,ingress -n my-vcluster
NAME                                                      READY   STATUS    RESTARTS   AGE
pod/coredns-bbb5b66cc-snwpn-x-kube-system-x-my-vcluster   1/1     Running   0          18m
pod/dummyhttp-55569589bc-9zl7t-x-default-x-my-vcluster    1/1     Running   0          45s
pod/my-vcluster-0                                         1/1     Running   0          19m
NAME                                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
service/dummyhttp-x-default-x-my-vcluster      ClusterIP   10.43.51.39     <none>        80/TCP                   45s
service/kube-dns-x-kube-system-x-my-vcluster   ClusterIP   10.43.91.198    <none>        53/UDP,53/TCP,9153/TCP   18m
service/my-vcluster                            ClusterIP   10.43.153.12    <none>        443/TCP,10250/TCP        19m
service/my-vcluster-headless                   ClusterIP   None            <none>        443/TCP                  19m
service/my-vcluster-node-k3d-argocd-agent-1    ClusterIP   10.43.189.188   <none>        10250/TCP                18m

NAME                           READY   AGE
statefulset.apps/my-vcluster   1/1     19m
NAME                                                     DATA   AGE
configmap/coredns-x-kube-system-x-my-vcluster            2      18m
configmap/dummyhttp-configmap-x-default-x-my-vcluster    1      45s
configmap/kube-root-ca.crt                               1      19m
configmap/kube-root-ca.crt-x-default-x-my-vcluster       1      11m
configmap/kube-root-ca.crt-x-kube-system-x-my-vcluster   1      18m
configmap/vc-coredns-my-vcluster                         1      19m
NAME                                                        CLASS   HOSTS                             ADDRESS                          PORTS AGE
ingress.networking.k8s.io/dummyhttp-x-default-x-my-vcluster traefik vcluster-dummyhttp.lo.mixinet.net 172.20.0.2,172.20.0.3,172.20.0.4 80    45s
As shown, we have copies of the Service, Pod, Configmap and Ingress objects, but there is no copy of the Deployment or ReplicaSet.

Creating a sealed secret for dummyhttp

To use the host's sealed-secrets controller with the virtual cluster we will create the my-vcluster-default namespace and add there the sealed secrets we want to have available as secrets in the default namespace of the virtual cluster:
  kubectl create namespace my-vcluster-default
  echo -n "Vcluster Boo"   kubectl create secret generic "dummyhttp-secret" \
    --namespace "my-vcluster-default" --dry-run=client \
    --from-file=SECRET_VAR=/dev/stdin -o yaml >dummyhttp-secret.yaml
  kubeseal -f dummyhttp-secret.yaml -w dummyhttp-sealed-secret.yaml
  kubectl apply -f dummyhttp-sealed-secret.yaml
  rm -f dummyhttp-secret.yaml dummyhttp-sealed-secret.yaml
After running the previous commands we have the following objects available on the host cluster:
  kubectl get sealedsecrets.bitnami.com,secrets -n my-vcluster-default
NAME                                        STATUS   SYNCED   AGE
sealedsecret.bitnami.com/dummyhttp-secret            True     34s
NAME                      TYPE     DATA   AGE
secret/dummyhttp-secret   Opaque   1      34s
And we can see that the secret is also available on the virtual cluster with the content we expected:
  vkubectl get secrets
NAME               TYPE     DATA   AGE
dummyhttp-secret   Opaque   1      34s
  vkubectl get secret/dummyhttp-secret --template="{{ .data.SECRET_VAR }}" \
    | base64 -d
Vcluster Boo
But the output of the curl command has not changed because, although we have the reloader controller deployed on the host cluster, it does not see the Deployment object of the virtual one and the pods are not touched:
  curl -s https://vcluster-dummyhttp.lo.mixinet.net:8443/ | jq -cM .
{"c":"Vcluster Test Value","s":""}

Installing the reloader application

To make reloader work on the virtual cluster we just need to install it as we did on the host using the following kustomization.yaml file:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: kube-system
resources:
- github.com/stakater/Reloader/deployments/kubernetes/?ref=v1.4.2
patches:
# Add flags to reload workloads when ConfigMaps or Secrets are created or deleted
- target:
    kind: Deployment
    name: reloader-reloader
  patch: |-
    - op: add
      path: /spec/template/spec/containers/0/args
      value:
        - '--reload-on-create=true'
        - '--reload-on-delete=true'
        - '--reload-strategy=annotations'
We deploy it with kustomize and vkubectl:
  kustomize build . | vkubectl apply -f -
serviceaccount/reloader-reloader created
clusterrole.rbac.authorization.k8s.io/reloader-reloader-role created
clusterrolebinding.rbac.authorization.k8s.io/reloader-reloader-role-binding created
deployment.apps/reloader-reloader created
As the controller was not available when the secret was created, the pods linked to the Deployment were not updated, but we can force things by removing the secret on the host system; after we do that the secret is re-created from the sealed version and copied to the virtual cluster, where the reloader controller updates the pod and the curl command shows the new output:
  kubectl delete -n my-vcluster-default secrets dummyhttp-secret
secret "dummyhttp-secret" deleted
  sleep 2
  vkubectl get pods
NAME                         READY   STATUS        RESTARTS   AGE
dummyhttp-78bf5fb885-fmsvs   1/1     Terminating   0          6m33s
dummyhttp-c68684bbf-nx8f9    1/1     Running       0          6s
  curl -s https://vcluster-dummyhttp.lo.mixinet.net:8443/ | jq -cM .
{"c":"Vcluster Test Value","s":"Vcluster Boo"}
If we change the secret on the host system, things get updated pretty quickly now:
  echo -n "New secret"   kubectl create secret generic "dummyhttp-secret" \
    --namespace "my-vcluster-default" --dry-run=client \
    --from-file=SECRET_VAR=/dev/stdin -o yaml >dummyhttp-secret.yaml
  kubeseal -f dummyhttp-secret.yaml -w dummyhttp-sealed-secret.yaml
  kubectl apply -f dummyhttp-sealed-secret.yaml
  rm -f dummyhttp-secret.yaml dummyhttp-sealed-secret.yaml
  curl -s https://vcluster-dummyhttp.lo.mixinet.net:8443/ | jq -cM .
{"c":"Vcluster Test Value","s":"New secret"}

Pause and restore the vcluster

The status of pods and statefulsets while the virtual cluster is active can be seen using kubectl:
  kubectl get pods,statefulsets -n my-vcluster
NAME                                                                 READY   STATUS    RESTARTS   AGE
pod/coredns-bbb5b66cc-snwpn-x-kube-system-x-my-vcluster              1/1     Running   0          127m
pod/dummyhttp-587c7855d7-pt9b8-x-default-x-my-vcluster               1/1     Running   0          4m39s
pod/my-vcluster-0                                                    1/1     Running   0          128m
pod/reloader-reloader-7f56c54d75-544gd-x-kube-system-x-my-vcluster   1/1     Running   0          60m
NAME                           READY   AGE
statefulset.apps/my-vcluster   1/1     128m

Pausing the vcluster

If we don't need to use the virtual cluster we can pause it; after a small amount of time all Pods are gone because the statefulSet is scaled down to 0 (note that other resources like volumes are not removed, but all the objects that have to be scheduled and consume CPU cycles stop running, which can translate into a lot of savings when running on clusters from cloud platforms or, in a local cluster like the one we are using, frees resources like CPU and memory that can now be used for other things):
  vcluster pause my-vcluster
11:20:47 info Scale down statefulSet my-vcluster/my-vcluster...
11:20:48 done Successfully paused vcluster my-vcluster/my-vcluster
  kubectl get pods,statefulsets -n my-vcluster
NAME                           READY   AGE
statefulset.apps/my-vcluster   0/0     130m
Now the curl command fails:
  curl -s https://vcluster-dummyhttp.lo.mixinet.net:8443
404 page not found
Although the ingress is still available (it returns a 404 because there is no pod behind the service):
  kubectl get ingress -n my-vcluster
NAME                                CLASS     HOSTS                               ADDRESS                            PORTS   AGE
dummyhttp-x-default-x-my-vcluster   traefik   vcluster-dummyhttp.lo.mixinet.net   172.20.0.2,172.20.0.3,172.20.0.4   80      120m
In fact, the same problem happens when we try to connect to the vcluster API; the error shown by kubectl is related to the TLS certificate because the 404 page uses the wildcard certificate instead of the self signed one:
  vkubectl get pods
Unable to connect to the server: tls: failed to verify certificate: x509: certificate signed by unknown authority
  curl -s https://my-vcluster-api.lo.mixinet.net:8443/api/v1/
404 page not found
  curl -v -s https://my-vcluster-api.lo.mixinet.net:8443/api/v1/ 2>&1   grep subject
*  subject: CN=lo.mixinet.net
*  subjectAltName: host "my-vcluster-api.lo.mixinet.net" matched cert's "*.lo.mixinet.net"

Resuming the vcluster

When we want to use the virtual cluster again we just need to use the resume command:
  vcluster resume my-vcluster
12:03:14 done Successfully resumed vcluster my-vcluster in namespace my-vcluster
Once all the pods are running the virtual cluster goes back to its previous state, although, of course, all of them have been restarted.

Cleaning up

The virtual cluster can be removed using the delete command:
  vcluster delete my-vcluster
12:09:18 info Delete vcluster my-vcluster...
12:09:18 done Successfully deleted virtual cluster my-vcluster in namespace my-vcluster
12:09:18 done Successfully deleted virtual cluster namespace my-vcluster
12:09:18 info Waiting for virtual cluster to be deleted...
12:09:50 done Virtual Cluster is deleted
That removes everything we used in this post except for the sealed secrets and secrets that we put in the my-vcluster-default namespace, because that namespace was created by us. If we delete the namespace all the secrets and sealed secrets in it are also removed:
  kubectl delete namespace my-vcluster-default
namespace "my-vcluster-default" deleted

Conclusions

I believe that the use of virtual clusters can be a good option for two use cases that I've encountered in real projects in the past:
  • the need for short-lived clusters for developers or teams,
  • execution of integration tests from CI pipelines that require a complete cluster (the tests can be run on virtual clusters that are created on demand or paused and resumed when needed).
For both cases things can be set up using the Apache-licensed product, although maybe evaluating the vCluster Platform offering could also be interesting. In any case, when not everything is done inside kubernetes we will also have to check how to manage the external services (e.g. if we use databases or message buses as SaaS instead of deploying them inside our clusters we need a way of creating, deleting or pausing and resuming those services).
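For the CI use case the flow can be as simple as creating a throwaway vcluster per pipeline run; a minimal sketch (it assumes the runner has a kubeconfig for the host cluster, the vcluster CLI installed and a hypothetical run-integration-tests.sh script) could be:
# create an ephemeral virtual cluster for this run and dump its kubeconfig
name="ci-tests-$$"
vcluster create "$name" --namespace "$name" --connect=false
vcluster connect "$name" --namespace "$name" --print > "$name.kubeconfig"
# run the test suite against the virtual cluster and always clean up afterwards
KUBECONFIG="$name.kubeconfig" ./run-integration-tests.sh || ret="$?"
vcluster delete "$name" --namespace "$name"
exit "${ret:-0}"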

5 May 2025

Sergio Talens-Oliag: Argo CD Usage Examples

As a followup of my post about the use of argocd-autopilot I'm going to deploy various applications to the cluster using Argo CD from the same repository we used on the previous post. For our examples we are going to test a solution to the problem we had when we updated a ConfigMap used by the argocd-server (the resource was updated but the application Pod was not, because there was no change on the argocd-server deployment); our original fix was to kill the pod manually, but that manual operation is something we want to avoid. The solution proposed for this kind of issue in the helm documentation is to add annotations to the Deployments with values that are a hash of the ConfigMaps or Secrets used by them; this way, if a file is updated the annotation is also updated, and when the Deployment changes are applied a roll out of the pods is triggered. In this post we will install a couple of controllers and an application to show how we can handle Secrets with argocd and solve the issue with updates on ConfigMaps and Secrets; to do it we will execute the following tasks:
  1. Deploy the Reloader controller to our cluster. It is a tool that watches changes in ConfigMaps and Secrets and does rolling upgrades on the Pods that use them from Deployment, StatefulSet, DaemonSet or DeploymentConfig objects when they are updated (by default we have to add some annotations to the objects to make things work).
  2. Deploy a simple application that can use ConfigMaps and Secrets and test that the Reloader controller does its job when we add or update a ConfigMap.
  3. Install the Sealed Secrets controller to manage secrets inside our cluster, use it to add a secret to our sample application and see that the application is reloaded automatically.

Creating the test project for argocd-autopilot

As we did our installation using argocd-autopilot we will use its structure to manage the applications. The first thing to do is to create a project (we will name it test) as follows:
  argocd-autopilot project create test
INFO cloning git repository: https://forgejo.mixinet.net/blogops/argocd.git
Enumerating objects: 18, done.
Counting objects: 100% (18/18), done.
Compressing objects: 100% (16/16), done.
Total 18 (delta 1), reused 0 (delta 0), pack-reused 0
INFO using revision: "", installation path: "/"
INFO pushing new project manifest to repo
INFO project created: 'test'
Now that the test project is available we will use it on our argocd-autopilot invocations when creating applications.

Installing the reloader controller

To add the reloader application to the test project as a kustomize application and deploy it on the tools namespace with argocd-autopilot we do the following:
  argocd-autopilot app create reloader \
    --app 'github.com/stakater/Reloader/deployments/kubernetes/?ref=v1.4.2' \
    --project test --type kustomize --dest-namespace tools
INFO cloning git repository: https://forgejo.mixinet.net/blogops/argocd.git
Enumerating objects: 19, done.
Counting objects: 100% (19/19), done.
Compressing objects: 100% (18/18), done.
Total 19 (delta 2), reused 0 (delta 0), pack-reused 0
INFO using revision: "", installation path: "/"
INFO created 'application namespace' file at '/bootstrap/cluster-resources/in-cluster/tools-ns.yaml'
INFO committing changes to gitops repo...
INFO installed application: reloader
That command creates four files on the argocd repository:
  1. One to create the tools namespace:
    bootstrap/cluster-resources/in-cluster/tools-ns.yaml
    apiVersion: v1
    kind: Namespace
    metadata:
      annotations:
        argocd.argoproj.io/sync-options: Prune=false
      creationTimestamp: null
      name: tools
    spec: {}
    status: {}
  2. Another to include the reloader base application from the upstream repository:
    apps/reloader/base/kustomization.yaml
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
    - github.com/stakater/Reloader/deployments/kubernetes/?ref=v1.4.2
  3. The kustomization.yaml file for the test project (by default it includes the same configuration used on the base definition, but we could make other changes if needed):
    apps/reloader/overlays/test/kustomization.yaml
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    namespace: tools
    resources:
    - ../../base
  4. The config.json file used to define the application on argocd for the test project (it points to the folder that includes the previous kustomization.yaml file):
    apps/reloader/overlays/test/config.json
    {
      "appName": "reloader",
      "userGivenName": "reloader",
      "destNamespace": "tools",
      "destServer": "https://kubernetes.default.svc",
      "srcPath": "apps/reloader/overlays/test",
      "srcRepoURL": "https://forgejo.mixinet.net/blogops/argocd.git",
      "srcTargetRevision": "",
      "labels": null,
      "annotations": null
    }
We can check that the application is working using the argocd command line application:
  argocd app get argocd/test-reloader -o tree
Name:               argocd/test-reloader
Project:            test
Server:             https://kubernetes.default.svc
Namespace:          tools
URL:                https://argocd.lo.mixinet.net:8443/applications/test-reloader
Source:
- Repo:             https://forgejo.mixinet.net/blogops/argocd.git
  Target:
  Path:             apps/reloader/overlays/test
SyncWindow:         Sync Allowed
Sync Policy:        Automated (Prune)
Sync Status:        Synced to  (2893b56)
Health Status:      Healthy
KIND/NAME                                          STATUS  HEALTH   MESSAGE
ClusterRole/reloader-reloader-role                 Synced
ClusterRoleBinding/reloader-reloader-role-binding  Synced
ServiceAccount/reloader-reloader                   Synced           serviceaccount/reloader-reloader created
Deployment/reloader-reloader                       Synced  Healthy  deployment.apps/reloader-reloader created
 ReplicaSet/reloader-reloader-5b6dcc7b6f                  Healthy
   Pod/reloader-reloader-5b6dcc7b6f-vwjcx                 Healthy

Adding flags to the reloader server

The runtime configuration flags for the reloader server are described in the project README.md file; in our case we want to adjust three values:
  • We want to enable the option to reload a workload when a ConfigMap or Secret is created,
  • We want to enable the option to reload a workload when a ConfigMap or Secret is deleted,
  • We want to use the annotations strategy for reloads, as it is the recommended mode of operation when using argocd.
To pass them we edit the apps/reloader/overlays/test/kustomization.yaml file to patch the pod container template; the added text is the following:
patches:
# Add flags to reload workloads when ConfigMaps or Secrets are created or deleted
- target:
    kind: Deployment
    name: reloader-reloader
  patch: |-
    - op: add
      path: /spec/template/spec/containers/0/args
      value:
        - '--reload-on-create=true'
        - '--reload-on-delete=true'
        - '--reload-strategy=annotations'
After committing and pushing the updated file the system launches the application with the new options.
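Once the new revision is synced we can confirm that the flags reached the running Deployment (just a verification command, nothing argocd-specific):
  kubectl get deployment reloader-reloader -n tools \
    -o jsonpath='{.spec.template.spec.containers[0].args}'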

The dummyhttp application

To do a quick test we are going to deploy the dummyhttp web server using an image generated with the following Dockerfile:
# Image to run the dummyhttp application <https://github.com/svenstaro/dummyhttp>
# This arg could be passed by the container build command (used with mirrors)
ARG OCI_REGISTRY_PREFIX
# Latest tested version of alpine
FROM ${OCI_REGISTRY_PREFIX}alpine:3.21.3
# Tool versions
ARG DUMMYHTTP_VERS=1.1.1
# Download binary
RUN ARCH="$(apk --print-arch)" && \
  VERS="$DUMMYHTTP_VERS" && \
  URL="https://github.com/svenstaro/dummyhttp/releases/download/v$VERS/dummyhttp-$VERS-$ARCH-unknown-linux-musl" && \
  wget "$URL" -O "/tmp/dummyhttp" && \
  install /tmp/dummyhttp /usr/local/bin && \
  rm -f /tmp/dummyhttp
# Set the entrypoint to /usr/local/bin/dummyhttp
ENTRYPOINT [ "/usr/local/bin/dummyhttp" ]
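The image can be built and pushed to the registry used on the Deployment below with any OCI tool; with docker it would be something like this (the mirror prefix is left empty here):
  docker build --build-arg OCI_REGISTRY_PREFIX="" \
    -t forgejo.mixinet.net/oci/dummyhttp:1.0.0 .
  docker push forgejo.mixinet.net/oci/dummyhttp:1.0.0
And a quick local smoke test of the resulting image can be done running the server by hand and querying it with curl:
  docker run --rm -d --name dummyhttp-test -p 8080:8080 \
    forgejo.mixinet.net/oci/dummyhttp:1.0.0 -b '{"c": "local test"}'
  curl -s http://localhost:8080/
  docker stop dummyhttp-test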
The kustomize base application is available on a monorepo that contains the following files:
  1. A Deployment definition that uses the previous image but uses /bin/sh -c as its entrypoint (command in the k8s Pod terminology) and passes as its argument a string that runs the eval command to be able to expand environment variables passed to the pod (the definition includes two optional variables, one taken from a ConfigMap and another one from a Secret):
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: dummyhttp
      labels:
        app: dummyhttp
    spec:
      selector:
        matchLabels:
          app: dummyhttp
      template:
        metadata:
          labels:
            app: dummyhttp
        spec:
          containers:
          - name: dummyhttp
            image: forgejo.mixinet.net/oci/dummyhttp:1.0.0
            command: [ "/bin/sh", "-c" ]
            args:
            - 'eval dummyhttp -b \"{ \\\"c\\\": \\\"$CM_VAR\\\", \\\"s\\\": \\\"$SECRET_VAR\\\" }\"'
            ports:
            - containerPort: 8080
            env:
            - name: CM_VAR
              valueFrom:
                configMapKeyRef:
                  name: dummyhttp-configmap
                  key: CM_VAR
                  optional: true
            - name: SECRET_VAR
              valueFrom:
                secretKeyRef:
                  name: dummyhttp-secret
                  key: SECRET_VAR
                  optional: true
  2. A Service that publishes the previous Deployment (the only relevant thing to mention is that the web server uses the port 8080 by default):
    apiVersion: v1
    kind: Service
    metadata:
      name: dummyhttp
    spec:
      selector:
        app: dummyhttp
      ports:
      - name: http
        port: 80
        targetPort: 8080
  3. An Ingress definition to allow access to the application from the outside:
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: dummyhttp
      annotations:
        traefik.ingress.kubernetes.io/router.tls: "true"
    spec:
      rules:
        - host: dummyhttp.localhost.mixinet.net
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: dummyhttp
                    port:
                      number: 80
  4. And the kustomization.yaml file that includes the previous files:
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
    - deployment.yaml
    - service.yaml
    - ingress.yaml

Deploying the dummyhttp application from argocd

We could create the dummyhttp application using the argocd-autopilot command as we've done in the reloader case, but we are going to do it manually to show how simple it is. First we've created the apps/dummyhttp/base/kustomization.yaml file to include the application from the previous repository:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - https://forgejo.mixinet.net/blogops/argocd-applications.git//dummyhttp/?ref=dummyhttp-v1.0.0
As a second step we create the apps/dummyhttp/overlays/test/kustomization.yaml file to include the previous file:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
And finally we add the apps/dummyhttp/overlays/test/config.json file to configure the application as the ApplicationSet defined by argocd-autopilot expects:
{
  "appName": "dummyhttp",
  "userGivenName": "dummyhttp",
  "destNamespace": "default",
  "destServer": "https://kubernetes.default.svc",
  "srcPath": "apps/dummyhttp/overlays/test",
  "srcRepoURL": "https://forgejo.mixinet.net/blogops/argocd.git",
  "srcTargetRevision": "",
  "labels": null,
  "annotations": null
}
Once we have the three files we commit and push the changes and argocd deploys the application; we can check that things are working using curl:
  curl -s https://dummyhttp.lo.mixinet.net:8443/ | jq -M .
{
  "c": "",
  "s": ""
}

Patching the application

Now we will add patches to the apps/dummyhttp/overlays/test/kustomization.yaml file:
  • One to add annotations for reloader (one to enable it and another one to set the roll out strategy to restart to avoid touching the deployments, as that can generate issues with argocd).
  • Another to change the ingress hostname (not really needed, but something quite reasonable for a specific project).
The file diff is as follows:
--- a/apps/dummyhttp/overlays/test/kustomization.yaml
+++ b/apps/dummyhttp/overlays/test/kustomization.yaml
@@ -2,3 +2,22 @@ apiVersion: kustomize.config.k8s.io/v1beta1
 kind: Kustomization
 resources:
 - ../../base
+patches:
+# Add reloader annotations
+- target:
+    kind: Deployment
+    name: dummyhttp
+  patch: |-
+    - op: add
+      path: /metadata/annotations
+      value:
+        reloader.stakater.com/auto: "true"
+        reloader.stakater.com/rollout-strategy: "restart"
+# Change the ingress host name
+- target:
+    kind: Ingress
+    name: dummyhttp
+  patch: |-
+    - op: replace
+      path: /spec/rules/0/host
+      value: test-dummyhttp.lo.mixinet.net
After committing and pushing the changes we can use the argocd cli to check the status of the application:
  argocd app get argocd/test-dummyhttp -o tree
Name:               argocd/test-dummyhttp
Project:            test
Server:             https://kubernetes.default.svc
Namespace:          default
URL:                https://argocd.lo.mixinet.net:8443/applications/test-dummyhttp
Source:
- Repo:             https://forgejo.mixinet.net/blogops/argocd.git
  Target:
  Path:             apps/dummyhttp/overlays/test
SyncWindow:         Sync Allowed
Sync Policy:        Automated (Prune)
Sync Status:        Synced to  (fbc6031)
Health Status:      Healthy
KIND/NAME                           STATUS  HEALTH   MESSAGE
Deployment/dummyhttp                Synced  Healthy  deployment.apps/dummyhttp configured
 ReplicaSet/dummyhttp-55569589bc           Healthy
   Pod/dummyhttp-55569589bc-qhnfk          Healthy
Ingress/dummyhttp                   Synced  Healthy  ingress.networking.k8s.io/dummyhttp configured
Service/dummyhttp                   Synced  Healthy  service/dummyhttp unchanged
 Endpoints/dummyhttp
 EndpointSlice/dummyhttp-x57bl
As we can see, the Deployment and Ingress were updated, but the Service is unchanged. To validate that the ingress is using the new hostname we can use curl:
  curl -s https://dummyhttp.lo.mixinet.net:8443/
404 page not found
  curl -s https://test-dummyhttp.lo.mixinet.net:8443/
 "c": "", "s": "" 

Adding a ConfigMap

Now that the system is adjusted to reload the application when the ConfigMap or Secret is created, deleted or updated we are ready to add one file and see how the system reacts. We modify the apps/dummyhttp/overlays/test/kustomization.yaml file to create the ConfigMap using the configMapGenerator as follows:
--- a/apps/dummyhttp/overlays/test/kustomization.yaml
+++ b/apps/dummyhttp/overlays/test/kustomization.yaml
@@ -2,6 +2,14 @@ apiVersion: kustomize.config.k8s.io/v1beta1
 kind: Kustomization
 resources:
 - ../../base
+# Add the config map
+configMapGenerator:
+- name: dummyhttp-configmap
+  literals:
+  - CM_VAR="Default Test Value"
+  behavior: create
+  options:
+    disableNameSuffixHash: true
 patches:
 # Add reloader annotations
 - target:
After committing and pushing the changes we can see that the ConfigMap is available, the pod has been deleted and started again and the curl output includes the new value:
  kubectl get configmaps,pods
NAME                            DATA   AGE
configmap/dummyhttp-configmap   1      11s
configmap/kube-root-ca.crt      1      4d7h

NAME                             READY   STATUS        RESTARTS   AGE
pod/dummyhttp-779c96c44b-pjq4d   1/1     Running       0          11s
pod/dummyhttp-fc964557f-jvpkx    1/1     Terminating   0          2m42s
  curl -s https://test-dummyhttp.lo.mixinet.net:8443 | jq -M .
{
  "c": "Default Test Value",
  "s": ""
}

Using helm with argocd-autopilot

Right now there is no direct support in argocd-autopilot to manage applications using helm (see the issue #38 on the project), but we want to use a chart in our next example. There are multiple ways to add the support, but the simplest one that allows us to keep using argocd-autopilot is to use kustomize applications that call helm as described here. The only thing needed before being able to use the approach is to add the kustomize.buildOptions flag to the argocd-cm ConfigMap on the bootstrap/argo-cd/kustomization.yaml file; its contents are now as follows:
bootstrap/argo-cd/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
configMapGenerator:
- behavior: merge
  literals:
  # Enable helm usage from kustomize (see https://github.com/argoproj/argo-cd/issues/2789#issuecomment-960271294)
  - kustomize.buildOptions="--enable-helm"
  - |
    repository.credentials=- passwordSecret:
        key: git_token
        name: autopilot-secret
      url: https://forgejo.mixinet.net/
      usernameSecret:
        key: git_username
        name: autopilot-secret
  name: argocd-cm
  # Disable TLS for the Argo Server (see https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#traefik-v30)
- behavior: merge
  literals:
  - "server.insecure=true"
  name: argocd-cmd-params-cm
kind: Kustomization
namespace: argocd
resources:
- github.com/argoproj-labs/argocd-autopilot/manifests/base?ref=v0.4.19
- ingress_route.yaml
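Once argocd applies the change we can verify that the option made it into the rendered argocd-cm ConfigMap (just a check, not needed for the rest of the setup):
  kubectl get configmap argocd-cm -n argocd \
    -o jsonpath='{.data.kustomize\.buildOptions}'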
In the following section we will explain how the application is defined to make things work.

Installing the sealed-secrets controller

To manage secrets in our cluster we are going to use the sealed-secrets controller and, to install it, we are going to use its chart. As we mentioned in the previous section, the idea is to create a kustomize application and use that to deploy the chart, but we are going to create the files manually, as we are not going to import the base kustomization files from a remote repository. As there is no clear way to override helm Chart values using overlays we are going to use a generator to create the helm configuration from an external resource and include it from our overlays (the idea has been taken from this repository, which was referenced from a comment on the issue #38 mentioned earlier).

The sealed-secrets application

We have created the following files and folders manually:
apps/sealed-secrets/
├── helm
│   ├── chart.yaml
│   └── kustomization.yaml
└── overlays
    └── test
        ├── config.json
        ├── kustomization.yaml
        └── values.yaml
The helm folder contains the generator template that will be included from our overlays. The kustomization.yaml includes the chart.yaml as a resource:
apps/sealed-secrets/helm/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- chart.yaml
And the chart.yaml file defines the HelmChartInflationGenerator:
apps/sealed-secrets/helm/chart.yaml
apiVersion: builtin
kind: HelmChartInflationGenerator
metadata:
  name: sealed-secrets
releaseName: sealed-secrets
name: sealed-secrets
namespace: kube-system
repo: https://bitnami-labs.github.io/sealed-secrets
version: 2.17.2
includeCRDs: true
# Add common values to all argo-cd projects inline
valuesInline:
  fullnameOverride: sealed-secrets-controller
# Load a values.yaml file from the same directory that uses this generator
valuesFile: values.yaml
For this chart the template adjusts the namespace to kube-system and adds the fullnameOverride on the valuesInline key because we want to use those settings on all the projects (they are the values expected by the kubeseal command line application, so we adjust them to avoid the need to pass additional parameters to it). We set the global values inline to be able to use the valuesFile from our overlays; as we are using a generator the path is relative to the folder that contains the kustomization.yaml file that calls it, so in our case we will need to have a values.yaml file on each overlay folder (if we don't want to override any values for a project we can create an empty file, but it has to exist). Finally, our overlay folder contains three files: a kustomization.yaml file that includes the generator from the helm folder, the values.yaml file needed by the chart and the config.json file used by argocd-autopilot to install the application. The kustomization.yaml file contents are:
apps/sealed-secrets/overlays/test/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# Uncomment if you want to add additional resources using kustomize
#resources:
#- ../../base
generators:
- ../../helm
The values.yaml file enables the ingress for the application and adjusts its hostname:
apps/sealed-secrets/overlays/test/values.yaml
ingress:
  enabled: true
  hostname: test-sealed-secrets.lo.mixinet.net
And the config.json file is similar to the ones used with the other applications we have installed:
apps/sealed-secrets/overlays/test/config.json
{
  "appName": "sealed-secrets",
  "userGivenName": "sealed-secrets",
  "destNamespace": "kube-system",
  "destServer": "https://kubernetes.default.svc",
  "srcPath": "apps/sealed-secrets/overlays/test",
  "srcRepoURL": "https://forgejo.mixinet.net/blogops/argocd.git",
  "srcTargetRevision": "",
  "labels": null,
  "annotations": null
}
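Before committing we can check locally that the generator renders the chart the same way argocd will (this needs the helm binary available on the PATH, as kustomize calls it when --enable-helm is used):
  kustomize build --enable-helm apps/sealed-secrets/overlays/test | head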
Once we commit and push the files the sealed-secrets application is installed in our cluster; we can check it using curl to get the public certificate used by it:
  curl -s https://test-sealed-secrets.lo.mixinet.net:8443/v1/cert.pem
-----BEGIN CERTIFICATE-----
[...]
-----END CERTIFICATE-----

The dummyhttp-secret

To create sealed secrets we need to install the kubeseal tool:
  arkade get kubeseal
Now we create a local version of the dummyhttp-secret that contains some value on the SECRET_VAR key (the easiest way for doing it is to use kubectl):
  echo -n "Boo"   kubectl create secret generic dummyhttp-secret \
    --dry-run=client --from-file=SECRET_VAR=/dev/stdin -o yaml \
    >/tmp/dummyhttp-secret.yaml
The secret definition in yaml format is:
apiVersion: v1
data:
  SECRET_VAR: Qm9v
kind: Secret
metadata:
  creationTimestamp: null
  name: dummyhttp-secret
To create a sealed version using the kubeseal tool we can do the following:
  kubeseal -f /tmp/dummyhttp-secret.yaml -w /tmp/dummyhttp-sealed-secret.yaml
That invocation needs access to the cluster to do its job and, in our case, it works because we modified the chart to use the kube-system namespace and set the controller name to sealed-secrets-controller, as the tool expects. If we need to create the secrets without cluster credentials we can connect to the ingress address we added to retrieve the public key:
  kubeseal -f /tmp/dummyhttp-secret.yaml -w /tmp/dummyhttp-sealed-secret.yaml \
    --cert https://test-sealed-secrets.lo.mixinet.net:8443/v1/cert.pem
Or, if we don't have access to the ingress address, we can save the certificate to a file and use it instead of the URL. The sealed version of the secret looks like this:
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  creationTimestamp: null
  name: dummyhttp-secret
  namespace: default
spec:
  encryptedData:
    SECRET_VAR: [...]
  template:
    metadata:
      creationTimestamp: null
      name: dummyhttp-secret
      namespace: default
This file can be deployed to the cluster to create the secret (in our case we will add it to the argocd application), but before doing that we are going to check the output of our dummyhttp service and get the list of Secrets and SealedSecrets in the default namespace:
  curl -s https://test-dummyhttp.lo.mixinet.net:8443 | jq -M .
{
  "c": "Default Test Value",
  "s": ""
}
  kubectl get sealedsecrets,secrets
No resources found in default namespace.
Now we add the SealedSecret to the dummyhttp application, copying the file and adding it to the kustomization.yaml file:
--- a/apps/dummyhttp/overlays/test/kustomization.yaml
+++ b/apps/dummyhttp/overlays/test/kustomization.yaml
@@ -2,6 +2,7 @@ apiVersion: kustomize.config.k8s.io/v1beta1
 kind: Kustomization
 resources:
 - ../../base
+- dummyhttp-sealed-secret.yaml
 # Create the config map value
 configMapGenerator:
 - name: dummyhttp-configmap
Once we commit and push the files Argo CD creates the SealedSecret and the controller generates the Secret:
  kubectl apply -f /tmp/dummyhttp-sealed-secret.yaml
sealedsecret.bitnami.com/dummyhttp-secret created
  kubectl get sealedsecrets,secrets
NAME                                        STATUS   SYNCED   AGE
sealedsecret.bitnami.com/dummyhttp-secret            True     3s
NAME                      TYPE     DATA   AGE
secret/dummyhttp-secret   Opaque   1      3s
If we check the command output we can see the new value of the secret:
  curl -s https://test-dummyhttp.lo.mixinet.net:8443 | jq -M .
{
  "c": "Default Test Value",
  "s": "Boo"
}

Using sealed-secrets in production clusters

If you plan to use sealed-secrets look into its documentation to understand how it manages the private keys and how to back things up, and keep in mind that, as the documentation explains, you can rotate your sealed version of the secrets, but that doesn't change the actual secrets. If you want to rotate your secrets you have to update them and commit the sealed version of the updates (as the controller also rotates the encryption keys your new sealed version will also be using a newer key, so you will be doing both things at the same time).

Final remarks

In this post we have seen how to deploy applications using the argocd-autopilot model, including the use of helm charts inside kustomize applications, and how to install and use the sealed-secrets controller. It has been interesting and I've learnt a lot about argocd in the process, but I believe that if I ever want to use it in production I will also review the native helm support in argocd using a separate repository to manage the applications, at least to be able to compare it to the model explained here.

2 May 2025

Daniel Lange: Creating iPhone/iPod/iPad notes from the shell

I found a very nice script to create Notes on the iPhone from the command line by hossman over at Perlmonks. For some weird reason Perlmonks does not allow me to reply with amendments even after I created an account. I can "preview" a reply at Perlmonks but after "create" I get "Permission Denied". Duh. vroom, if you want screenshots, contact me on IRC :-). As I wrote everything up for the Perlmonks reply anyways, I'll post it here instead. Against hossman's version 32 from 2011-02-22 I changed the following: I /msg'd hossman the URL of this blog entry.

30 April 2025

Simon Josefsson: Building Debian in a GitLab Pipeline

After thinking about multi-stage Debian rebuilds I wanted to implement the idea. Recall my illustration:
Earlier I rebuilt all packages that make up the difference between Ubuntu and Trisquel. It turned out to be a 42% bit-by-bit identical similarity. To check the generality of my approach, I rebuilt the difference between Debian and Devuan too. That was the debdistreproduce project. It only had to orchestrate building up to around 500 packages for each distribution and per architecture. Differential reproducible rebuilds doesn t give you the full picture: it ignore the shared package between the distribution, which make up over 90% of the packages. So I felt a desire to do full archive rebuilds. The motivation is that in order to trust Trisquel binary packages, I need to trust Ubuntu binary packages (because that make up 90% of the Trisquel packages), and many of those Ubuntu binaries are derived from Debian source packages. How to approach all of this? Last year I created the debdistrebuild project, and did top-50 popcon package rebuilds of Debian bullseye, bookworm, trixie, and Ubuntu noble and jammy, on a mix of amd64 and arm64. The amount of reproducibility was lower. Primarily the differences were caused by using different build inputs. Last year I spent (too much) time creating a mirror of snapshot.debian.org, to be able to have older packages available for use as build inputs. I have two copies hosted at different datacentres for reliability and archival safety. At the time, snapshot.d.o had serious rate-limiting making it pretty unusable for massive rebuild usage or even basic downloads. Watching the multi-month download complete last year had a meditating effect. The completion of my snapshot download co-incided with me realizing something about the nature of rebuilding packages. Let me below give a recap of the idempotent rebuilds idea, because it motivate my work to build all of Debian from a GitLab pipeline. One purpose for my effort is to be able to trust the binaries that I use on my laptop. I believe that without building binaries from source code, there is no practically feasible way to trust binaries. To trust any binary you receive, you can de-assemble the bits and audit the assembler instructions for the CPU you will execute it on. Doing that on a OS-wide level this is unpractical. A more practical approach is to audit the source code, and then confirm that the binary is 100% bit-by-bit identical to one that you can build yourself (from the same source) on your own trusted toolchain. This is similar to a reproducible build. My initial goal with debdistrebuild was to get to 100% bit-by-bit identical rebuilds, and then I would have trustworthy binaries. Or so I thought. This also appears to be the goal of reproduce.debian.net. They want to reproduce the official Debian binaries. That is a worthy and important goal. They achieve this by building packages using the build inputs that were used to build the binaries. The build inputs are earlier versions of Debian packages (not necessarily from any public Debian release), archived at snapshot.debian.org. I realized that these rebuilds would be not be sufficient for me: it doesn t solve the problem of how to trust the toolchain. Let s assume the reproduce.debian.net effort succeeds and is able to 100% bit-by-bit identically reproduce the official Debian binaries. Which appears to be within reach. To have trusted binaries we would only have to audit the source code for the latest version of the packages AND audit the tool chain used. 
There is no escaping from auditing all the source code; that's what I think we all would prefer to focus on, to be able to improve upstream source code. The trouble is auditing the tool chain. With the reproduce.debian.net approach, that is a recursive problem back to really ancient Debian packages, some of which may no longer build or work, or even be legally distributable. Auditing all those old packages is a LARGER effort than auditing all current packages! Auditing old packages is also of less use for making contributions: those releases are old, and chances are any improvements have already been implemented and released, or the improvements are no longer applicable because the projects evolved since the earlier version. See where this is going now? I reached the conclusion that reproducing official binaries using the same build inputs is not what I'm interested in. I want to be able to build the binaries that I use from source using a toolchain that I can also build from source. And preferably all of this using the latest version of all packages, so that I can contribute and send patches for them, to improve matters. The toolchain that reproduce.debian.net is using is not trustworthy unless all those ancient packages are audited or rebuilt bit-by-bit identically, and I don't see any practical way forward to achieve that goal. Nor have I seen anyone working on that problem. It is possible to do, though, but I think there are simpler ways to achieve the same goal.
My approach to reach trusted binaries on my laptop appears to be a three-step effort: How to go about achieving this? Today's Debian build architecture is something that lacks transparency and end-user control. The build environment and signing keys are managed by, or influenced by, unidentified people following undocumented (or at least not public) security procedures, under unknown legal jurisdictions. I always wondered why none of the Debian derivatives have adopted a modern GitDevOps-style approach as a method to improve binary build transparency; maybe I missed some project? If you want to contribute to some GitHub or GitLab project, you click the Fork button and get a CI/CD pipeline running which rebuilds artifacts for the project. This makes it easy for people to contribute, and you get good QA control because the entire chain up until the artifact release is produced and tested. At least in theory. Many projects are behind on this, but it seems like this is a useful goal for all projects. This is also liberating: all users are able to reproduce artifacts. There is no longer any magic involved in preparing release artifacts. As we've seen with many software supply-chain security incidents over the past years, where the magic is involved is a good place to introduce malicious code.
To allow me to continue with my experiment, I thought the simplest way forward was to set up a GitDevOps-centric and user-controllable way to build the entire Debian archive. Let me introduce the debdistbuild project. Debdistbuild is a re-usable GitLab CI/CD pipeline, similar to the Salsa CI pipeline. It provides one build job definition and one deploy job definition. The pipeline can run on GitLab.org Shared Runners or you can set up your own runners, like my GitLab riscv64 runner setup. I have concerns about relying on GitLab (both as software and as a service), but my ideas are easy to transfer to some other GitDevSecOps setup such as Codeberg.org.
Self-hosting GitLab, including self-hosted runners, is common today, and Debian relies increasingly on Salsa for this. All of the build infrastructure could be hosted on Salsa eventually. The build job is simple. From within an official Debian container image, packages are built using dpkg-buildpackage, essentially by invoking the following commands.
sed -i 's/ deb$/ deb deb-src/' /etc/apt/sources.list.d/*.sources
apt-get -o Acquire::Check-Valid-Until=false update
apt-get dist-upgrade -q -y
apt-get install -q -y --no-install-recommends build-essential fakeroot
env DEBIAN_FRONTEND=noninteractive \
    apt-get build-dep -y --only-source $PACKAGE=$VERSION
useradd -m build
DDB_BUILDDIR=/build/reproducible-path
chgrp build $DDB_BUILDDIR
chmod g+w $DDB_BUILDDIR
su build -c "apt-get source --only-source $PACKAGE=$VERSION" > ../$PACKAGE_$VERSION.build
cd $DDB_BUILDDIR
su build -c "dpkg-buildpackage"
cd ..
mkdir out
mv -v $(find $DDB_BUILDDIR -maxdepth 1 -type f) out/
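To get an idea of how close a rebuilt package is to the official binary, the resulting .deb files can be compared with diffoscope; a sketch with example file names (not taken from the pipeline) would be:
  apt-get download hello
  diffoscope out/hello_2.10-3_amd64.deb hello_2.10-3_amd64.deb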
The deploy job is also simple. It commit artifacts to a Git project using Git-LFS to handle large objects, essentially something like this:
if ! grep -q '^pool/**' .gitattributes; then
    git lfs track 'pool/**'
    git add .gitattributes
    git commit -m"Track pool/* with Git-LFS." .gitattributes
fi
POOLDIR=$(if test "$(echo "$PACKAGE" | cut -c1-3)" = "lib"; then C=4; else C=1; fi; echo "$DDB_PACKAGE" | cut -c1-$C)
mkdir -pv pool/main/$POOLDIR/
rm -rfv pool/main/$POOLDIR/$PACKAGE
mv -v out pool/main/$POOLDIR/$PACKAGE
git add pool
git commit -m"Add $PACKAGE." -m "$CI_JOB_URL" -m "$VERSION" -a
if test "$ DDB_GIT_TOKEN:- " = ""; then
    echo "SKIP: Skipping git push due to missing DDB_GIT_TOKEN (see README)."
else
    git push -o ci.skip
fi
That's it! The actual implementation is a bit longer, but the major difference is the log and error handling. You may review the source code of the base Debdistbuild pipeline definition, the base Debdistbuild script and the rc.d/-style scripts implementing the build.d/ process and the deploy.d/ commands.
There was one complication related to artifact size. GitLab.org job artifacts are limited to 1GB. Several packages in Debian produce artifacts larger than this. What to do? GitLab supports up to 5GB for files stored in its package registry, but this limit is too close for my comfort, having seen some multi-GB artifacts already. I made the build job optionally upload artifacts to an S3 bucket using a SHA256-hashed file hierarchy. I'm using Hetzner Object Storage but there are many S3 providers around, including self-hosting options. This hierarchy is compatible with the Git-LFS .git/lfs/object/ hierarchy, and it is easy to set up a separate Git-LFS object URL to allow Git-LFS object downloads from the S3 bucket. In this mode, only Git-LFS stubs are pushed to the git repository. It should have no trouble handling the large number of files, since I have earlier experience with Apt mirrors in Git-LFS.
To speed up job execution, and to guarantee a stable build environment, instead of installing build-essential packages on every build job execution, I prepare some build container images. The project responsible for this is tentatively called stage-N-containers. Right now it creates containers suitable for rolling builds of trixie on amd64, arm64, and riscv64, and a container intended for use as the stage-0, based on the 20250407 docker images of bookworm on amd64 and arm64 using the snapshot.d.o 20250407 archive. Or actually, I'm using snapshot-cloudflare.d.o because of download speed and reliability. I would have preferred to use my own snapshot mirror with Hetzner bandwidth, alas the Debian snapshot team have concerns about me publishing the list of (SHA1 hash) filenames publicly and I haven't been bothered to set up non-public access.
Debdistbuild has built around 2,500 packages for bookworm on amd64 and bookworm on arm64. To confirm the generality of my approach, it also builds trixie on amd64, trixie on arm64 and trixie on riscv64. The riscv64 builds are all on my own hosted runners. For amd64 and arm64 my own runners are only used for large packages where the GitLab.com shared runners run into the 3 hour time limit.
What's next in this venture? Some ideas include: What do you think?

28 April 2025

Valhalla's Things: POLARVIDE modular jacket

Posted on April 28, 2025
Tags: madeof:atoms, craft:sewing
A woman with early morning hair wearing a knee-length grey polar fleece jacket; it is closed at the waist with a twine belt that makes the bound front edge go in a smooth curve between the one side of the neck to the other side of the waist and then flare back outwards on the hips. The sleeves are long enough to go over the hands, and both them and the hem are cut Years ago I made myself a quick dressing gown from a white fleece IKEA throw and often wore it in the morning between waking up and changing into day clothes. One day I want to make myself a fancy victorian wrapper, to use in its place, but that s still in the early planning stage, and will require quite some work. a free cat sitting half asleep on an old couch, with a formerly white piece of fabric draped between the armrest and the seat. A piece of cardboard between two seat pillows provides additional protection from the wind. Then last autumn I discovered that the taxes I owed to the local lord (who provides protection from mice and other small animals) included not just a certain amount of kibbles, but also some warm textiles, and the dressing gown (which at this time was definitely no longer pristine) had to go. For a while I had to do without a dressing gown, but then in the second half of this winter I had some time for a quick machine sewing project. I could not tackle the big victorian thing, but I still had a second POLARVIDE throw from IKEA (this time in a more sensible dark grey) I had bought with sewing intents. The fabric in a throw isn t that much, so I needed something pretty efficient, and rather than winging it as I had done the first time I decided I wanted to try the Modular Jacket from A Year of Zero Waste Sewing (which I had bought in the zine instalments: the jacket is in the March issue). After some measuring and decision taking, I found that I could fit most of the pieces and get a decent length, but I had no room for the collar, and probably not for the belt nor the pockets, but I cut all of the main pieces. I had a possible idea for a contrasting collar, but I decided to start sewing the main pieces and decide later, before committing to cutting the other fabric. As I was assembling the jacket I decided that as a dressing gown I could do without the collar, and noticed that with the fraying-free plastic fleece I didn t really need the front facings, so I cut those in half lengthwise, pieced them together, and used them as binding to finish the front end. the back of the worn jacket, other than being clinched in by the belt it is pretty straight. Since I didn t have enough fabric for the belt I also skipped the belt loops, but I have been wearing this with random belts and I don t feel the need for them anyway. I ve also been thinking about adding a button just above the bust and use that to keep it closed, but I m still not 100% sure about it. Another thing I still need to do is to go through the few scraps of fleece that are left and see if I can piece together a serviceable pocket or two. folding the sleeves back by a good 10 cm to show the hands. Because of the size of the fabric, I ended up having quite long sleeves: I m happy with them because they mean that I can cover my hands when it s cold, or fold them back to make a nice cuff. If I ll make a real jacket with this patter I ll have to take this in consideration, and either make the sleeves shorter or finish the seam in a way that looks nice when folded back. Will I make a real jacket? 
I'm not sure, it's not really my style of outer garment, but as a dressing gown it has already been used quite a bit (as in, almost every morning since I've made it :) ) and will continue to be used until too worn to be useful, and that's a good thing.

27 April 2025

Valhalla's Things: Stickerses

Posted on April 27, 2025
Tags: madeof:atoms, madeof:bits, craft:graphics, topic:stickers
After just a few years of procrastination, I ve given a wash of git-filter-repo to the repository where I keep my hexagonal sticker designs, removed a few failed experiments and stuff with dubious licensing and was able to finally publish it among my public git repositories This repo includes the template I m using, most of the stickers I ve had printed, some that have been published elsewhere and have been printed by other people, as well as some that have never been printed and I may or may not print in the future. The licensing details are in the metadata of each file, and mostly depend on the logos or cliparts used. Most, but not all, are under a free culture license. My server is not setup to correctly serve the SVG files, yet: downloading them (from the plain links) should work, but I need to fix the content type that is provided. I will probably procrastinate doing it for quite some time, but eventually it will be done. Of course cloning the repository from the public https URL also works. BRB, need to add MOAR stickerses.

12 April 2025

Kalyani Kenekar: Nextcloud Installation HowTo: Secure Your Data with a Private Cloud

Logo NGinx Nextcloud is an open-source software suite that enables you to set up and manage your own cloud storage and collaboration platform. It offers a range of features similar to popular cloud services like Google Drive or Dropbox, but with the added benefit of complete control over your data and the server where it's hosted. I wanted to have a look at Nextcloud and the steps to set up my own instance with a PostgreSQL based database together with NGinx as the webserver to serve the WebUI. Before doing a full productive setup I wanted to play around locally with all the needed steps and worked them out within a KVM machine. While doing this I wrote down some notes, mostly to document for myself what I need to do to get a Nextcloud installation running and usable. So this manual describes how to set up a Nextcloud installation on Debian 12 Bookworm based on NGinx and PostgreSQL.

Nextcloud Installation

Install PHP and PHP extensions for Nextcloud Nextcloud is basically a PHP application, so we need to install PHP packages to get it working in the end. The following steps are based on the upstream documentation about how to install your own Nextcloud instance. Installing the virtual package php on a Debian Bookworm system would pull in the php8.2 metapackage as a dependency. This package itself would then also pull in the package libapache2-mod-php8.2 as a dependency, which in turn would pull in the apache2 webserver as a depending package. This is something I didn't want to have, as I want to use the NGinx that is already installed on the system instead. To avoid this we need to explicitly exclude the package libapache2-mod-php8.2 from the list of packages which we want to install. To achieve this we have to append a hyphen - at the end of the package name, so we need to use libapache2-mod-php8.2- within the package list, which tells apt not to install this package as a dependency. I ended up with this call to get all needed dependencies installed.
$ sudo apt install php php-cli php-fpm php-json php-common php-zip \
  php-gd php-intl php-curl php-xml php-mbstring php-bcmath php-gmp \
  php-pgsql libapache2-mod-php8.2-
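Since the whole point of the trailing hyphen is to keep the apache2 webserver off the system, a quick check that it really did not get installed can be done like this (just a verification idea, not part of the upstream documentation):
$ dpkg -l | grep -E 'apache2|libapache2-mod-php' || echo "no apache packages installed"
If only the echo text is printed, apt honoured the exclusion.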
  • Check php version (optional step) $ php -v
PHP 8.2.28 (cli) (built: Mar 13 2025 18:21:38) (NTS)
Copyright (c) The PHP Group
Zend Engine v4.2.28, Copyright (c) Zend Technologies
    with Zend OPcache v8.2.28, Copyright (c), by Zend Technologies
  • After installing all the packages, edit the php.ini file: $ sudo vi /etc/php/8.2/fpm/php.ini
  • Change the following settings per your requirements:
max_execution_time = 300
memory_limit = 512M
post_max_size = 128M
upload_max_filesize = 128M
  • To make these settings effective, restart the php-fpm service $ sudo systemctl restart php8.2-fpm

Install PostgreSQL, Create a database and user This manual assumes we will use a PostgreSQL server on localhost, if you have a server instance on some remote site you can skip the installation step here. $ sudo apt install postgresql postgresql-contrib postgresql-client
  • Check version after installation (optional step): $ sudo -i -u postgres $ psql --version
  • This output will be seen: psql (15.12 (Debian 15.12-0+deb12u2))
  • Exit the PSQL shell by using the command \q. postgres=# \q
  • Exit the CLI of the postgres user: postgres@host:~$ exit

Create a PostgreSQL Database and User:
  1. Create a new PostgreSQL user (Use a strong password!): $ sudo -u postgres psql -c "CREATE USER nextcloud_user PASSWORD '1234';"
  2. Create new database and grant access: $ sudo -u postgres psql -c "CREATE DATABASE nextcloud_db WITH OWNER nextcloud_user ENCODING=UTF8;"
  3. (Optional) Check if we can now connect to the database server and the database itself (you will be asked for the password of the database user!). If this is not working it makes no sense to proceed further; we need to fix the access first! $ psql -h localhost -U nextcloud_user -d nextcloud_db or $ psql -h 127.0.0.1 -U nextcloud_user -d nextcloud_db
  • Log out from postgres shell using the command \q.

Download and install Nextcloud
  • Use the following command to download the latest version of Nextcloud: $ wget https://download.nextcloud.com/server/releases/latest.zip
  • Extract file into the folder /var/www/html with the following command: $ sudo unzip latest.zip -d /var/www/html
  • Change ownership of the /var/www/html/nextcloud directory to www-data. $ sudo chown -R www-data:www-data /var/www/html/nextcloud

Configure NGinx for Nextcloud to use a certificate In case you want to use a self-signed certificate, e.g. if you are just playing around with setting up Nextcloud locally for testing purposes, you can do the following steps.
  • Generate the private key and certificate: $ sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout nextcloud.key -out nextcloud.crt $ sudo cp nextcloud.crt /etc/ssl/certs/ && sudo cp nextcloud.key /etc/ssl/private/
  • If you want or need to use the service of Let s Encrypt (or similar) drop the step above and create your required key data by using this command: $ sudo certbot --nginx -d nextcloud.your-domain.com You will need to adjust the path to the key and certificate in the next step!
  • Change the NGinx configuration: $ sudo vi /etc/nginx/sites-available/nextcloud.conf
  • Add the following snippet into the file and save it.
# /etc/nginx/sites-available/nextcloud.conf
upstream php-handler {
    #server 127.0.0.1:9000;
    server unix:/run/php/php8.2-fpm.sock;
}

# Set the `immutable` cache control options only for assets with a cache
# busting `v` argument

map $arg_v $asset_immutable {
    "" "";
    default ", immutable";
}

server {
    listen 80;
    listen [::]:80;
    # Adjust this to the correct server name!
    server_name nextcloud.local;

    # Prevent NGinx HTTP Server Detection
    server_tokens off;

    # Enforce HTTPS
    return 301 https://$server_name$request_uri;
}

server {
    listen 443      ssl http2;
    listen [::]:443 ssl http2;
    # Adjust this to the correct server name!
    server_name nextcloud.local;

    # Path to the root of your installation
    root /var/www/html/nextcloud;

    # Use Mozilla's guidelines for SSL/TLS settings
    # https://mozilla.github.io/server-side-tls/ssl-config-generator/
    # Adjust the usage and paths of the correct key data! E.g. if you want to use Let's Encrypt key material!
    ssl_certificate /etc/ssl/certs/nextcloud.crt;
    ssl_certificate_key /etc/ssl/private/nextcloud.key;
    # ssl_certificate /etc/letsencrypt/live/nextcloud.your-domain.com/fullchain.pem; 
    # ssl_certificate_key /etc/letsencrypt/live/nextcloud.your-domain.com/privkey.pem;

    # Prevent NGinx HTTP Server Detection
    server_tokens off;

    # HSTS settings
    # WARNING: Only add the preload option once you read about
    # the consequences in https://hstspreload.org/. This option
    # will add the domain to a hardcoded list that is shipped
    # in all major browsers and getting removed from this list
    # could take several months.
    #add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload" always;

    # set max upload size and increase upload timeout:
    client_max_body_size 512M;
    client_body_timeout 300s;
    fastcgi_buffers 64 4K;

    # Enable gzip but do not remove ETag headers
    gzip on;
    gzip_vary on;
    gzip_comp_level 4;
    gzip_min_length 256;
    gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
    gzip_types application/atom+xml text/javascript application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/wasm application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;

    # Pagespeed is not supported by Nextcloud, so if your server is built
    # with the `ngx_pagespeed` module, uncomment this line to disable it.
    #pagespeed off;

    # This setting allows you to optimize the HTTP2 bandwidth.
    # See https://blog.cloudflare.com/delivering-http-2-upload-speed-improvements/
    # for tuning hints
    client_body_buffer_size 512k;

    # HTTP response headers borrowed from Nextcloud `.htaccess`
    add_header Referrer-Policy                   "no-referrer"       always;
    add_header X-Content-Type-Options            "nosniff"           always;
    add_header X-Frame-Options                   "SAMEORIGIN"        always;
    add_header X-Permitted-Cross-Domain-Policies "none"              always;
    add_header X-Robots-Tag                      "noindex, nofollow" always;
    add_header X-XSS-Protection                  "1; mode=block"     always;

    # Remove X-Powered-By, which is an information leak
    fastcgi_hide_header X-Powered-By;

    # Set .mjs and .wasm MIME types
    # Either include it in the default mime.types list
    # and include that list explicitly or add the file extension
    # only for Nextcloud like below:
    include mime.types;
    types {
        text/javascript js mjs;
        application/wasm wasm;
    }

    # Specify how to handle directories -- specifying `/index.php$request_uri`
    # here as the fallback means that NGinx always exhibits the desired behaviour
    # when a client requests a path that corresponds to a directory that exists
    # on the server. In particular, if that directory contains an index.php file,
    # that file is correctly served; if it doesn't, then the request is passed to
    # the front-end controller. This consistent behaviour means that we don't need
    # to specify custom rules for certain paths (e.g. images and other assets,
    # `/updater`, `/ocs-provider`), and thus
    # `try_files $uri $uri/ /index.php$request_uri`
    # always provides the desired behaviour.
    index index.php index.html /index.php$request_uri;

    # Rule borrowed from `.htaccess` to handle Microsoft DAV clients
    location = / {
        if ( $http_user_agent ~ ^DavClnt ) {
            return 302 /remote.php/webdav/$is_args$args;
        }
    }

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    # Make a regex exception for `/.well-known` so that clients can still
    # access it despite the existence of the regex rule
    # `location ~ /(\.|autotest|...)` which would otherwise handle requests
    # for `/.well-known`.
    location ^~ /.well-known {
        # The rules in this block are an adaptation of the rules
        # in `.htaccess` that concern `/.well-known`.

        location = /.well-known/carddav { return 301 /remote.php/dav/; }
        location = /.well-known/caldav  { return 301 /remote.php/dav/; }

        location /.well-known/acme-challenge    { try_files $uri $uri/ =404; }
        location /.well-known/pki-validation    { try_files $uri $uri/ =404; }

        # Let Nextcloud's API for `/.well-known` URIs handle all other
        # requests by passing them to the front-end controller.
        return 301 /index.php$request_uri;
    }

    # Rules borrowed from `.htaccess` to hide certain paths from clients
    location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)(?:$|/)  { return 404; }
    location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console)                { return 404; }

    # Ensure this block, which passes PHP files to the PHP process, is above the blocks
    # which handle static assets (as seen below). If this block is not declared first,
    # then NGinx will encounter an infinite rewriting loop when it prepends `/index.php`
    # to the URI, resulting in a HTTP 500 error response.
    location ~ \.php(?:$|/) {
        # Required for legacy support
        rewrite ^/(?!index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|ocs-provider\/.+|.+\/richdocumentscode(_arm64)?\/proxy) /index.php$request_uri;

        fastcgi_split_path_info ^(.+?\.php)(/.*)$;
        set $path_info $fastcgi_path_info;

        try_files $fastcgi_script_name =404;

        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $path_info;
        fastcgi_param HTTPS on;

        fastcgi_param modHeadersAvailable true;         # Avoid sending the security headers twice
        fastcgi_param front_controller_active true;     # Enable pretty urls
        fastcgi_pass php-handler;

        fastcgi_intercept_errors on;
        fastcgi_request_buffering off;

        fastcgi_max_temp_file_size 0;
    }

    # Serve static files
    location ~ \.(?:css|js|mjs|svg|gif|png|jpg|ico|wasm|tflite|map|ogg|flac)$ {
        try_files $uri /index.php$request_uri;
        # HTTP response headers borrowed from Nextcloud `.htaccess`
        add_header Cache-Control                     "public, max-age=15778463$asset_immutable";
        add_header Referrer-Policy                   "no-referrer"       always;
        add_header X-Content-Type-Options            "nosniff"           always;
        add_header X-Frame-Options                   "SAMEORIGIN"        always;
        add_header X-Permitted-Cross-Domain-Policies "none"              always;
        add_header X-Robots-Tag                      "noindex, nofollow" always;
        add_header X-XSS-Protection                  "1; mode=block"     always;
        access_log off;     # Optional: Don't log access to assets
    }

    location ~ \.woff2?$ {
        try_files $uri /index.php$request_uri;
        expires 7d;         # Cache-Control policy borrowed from `.htaccess`
        access_log off;     # Optional: Don't log access to assets
    }

    # Rule borrowed from `.htaccess`
    location /remote {
        return 301 /remote.php$request_uri;
    }

    location / {
        try_files $uri $uri/ /index.php$request_uri;
    }
}
  • Symlink the site configuration from sites-available to sites-enabled. $ ln -s /etc/nginx/sites-available/nextcloud.conf /etc/nginx/sites-enabled/
  • Restart NGinx and access the URI in the browser.
  • Go through the installation of Nextcloud.
  • The user data on the installation dialog should point e.g. to administrator or similar; that user will get administrative access rights in Nextcloud!
  • To adjust the database connection details you have to edit the file $install_folder/config/config.php. That means, in the example within this post, you would need to modify /var/www/html/nextcloud/config/config.php to control or change the database connection.
---%<---
    'dbname' => 'nextcloud_db',
    'dbhost' => 'localhost', #(Or your remote PostgreSQL server address if you have.)
    'dbport' => '',
    'dbtableprefix' => 'oc_',
    'dbuser' => 'nextcloud_user',
    'dbpassword' => '1234', #(The password you set for database user.)
--->%---
After the installation and setup of the Nextcloud PHP application there are more steps to be done. Have a look into the WebUI to see what additional steps you need to take, like creating a cronjob (see the sketch below) or tuning some more PHP configurations. If you've done everything correctly you should see a login page similar to this: Login Page of your Nextcloud instance
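One of those additional steps is the background jobs cronjob just mentioned; a minimal sketch of what that usually looks like, assuming the installation path used in this post:
$ sudo crontab -u www-data -e
# add a line like this to run Nextcloud's background jobs every 5 minutes:
*/5 * * * * php -f /var/www/html/nextcloud/cron.php
Afterwards the background jobs mode can be switched from AJAX to Cron in the basic settings of the administration WebUI.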

Optional other steps for more enhanced configuration modifications

Move the data folder to somewhere else The data folder is the root folder for all user content. By default it is located in $install_folder/data, so in our case here it is in /var/www/html/nextcloud/data.
  • Move the data directory outside the web server document root. $ sudo mv /var/www/html/nextcloud/data /var/nextcloud_data
  • Ensure access permissions, mostly not needed if you move the folder. $ sudo chown -R www-data:www-data /var/nextcloud_data $ sudo chown -R www-data:www-data /var/www/html/nextcloud/
  • Update the Nextcloud configuration:
    1. Open the config/config.php file of your Nextcloud installation. $ sudo vi /var/www/html/nextcloud/config/config.php
    2. Update the datadirectory parameter to point to the new location of your data directory.
  ---%<---
     'datadirectory' => '/var/nextcloud_data'
  --->%---
  • Restart NGinx service: $ sudo systemctl restart nginx
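After moving the data directory and restarting the services, a quick sanity check can be done with the occ tool; if the configuration is consistent it will report that the instance is installed (path as used in this post, just a suggestion and not part of the original steps):
$ sudo -u www-data php /var/www/html/nextcloud/occ status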

Make the installation available for multiple FQDNs on the same server
  • Adjust the Nextcloud configuration to listen to and accept requests for different domain names. Configure and adjust the key trusted_domains accordingly (an occ-based alternative is shown after this list). $ sudo vi /var/www/html/nextcloud/config/config.php
  ---%<---
    'trusted_domains' => 
    array (
      0 => 'domain.your-domain.com',
      1 => 'domain.other-domain.com',
    ),
  --->%---
  • Create and adjust the needed site configurations for the webserver.
  • Restart the NGinx unit.
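As the occ-based alternative referenced above, the trusted domains can also be inspected and extended on the command line; a sketch, assuming the installation path used in this post and that the new name should become entry 1 of the array:
$ cd /var/www/html/nextcloud
$ sudo -u www-data php occ config:system:get trusted_domains
$ sudo -u www-data php occ config:system:set trusted_domains 1 --value=domain.other-domain.com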

An error message about .ocdata might occur
  • .ocdata is not found inside the data directory
    • Create file using touch and set necessary permissions. $ sudo touch /var/nextcloud_data/.ocdata $ sudo chown -R www-data:www-data /var/nextcloud_data/

The password for the administrator user is unknown
  1. Log in to your server:
    • SSH into the server where your PostgreSQL database is hosted.
  2. Switch to the PostgreSQL user:
    • $ sudo -i -u postgres
  3. Access the PostgreSQL command line
    • psql
  4. List the databases: (If you re unsure which database is being used by Nextcloud, you can list all the databases by the list command.)
    • \l
  5. Switch to the Nextcloud database:
    • Switch to the specific database that Nextcloud is using.
    • \c nextcloud_db
  6. Reset the password for the Nextcloud database user:
    • ALTER USER nextcloud_user WITH PASSWORD 'new_password';
  7. Exit the PostgreSQL command line:
    • \q
  8. Verify Database Configuration:
    • Check the database connection details in the config.php file to ensure they are correct. sudo vi /var/www/html/nextcloud/config/config.php
    • Replace nextcloud_db, nextcloud_user, and your_password with your actual database name, user, and password.
---%<---
    'dbname' => 'nextcloud_db',
    'dbhost' => 'localhost', #(or your PostgreSQL server address)
    'dbport' => '',
    'dbtableprefix' => 'oc_',
    'dbuser' => 'nextcloud_user',
    'dbpassword' => '1234', #(The password you set for nextcloud_user.)
--->%---
  9. Restart NGinx and access the UI through the browser.
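Note that the steps above reset the password of the PostgreSQL database user. If it is the password of the Nextcloud administrator account itself that is lost, the occ tool can reset it directly; a sketch, assuming the installation path used in this post and an admin user called administrator:
$ cd /var/www/html/nextcloud
$ sudo -u www-data php occ user:resetpassword administrator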

1 April 2025

Michael Ablassmeier: qmpbackup 0.46 - add image fleecing

I've released qmpbackup 0.46, which now utilizes the image fleecing technique for backups. Usually, during backup, Qemu will use a so-called copy-before-write filter so that data for new guest writes is sent to the backup target first; the guest write blocks until this operation is finished. If the backup target is flaky, or becomes unavailable during the backup operation, this could lead to high I/O wait times or even complete VM lockups. To fix this, a so-called fleecing image is introduced during backup, which is used as a temporary cache for write operations by the guest. This image can be placed on the same storage as the virtual machine disks, so it is independent of the backup target's performance. The documentation on which steps are required to get this going using the Qemu QMP protocol is, let's say... lacking. The following examples show the general functionality, but should be enhanced to use transactions where possible. All commands are in qmp-shell command format. Let's start with a full backup:
# create a new bitmap
block-dirty-bitmap-add node=disk1 name=bitmap persistent=true
# add the fleece image to the virtual machine (same size as original disk required)
blockdev-add driver=qcow2 node-name=fleecie file={"driver":"file","filename":"/tmp/fleece.qcow2"}
# add the backup target file to the virtual machine
blockdev-add driver=qcow2 node-name=backup-target-file file={"driver":"file","filename":"/tmp/backup.qcow2"}
# enable the copy-before-writer for the first disk attached, utilizing the fleece image
blockdev-add driver=copy-before-write node-name=cbw file=disk1 target=fleecie
# "blockdev-replace": make the copy-before-writer filter the major device (use "query-block" to get path parameter value, qdev node)
qom-set path=/machine/unattached/device[20] property=drive value=cbw
# add the snapshot-access filter backing the copy-before-writer
blockdev-add driver=snapshot-access file=cbw node-name=snapshot-backup-source
# create a full backup
blockdev-backup device=snapshot-backup-source target=backup-target-file sync=full job-id=test
[ wait until block job finishes]
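# optional (not from the original post): instead of waiting blindly you can
# poll the job list until it is empty, or wait for the BLOCK_JOB_COMPLETED
# event; in qmp-shell format the poll is simply:
query-block-jobs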
# remove the snapshot access filter from the virtual machine
blockdev-del node-name=snapshot-backup-source
# switch back to the regular disk
qom-set path=/machine/unattached/device[20] property=drive value=node-disk1
# remove the copy-before-writer
blockdev-del node-name=cbw
# remove the backup-target-file
blockdev-del node-name=backup-target-file
# detach the fleecing image
blockdev-del node-name=fleecie
After this process, the temporary fleecing image can be deleted/recreated. Now let's go for an incremental backup:
# add the fleecing and backup target image, like before
blockdev-add driver=qcow2 node-name=fleecie file={"driver":"file","filename":"/tmp/fleece.qcow2"}
blockdev-add driver=qcow2 node-name=backup-target-file file={"driver":"file","filename":"/tmp/backup-incremental.qcow2"}
# add the copy-before-write filter, but utilize the bitmap created during full backup
blockdev-add driver=copy-before-write node-name=cbw file=disk1 target=fleecie bitmap={"node":"disk1","name":"bitmap"}
# switch device to the copy-before-write filter
qom-set path=/machine/unattached/device[20] property=drive value=cbw
# add the snapshot-access filter
blockdev-add driver=snapshot-access file=cbw node-name=snapshot-backup-source
# merge the bitmap created during full backup to the snapshot-access device so
# the backup operation can access it. (you better use a transaction here)
block-dirty-bitmap-add node=snapshot-backup-source name=bitmap
block-dirty-bitmap-merge node=snapshot-backup-source target=bitmap bitmaps=[{"node":"disk1","name":"bitmap"}]
# create incremental backup (you better use a transaction here)
blockdev-backup device=snapshot-backup-source target=backup-target-file job-id=test sync=incremental bitmap=bitmap
 [ wait until backup has finished ]
 [ cleanup like before ]
# clear the dirty bitmap (you better use a transaction here)
block-dirty-bitmap-clear node=disk1 name=bitmap
Or, use a simple reproducer by directly passing qmp commands via stdio:
#!/usr/bin/bash
qemu-img create -f raw disk 1M
qemu-img create -f raw fleece 1M
qemu-img create -f raw backup 1M
qemu-system-x86_64 -drive node-name=disk,file=disk,format=file -qmp stdio -nographic -nodefaults <<EOF
 "execute": "qmp_capabilities" 
 "execute": "block-dirty-bitmap-add", "arguments":  "node": "disk", "name": "bitmap" 
 "execute": "blockdev-add", "arguments":  "node-name": "fleece", "driver": "file", "filename": "fleece" 
 "execute": "blockdev-add", "arguments":  "node-name": "backup", "driver": "file", "filename": "backup" 
 "execute": "blockdev-add", "arguments":  "node-name": "cbw", "driver": "copy-before-write", "file": "disk", "target": "fleece", "bitmap":  "node": "disk", "name": "bitmap" 
 "execute": "query-block" 
 "execute": "qom-set", "arguments":  "path": "/machine/unattached/device[4]", "property": "drive", "value": "cbw" 
 "execute": "blockdev-add", "arguments":  "node-name": "snapshot", "driver": "snapshot-access", "file": "cbw" 
 "execute": "block-dirty-bitmap-add", "arguments":  "node": "snapshot", "name": "tbitmap" 
 "execute": "block-dirty-bitmap-merge", "arguments":  "node": "snapshot", "target": "tbitmap", "bitmaps": [ "node": "disk", "name": "bitmap" ] 
[..]
 "execute": "quit" 
EOF

9 March 2025

Niels Thykier: Improving Debian packaging in Kate

The other day, I noted that the emacs integration with debputy stopped working. After debugging for a while, I realized that emacs no longer sent the didOpen notification that is expected of it, which confused debputy. At this point, I was already several hours into the debugging and I noted there were some discussions on debian-devel about emacs and byte compilation not working. So I figured I would shelve the emacs problem for now. But I needed an LSP capable editor and with my vi skills leaving much to be desired, I skipped out on vim-youcompleteme. Instead, I pulled out kate, which I had not been using for years. It had LSP support, so it would be fine, right? Well, no. Turns out that debputy LSP support had some assumptions that worked for emacs but not kate. Plus once you start down the rabbit hole, you stumble on things you missed previously.
Getting started First order of business was to tell kate about debputy. Conveniently, kate has a configuration tab for adding language servers in a JSON format right next to the tab where you can see its configuration for built-in LSP (also in JSON format). So a quick bit of copy-paste magic and that was done. Yesterday, I opened an MR against upstream to have the configuration added (https://invent.kde.org/utilities/kate/-/merge_requests/1748) and they already merged it. Today, I then filed a wishlist against kate in Debian to have the Debian maintainers cherry-pick it, so it works out of the box for Trixie (https://bugs.debian.org/1099876). So far so good.
Inlay hint woes Since July (2024), debputy has support for Inlay hints. They are basically small bits of text that the LSP server can ask the editor to inject into the text to provide hints to the reader. Typically, you see them used to provide typing hints, where the editor or the underlying LSP server has figured out the type of a variable or expression that you did not explicitly type. Another common use case is to inject the parameter name for positional arguments when calling a function, so the user do not have to count the position to figure out which value is passed as which parameter. In debputy, I have been using the Inlay hints to show inherited fields in debian/control. As an example, if you have a definition like:
Source: foo-src
Section: devel
Priority: optional
Package: foo-bin
Architecture: any
Then foo-bin inherits the Section and Priority fields since it does not supply its own. Previously, debputy would do that by injecting the fields themselves and their value just below the Package field as if you had typed them out directly. The editor always renders Inlay hints distinctly from regular text, so there was no risk of confusion and it made the text look like a valid debian/control file end to end. The result looked something like:
Source: foo-src
Section: devel
Priority: optional
Package: foo-bin
Section: devel
Priority: optional
Architecture: any
With the second instances of Section and Priority being rendered differently from their surroundings (usually faded or colorless). Unfortunately, kate did not like injecting Inlay hints with a newline in them, which was needed for this trick. Reading into the LSP specs, it says nothing about multi-line Inlay hints being a thing and I figured I would see this problem again with other editors if I left it be. I ended up changing the Inlay hints to be placed at the end of the Package field and then included surrounding () for better visuals. So now, it looks like:
Source: foo-src
Section: devel
Priority: optional
Package: foo-bin  (Section: devel)  (Priority: optional)
Architecture: any
Unfortunately, it is no longer 1:1 with the underlying syntax which I liked about the previous one. But it works in more editors and is still explicit. I also removed the Inlay hint for the Homepage field. It takes too much space and I have yet to meet someone missing it in the binary stanza. If you have any better ideas for how to render it, feel free to reach out to me.
Spurious completion and hover As I was debugging the Inlay hints, I wanted to do a quick restart of debputy after each fix. Then I would trigger a small change to the document to ensure kate would request an update from debputy to render the Inlay hints with the new code. The full outgoing payloads are sent via the logs to the client, so it was really about minimizing which LSP requests are sent to debputy. Notably, two cases would flood the log:
  • Completion requests. These are triggered by typing anything at all and since I wanted to make a change, I could not avoid this. So here it was about making sure there would be nothing to complete, so the result was as small as possible.
  • Hover doc requests. These are triggered by the mouse hovering over a field, so this was mostly about ensuring my mouse movement did not linger over any field on the way between restarting the LSP server and scrolling the log in kate.
In my infinite wisdom, I chose to make a comment line where I would do the change. I figured it would neuter the completion requests completely and it should not matter if my cursor landed on the comment as there would be no hover docs for comments either. Unfortunately for me, debputy would ignore the fact that it was on a comment line. Instead, it would find the next field after the comment line and try to complete based on that. Normally you do not see this, because the editor correctly identifies that none of the completion suggestions start with a \#, so they are all discarded. But it was pretty annoying for the debugging, so now debputy has been told to explicitly stop these requests early on comment lines.
Hover docs for packages I added a feature in debputy where you can hover over package names in your relationship fields (such as Depends) and debputy will render a small snippet about it based on data from your local APT cache. This doc is then handed to the editor and tagged as markdown provided the editor supports markdown rendering. Both emacs and kate support markdown. However, not all markdown renderings are equal. Notably, emacs's rendering does not reformat the text into paragraphs. In a sense, emacs rendering works a bit like <pre>...</pre> except it does a bit of fancy rendering inside the <pre>...</pre>. On the other hand, kate seems to convert the markdown to HTML and then throw the result into an HTML render engine. Here it is important to remember that not all newlines are equal in markdown. A Foo<newline>Bar is treated as one "paragraph" (<p>...</p>) and the HTML render happily renders this as single line Foo Bar provided there is sufficient width to do so. A couple of extra newlines made wonders for the kate rendering, but I have a feeling this is not going to be the last time the hover docs will need some tweaking for prettification. Feel free to reach out if you spot a weirdly rendered hover doc somewhere.
Making quickfixes available in kate Quickfixes are treated as generic code actions in the LSP specs. Each code action has a "type" (kind in the LSP lingo), which enables the editor to group the actions accordingly or filter by certain types of code actions. The design in the specs leads to the following flow:
  1. The LSP server provides the editor with diagnostics (there are multiple ways to trigger this, so we will keep this part simple).
  2. The editor renders them to the user and the user chooses to interact with one of them.
  3. The interaction makes the editor asks the LSP server, which code actions are available at that location (optionally with filter to only see quickfixes).
  4. The LSP server looks at the provided range and is expected to return the relevant quickfixes here.
This flow is really annoying from a LSP server writer point of view. When you do the diagnostics (in step 1), you tend to already know what the possible quickfixes would be. The LSP spec authors realized this at some point, so there are two features the editor provides to simplify this.
  1. In the editor request for code actions, the editor is expected to provide the diagnostics that they received from the server. Side note: I cannot quite tell if this is optional or required from the spec.
  2. The editor can provide support for remembering a data member in each diagnostic. The server can then store arbitrary information in that member, which they will see again in the code actions request. Again, provided that the editor supports this optional feature.
All the quickfix logic in debputy so far has hinged on both of these two features. As life would have it, kate provides neither of them. Which meant I had to teach debputy to keep track of its diagnostics on its own. The plus side is that makes it easier to support "pull diagnostics" down the line, since it requires a similar feature. Additionally, it also means that quickfixes are now available in more editors. For consistency, debputy logic is now always used rather than relying on the editor support when present. The downside is that I had to spend hours coming up with and debugging a way to find the diagnostics that overlap with the range provided by the editor. The most difficult part was keeping the logic straight and getting the runes correct for it.
Making the quickfixes actually work With all of that, kate would show the quickfixes for diagnostics from debputy and you could use them too. However, they would always apply twice with suboptimal outcome as a result. The LSP spec has multiple ways of defining what need to be changed in response to activating a code action. In debputy, all edits are currently done via the WorkspaceEdit type. It has two ways of defining the changes. Either via changes or documentChanges with documentChanges being the preferred one if both parties support this. I originally read that as I was allowed to provide both and the editor would pick the one it preferred. However, after seeing kate blindly use both when they are present, I reviewed the spec and it does say "The edit should either provide changes or documentChanges", so I think that one is on me. None of the changes in debputy currently require documentChanges, so I went with just using changes for now despite it not being preferred. I cannot figure out the logic of whether an editor supports documentChanges. As I read the notes for this part of the spec, my understanding is that kate does not announce its support for documentChanges but it clearly uses them when present. Therefore, I decided to keep it simple for now until I have time to dig deeper.
Remaining limitations with kate There is one remaining limitation with kate that I have not yet solved. The kate program uses KSyntaxHighlighting for its language detection, which in turn is the basis for which LSP server is assigned to a given document. This engine does not seem to support as complex detection logic as I hoped from it. Concretely, it either works on matching on an extension / a basename (same field for both cases) or mime type. This combined with our habit in Debian to use extension less files like debian/control vs. debian/tests/control or debian/rules or debian/upstream/metadata makes things awkward a best. Concretely, the syntax engine cannot tell debian/control from debian/tests/control as they use the same basename. Fortunately, the syntax is close enough to work for both and debputy is set to use filename based lookups, so this case works well enough. However, for debian/rules and debian/upstream/metadata, my understanding is that if I assign these in the syntax engine as Debian files, these rules will also trigger for any file named foo.rules or bar.metadata. That seems a bit too broad for me, so I have opted out of that for now. The down side is that these files will not work out of the box with kate for now. The current LSP configuration in kate does not recognize makefiles or YAML either. Ideally, we would assign custom languages for the affected Debian files, so we do not steal the ID from other language servers. Notably, kate has a built-in language server for YAML and debputy does nothing for a generic YAML document. However, adding YAML as a supported language for debputy would cause conflict and regressions for users that are already happy with their generic YAML language server from kate. So there are certainly still work to be done. If you are good with KSyntaxHighlighting and know how to solve some of this, I hope you will help me out.
Changes unrelated to kate While I was working on debputy, I also added some other features that I want to mention.
  1. The debputy lint command will now show related context to diagnostic in its terminal report when such information is available and is from the same file as the diagnostic itself (cross file cases are rendered without related information). The related information is typically used to highlight a source of a conflict. As an example, if you use the same field twice in a stanza of debian/control, then debputy will add a diagnostic to the second occurrence. The related information for that diagnostic would provide the position of the first occurrence. This should make it easier to find the source of the conflict in the cases where debputy provides it. Let me know if you are missing it for certain diagnostics.

  2. The diagnostics analysis of debian/control will now identify and flag simple duplicated relations (complex ones like OR relations are ignored for now). Thanks to Matthias Geiger for suggesting the feature and Otto Kekäläinen for reporting a false positive that is now fixed.

Closing I am glad I tested with kate to weed out most of these issues in time before the freeze. The Debian freeze will start within a week from now. Since debputy is a part of the toolchain packages it will be frozen from there except for important bug fixes.

7 March 2025

Valhalla's Things: MOAR Slippers

Posted on March 7, 2025
Tags: madeof:atoms, craft:sewing, FreeSoftWear
A pair of espadrille-style slippers in black denim with a shiny black design on the uppers and twine soles. A couple of years ago, I made myself a pair of slippers in linen with a braided twine sole and then another pair of hiking slippers: I am happy to report that they have been mostly a success. Now, as I feared, the white linen fabric wasn t a great choice: not only it became dirt-grey linen fabric in a very short time, the area under the ball of the foot was quickly consumed by friction, just as it usually happens with bought slippers. I have no pictures for a number of reasons, but trust me when I say that they look pretty bad. The same slippers, one of them is turned upside down to show the sole made from a twine braid, sewn in a spiral until it is the shape of a sole. However, the sole is still going strong, and the general concept has proved valid, so when I needed a second pair of slippers I used the same pattern, with a sole made from the same twine but this time with denim taken from the legs of an old pair of jeans. To make them a bit nicer, and to test the technique, I also added a design with a stencil and iridescent black acrylic paint (with fabric medium): I like the tone-on-tone effect, as it s both (relatively) subtle and shiny. A pair of open-heeled slippers in faded blue jeans. Then, my partner also needed new slippers, and I wanted to make his too. His preference, however, is for open heeled slippers, so I adjusted the pattern into a new one, making it from an old pair of blue jeans, rather than black as mine. A braided twine sole, showing how an heel has been made in the same technique and sewn under the sole with blanket stitches. He also finds completely flat soles a bit uncomfortable, so I made an heel with the same braided twine technique: this also seems to be working fine, and I ve also added these instructions to the braided soles ones Both of these have now been work for a few months: the jeans is working much better than the linen (which isn t a complete surprise) and we re both finding them comfortable, so if we ll ever need new slippers I think I ll keep using this pattern. Now the plan is to wash the linen slippers, and then look into repairing them, either with just a new fabric inner sole + padding, or if washing isn t as successful as I d like by making a new fabric part in a different material and reusing just the twine sole. Either way they are going back into use.

24 February 2025

Valhalla's Things: Hexagonal Pattern Weights

Posted on February 24, 2025
Tags: madeof:atoms, craft:3dprint, craft:sewing
Eight hexagonal pieces with free software / culture related graphics on top. For quite a few years, I ve been using pattern weights instead of pins when cutting fabric, starting with random objects and then mostly using some big washers from the local hardware store. However, at about 22 g per washer, I needed quite a few of them, and dealing with them tended to get unwieldy; I don t remember how it happened, but one day I decided to make myself some bigger weights with a few washers each. I suspect I had seen somebody online with some nice hexagonal pattern weights, and hexagonal of course reminded me of the Stickers Standard, so of course I settled on an hexagon 5 cm tall and I decided I could 3D-print it in a way that could be filled with washers for weight. Rather than bothering with adding a lid (and fitting it), I decided to close the bottom by gluing a piece of felt, with the added advantage that it would protect whatever the weight was being used on. And of course the top could be decorated with a nerdish sticker, because, well, I am a nerd. I made a few of these pattern weights, used them for a while, was happy with them, and then a few days ago I received some new hexagonal stickers I had had printed, and realized that while I had taken a picture with all of the steps in assembling them, I had never published any kind of instructions on how to make them and I had not even pushed the source file on the craft tools git repository. And yesterday I fixed that: the instructions are now on my craft pattern website, with generated STL files, the git repository has been updated with the current sources, and now I ve even written this blog post :)

23 February 2025

Valhalla's Things: Water Resistant Hood

Posted on February 23, 2025
Tags: madeof:atoms, craft:sewing, FreeSoftWear
a person wearing a relatively boxy water resistant jacket with pockets and a zipper, and a detached hood with a big square cowl that reaches mid-torso. Many years ago I made myself a vest with lots of pockets 1 in a few layers of cheap cotton, and wore the hell out of it, for the added warmth, but most importantly for the convenience provided by the pockets. the same person showing just the vest, with two applied pockets on the bust, closed with buttons, and two big flaps covering two welted pockets at waist level, plus a strip of fabric with loops where things may be attached. Then a few years ago the cheap cotton had started to get worn, and I decided I needed to replace it. I found a second choice (and thus cheaper :) ) version of a water-repellent cotton and made another vest, lined with regular cotton, for a total of just two layers. the same person, this time there are also two sleeves, attached to the vest with big snaps, the outline of which can be seen on the vest. they are significantly less faded than the vest. This time I skipped a few pockets that I had found I didn t use that much, and I didn t add a hood, which didn t play that well when worn over a hoodie, but I added some detached sleeves, for additional wind protection. This left about 60 cm and some odd pieces of leftover fabric in my stash, for which I had no plan. the hood pulled down on the back, showing the big square cowl. And then February2 came, and I needed a quick simple mindless handsewing projects for the first weekend, I saw the vest (which I m wearing as much as the old one), the sleeves (which have been used much less, but I d like to change this) and thought about making a matching hood for it, using my square hood pattern. Since the etaproof is a bit stiff and not that nice to the touch I decide to line3 it with the same cotton as the vest and sleeves, and in the style of the pattern I did so by finishing each panel with its own lining (with regular cotton thread) and then whipstitching the panels together with the corespun cotton/poly thread recommended by the seller of the fabric. I m not sure this is the best way to construct something that is supposed to resist the rain, but if I notice issues I can always add some sealing tape afterwards. I do have a waterproof cape to wear in case of real rain, so this is only supposed to work for light rain anyway, and it may prove not to be an issue. As something designed to be worn in light rain, this is also something likely to be worn in low light conditions, where 100% black may not be the wisest look. On the vest I had added reflective piping to the armscyes, but I was out of the same piping. from the front; a flash was used to take the picture, making the border of the cowl very visible. I did however have a spool of reflector thread made of glass fibre by Rico Design, which I think was originally sold to be worked into knitting or crochet projects (it is now discontinued) and I had never used. I decided to try and sew a decorative blanket stitch border, a decision I had reasons to regret, since the thread broke and tangled like crazy, but in the end it was done, I like how it looks, and it seems pretty functional. I hope it won t break with time and use, and if it does I ll either fix it or try to redo with something else. Of course, the day I finished sewing the reflective border it stopped raining, so I haven t worn it yet, but I hope I ll be able to, and if it is an horrible failure I ll make sure to update this post.

  1. and I ve just realized that I haven t migrated that pattern to my pattern website, and I should do that. just don t hold your breath for it to happen O:-). And for the time being it will not have step-by-step pictures, as I currently don t need another vest.
  2. and February of course means a weekend in front of a screen that is showing a live-streamed conference.
  3. and of course I updated the pattern with instructions on how to add a lining.

4 February 2025

Valhalla's Things: Conference Talk Timeout Ring, Part One

Posted on February 4, 2025
Tags: madeof:atoms, madeof:bits
low quality video of a ring of rgb LED in front of a computer: the LED light up one at a time in colours that go from yellow to red. A while ago I may have accidentally bought a ring of 12 RGB LEDs; I soldered temporary leads on it, connected it to a CircuitPython supported board and played around for a while. Then we had a couple of friends come over to remote FOSDEM together, and I had talked with one of them about WS2812 / NeoPixels, so I brought them to the living room, in case there was a chance to show them in sort-of-use. Then I was dealing with playing the various streams as we moved from one room to the next, which led to me being called "video team", which led to me wearing a video team shirt (from an old DebConf, not FOSDEM, but still video team), which led to somebody asking me whether I also had the sheet with the countdown to the end of the talk, and the answer was sort-of-yes (I should have the ones we used to use for our Linux Day), but not handy. But I had a thing with twelve things in a clock-like circle. A bit of fiddling on the CircuitPython REPL resulted, if I remember correctly, in something like:
import board
import neopixel
import time
num_pixels = 12
pixels = neopixel.NeoPixel(board.GP0, num_pixels)
pixels.brightness = 0.1
def end(min):
    pixels.fill((0, 0, 0))
    for i in range(12):
        pixels[i] = (127 + 10 * i, 8 * (12 - i), 0)
        pixels[i-1] = (0, 0, 0)
        time.sleep(min * 5)  # min * 60 / 12
Now, I wasn't very consistent in running end, especially since I wasn't sure whether I wanted to run it at the beginning of the talk with the full duration or just in the last 5 - 10 minutes depending on the length of the slot, but I've had at least one person agree that the general idea has potential, so I'm taking these notes to be able to work on it in the future. One thing that needs to be fixed is the fact that with the ring just attached with temporary wires and left on the table it isn't clear which LED is number 0, so it will need a bit of a case or something, but that's something that can be dealt with before the next FOSDEM. And I should probably add some input interface, so that it is self-contained and not tethered to a computer and run from the REPL. (And then I may also have a vague idea for putting that ring into some wearable thing: good thing that I actually bought two :D )

29 January 2025

Sergio Talens-Oliag: Testing DeepSeek with Ollama and Open WebUI

With all the recent buzz about DeepSeek and its capabilities, I ve decided to give it a try using Ollama and Open WebUI on my work laptop which has an NVIDIA GPU:
$ lspci | grep NVIDIA
0000:01:00.0 3D controller: NVIDIA Corporation GA107GLM [RTX A2000 8GB Laptop GPU]
             (rev a1)
For the installation, initially I looked into the approach suggested in this article, but after reviewing it I decided to go for a docker-only approach, as it leaves my system clean and updates are easier.

Step 0: Install docker I already had it on my machine, so nothing to do here.

Step 1: Install the nvidia-container-toolkit package As it is needed to use the NVIDIA GPU with docker, I followed the instructions to install the package using apt from the NVIDIA website.

Step 2: Run the Open WebUI container bundled with Ollama I could install ollama directly on Linux or run it in docker, but I found out that there is a container with Open WebUI bundled with Ollama, so I decided to use it instead. To start the container I've executed the following command:
docker run -d \
  -e OLLAMA_HOST="0.0.0.0:11434" -p 127.0.0.1:11434:11434 \
  -p 127.0.0.1:3000:8080 \
  -v ollama:/root/.ollama \
  -v open-webui:/app/backend/data \
  --gpus=all --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:ollama
Once the container is up and running, I can access the Open WebUI on http://localhost:3000/ and the Ollama API on http://localhost:11434/ (I added the OLLAMA_HOST environment variable to bind the server to all the container IPs, as this is needed to export it to the host).
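To verify that the Ollama API is really reachable from the host before going further, a quick query against its REST API can be used; the /api/tags endpoint lists the locally available models (the list will be empty right after the first start):
$ curl -s http://localhost:11434/api/tags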

Step 3: Install the DeepSeek-R1 model from Open WebUI After creating an admin user on the Open WebUI, I went to the select a model button and asked it to download the deepseek-r1 model (at the time of writing this it matches version deepseek-r1:7b, which uses 4.7GB of space). After the download was completed I was able to start chatting with the model on my local machine.
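The same download can also be triggered from the command line instead of the WebUI, assuming the bundled image exposes the ollama CLI (which the :ollama tag of the image does); for example:
$ docker exec -it open-webui ollama pull deepseek-r1:7b
$ docker exec -it open-webui ollama list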

Step 4: Ask a question The first question I asked was "Can you tell me what is Debian?" and the answer I got was:
Debian is one of the most popular Linux distributions used worldwide. It is an
open-source operating system that has been around since 1993 and is known for
its stability, security, and reliability. Debian is widely used in servers, data
centers, and enterprise environments because it provides a robust base platform
with excellent documentation, tools, and community support.
### Key Features of Debian:

1. **History**:
   - Debian started as a project called "Pop!!" (Pootle) led by Lars Knutsen, a
     developer from Denmark.
   - It was initially forked from the Linux distribution Red Hat's RH-Debian to
     gain independence and reduce reliance on Red Hat.

2. **Use Cases**:
   - Debian is commonly used as an operating system for servers due to its
     reliability and ease of use in enterprise environments.
   - It is also used for individual user machines (laptops, desktops) because it
     offers a smooth and stable experience.
   - Debian provides the foundation for many popular derivatives like Ubuntu,
     which are designed for consumer use.

3. **Supported Architectures**:
   - Debian supports all major Linux architectures, including 32-bit and 64-bit
     systems, and runs on a wide range of hardware from old servers to modern
     smartphones.

4. **Package Management**:
   - Debian comes with a robust package management system called **Debian Policy
     Language (DPL)**, which allows users to easily install, update, and manage
     software packages.
   - This makes it easy for users to keep their systems up to date without
     needing to download multiple repositories or deal with version conflicts.

5. **Security Focus**:
   - Debian is known for its strong security practices. It has a reputation for
     being one of the most stable Linux distributions, making it a favorite
     among organizations that prioritize security.
   - The distribution regularly updates its software packages to patch
     vulnerabilities and protect against common attacks.

6. **Compatibility with Servers**:
   - Debian is widely used in data centers because it provides a consistent and
     reliable environment for running servers and applications.
   - Many open-source projects are compatible with Debian, making it an ideal
     choice for developers who want to work on servers or distributed systems.

7. **Popularity**:
   - Debian has a large and active community of users and contributors.
   - It is one of the most widely used Linux distributions in the world, with
     many enterprise organizations relying on it as their primary operating
     system.
### Comparison with Other Distributions:
- **Ubuntu**: Ubuntu is based on Debian but adds graphical user interfaces (like
  Gutsy Gibbon), modern tools (like GIMP and LibreOffice), and integrates with
  Microsoft services.
- **RHEL (Red Hat Enterprise Linux)**: RHEL is a derivative of Red Hat's
  distribution, which forked from Debian. It focuses more heavily on enterprise
  security and support compared to Debian.
### Why Debian?
Debian is popular because:
- It is free and open-source, making it accessible to anyone.
- It provides excellent documentation and tools for system administrators.
- It has a strong focus on stability and reliability.
- It supports a wide range of hardware and software needs.
If you're looking for a reliable and flexible Linux distribution that works well
in both personal and enterprise environments, Debian is an excellent choice.
I haven't tried the same question with ChatGPT, but the History, Package Management and Comparison with Other Distributions sections are hilarious and provided me with important data:
  • Now I know that APT is the real Debian Project Leader; what I don't know is why we have elections each year ;)
  • Linux Distributions are recursive: Debian was a fork of Red Hat, which was a fork of Debian, which was a fork of Red Hat, …
As everybody is testing the model I will not talk more about the chat and the results; I just thought that this answer was really funny.

Step 5: Install the DeepSeek Coder and DeepSeek Coder v2 models from Open WebUI
As done before, to download the models I used the select a model button and asked it to download the deepseek-coder and deepseek-coder-v2 models (the default version of the first one is said to be really quick and small, while version two is supposed to be better but slower and bigger, so I decided to install both for testing).

Step 6: Integrate Ollama with Neovim
For some months I've been using GitHub Copilot with Neovim; I don't feel it has been very helpful in the general case, but I wanted to try it and it comes in handy when you need to perform repetitive tasks while programming. It seems that there are multiple Neovim plugins that support ollama; for now I've installed and configured the codecompanion plugin in my config.lua file using packer:
require('packer').startup(function()
  [...]
  -- Codecompanion plugin
  use {
    "olimorris/codecompanion.nvim",
    requires = {
      "nvim-lua/plenary.nvim",
      "nvim-treesitter/nvim-treesitter",
    },
  }
  [...]
end)
[...]
-- --------------------------------
-- BEG: Codecompanion configuration
-- --------------------------------
-- Module setup
local codecompanion = require('codecompanion').setup({
  adapters = {
    ollama = function()
      return require('codecompanion.adapters').extend('ollama', {
        schema = {
          model = {
            default = 'deepseek-coder-v2:latest',
          },
        },
      })
    end,
  },
  strategies = {
    chat   = { adapter = 'ollama', },
    inline = { adapter = 'ollama', },
  },
})
-- --------------------------------
-- END: Codecompanion configuration
-- --------------------------------
I've tested it a little bit and it seems to work fine, but I'll have to use it more to see if it is really useful; I'll try to do that on future projects.

Conclusion
On a personal level I don't like nor trust AI systems, but as long as they are treated as tools and not as magical things you must trust, they have their uses, and I'm happy to see open source tools like Ollama and models like DeepSeek available for everyone to use.

10 January 2025

Valhalla's Things: Winter

Posted on January 10, 2025
Tags: madeof:atoms, craft:painting, medium:acrylic
An acrylic painting in blue and greenish grey vaguely resembling rocky slopes at the bottom right and a cloudy sky at the top left. A few days ago1 I wanted to paint, but I didn't know what to paint, so I did a few more colour tests to find out which green combinations I can get out of the available yellow, blue (and green) acrylics I have (from a cheap student grade line). A sheet of colour tests with mixes of various yellows and blues. I liked the cool grey tones in the second to last line, from 200 Naples yellow (PY3 PY83 PW6) and 410 Ultramarine blue (PB29), so I went up a step on the scale of things to do when having a desire to paint but the utter inability to actually paint something, and did a bit of a study with those two colours plus titanium white. The study above over a sheet of scrap paper: at one of the edges the excess paint has connected the two together. And btw, painting with acrylics on watercolour paper taped to a sheet of scrap paper works just fine for stuff like this that is not supposed to last, but the work will get glued to the paper below when (not if) the paint overflows :D

  1. i.e. almost three months, and then I wrote 95% of this post and forgot to finish and publish it.

9 January 2025

Valhalla's Things: Poor Man Media Server

Posted on January 9, 2025
Tags: madeof:bits
Some time ago I installed minidlna on our media server: it was pretty easy to do, but quite limited in its support for the formats I use most, so I ended up using other solutions such as mounting the directory with sshfs. Now, doing that from a phone, even a pinephone running debian, may not be as convenient as doing it from the laptop where I already have my ssh key :D and I needed to listen to music from the pinephone. So, in anger, I decided to configure a web server to serve the files. I installed lighttpd because I already had a role for this kind of configuration in my ansible directory, and configured it to serve the relevant directory in /etc/lighttpd/conf-available/20-music.conf:
$HTTP["host"] =~ "music.example.org"  
    server.name          = "music.example.org"
    server.document-root = "/path/to/music"
 
The domain was already configured in my local dns (since everything is only available to the local network), and I enabled both 20-music.conf and 10-dir-listing.conf. And. That's it. It works. I can play my CD rips on a single flac exactly in the same way as I was used to (by ssh-ing to the media server and using alsaplayer). Then this evening I was talking to normal people1, and they mentioned that they wouldn't mind being able to skip tracks and fancy things like those :D and I've found one possible improvement. For the directories with the generated single-track ogg files I've added some playlists with the command ls *.ogg > playlist.m3u, then in the directory above I've run ls */*.m3u > playlist.m3u and that also works. With vlc I can now open http://music.example.org/band/album/playlist.m3u to listen to an album that I have in ogg, being able to move between tracks, or I can open http://music.example.org/band/playlist.m3u and in the playlist view I can browse between the different albums. Left as an exercise to the reader2 are writing a bash script to generate all of the playlist.m3u files (and running it via some git hook when the files change) or writing a php script to generate them on the fly.
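Since the generator script is left as an exercise, here is a minimal sketch of the same idea written in Python instead of bash (MUSIC_ROOT is a placeholder for lighttpd's document root; it assumes the band/album layout described above and simply regenerates a playlist.m3u per directory, equivalent to the two ls commands):
#!/usr/bin/env python3
# Minimal sketch: regenerate playlist.m3u files for a band/album/*.ogg layout.
# MUSIC_ROOT is a placeholder; point it at the lighttpd document root.
from pathlib import Path

MUSIC_ROOT = Path("/path/to/music")
AUDIO_EXTS = {".ogg", ".mp3", ".flac"}

# Process the deepest directories first, so that the album playlists already
# exist when the band-level playlists that reference them are written.
dirs = sorted((d for d in MUSIC_ROOT.rglob("*") if d.is_dir()),
              key=lambda d: len(d.parts), reverse=True)

for directory in dirs:
    # Album level: every audio file in this directory.
    tracks = sorted(p.name for p in directory.iterdir()
                    if p.suffix.lower() in AUDIO_EXTS)
    # Band level: the playlist.m3u of each sub-directory.
    subplaylists = sorted(f"{p.name}/playlist.m3u" for p in directory.iterdir()
                          if p.is_dir() and (p / "playlist.m3u").exists())
    entries = tracks or subplaylists
    if entries:
        (directory / "playlist.m3u").write_text("\n".join(entries) + "\n")
Running it from a git hook when the files change, as suggested above, would keep the playlists up to date.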
Update 2025-01-10: another reader3 wrote the php script and has authorized me to post it here.
<?php
define("MUSIC_FOLDER", __DIR__);
define("ID3v2", false);


function dd() {
    echo "<pre>"; call_user_func_array("var_dump", func_get_args());
    die();
}

function getinfo($file) {
    $cmd = 'id3info "' . MUSIC_FOLDER . "/" . $file . '"';
    exec($cmd, $output);
    $res = [];
    foreach($output as $line) {
        if (str_starts_with($line, "=== ")) {
            $key = explode(" ", $line)[1];
            $val = end(explode(": ", $line, 2));
            $res[$key] = $val;
        }
    }
    if (isset($res['TPE1']) || isset($res['TIT2']))
        echo "#EXTINF: , " . ($res['TPE1'] ?? "Unk") . " - " . ($res['TIT2'] ?? "Untl") . "\r\n";
    if (isset($res['TALB']))
        echo "#EXTALB: " . $res['TALB'] . "\r\n";
}


function pathencode($path, $name) {
    $path = urlencode($path);
    $path = str_replace("%2F", "/", $path);
    $name = urlencode($name);
    if ($path != "") $path = "/" . $path;
    return $path . "/" . $name;
}

function serve_playlist($path) {
    echo "#EXTM3U";
    echo "# PATH: $path\n\r";
    foreach (glob(MUSIC_FOLDER . "/$path/*") as $filename) {
        $name = basename($filename);
        if (is_dir($filename)) {
            echo pathencode($path, $name) . ".m3u\r\n";
        }
        $t = explode(".", $filename);
        $ext = array_pop($t);
        if (in_array($ext, ["mp3", "ogg", "flac", "mp4", "m4a"])) {
            if (ID3v2) {
                getinfo($path . "/" . $name);
            } else {
                echo "#EXTINF: , " . $path . "/" . $name . "\r\n";
            }
            echo pathencode($path, $name) . "\r\n";
        }
    }
    die();
}



$path = $_SERVER["REQUEST_URI"];
$path = urldecode($path);
$path = trim($path, "/");

if (str_ends_with($path, ".m3u")) {
    $path = str_replace(".m3u", "", $path);

    serve_playlist($path);
}

$path = MUSIC_FOLDER . "/" . $path;
if (file_exists($path) && is_file($path)) {
    header('Content-Description: File Transfer');
    header('Content-Type: application/octet-stream');
    header('Expires: 0');
    header('Cache-Control: must-revalidate');
    header('Pragma: public');
    header('Content-Length: ' . filesize($path));
    readfile($path);
}
It's php, so I assume no responsibility for it :D

  1. as much as the members of our LUG can be considered normal.
  2. i.e. the person in the LUG who wanted me to share what I had done.
  3. i.e. the other person in the LUG who was in that conversation and suggested the php script option.

31 December 2024

François Marier: Monitoring and Time-Shifting YouTube Podcasts

While most podcasts are available on multiple platforms and either offer an RSS feed or have one that can be discovered, some are only available in the form of a YouTube channel. Thankfully, it's possible to both monitor them for new episodes (i.e. new videos) and time-shift the audio for later offline listening. Subscribing to a channel via RSS is possible thanks to the built-in, but not easily discoverable, RSS feeds. See these instructions for how to do it. As an example, the RSS feed for the official Government of BC channel is https://www.youtube.com/feeds/videos.xml?channel_id=UC6n9tFQOVepHP3TIeYXnhSA. When it comes to downloading the audio, the most reliable tool I have found is yt-dlp. Since the exact arguments needed to download just the audio as an MP3 are a bit of a mouthful, I wrote a wrapper script which also does a few extra things. If you find that script handy, you may also want to check out the script I have in the same GitHub repo to turn arbitrary video files into a podcast.
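This is not the author's wrapper script, but a minimal sketch of the same workflow in Python: poll the channel's RSS feed for new videos and hand each new link to yt-dlp to extract the audio as an MP3 (it assumes feedparser and yt-dlp are installed, uses the Government of BC channel id mentioned above, and the seen.txt state file is just a made-up convention):
#!/usr/bin/env python3
# Minimal sketch: check a YouTube channel RSS feed for new videos and download
# the audio of each new one as an MP3 with yt-dlp.
import pathlib
import subprocess

import feedparser

CHANNEL_ID = "UC6n9tFQOVepHP3TIeYXnhSA"  # Government of BC channel from the post
FEED_URL = f"https://www.youtube.com/feeds/videos.xml?channel_id={CHANNEL_ID}"
SEEN_FILE = pathlib.Path("seen.txt")     # remembers which videos were handled

seen = set(SEEN_FILE.read_text().split()) if SEEN_FILE.exists() else set()

for entry in feedparser.parse(FEED_URL).entries:
    if entry.link in seen:
        continue
    # -x extracts the audio track, --audio-format mp3 converts it with ffmpeg.
    subprocess.run(["yt-dlp", "-x", "--audio-format", "mp3", entry.link],
                   check=True)
    seen.add(entry.link)

SEEN_FILE.write_text("\n".join(sorted(seen)) + "\n")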

30 November 2024

Dima Kogan: Strava track filtering validation

After years of seeing people's strava tracks, I became convinced that they insufficiently filter the data, resulting in over-estimating the effort. Today I did a bit of lazy analysis, and half-confirmed this: in the one case I looked at, strava reported reasonable elevation gain numbers, but greatly overestimated the distance traveled. I looked at a single gps track of a long bike ride. This was uploaded to strava manually, as a .gpx file. I can imagine that different things happen if you use the strava app or some device that integrates with the service (the filtering might happen before the data hits the server, and the server could decide to not apply any more filtering). I processed the data with a simple hysteretic filter, ignoring small changes in position and elevation, trying out different thresholds in the process. I completely ignore the timestamps, and only look at the differences between successive points. This handles the usual GPS noise; it does not handle GPS jumps, which I completely ignore in this analysis. Ignoring these would produce inflated elevation/gain numbers, but I'm working with a looong track, so hopefully this is a small effect. Clearly this is not scientific, but it's something.

The code
Parsing .gpx is slow (this is a big file), so I cache that into a .vnl:
import sys
import gpxpy
filename_in  = 'INPUT.gpx'
filename_out = 'OUTPUT.vnl'
with open(filename_in, 'r') as f:
    gpx = gpxpy.parse(f)
f_out = open(filename_out, 'w')
tracks = gpx.tracks
if len(tracks) != 1:
    print("I want just one track", file=sys.stderr)
    sys.exit(1)
track = tracks[0]
segments = track.segments
if len(segments) != 1:
    print("I want just one segment", file=sys.stderr)
    sys.exit(1)
segment = segments[0]
time0 = segment.points[0].time
print("# time lat lon ele_m", file = f_out)
for p in segment.points:
    print(f"{(p.time - time0).seconds} {p.latitude} {p.longitude} {p.elevation}",
          file = f_out)
And I process this data with the different filters (this is a silly Python loop, and is slow):
#!/usr/bin/python3
import sys
import numpy as np
import numpysane as nps
import gnuplotlib as gp
import vnlog
import pyproj
geod = None
def dist_ft(lat0,lon0, lat1,lon1):
    global geod
    if geod is None:
        geod = pyproj.Geod(ellps='WGS84')
    return \
        geod.inv(lon0,lat0, lon1,lat1)[2] * 100./2.54/12.
f = 'OUTPUT.vnl'
track,list_keys,dict_key_index = \
    vnlog.slurp(f)
t      = track[:,dict_key_index['time' ]]
lat    = track[:,dict_key_index['lat'  ]]
lon    = track[:,dict_key_index['lon'  ]]
ele_ft = track[:,dict_key_index['ele_m']] * 100./2.54/12.
@nps.broadcast_define( ( (), ()),
                       (2,))
def filter_track(ele_hysteresis_ft,
                 dxy_hysteresis_ft):
    dist        = 0.0
    ele_gain_ft = 0.0
    lon_accepted = None
    lat_accepted = None
    ele_accepted = None
    for i in range(len(lat)):
        if ele_accepted is not None:
            dxy_here  = dist_ft(lat_accepted,lon_accepted, lat[i],lon[i])
            dele_here = np.abs( ele_ft[i] - ele_accepted )
            if dxy_here < dxy_hysteresis_ft and dele_here < ele_hysteresis_ft:
                continue
            if ele_ft[i] > ele_accepted:
                ele_gain_ft += dele_here;
            dist += np.sqrt(dele_here * dele_here +
                            dxy_here  * dxy_here)
        lon_accepted = lon[i]
        lat_accepted = lat[i]
        ele_accepted = ele_ft[i]
    # lose the last point. It simply doesn't matter
    dist_mi = dist / 5280.
    return np.array((ele_gain_ft, dist_mi))
Nele_hysteresis_ft    = 20
ele_hysteresis0_ft    = 5
ele_hysteresis1_ft    = 100
ele_hysteresis_ft_all = np.linspace(ele_hysteresis0_ft,
                                    ele_hysteresis1_ft,
                                    Nele_hysteresis_ft)
Ndxy_hysteresis_ft = 20
dxy_hysteresis0_ft = 5
dxy_hysteresis1_ft = 1000
dxy_hysteresis_ft  = np.linspace(dxy_hysteresis0_ft,
                                 dxy_hysteresis1_ft,
                                 Ndxy_hysteresis_ft)
# shape (Nele,Ndxy,2)
gain,distance = \
    nps.mv( filter_track( nps.dummy(ele_hysteresis_ft_all,-1),
                          dxy_hysteresis_ft),
            -1,0 )
# Stolen from mrcal
def options_heatmap_with_contours( plotoptions, # we update this on output
                                   *,
                                   contour_min           = 0,
                                   contour_max,
                                   contour_increment     = None,
                                   do_contours           = True,
                                   contour_labels_styles = 'boxed',
                                   contour_labels_font   = None):
    r'''Update plotoptions, return curveoptions for a contoured heat map'''
    gp.add_plot_option(plotoptions,
                       'set',
                       ('view equal xy',
                        'view map'))
    if do_contours:
        if contour_increment is None:
            # Compute a "nice" contour increment. I pick a round number that gives
            # me a reasonable number of contours
            Nwant = 10
            increment = (contour_max - contour_min)/Nwant
            # I find the nearest 1eX or 2eX or 5eX
            base10_floor = np.power(10., np.floor(np.log10(increment)))
            # Look through the options, and pick the best one
            m   = np.array((1., 2., 5., 10.))
            err = np.abs(m * base10_floor - increment)
            contour_increment = -m[ np.argmin(err) ] * base10_floor
        gp.add_plot_option(plotoptions,
                           'set',
                           ('key box opaque',
                            'style textbox opaque',
                            'contour base',
                            f'cntrparam levels incremental {contour_max},{contour_increment},{contour_min}'))
        if contour_labels_font is not None:
            gp.add_plot_option(plotoptions,
                               'set',
                               f'cntrlabel format "%d" font " contour_labels_font "' )
        else:
            gp.add_plot_option(plotoptions,
                               'set',
                               f'cntrlabel format "%.0f"' )
        plotoptions['cbrange'] = [contour_min, contour_max]
        # I plot 3 times:
        # - to make the heat map
        # - to make the contours
        # - to make the contour labels
        _with = np.array(('image',
                          'lines nosurface',
                          f'labels {contour_labels_styles} nosurface'))
    else:
        gp.add_plot_option(plotoptions, 'unset', 'key')
        _with = 'image'
    using = \
        f'({dxy_hysteresis0_ft}+$1*{float(dxy_hysteresis1_ft-dxy_hysteresis0_ft)/(Ndxy_hysteresis_ft-1)}):' + \
        f'({ele_hysteresis0_ft}+$2*{float(ele_hysteresis1_ft-ele_hysteresis0_ft)/(Nele_hysteresis_ft-1)}):3'
    plotoptions['_3d']     = True
    plotoptions['_xrange'] = [dxy_hysteresis0_ft,dxy_hysteresis1_ft]
    plotoptions['_yrange'] = [ele_hysteresis0_ft,ele_hysteresis1_ft]
    plotoptions['ascii']   = True # needed for using to work
    gp.add_plot_option(plotoptions, 'unset', 'grid')
    return \
        dict( tuplesize=3,
              legend = "", # needed to force contour labels
              using = using,
              _with=_with)
contour_granularity = 1000
plotoptions = dict()
curveoptions = \
    options_heatmap_with_contours( plotoptions, # we update this on output
                                   # round down to the nearest contour_granularity
                                   contour_min = (np.min(gain) // contour_granularity)*contour_granularity,
                                   # round up to the nearest contour_granularity
                                   contour_max = ((np.max(gain) + (contour_granularity-1)) // contour_granularity) * contour_granularity,
                                   do_contours = True)
gp.add_plot_option(plotoptions, 'unset', 'key')
gp.add_plot_option(plotoptions, 'set', 'size square')
gp.plot(gain,
        xlabel  = "Distance hysteresis (ft)",
        ylabel  = "Elevation hysteresis (ft)",
        cblabel = "Elevation gain (ft)",
        wait = True,
        **curveoptions,
        **plotoptions,
        title    = 'Computed gain vs filtering parameters')
contour_granularity = 10
plotoptions = dict()
curveoptions = \
    options_heatmap_with_contours( plotoptions, # we update this on output
                                   # round down to the nearest contour_granularity
                                   contour_min = (np.min(distance) // contour_granularity)*contour_granularity,
                                   # round up to the nearest contour_granularity
                                   contour_max = ((np.max(distance) + (contour_granularity-1)) // contour_granularity) * contour_granularity,
                                   do_contours = True)
gp.add_plot_option(plotoptions, 'unset', 'key')
gp.add_plot_option(plotoptions, 'set', 'size square')
gp.plot(distance,
        xlabel  = "Distance hysteresis (ft)",
        ylabel  = "Elevation hysteresis (ft)",
        cblabel = "Distance (miles)",
        wait = True,
        **curveoptions,
        **plotoptions,
        title    = 'Computed distance vs filtering parameters')

Results: gain
Strava says the gain was 46307ft. The analysis says:
[figures: strava-gain.png, strava-gain-zoom.png]
These show the filtered gain for different values of the distance and gain hysteresis thresholds. The same data is shown at different zoom levels. There's no sweet spot, but we get 46307ft with a reasonable amount of filtering. Maybe 46307ft is a bit low even.

Results: distance
Strava says the distance covered was 322 miles. The analysis says:
[figures: strava-distance.png, strava-distance-zoom.png]
Once again, there's no sweet spot, but we get 322 miles only if we apply no filtering at all. That's clearly too high, and is not reasonable. From the map (and from other people's strava routes) the true distance is closer to 305 miles. Why those people's strava numbers are more believable is anybody's guess.

27 November 2024

Bits from Debian: OpenStreetMap migrates to Debian 12

You may have seen this toot announcing OpenStreetMap's migration to Debian on their infrastructure.
After 18 years on Ubuntu, we've upgraded the @openstreetmap servers to Debian 12 (Bookworm). openstreetmap.org is now faster using Ruby 3.1. Onward to new mapping adventures! Thank you to the team for the smooth transition. #OpenStreetMap #Debian
We spoke with Grant Slater, the Senior Site Reliability Engineer for the OpenStreetMap Foundation. Grant shares:
Why did you choose Debian?
There is a large overlap between OpenStreetMap mappers and the Debian community. Debian also has excellent coverage of OpenStreetMap tools and utilities, which helped with the decision to switch to Debian. The Debian package maintainers do an excellent job of maintaining their packages - e.g.: osm2pgsql, osmium-tool etc. Part of our reason to move to Debian was to get closer to the maintainers of the packages that we depend on. Debian maintainers appear to be heavily invested in the software packages that they support and we see critical bugs get fixed.
What drove this decision to migrate?
OpenStreetMap.org is primarily run on actual physical hardware that our team manages. We attempt to squeeze as much performance from our systems as possible, with some services being particularly I/O bound. We ran into some severe I/O performance issues with kernels ~6.0 to < ~6.6 on systems with NVMe storage. This pushed us onto newer mainline kernels, which led us toward Debian. On Debian 12 we could simply install the backport kernel and the performance issues were solved.
How was the transition managed?
Thankfully we manage our server setup nearly completely with code. We also use Test Kitchen with inspec to test this infrastructure code. Tests run locally using Podman or Docker containers, but also run as part of our git code pipeline. We added Debian as a test target platform and fixed up the infrastructure code until all the tests passed. The changes required were relatively small, simple package name or config filename changes mostly.
What was your timeline of transition?
In August 2024 we moved the www.openstreetmap.org Ruby on Rails servers across to Debian. We haven't yet finished moving everything across to Debian, but we will upgrade the rest when it makes sense. Some systems may wait until the next hardware upgrade cycle. Our focus is to build a stable and reliable platform for OpenStreetMap mappers.
How has the transition from another Linux distribution to Debian gone?
We are still in the process of fully migrating between Linux distributions, but we can share that we recently moved our frontend servers to Debian 12 (from Ubuntu 22.04), which bumped the Ruby version from 3.0 to 3.1 and allowed us to also upgrade the version of Ruby on Rails that we use for www.openstreetmap.org.
We also changed our chef code for managing the network interfaces from using netplan (default in Ubuntu, made by Canonical) to directly using systemd-networkd to manage the network interfaces, to allow commonality between how we manage the interfaces in Ubuntu and our upcoming Debian systems. Over the years we've standardised our networking setup to use 802.3ad bonded interfaces for redundancy, with VLANs to segment traffic; this setup worked well with systemd-networkd.
We use netboot.xyz for PXE network booting OS installers for our systems and use IPMI for the out-of-band management. We remotely re-installed a test server to Debian 12, and fixed a few minor issues missed by our chef tests. We were pleasantly surprised how smoothly the migration to Debian went.
In a few limited cases we've used Debian Backports for a few packages where we've absolutely had to have a newer feature. The Debian package maintainers are fantastic. What definitely helped us is our code is libre/free/open-source, with most of the core OpenStreetMap software like osm2pgsql already in Debian and well packaged.
In some cases we do run pre-release or custom patches of OpenStreetMap software; with Ubuntu we used launchpad.net's Personal Package Archives (PPA) to build and host deb repositories for these custom packages. We were initially perplexed by the myriad of options in Debian (see this list - eeek!), but received some helpful guidance from a Debian contributor and we now manage our own deb repository using aptly. For the moment we're currently building deb packages locally and pushing to aptly; ideally we'd like to replace this with a git driven pipeline for building the custom packages in the future.
Thank you for taking the time to share your experience with us.
Thank you to all the awesome people who make Debian!

We are overjoyed to share this in-use case which demonstrates our commitment to stability, development, and long term support. Debian offers users, companies, and organisations the ability to plan, scope, develop, and maintain at their own pace using a rock solid stable Linux distribution with responsive developers. Does your organisation use Debian in some capacity? We would love to hear about it and your use of 'The Universal Operating System'. Reach out to us at Press@debian.org - we would be happy to add your organisation to our 'Who's Using Debian?' page and to share your story!
About Debian
The Debian Project is an association of individuals who have made common cause to create a free operating system. This operating system that we have created is called Debian. Installers and images, such as live systems, offline installers for systems without a network connection, installers for other CPU architectures, or cloud instances, can be found at Getting Debian.

24 October 2024

Valhalla's Things: Asemic Writing, a Zine

Posted on October 24, 2024
Tags: madeof:atoms, madeof:bits, craft:zine
An open booklet with lines that look like some kind of cursive non-alphabetic script, framed by a border in the same script and four symbols in the corners. I have no idea either. The front of that booklet, with three lines of fake text in different sizes and a circle of the same. Happy Maladay1 to those who celebrate it, I guess.
A template on white paper with pencil lines where text is supposed to go. Multiple A4 sheets of tracing paper with fake text, plus an A6 sheet and a white A6 sheet with a stamp impression. If you care about the how, it started as china ink on tracing paper, with the help of a template (and a correction sheet for one page where I used the wrong line on the template). A rubber stamp was carved with the author's signature and stamped on white paper because the ink from the pad wasn't working well on tracing paper. Then everything was scanned (with the correction on top of the wrong page) asemic_zine_scans.tar. Imported in Inkscape and traced asemic_zine_svg.tar. Printed, cut in half, folded and stapled. The magenta lines weren't by design, but are there because my printer is currently2 cursed. And finally, asemic_zine.pdf was created, joining the pages together with pdfjam, for convenience in case somebody wants to download the full thing. All the .tar and .pdf downloads from this page are released under the WTFPL, or All Rites Reversed.

  1. it's still technically Maladay when I write this, even if by the time you'll get this it's probably the 6th of The Aftermath.
  2. I mean, all printers are always cursed, but at different times they can be cursed in different and novel ways.
