Scarlett Gately Moore: KDE Snaps and life. Spirits are up, but I need a little help please


For a long time I've been wanting to try GitOps tools, but I hadn't had the chance to try them for real on the projects
I was working on.
As I now have some spare time I've decided to play a little with Argo CD,
Flux and Kluctl to test them and be able to use one of them in a real project
in the future if it looks appropriate.
In this post I will use Argo-CD Autopilot to install argocd on a
k3d local cluster installed using OpenTofu, to test the autopilot approach of
managing argocd and to test the tool itself (as it manages argocd using a git repository it can be used to test argocd as
well).
arkade
Recently I've been using the arkade tool to install kubernetes related
applications on Linux servers and containers; I usually get the applications with it and install them on the
/usr/local/bin folder.
For this post I've created a simple script that checks if the tools I'll be using are available and installs them on the
$HOME/.arkade/bin folder if missing (I'm assuming that docker is already available, as it is not installable with
arkade):
#!/bin/sh
# TOOLS LIST
ARKADE_APPS="argocd argocd-autopilot k3d kubectl sops tofu"
# Add the arkade binary directory to the path if missing
case ":${PATH}:" in
*:"${HOME}/.arkade/bin":*) ;;
*) export PATH="${PATH}:${HOME}/.arkade/bin" ;;
esac
# Install or update arkade
if command -v arkade >/dev/null; then
echo "Trying to update the arkade application"
sudo arkade update
else
echo "Installing the arkade application"
curl -sLS https://get.arkade.dev | sudo sh
fi
echo ""
echo "Installing tools with arkade"
echo ""
for app in $ARKADE_APPS; do
app_path="$(command -v "$app")" || true
if [ "$app_path" ]; then
echo "The application '$app' is already available on '$app_path'"
else
arkade get "$app"
fi
done
cat <<EOF
Add the ~/.arkade/bin directory to your PATH if tools have been installed there
EOF
opentofu
Although using k3d directly would be a good choice for the creation of the cluster, I'm using tofu to do it because
that will probably be the tool used to do it if we were working with Cloud Platforms like AWS or Google.
The main.tf file is as follows:
terraform {
  required_providers {
    k3d = {
      source  = "moio/k3d"
      version = "0.0.12"
    }
    sops = {
      source  = "carlpett/sops"
      version = "1.2.0"
    }
  }
}

data "sops_file" "secrets" {
  source_file = "secrets.yaml"
}

resource "k3d_cluster" "argocd_cluster" {
  name    = "argocd"
  servers = 1
  agents  = 2
  image   = "rancher/k3s:v1.31.5-k3s1"
  network = "argocd"
  token   = data.sops_file.secrets.data["token"]

  port {
    host_port      = 8443
    container_port = 443
    node_filters = [
      "loadbalancer",
    ]
  }

  k3d {
    disable_load_balancer = false
    disable_image_volume  = false
  }

  kubeconfig {
    update_default_kubeconfig = true
    switch_current_context    = true
  }

  runtime {
    gpu_request = "all"
  }
}
The k3d configuration is quite simple: as I plan to use the default traefik ingress controller with TLS I publish
the 443 port on the host's 8443 port; I'll explain how I add a valid certificate on the next step.
I've prepared the following script to initialize and apply the changes:
#!/bin/sh
set -e
# VARIABLES
# Default token for the argocd cluster
K3D_CLUSTER_TOKEN="argocdToken"
# Relative PATH to install the k3d cluster using terraform
K3D_TF_RELPATH="k3d-tf"
# Secrets yaml file
SECRETS_YAML="secrets.yaml"
# Relative PATH to the workdir from the script directory
WORK_DIR_RELPATH=".."
# Compute WORKDIR
SCRIPT="$(readlink -f "$0")"
SCRIPT_DIR="$(dirname "$SCRIPT")"
WORK_DIR="$(readlink -f "$SCRIPT_DIR/$WORK_DIR_RELPATH")"
# Update the PATH to add the arkade bin directory
# Add the arkade binary directory to the path if missing
case ":${PATH}:" in
*:"${HOME}/.arkade/bin":*) ;;
*) export PATH="${PATH}:${HOME}/.arkade/bin" ;;
esac
# Go to the k3d-tf dir
cd "$WORK_DIR/$K3D_TF_RELPATH" || exit 1
# Create secrets.yaml file and encode it with sops if missing
if [ ! -f "$SECRETS_YAML" ]; then
echo "token: $K3D_CLUSTER_TOKEN" >"$SECRETS_YAML"
sops encrypt -i "$SECRETS_YAML"
fi
# Initialize terraform
tofu init
# Apply the configuration
tofu apply
k3d ingress
As an optional step, after creating the k3d cluster I'm going to add a default wildcard certificate for the traefik
ingress server to be able to use everything with HTTPS without certificate issues.
As I manage my own DNS domain I've created the lo.mixinet.net and *.lo.mixinet.net DNS entries on my public and
private DNS servers (both return 127.0.0.1 and ::1) and I've created a TLS certificate for both entries using
Let's Encrypt with Certbot.
The certificate is updated automatically on one of my servers and when I need it I copy the contents of the
fullchain.pem and privkey.pem files from the /etc/letsencrypt/live/lo.mixinet.net server directory to the local
files lo.mixinet.net.crt and lo.mixinet.net.key.
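The copy itself can be wrapped in a small helper; the following is only a hedged sketch (the copy_le_files function name and its arguments are my own invention, and in my real setup the files come from a remote machine, so the copy is done with scp or rsync instead of a local cp):

```shell
# Hypothetical helper (not part of the original setup): copy the certbot
# output to the file names expected by the traefik update script.
copy_le_files() {
  live_dir="$1"      # e.g. the letsencrypt live directory or a local copy of it
  dest_dir="${2:-.}" # defaults to the current directory
  cp "$live_dir/fullchain.pem" "$dest_dir/lo.mixinet.net.crt"
  cp "$live_dir/privkey.pem" "$dest_dir/lo.mixinet.net.key"
}
```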
After copying the files I run the following script to install or update the certificate and configure it as the default
for traefik:
#!/bin/sh
# Script to install or update the traefik default TLS certificate
secret="lo-mixinet-net-ingress-cert"
cert="${1:-lo.mixinet.net.crt}"
key="${2:-lo.mixinet.net.key}"
if [ -f "$cert" ] && [ -f "$key" ]; then
kubectl -n kube-system create secret tls "$secret" \
--key="$key" \
--cert="$cert" \
--dry-run=client --save-config -o yaml | kubectl apply -f -
kubectl apply -f - <<EOF
apiVersion: traefik.containo.us/v1alpha1
kind: TLSStore
metadata:
  name: default
  namespace: kube-system
spec:
  defaultCertificate:
    secretName: $secret
EOF
else
cat <<EOF
To add or update the traefik TLS certificate the following files are needed:
- cert: '$cert'
- key: '$key'
Note: you can pass the paths as arguments to this script.
EOF
fi
argocd with argocd-autopilot
I'll be using a project on my forgejo instance to manage argocd; the repository I've created is on the URL
https://forgejo.mixinet.net/blogops/argocd and I've created a private user named argocd that only has write access to
that repository.
Logging in as the argocd user on forgejo I've created a token with permission to read and write repositories that I've
saved on my pass password store on the mixinet.net/argocd@forgejo/repository-write entry.
The bootstrap is done with the following script (it uses the GIT_REPO and GIT_TOKEN values):
#!/bin/sh
set -e
# VARIABLES
# Relative PATH to the workdir from the script directory
WORK_DIR_RELPATH=".."
# Compute WORKDIR
SCRIPT="$(readlink -f "$0")"
SCRIPT_DIR="$(dirname "$SCRIPT")"
WORK_DIR="$(readlink -f "$SCRIPT_DIR/$WORK_DIR_RELPATH")"
# Update the PATH to add the arkade bin directory
# Add the arkade binary directory to the path if missing
case ":${PATH}:" in
*:"${HOME}/.arkade/bin":*) ;;
*) export PATH="${PATH}:${HOME}/.arkade/bin" ;;
esac
# Go to the working directory
cd "$WORK_DIR" || exit 1
# Set GIT variables
if [ -z "$GIT_REPO" ]; then
export GIT_REPO="https://forgejo.mixinet.net/blogops/argocd.git"
fi
if [ -z "$GIT_TOKEN" ]; then
GIT_TOKEN="$(pass mixinet.net/argocd@forgejo/repository-write)"
export GIT_TOKEN
fi
argocd-autopilot repo bootstrap --provider gitea
The output of the bin/argocd-bootstrap.sh script is as follows:
INFO cloning repo: https://forgejo.mixinet.net/blogops/argocd.git
INFO empty repository, initializing a new one with specified remote
INFO using revision: "", installation path: ""
INFO using context: "k3d-argocd", namespace: "argocd"
INFO applying bootstrap manifests to cluster...
namespace/argocd created
customresourcedefinition.apiextensions.k8s.io/applications.argoproj.io created
customresourcedefinition.apiextensions.k8s.io/applicationsets.argoproj.io created
customresourcedefinition.apiextensions.k8s.io/appprojects.argoproj.io created
serviceaccount/argocd-application-controller created
serviceaccount/argocd-applicationset-controller created
serviceaccount/argocd-dex-server created
serviceaccount/argocd-notifications-controller created
serviceaccount/argocd-redis created
serviceaccount/argocd-repo-server created
serviceaccount/argocd-server created
role.rbac.authorization.k8s.io/argocd-application-controller created
role.rbac.authorization.k8s.io/argocd-applicationset-controller created
role.rbac.authorization.k8s.io/argocd-dex-server created
role.rbac.authorization.k8s.io/argocd-notifications-controller created
role.rbac.authorization.k8s.io/argocd-redis created
role.rbac.authorization.k8s.io/argocd-server created
clusterrole.rbac.authorization.k8s.io/argocd-application-controller created
clusterrole.rbac.authorization.k8s.io/argocd-applicationset-controller created
clusterrole.rbac.authorization.k8s.io/argocd-server created
rolebinding.rbac.authorization.k8s.io/argocd-application-controller created
rolebinding.rbac.authorization.k8s.io/argocd-applicationset-controller created
rolebinding.rbac.authorization.k8s.io/argocd-dex-server created
rolebinding.rbac.authorization.k8s.io/argocd-notifications-controller created
rolebinding.rbac.authorization.k8s.io/argocd-redis created
rolebinding.rbac.authorization.k8s.io/argocd-server created
clusterrolebinding.rbac.authorization.k8s.io/argocd-application-controller created
clusterrolebinding.rbac.authorization.k8s.io/argocd-applicationset-controller created
clusterrolebinding.rbac.authorization.k8s.io/argocd-server created
configmap/argocd-cm created
configmap/argocd-cmd-params-cm created
configmap/argocd-gpg-keys-cm created
configmap/argocd-notifications-cm created
configmap/argocd-rbac-cm created
configmap/argocd-ssh-known-hosts-cm created
configmap/argocd-tls-certs-cm created
secret/argocd-notifications-secret created
secret/argocd-secret created
service/argocd-applicationset-controller created
service/argocd-dex-server created
service/argocd-metrics created
service/argocd-notifications-controller-metrics created
service/argocd-redis created
service/argocd-repo-server created
service/argocd-server created
service/argocd-server-metrics created
deployment.apps/argocd-applicationset-controller created
deployment.apps/argocd-dex-server created
deployment.apps/argocd-notifications-controller created
deployment.apps/argocd-redis created
deployment.apps/argocd-repo-server created
deployment.apps/argocd-server created
statefulset.apps/argocd-application-controller created
networkpolicy.networking.k8s.io/argocd-application-controller-network-policy created
networkpolicy.networking.k8s.io/argocd-applicationset-controller-network-policy created
networkpolicy.networking.k8s.io/argocd-dex-server-network-policy created
networkpolicy.networking.k8s.io/argocd-notifications-controller-network-policy created
networkpolicy.networking.k8s.io/argocd-redis-network-policy created
networkpolicy.networking.k8s.io/argocd-repo-server-network-policy created
networkpolicy.networking.k8s.io/argocd-server-network-policy created
secret/autopilot-secret created
INFO pushing bootstrap manifests to repo
INFO applying argo-cd bootstrap application
INFO pushing bootstrap manifests to repo
INFO applying argo-cd bootstrap application
application.argoproj.io/autopilot-bootstrap created
INFO running argocd login to initialize argocd config
Context 'autopilot' updated
INFO argocd initialized. password: XXXXXXX-XXXXXXXX
INFO run:
kubectl port-forward -n argocd svc/argocd-server 8080:80
With argocd installed and running, it can be checked using the port-forward command and connecting to
https://localhost:8080/ (the certificate will be wrong, we are going to fix that in the next step).
argocd installation in git
Now that we have the application deployed we can clone the argocd repository and edit the deployment to disable TLS
for the argocd server (we are going to use TLS termination with traefik and that needs the server running as insecure,
see the Argo CD documentation):
git clone ssh://git@forgejo.mixinet.net/blogops/argocd.git
cd argocd
edit bootstrap/argo-cd/kustomization.yaml
git commit -m 'Disable TLS for the argocd-server'
The changes made to the kustomization.yaml file are the following:
--- a/bootstrap/argo-cd/kustomization.yaml
+++ b/bootstrap/argo-cd/kustomization.yaml
@@ -11,6 +11,11 @@ configMapGenerator:
key: git_username
name: autopilot-secret
name: argocd-cm
+ # Disable TLS for the Argo Server (see https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#traefik-v30)
+- behavior: merge
+ literals:
+ - "server.insecure=true"
+ name: argocd-cmd-params-cm
kind: Kustomization
namespace: argocd
resources:
After pushing the changes we sync the argo-cd application manually to make sure they are applied.
Once synced we can dump the argocd-cmd-params-cm ConfigMap to make sure everything is OK:
apiVersion: v1
data:
  server.insecure: "true"
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"server.insecure":"true"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"app.kubernetes.io/instance":"argo-cd","app.kubernetes.io/name":"argocd-cmd-params-cm","app.kubernetes.io/part-of":"argocd"},"name":"argocd-cmd-params-cm","namespace":"argocd"}}
  creationTimestamp: "2025-04-27T17:31:54Z"
  labels:
    app.kubernetes.io/instance: argo-cd
    app.kubernetes.io/name: argocd-cmd-params-cm
    app.kubernetes.io/part-of: argocd
  name: argocd-cmd-params-cm
  namespace: argocd
  resourceVersion: "16731"
  uid: a460638f-1d82-47f6-982c-3017699d5f14
After changing the ConfigMap we have to restart the argocd-server to read it again; to do it we delete the server pods
so they are re-created using the updated resource:
kubectl delete pods -n argocd -l app.kubernetes.io/name=argocd-server
When the pods are deleted the port-forward command is killed automatically; if we run it again the connection to the
argocd-server has to be done using HTTP instead of HTTPS.
Instead of testing that we are going to add an ingress definition to be able to connect to the server using HTTPS and
GRPC against the address argocd.lo.mixinet.net using the wildcard TLS certificate we installed earlier.
To do it we edit the bootstrap/argo-cd/kustomization.yaml file to add the ingress_route.yaml file to the
deployment:
--- a/bootstrap/argo-cd/kustomization.yaml
+++ b/bootstrap/argo-cd/kustomization.yaml
@@ -20,3 +20,4 @@ kind: Kustomization
namespace: argocd
resources:
- github.com/argoproj-labs/argocd-autopilot/manifests/base?ref=v0.4.19
+- ingress_route.yaml
The ingress_route.yaml file contents are the following:
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: argocd-server
  namespace: argocd
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`argocd.lo.mixinet.net`)
      priority: 10
      services:
        - name: argocd-server
          port: 80
    - kind: Rule
      match: Host(`argocd.lo.mixinet.net`) && Header(`Content-Type`, `application/grpc`)
      priority: 11
      services:
        - name: argocd-server
          port: 80
          scheme: h2c
  tls:
    certResolver: default
Once the ingress is deployed we can log into the server using the command line:
argocd --grpc-web login argocd.lo.mixinet.net:8443
Username: admin
Password:
'admin:login' logged in successfully
Context 'argocd.lo.mixinet.net:8443' updated
argocd app list -o name
argocd/argo-cd
argocd/autopilot-bootstrap
argocd/cluster-resources-in-cluster
argocd/root
https://kde.org/announcements/gear/25.04.0
Which can be downloaded here: https://snapcraft.io/publisher/kde !
After careful consideration, I've decided to embark on a new chapter in my professional journey. I've left my position at AWS to dedicate at least the next six months to developing open source software and strengthening digital ecosystems. My focus will be on contributing to Linux distributions (primarily Debian) and other critical infrastructure components that our modern society depends on, but which may not receive adequate attention or resources.
Best way to reach me is by e-mail otto at debian.org. You can also book a 15-minute chat with me for a quick introduction.
Icy morning Witch Wells Az
Kubuntu:
While testing the Beta I came across some crashy apps (namely PIM) due to apparmor. I have uploaded fixed profiles for kmail, akregator, akonadiconsole, konqueror and tellico.
KDE Snaps:
Added sctp support in Qt https://invent.kde.org/neon/snap-packaging/kde-qt6-core-sdk/-/commit/bbcb1dc39044b930ab718c8ffabfa20ccd2b0f75
This will allow me to finish a pyside6 snap and fix FreeCAD build.
Changed build type to Release in the kf6-core24-sdk which will reduce the size of kf6-core24 significantly.
Fixed a few startup errors in kf5-core24 and kf6-core24 snapcraft-desktop-integration.
Soumyadeep fixed wayland icons in https://invent.kde.org/neon/snap-packaging/kf6-core-sdk/-/merge_requests/3
KDE Applications 25.03.90 RC released to candidate (I know it says 24.12.3, the version won't be updated until the 25.04.0 release)
Kasts core24 fixed in candidate
Kate now core24 with Breeze theme! candidate
Neochat: Fixed missing QML and 25.04 dependencies in candidate
Kdenlive now with Glaxnimate animations! candidate
Digikam 8.6.0 now with scanner support in stable
Kstars 3.7.6 released to stable for realz, removed store rejected plugs.
Thanks for stopping by!

Are you a student aspiring to participate in the Google Summer of Code 2025? Would you like to improve the continuous integration pipeline used at salsa.debian.org, the Debian GitLab instance, to help improve the quality of tens of thousands of software packages in Debian?
This summer 2025, Emmanuel Arias and I will be participating as mentors in the GSoC program. We are available to mentor students who propose and develop improvements to the Salsa CI pipeline, as we are members of the Debian team that maintains it.
A post by Santiago Ruano Rincón in the GitLab blog explains what Salsa CI is and its short history since its inception in 2018. At the time of the article in fall 2023 there were 9000+ source packages in Debian using Salsa CI. Now in 2025 there are over 27,000 source packages in Debian using it, and since summer 2024 some Ubuntu developers have started using it for enhanced quality assurance of packaging changes before uploading new package revisions to Ubuntu. Personally, I have been using Salsa CI since its inception, and contributing as a team member since 2019. See my blog post about GitLab CI for MariaDB in Debian for a description of an advanced and extensive use case.
Helping Salsa CI is a great way to make a global impact, as it will help avoid regressions and improve the quality of Debian packages. The benefits reach far beyond just Debian, as it will also help hundreds of Debian derivatives, such as Ubuntu, Linux Mint, Tails, Purism PureOS, Pop!_OS, Zorin OS, Raspberry Pi OS, a large portion of Docker containers, and even the Windows Subsystem for Linux.
As promised on my previous post, in this entry I'll explain how I've set up forgejo
actions on the source repository of this site to build it using a runner instead of doing it on the public server using
a webhook to trigger the operation.
The build workflow uses curl to send a notification to an instance of the webhook server installed on the
remote server, which triggers a script that updates the site using the git branch.
webhook service
On the server machine we have installed and configured the webhook service to run a script that updates the site.
To install the application and setup the configuration we have used the following script:
#!/bin/sh
set -e
# ---------
# VARIABLES
# ---------
ARCH="$(dpkg --print-architecture)"
WEBHOOK_VERSION="2.8.2"
DOWNLOAD_URL="https://github.com/adnanh/webhook/releases/download"
WEBHOOK_TGZ_URL="$DOWNLOAD_URL/$WEBHOOK_VERSION/webhook-linux-$ARCH.tar.gz"
WEBHOOK_SERVICE_NAME="webhook"
# Files
WEBHOOK_SERVICE_FILE="/etc/systemd/system/$WEBHOOK_SERVICE_NAME.service"
WEBHOOK_SOCKET_FILE="/etc/systemd/system/$WEBHOOK_SERVICE_NAME.socket"
WEBHOOK_TML_TEMPLATE="/srv/blogops/action/webhook.yml.envsubst"
WEBHOOK_YML="/etc/webhook.yml"
# Config file values
WEBHOOK_USER="$(id -u)"
WEBHOOK_GROUP="$(id -g)"
WEBHOOK_LISTEN_STREAM="172.31.31.1:4444"
# ----
# MAIN
# ----
# Install binary from releases (on Debian only version 2.8.0 is available, but
# I need the 2.8.2 version to support the systemd activation mode).
curl -fsSL -o "/tmp/webhook.tgz" "$WEBHOOK_TGZ_URL"
tar -C /tmp -xzf /tmp/webhook.tgz
sudo install -m 755 "/tmp/webhook-linux-$ARCH/webhook" /usr/local/bin/webhook
rm -rf "/tmp/webhook-linux-$ARCH" /tmp/webhook.tgz
# Service file
sudo sh -c "cat >'$WEBHOOK_SERVICE_FILE'" <<EOF
[Unit]
Description=Webhook server
[Service]
Type=exec
ExecStart=webhook -nopanic -hooks $WEBHOOK_YML
User=$WEBHOOK_USER
Group=$WEBHOOK_GROUP
EOF
# Socket config
sudo sh -c "cat >'$WEBHOOK_SOCKET_FILE'" <<EOF
[Unit]
Description=Webhook server socket
[Socket]
# Set FreeBind to listen on missing addresses (the VPN can be down sometimes)
FreeBind=true
# Set ListenStream to the IP and port you want to listen on
ListenStream=$WEBHOOK_LISTEN_STREAM
[Install]
WantedBy=multi-user.target
EOF
# Config file
BLOGOPS_TOKEN="$(uuid)" \
envsubst <"$WEBHOOK_TML_TEMPLATE" | sudo sh -c "cat >$WEBHOOK_YML"
sudo chmod 0640 "$WEBHOOK_YML"
sudo chown "$WEBHOOK_USER:$WEBHOOK_GROUP" "$WEBHOOK_YML"
# Restart and enable service
sudo systemctl daemon-reload
sudo systemctl stop "$WEBHOOK_SERVICE_NAME.socket"
sudo systemctl start "$WEBHOOK_SERVICE_NAME.socket"
sudo systemctl enable "$WEBHOOK_SERVICE_NAME.socket"
# ----
# vim: ts=2:sw=2:et:ai:sts=2
The webhook server is launched by systemd with socket activation.
The configuration file template is the following one:
- id: "update-blogops"
  execute-command: "/srv/blogops/action/bin/update-blogops.sh"
  command-working-directory: "/srv/blogops"
  trigger-rule:
    match:
      type: "value"
      value: "$BLOGOPS_TOKEN"
      parameter:
        source: "header"
        name: "X-Blogops-Token"
The version installed on /etc/webhook.yml has the BLOGOPS_TOKEN adjusted to a random value that has to be exported as
a secret on the forgejo project (see later).
Once the service is started each time the action is executed the webhook daemon will get a notification and will run
the following update-blogops.sh script to publish the updated version of the site:
#!/bin/sh
set -e
# ---------
# VARIABLES
# ---------
# Values
REPO_URL="ssh://git@forgejo.mixinet.net/mixinet/blogops.git"
REPO_BRANCH="html"
REPO_DIR="public"
MAIL_PREFIX="[BLOGOPS-UPDATE-ACTION] "
# Address that gets all messages, leave it empty if not wanted
MAIL_TO_ADDR="blogops@mixinet.net"
# Directories
BASE_DIR="/srv/blogops"
PUBLIC_DIR="$BASE_DIR/$REPO_DIR"
NGINX_BASE_DIR="$BASE_DIR/nginx"
PUBLIC_HTML_DIR="$NGINX_BASE_DIR/public_html"
ACTION_BASE_DIR="$BASE_DIR/action"
ACTION_LOG_DIR="$ACTION_BASE_DIR/log"
# Files
OUTPUT_BASENAME="$(date +%Y%m%d-%H%M%S.%N)"
ACTION_LOGFILE_PATH="$ACTION_LOG_DIR/$OUTPUT_BASENAME.log"
# ---------
# Functions
# ---------
action_log() {
  echo "$(date -R) $*" >>"$ACTION_LOGFILE_PATH"
}
action_check_directories() {
  for _d in "$ACTION_BASE_DIR" "$ACTION_LOG_DIR"; do
    [ -d "$_d" ] || mkdir "$_d"
  done
}
action_clean_directories() {
  # Try to remove empty dirs
  for _d in "$ACTION_LOG_DIR" "$ACTION_BASE_DIR"; do
    if [ -d "$_d" ]; then
      rmdir "$_d" 2>/dev/null || true
    fi
  done
}
mail_success() {
  to_addr="$MAIL_TO_ADDR"
  if [ "$to_addr" ]; then
    subject="OK - updated blogops site"
    mail -s "${MAIL_PREFIX}${subject}" "$to_addr" <"$ACTION_LOGFILE_PATH"
  fi
}
mail_failure() {
  to_addr="$MAIL_TO_ADDR"
  if [ "$to_addr" ]; then
    subject="KO - failed to update blogops site"
    mail -s "${MAIL_PREFIX}${subject}" "$to_addr" <"$ACTION_LOGFILE_PATH"
  fi
  exit 1
}
# ----
# MAIN
# ----
ret="0"
# Check directories
action_check_directories
# Go to the base directory
cd "$BASE_DIR"
# Remove the old build dir if present
if [ -d "$PUBLIC_DIR" ]; then
rm -rf "$PUBLIC_DIR"
fi
# Update the repository checkout
action_log "Updating the repository checkout"
git fetch --all >>"$ACTION_LOGFILE_PATH" 2>&1 || ret="$?"
if [ "$ret" -ne "0" ]; then
action_log "Failed to update the repository checkout"
mail_failure
fi
# Get it from the repo branch & extract it
action_log "Downloading and extracting last site version using 'git archive'"
git archive --remote="$REPO_URL" "$REPO_BRANCH" "$REPO_DIR" \
  | tar xf - >>"$ACTION_LOGFILE_PATH" 2>&1 || ret="$?"
# Fail if public dir was missing
if [ "$ret" -ne "0" ] || [ ! -d "$PUBLIC_DIR" ]; then
action_log "Failed to download or extract site"
mail_failure
fi
# Remove old public_html copies
action_log 'Removing old site versions, if present'
find "$NGINX_BASE_DIR" -mindepth 1 -maxdepth 1 -name 'public_html-*' -type d \
  -exec rm -rf {} \; >>"$ACTION_LOGFILE_PATH" 2>&1 || ret="$?"
if [ "$ret" -ne "0" ]; then
action_log "Removal of old site versions failed"
mail_failure
fi
# Switch site directory
TS="$(date +%Y%m%d-%H%M%S)"
if [ -d "$PUBLIC_HTML_DIR" ]; then
action_log "Moving '$PUBLIC_HTML_DIR' to '$PUBLIC_HTML_DIR-$TS'"
mv "$PUBLIC_HTML_DIR" "$PUBLIC_HTML_DIR-$TS" >>"$ACTION_LOGFILE_PATH" 2>&1
ret="$?"
fi
if [ "$ret" -eq "0" ]; then
action_log "Moving '$PUBLIC_DIR' to '$PUBLIC_HTML_DIR'"
mv "$PUBLIC_DIR" "$PUBLIC_HTML_DIR" >>"$ACTION_LOGFILE_PATH" 2>&1
ret="$?"
fi
if [ "$ret" -ne "0" ]; then
action_log "Site switch failed"
mail_failure
else
action_log "Site updated successfully"
mail_success
fi
# ----
# vim: ts=2:sw=2:et:ai:sts=2
hugo-adoc workflow
The workflow is defined in the .forgejo/workflows/hugo-adoc.yml file and looks like this:
name: hugo-adoc
# Run this job on push events to the main branch
on:
  push:
    branches:
      - 'main'
jobs:
  build-and-push:
    if: ${{ vars.BLOGOPS_WEBHOOK_URL != '' && secrets.BLOGOPS_TOKEN != '' }}
    runs-on: docker
    container:
      image: forgejo.mixinet.net/oci/hugo-adoc:latest
    # Allow the job to write to the repository (not really needed on forgejo)
    permissions:
      contents: write
    steps:
      - name: Checkout the repo
        uses: actions/checkout@v4
        with:
          submodules: 'true'
      - name: Build the site
        shell: sh
        run: |
          rm -rf public
          hugo
      - name: Push compiled site to html branch
        shell: sh
        run: |
          # Set the git user
          git config --global user.email "blogops@mixinet.net"
          git config --global user.name "BlogOps"
          # Create a new orphan branch called html (it was not pulled by the
          # checkout step)
          git switch --orphan html
          # Add the public directory to the branch
          git add public
          # Commit the changes
          git commit --quiet -m "Updated site @ $(date -R)" public
          # Push the changes to the html branch
          git push origin html --force
          # Switch back to the main branch
          git switch main
      - name: Call the blogops update webhook endpoint
        shell: sh
        run: |
          HEADER="X-Blogops-Token: ${{ secrets.BLOGOPS_TOKEN }}"
          curl --fail -k -H "$HEADER" "${{ vars.BLOGOPS_WEBHOOK_URL }}"
To make things work we have added the BLOGOPS_TOKEN variable to the project secrets (its value is the one
included on the /etc/webhook.yml file created when installing the webhook service) and the BLOGOPS_WEBHOOK_URL
project variable (its value is the URL of the webhook server, in my case
http://172.31.31.1:4444/hooks/update-blogops); note that the job includes the -k flag on the curl command just in
case I end up using TLS on the webhook server in the future, as discussed previously.
With this the site is built by the runner and published through the webhook server which, IMHO, is a more secure setup.
Last week I decided I wanted to try out forgejo actions to build this blog instead of using
webhooks, so I read the documentation and started playing with it until I had it working as I wanted.
This post describes how I've installed and configured a forgejo runner, how I've added an
oci organization to my instance to build, publish and mirror container images and added a couple of
additional organizations (actions and docker for now) to mirror interesting
actions.
The changes made to build the site using actions will be documented on a separate post, as I'll be using this entry to
test the new setup on the blog project.
$ cd /srv
$ git clone https://forgejo.mixinet.net/blogops/forgejo-runner.git
$ cd forgejo-runner
$ sh ./bin/setup-runner.sh
The setup-runner.sh script does multiple things: it creates the forgejo-runner user and group and generates the
.runner file with a predefined secret and the docker label. The full setup-runner.sh code is available here.
After running the script the runner has to be registered with the forgejo server, it can be done using the following
command:
$ forgejo forgejo-cli actions register --name "$RUNNER_NAME" \
  --secret "$FORGEJO_SECRET"
The RUNNER_NAME variable is defined on the setup-runner.sh script and the FORGEJO_SECRET must match the value used
on the .runner file.
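If a new secret is needed, one way to generate it is the following (if I recall correctly forgejo expects a 40 character hexadecimal string; check the forgejo documentation to confirm before relying on this):

```shell
# Generate 20 random bytes and hex encode them -> a 40 character hex string
FORGEJO_SECRET="$(openssl rand -hex 20)"
echo "$FORGEJO_SECRET"
```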
docker-compose
To launch the runner I'm going to use a docker-compose.yml file that starts two containers: a docker in docker
service to run the containers used by the workflow jobs and another one that runs the forgejo-runner itself.
The initial version used a TCP port to communicate with the dockerd server from the runner, but when I tried to build
images from a workflow I noticed that the containers launched by the runner were not going to be able to execute
another dockerd inside the dind one and, even if they were, it was going to be expensive computationally.
To avoid the issue I modified the dind service to use a unix socket on a shared volume that can be used by the
runner service to communicate with the daemon and also re-shared with the job containers so the dockerd server can
be used from them to build images.
(The host dockerd could have been used for the dind service, but just in case I want to run
other containers on the host I prefer to keep the one used for the runner isolated from it.)
For those concerned about sharing the same server an alternative would be to launch a second dockerd only for the jobs
(i.e. actions-dind) using the same approach (the volume with its socket will have to be shared with the runner
service so it can be re-shared, but the runner does not need to use it).
The docker-compose.yaml file is as follows:
services:
  dind:
    image: docker:dind
    container_name: 'dind'
    privileged: 'true'
    command: ['dockerd', '-H', 'unix:///dind/docker.sock', '-G', '$RUNNER_GID']
    restart: 'unless-stopped'
    volumes:
      - ./dind:/dind
  runner:
    image: 'data.forgejo.org/forgejo/runner:6.2.2'
    links:
      - dind
    depends_on:
      dind:
        condition: service_started
    container_name: 'runner'
    environment:
      DOCKER_HOST: 'unix:///dind/docker.sock'
    user: $RUNNER_UID:$RUNNER_GID
    volumes:
      - ./config.yaml:/config.yaml
      - ./data:/data
      - ./dind:/dind
    restart: 'unless-stopped'
    command: '/bin/sh -c "sleep 5; forgejo-runner daemon -c /config.yaml"'
The dockerd server is started with the -H unix:///dind/docker.sock flag to use the unix socket to communicate
with the daemon instead of using a TCP port (as said, it is faster and allows us to share the socket with the
containers started by the runner).
We also run the dockerd daemon with the RUNNER_GID group so the runner can communicate with it (the socket
gets that group, which is the same one used by the runner).
The runner container mounts the data directory, the dind folder where docker creates the unix
socket and a config.yaml file used by us to change the default runner configuration.
The config.yaml file was originally created using the forgejo-runner binary itself:
$ docker run --rm data.forgejo.org/forgejo/runner:6.2.2 \
  forgejo-runner generate-config > config.yaml
On the file the capacity has been increased to 2 (that allows it to run two jobs at the
same time) and the /dind/docker.sock value has been added to the valid_volumes key to allow the containers launched
by the runner to mount it when needed; the diff against the default version is as follows:
@@ -13,7 +13,8 @@
# Where to store the registration result.
file: .runner
# Execute how many tasks concurrently at the same time.
- capacity: 1
+ # STO: Allow 2 concurrent tasks
+ capacity: 2
# Extra environment variables to run jobs.
envs:
A_TEST_ENV_NAME_1: a_test_env_value_1
@@ -87,7 +88,9 @@
# If you want to allow any volume, please use the following configuration:
# valid_volumes:
# - '**'
- valid_volumes: []
+ # STO: Allow to mount the /dind/docker.sock on the containers
+ valid_volumes:
+ - /dind/docker.sock
# overrides the docker client host with the specified one.
# If "-" or "", an available docker host will automatically be found.
# If "automount", an available docker host will automatically be found and ...
To launch the containers we export the RUNNER_UID and RUNNER_GID variables and call docker compose up to start them
on the background:
$ RUNNER_UID="$(id -u forgejo-runner)" RUNNER_GID="$(id -g forgejo-runner)" \
  docker compose up -d
To be able to use actions on the workflows (referenced with the uses keyword) we have added the
following section to the app.ini file of our forgejo server:
[actions]
ENABLED = true
DEFAULT_ACTIONS_URL = https://forgejo.mixinet.net
For the oci organization I've created a token with package:write permission for my own
user because I'm a member of the organization and I'm authorized to publish packages on it (a different user could be
created, but as I said this is for personal use, so there is no need to complicate things for now).
To allow the use of those credentials on the actions I have added a secret (REGISTRY_PASS) and a variable
(REGISTRY_USER) to the oci organization to allow the actions to use them.
I've also logged myself on my local docker client to be able to push images to the oci group by hand, as it is
needed for bootstrapping the system (as I'm using local images on the workflows I need to push them to the server before
running the ones that are used to build the images).

The oci/images project

The images project is a monorepo that contains the source files for the images we are going to build and a couple of
actions.
The image sources are on sub directories of the repository, to be considered an image the folder has to contain a
Dockerfile that will be used to build the image.
The repository has two workflows:
- build-image-from-tag: Workflow to build, tag and push an image to the oci organization.
- multi-semantic-release: Workflow to create tags for the images using the multi-semantic-release tool.

To bootstrap things the initial images were built and pushed by hand:

registry="forgejo.mixinet.net/oci"
for img in alpine-mixinet node-mixinet multi-semantic-release; do
docker build -t $registry/$img:1.0.0 $img
docker tag $registry/$img:1.0.0 $registry/$img:latest
docker push $registry/$img:1.0.0
docker push $registry/$img:latest
done

The build-image-from-tag workflow

This workflow uses a docker client to build an image from a tag on the repository with the format
image-name-v[0-9].[0-9].[0-9]+.
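The image name and version are later split from such a tag using POSIX parameter expansions; a quick sketch with a hypothetical tag value:

```shell
# Hypothetical tag following the image-name-v<version> format
tag="node-mixinet-v1.2.3"
img_name="${tag%%-v*}"   # remove the suffix starting at '-v' -> image name
img_tag="${tag##*-v}"    # remove the prefix ending at '-v'   -> version
echo "$img_name $img_tag"   # prints: node-mixinet 1.2.3
```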
As the runner is executed on a container (instead of using lxc) it seemed unreasonable to run another dind
container from that one, that is why, after some tests, I decided to share the dind service server socket with the
runner container and enabled the option to mount it also on the containers launched by the runner when needed (I only
do it on the build-image-from-tag action for now).
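As a sketch of the idea (the real docker-compose.yml was shown earlier in the post; the volume name and dockerd options here are assumptions), the dind socket can be shared with the runner through a common volume:

```yaml
services:
  dind:
    image: docker:dind            # hypothetical image reference
    privileged: true
    command: ["dockerd", "-H", "unix:///dind/docker.sock"]
    volumes:
      - dind:/dind                # dockerd writes its socket here
  forgejo-runner:
    image: data.forgejo.org/forgejo/runner:6.2.2
    volumes:
      - dind:/dind                # the runner (and its jobs) can reach /dind/docker.sock
volumes:
  dind: {}
```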
The action was configured to run using a trigger or when new tags with the right format were created, but when the tag
is created by multi-semantic-release the trigger does not work for some reason, so now it only runs the job on
triggers and checks if it is launched for a tag with the right format on the job itself.
The source code of the action is as follows:
name: build-image-from-tag
on:
  workflow_dispatch:

jobs:
  build:
    # Don't build the image if the registry credentials are not set, the ref is not a tag or it doesn't contain '-v'
    if: ${{ vars.REGISTRY_USER != '' && secrets.REGISTRY_PASS != '' && startsWith(github.ref, 'refs/tags/') && contains(github.ref, '-v') }}
    runs-on: docker
    container:
      image: forgejo.mixinet.net/oci/node-mixinet:latest
      # Mount the dind socket on the container at the default location
      options: -v /dind/docker.sock:/var/run/docker.sock
    steps:
      - name: Extract image name and tag from git and get registry name from env
        id: job_data
        run: |
          echo "::set-output name=img_name::${GITHUB_REF_NAME%%-v*}"
          echo "::set-output name=img_tag::${GITHUB_REF_NAME##*-v}"
          echo "::set-output name=registry::$(
            echo "${{ github.server_url }}" | sed -e 's%https://%%'
          )"
          echo "::set-output name=oci_registry_prefix::$(
            echo "${{ github.server_url }}/oci" | sed -e 's%https://%%'
          )"
      - name: Checkout the repo
        uses: actions/checkout@v4
      - name: Export build dir and Dockerfile
        id: build_data
        run: |
          img="${{ steps.job_data.outputs.img_name }}"
          build_dir="$(pwd)/${img}"
          dockerfile="${build_dir}/Dockerfile"
          if [ -f "$dockerfile" ]; then
            echo "::set-output name=build_dir::$build_dir"
            echo "::set-output name=dockerfile::$dockerfile"
          else
            echo "Couldn't find the Dockerfile for the '$img' image"
            exit 1
          fi
      - name: Login to the Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ steps.job_data.outputs.registry }}
          username: ${{ vars.REGISTRY_USER }}
          password: ${{ secrets.REGISTRY_PASS }}
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Build and Push
        uses: docker/build-push-action@v6
        with:
          push: true
          tags: |
            ${{ steps.job_data.outputs.oci_registry_prefix }}/${{ steps.job_data.outputs.img_name }}:${{ steps.job_data.outputs.img_tag }}
            ${{ steps.job_data.outputs.oci_registry_prefix }}/${{ steps.job_data.outputs.img_name }}:latest
          context: ${{ steps.build_data.outputs.build_dir }}
          file: ${{ steps.build_data.outputs.dockerfile }}
          build-args: |
            OCI_REGISTRY_PREFIX=${{ steps.job_data.outputs.oci_registry_prefix }}/

Some notes about the workflow:

- The if condition of the build job is not perfect, but it is good enough to avoid wrong uses as long as nobody
uses manual tags with the wrong format and expects things to work (it checks if the REGISTRY_USER and
REGISTRY_PASS variables are set, if the ref is a tag and if it contains the -v string).
- To be able to use the dind socket we mount it on the container using the options key on the container section
of the job (this only works if supported by the runner configuration as explained before).
- We use the job_data step to get information about the image from the tag and the registry URL from the environment
variables; it is executed first because all the information is available without checking out the repository.
- We use the build_data step to get the build dir and Dockerfile paths from the repository (right now we are
assuming fixed paths and checking if the Dockerfile exists, but in the future we could use a configuration file to
get them, if needed).
- We pass the OCI_REGISTRY_PREFIX build argument to the Dockerfile to be able to use it
on the FROM instruction (we are using it in our images).

The multi-semantic-release workflow

This workflow is used to run the multi-semantic-release tool on pushes to the main branch.
It is configured to create the configuration files on the fly (it prepares things to tag the folders that contain a
Dockerfile using a couple of template files available on the repository's .forgejo directory) and run the
multi-semantic-release tool to create tags and push them to the repository if new versions are to be built.
Initially we assumed that the tag creation pushed by multi-semantic-release would be enough to run the
build-image-from-tag action, but as it didn't work we removed the rule to run the action on tag creation and added
code to trigger the action using an api call for the newly created tags (we get them from the output of the
multi-semantic-release execution).
The source code of the action is as follows:
name: multi-semantic-release
on:
  push:
    branches:
      - 'main'

jobs:
  multi-semantic-release:
    runs-on: docker
    container:
      image: forgejo.mixinet.net/oci/multi-semantic-release:latest
    steps:
      - name: Checkout the repo
        uses: actions/checkout@v4
      - name: Generate multi-semantic-release configuration
        shell: sh
        run: |
          # Get the list of images to work with (the folders that have a Dockerfile)
          images="$(for img in */Dockerfile; do dirname "$img"; done)"
          # Generate a values.yaml file for the main packages.json file
          package_json_values_yaml=".package.json-values.yaml"
          echo "images:" >"$package_json_values_yaml"
          for img in $images; do
            echo " - $img" >>"$package_json_values_yaml"
          done
          echo "::group::Generated values.yaml for the project"
          cat "$package_json_values_yaml"
          echo "::endgroup::"
          # Generate the package.json file validating that is a good json file with jq
          tmpl -f "$package_json_values_yaml" ".forgejo/package.json.tmpl" | jq . > "package.json"
          echo "::group::Generated package.json for the project"
          cat "package.json"
          echo "::endgroup::"
          # Remove the temporary values file
          rm -f "$package_json_values_yaml"
          # Generate the package.json file for each image
          for img in $images; do
            tmpl -v "img_name=$img" -v "img_path=$img" ".forgejo/ws-package.json.tmpl" | jq . > "$img/package.json"
            echo "::group::Generated package.json for the '$img' image"
            cat "$img/package.json"
            echo "::endgroup::"
          done
      - name: Run multi-semantic-release
        shell: sh
        run: |
          multi-semantic-release | tee .multi-semantic-release.log
      - name: Trigger builds
        shell: sh
        run: |
          # Get the list of tags published on the previous steps
          tags="$(
            sed -n -e 's/^\[.*\] \[\(.*\)\] .* Published release \([0-9]\+\.[0-9]\+\.[0-9]\+\) on .*$/\1-v\2/p' \
              .multi-semantic-release.log
          )"
          rm -f .multi-semantic-release.log
          if [ "$tags" ]; then
            # Prepare the url for building the images
            workflow="build-image-from-tag.yaml"
            dispatch_url="${{ github.api_url }}/repos/${{ github.repository }}/actions/workflows/$workflow/dispatches"
            echo "$tags" | while read -r tag; do
              echo "Triggering build for tag '$tag'"
              curl \
                -H "Content-Type:application/json" \
                -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" \
                -d "{\"ref\":\"$tag\"}" "$dispatch_url"
            done
          fi

Some notes about this workflow:

- The use of the tmpl tool to process the multi-semantic-release configuration templates comes from previous uses; in this case we could use a different approach (i.e. envsubst could be used) but we left it because it keeps things simple and can be useful in the future if we want to do more complex things with the template files.
- We use tee to show the output of the multi-semantic-release execution and dump it to a file at the same time.
- We get the tags using sed against the output of the multi-semantic-release execution and for each one found we use curl to call the forgejo API to trigger the build job; as the call is against the same project we can use the GITHUB_TOKEN generated for the workflow to do it, without creating a user token that has to
be shared as a secret.

The .forgejo/package.json.tmpl file is the following one:
"name": "multi-semantic-release",
"version": "0.0.0-semantically-released",
"private": true,
"multi-release":
"tagFormat": "$ name -v$ version "
,
"workspaces": .images toJson
The .forgejo/ws-package.json.tmpl file is the following one:
"name": " .img_name ",
"license": "UNLICENSED",
"release":
"plugins": [
[
"@semantic-release/commit-analyzer",
"preset": "conventionalcommits",
"releaseRules": [
"breaking": true, "release": "major" ,
"revert": true, "release": "patch" ,
"type": "feat", "release": "minor" ,
"type": "fix", "release": "patch" ,
"type": "perf", "release": "patch"
]
],
[
"semantic-release-replace-plugin",
"replacements": [
"files": [ " .img_path /msr.yaml" ],
"from": "^version:.*$",
"to": "version: $ nextRelease.version ",
"allowEmptyPaths": true
]
],
[
"@semantic-release/git",
"assets": [ "msr.yaml" ],
"message": "ci(release): .img_name -v$ nextRelease.version \n\n$ nextRelease.notes "
]
],
"branches": [ "main" ]
oci/mirrors projectThe repository contains a template for the configuration file we are going to use with regsync
(regsync.envsubst.yml) to mirror images from remote registries using a workflow that generates a configuration file
from the template and runs the tool.
The initial version of the regsync.envsubst.yml file is prepared to mirror alpine containers from version 3.21 to
3.29 (we explicitly remove version 3.20) and needs the forgejo.mixinet.net/oci/node-mixinet:latest image to run
(as explained before it was pushed manually to the server):
version: 1
creds:
  - registry: "$REGISTRY"
    user: "$REGISTRY_USER"
    pass: "$REGISTRY_PASS"
sync:
  - source: alpine
    target: $REGISTRY/oci/alpine
    type: repository
    tags:
      allow:
        - "latest"
        - "3\\.2\\d+"
        - "3\\.2\\d+\\.\\d+"
      deny:
        - "3\\.20"
        - "3\\.20\\.\\d+"

The mirror workflow

The mirror workflow creates a configuration file replacing the value of the REGISTRY environment variable (computed
by removing the protocol from the server_url), the REGISTRY_USER organization value and the REGISTRY_PASS secret
using the envsubst command and running the regsync tool to mirror the images using the configuration file.
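The allow/deny patterns above can be sanity-checked locally; a sketch using grep -E (with \d written as [0-9], since regsync uses Go regular expressions and grep does not support \d, and with the anchoring added by hand):

```shell
# Check which tags would be mirrored by the allow/deny patterns
for tag in latest 3.20 3.20.3 3.21 3.29.1; do
  if echo "$tag" | grep -qE '^(latest|3\.2[0-9]+(\.[0-9]+)?)$' \
    && ! echo "$tag" | grep -qE '^3\.20(\.[0-9]+)?$'; then
    echo "sync $tag"
  else
    echo "skip $tag"
  fi
done
```

This prints sync for latest, 3.21 and 3.29.1 and skip for 3.20 and 3.20.3.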
The action is configured to run daily, on push events when the regsync.envsubst.yml file is modified on the main
branch and can also be triggered manually.
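The REGISTRY computation is just a protocol strip over the server URL; for example:

```shell
# github.server_url is https://forgejo.mixinet.net on our server
server_url="https://forgejo.mixinet.net"
REGISTRY="$(echo "$server_url" | sed -e 's%https://%%')"
echo "$REGISTRY"   # forgejo.mixinet.net
```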
The source code of the action is as follows:
name: mirror
on:
  schedule:
    - cron: '@daily'
  push:
    branches:
      - main
    paths:
      - 'regsync.envsubst.yml'
  workflow_dispatch:

jobs:
  mirror:
    if: ${{ vars.REGISTRY_USER != '' && secrets.REGISTRY_PASS != '' }}
    runs-on: docker
    container:
      image: forgejo.mixinet.net/oci/node-mixinet:latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Sync images
        run: |
          REGISTRY="$(echo "${{ github.server_url }}" | sed -e 's%https://%%')" \
          REGISTRY_USER="${{ vars.REGISTRY_USER }}" \
          REGISTRY_PASS="${{ secrets.REGISTRY_PASS }}" \
            envsubst <regsync.envsubst.yml >.regsync.yml
          regsync --config .regsync.yml once
          rm -f .regsync.yml

We have installed the forgejo-runner and configured it to run actions for our own server and things are working fine.
This approach allows us to have a powerful CI/CD system on a modest home server, something very useful for maintaining
personal projects and playing with things without needing SaaS platforms like github or
gitlab.
In today's digital landscape, social media is more than just a communication tool: it is the primary medium for global discourse. Heads of state, corporate leaders and cultural influencers now broadcast their statements directly to the world, shaping public opinion in real time. However, the dominance of a few centralized platforms (X/Twitter, Facebook and YouTube) raises critical concerns about control, censorship and the monopolization of information. Those who control these networks effectively wield significant power over public discourse.
In response, a new wave of distributed social media platforms has emerged, each built on different decentralized protocols designed to provide greater autonomy, censorship resistance and user control. While Wikipedia maintains a comprehensive list of distributed social networking software and protocols, it does not cover recent blockchain-based systems, nor does it highlight which have the most potential for mainstream adoption.
This post explores the leading decentralized social media platforms and the protocols they are based on: Mastodon (ActivityPub), Bluesky (AT Protocol), Warpcast (Farcaster), Hey (Lens) and Primal (Nostr).
| Protocol | Identity System | Example | Storage model | Cost for end users | Potential |
|---|---|---|---|---|---|
| Mastodon | Tied to server domain | @ottok@mastodon.social | Federated instances | Free (some instances charge) | High |
| Bluesky | Portable (DID) | ottoke.bsky.social | Federated instances | Free | Moderate |
| Farcaster | ENS (Ethereum) | @ottok | Blockchain + off-chain | Small gas fees | Moderate |
| Lens | NFT-based (Polygon) | @ottok | Blockchain + off-chain | Small gas fees | Niche |
| Nostr | Cryptographic Keys | npub16lc6uhqpg6dnqajylkhwuh3j7ynhcnje508tt4v6703w9kjlv9vqzz4z7f | Federated instances | Free (some instances charge) | Niche |
Mastodon was created in 2016 by Eugen Rochko, a German software developer who sought to provide a decentralized and user-controlled alternative to Twitter. It was built on the ActivityPub protocol, now standardized by the W3C Social Web Working Group, to allow users to join independent servers while still communicating across the broader Mastodon network.
Mastodon operates on a federated model, where multiple independently run servers communicate via ActivityPub. Each server sets its own moderation policies, leading to a decentralized but fragmented experience. The servers can alternatively be called instances, relays or nodes, depending on what vocabulary a protocol has standardized on.
Users are identified by an account address of the form @username@instance.tld.
"@context": "https://www.w3.org/ns/activitystreams",
"type": "Create",
"actor": "https://mastodon.social/users/ottok",
"object":
"type": "Note",
"content": "Hello from #Mastodon!",
"published": "2025-03-03T12:00:00Z",
"to": ["https://www.w3.org/ns/activitystreams#Public"]
Interestingly, Bluesky was conceived within Twitter in 2019 by Twitter founder Jack Dorsey. After being incubated as a Twitter-funded project, it spun off as an independent Public Benefit LLC in February 2022 and launched its public beta in February 2023.
Bluesky runs on top of the Authenticated Transfer (AT) Protocol published at https://github.com/bluesky-social/atproto. The protocol enables portable identities and data ownership, meaning users can migrate between platforms while keeping their identity and content intact. In practice, however, there is only one popular server at the moment, which is Bluesky itself.
Users are identified by a handle tied to a domain (by default @user.bsky.social).
"repo": "did:plc:ottoke.bsky.social",
"collection": "app.bsky.feed.post",
"record":
"$type": "app.bsky.feed.post",
"text": "Hello from Bluesky!",
"createdAt": "2025-03-03T12:00:00Z",
"langs": ["en"]
"fid": 766579,
"username": "ottok",
"custodyAddress": "0x127853e48be3870172baa4215d63b6d815d18f21",
"connectedWallet": "0x3ebe43aa3ae5b891ca1577d9c49563c0cee8da88",
"text": "Hello from Farcaster!",
"publishedAt": 1709424000,
"replyTo": null,
"embeds": []
"profileId": "@ottok",
"contentURI": "ar://QmExampleHash",
"collectModule": "0x23b9467334bEb345aAa6fd1545538F3d54436e96",
"referenceModule": "0x0000000000000000000000000000000000000000",
"timestamp": 1709558400
Users are identified by cryptographic key pairs (public keys are encoded with the npub... prefix).
"id": "note1xyz...",
"pubkey": "npub1...",
"kind": 1,
"content": "Hello from Nostr!",
"created_at": 1709558400,
"tags": [],
"sig": "sig1..."
| Platform | Total Accounts | Active Users | Growth Trend |
|---|---|---|---|
| Mastodon | ~10 million | ~1 million | Steady |
| Bluesky | ~33 million | ~1 million | Steady |
| Nostr | ~41 million | ~20 thousand | Steady |
| Farcaster | ~850 thousand | ~50 thousand | Flat |
| Lens | ~140 thousand | ~20 thousand | Flat |
#!/bin/sh
env PGPASSWORD=udd-mirror psql --host=udd-mirror.debian.net --user=udd-mirror udd --command="
select source,
max(version) as ver,
max(date) as uploaded
from upload_history
where distribution='unstable' and
source in (select source
from sources
where release='sid')
group by source
order by max(date) asc
limit 50;"
This will sort all source packages in Debian by upload date, and
list the 50 oldest ones. The end result is a list of packages I
suspect could use some attention:
            source            |           ver           |        uploaded
------------------------------+-------------------------+------------------------
 xserver-xorg-video-ivtvdev   | 1.1.2-1                 | 2011-02-09 22:26:27+00
 dynamite                     | 0.1.1-2                 | 2011-04-30 16:47:20+00
 xkbind                       | 2010.05.20-1            | 2011-05-02 22:48:05+00
 libspctag                    | 0.2-1                   | 2011-09-22 18:47:07+00
 gromit                       | 20041213-9              | 2011-11-13 21:02:56+00
 s3switch                     | 0.1-1                   | 2011-11-22 15:47:40+00
 cd5                          | 0.1-3                   | 2011-12-07 21:19:05+00
 xserver-xorg-video-glide     | 1.2.0-1                 | 2011-12-30 16:50:48+00
 blahtexml                    | 0.9-1.1                 | 2012-04-25 11:32:11+00
 aggregate                    | 1.6-7                   | 2012-05-01 00:47:11+00
 rtfilter                     | 1.1-4                   | 2012-05-11 12:50:00+00
 sic                          | 1.1-5                   | 2012-05-11 19:10:31+00
 kbdd                         | 0.6-4                   | 2012-05-12 07:33:32+00
 logtop                       | 0.4.3-1                 | 2012-06-05 23:04:20+00
 gbemol                       | 0.3.2-2                 | 2012-06-26 17:03:11+00
 pidgin-mra                   | 20100304-1              | 2012-06-29 23:07:41+00
 mumudvb                      | 1.7.1-1                 | 2012-06-30 09:12:14+00
 libdr-sundown-perl           | 0.02-1                  | 2012-08-18 10:00:07+00
 ztex-bmp                     | 20120314-2              | 2012-08-18 19:47:55+00
 display-dhammapada           | 1.0-0.1                 | 2012-12-19 12:02:32+00
 eot-utils                    | 1.1-1                   | 2013-02-19 17:02:28+00
 multiwatch                   | 1.0.0-rc1+really1.0.0-1 | 2013-02-19 17:02:35+00
 pidgin-latex                 | 1.5.0-1                 | 2013-04-04 15:03:43+00
 libkeepalive                 | 0.2-1                   | 2013-04-08 22:00:07+00
 dfu-programmer               | 0.6.1-1                 | 2013-04-23 13:32:32+00
 libb64                       | 1.2-3                   | 2013-05-05 21:04:51+00
 i810switch                   | 0.6.5-7.1               | 2013-05-10 13:03:18+00
 premake4                     | 4.3+repack1-2           | 2013-05-31 12:48:51+00
 unagi                        | 0.3.4-1                 | 2013-06-05 11:19:32+00
 mod-vhost-ldap               | 2.4.0-1                 | 2013-07-12 07:19:00+00
 libapache2-mod-ldap-userdir  | 1.1.19-2.1              | 2013-07-12 21:22:48+00
 w9wm                         | 0.4.2-8                 | 2013-07-18 11:49:10+00
 vish                         | 0.0.20130812-1          | 2013-08-12 21:10:37+00
 xfishtank                    | 2.5-1                   | 2013-08-20 17:34:06+00
 wap-wml-tools                | 0.0.4-7                 | 2013-08-21 16:19:10+00
 ttysnoop                     | 0.12d-6                 | 2013-08-24 17:33:09+00
 libkaz                       | 1.21-2                  | 2013-09-02 16:00:10+00
 rarpd                        | 0.981107-9              | 2013-09-02 19:48:24+00
 libimager-qrcode-perl        | 0.033-1.2               | 2013-09-04 21:06:31+00
 dov4l                        | 0.9+repack-1            | 2013-09-22 19:33:25+00
 textdraw                     | 0.2+ds-0+nmu1           | 2013-10-07 21:25:03+00
 gzrt                         | 0.8-1                   | 2013-10-08 06:33:13+00
 away                         | 0.9.5+ds-0+nmu2         | 2013-10-25 01:18:18+00
 jshon                        | 20131010-1              | 2013-11-30 00:00:11+00
 libstar-parser-perl          | 0.59-4                  | 2013-12-23 21:50:43+00
 gcal                         | 3.6.3-3                 | 2013-12-29 18:33:29+00
 fonts-larabie                | 1:20011216-5            | 2014-01-02 21:20:49+00
 ccd2iso                      | 0.3-4                   | 2014-01-28 06:33:35+00
 kerneltop                    | 0.91-1                  | 2014-02-04 12:03:30+00
 vera++                       | 1.2.1-2                 | 2014-02-04 21:21:37+00
(50 rows)

So there are 8 packages last uploaded to unstable in 2011, 12 packages in 2012 and 26 packages in 2013. I suspect their maintainers need help and we should all offer our assistance. I already contacted two of them and hope the rest of the Debian community will chip in to help too. We should ensure any Debian specific patches are passed upstream if they still exist, that the package is brought up to speed with the latest Debian policy, as well as ensure the source can be built with the current compiler set in Debian.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
http://outreachy.debian.net/download, http://outreachy.debian.net/download/alt
debian-cd mirror selections.debian-cd mirror not having the latest version of debian.
We're looking into testing whether the delivery of the images/ISOs can also be done through the Fastly CDN; this would remove the need to provide manual mirror selection.
debian-cd mirror for faster installations in my region.
You can connect with me through LinkedIn, or X (formerly Twitter), or reach out via email.
With all the recent buzz about DeepSeek and its capabilities, I've decided to give it a try
using Ollama and Open WebUI on my work laptop which has an NVIDIA GPU:
$ lspci | grep NVIDIA
0000:01:00.0 3D controller: NVIDIA Corporation GA107GLM [RTX A2000 8GB Laptop GPU]
(rev a1)

I decided to use a docker only approach, as it leaves my system clean and updates are easier.

To be able to use the GPU from docker I followed the instructions to install the package using apt
from the NVIDIA website.

To run Open WebUI bundled with Ollama I used the following command:

$ docker run -d \
-e OLLAMA_HOST="0.0.0.0:11434" -p 127.0.0.1:11434:11434 \
-p 127.0.0.1:3000:8080 \
-v ollama:/root/.ollama \
-v open-webui:/app/backend/data \
--gpus=all --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:ollama

(I used the OLLAMA_HOST environment variable to bind the server to all the container IPs,
as it is needed to export it to the host).

After creating the admin user on the Open WebUI, I went to the select a model button and asked to download the
deepseek-r1 model (at the time of writing this it matches version
deepseek-r1:7b, which uses 4.7GB of space).
After the download was completed I was able to start chatting with the model on my local machine.

As a test I asked the model about Debian and got the following answer:

Debian is one of the most popular Linux distributions used worldwide. It is an
open-source operating system that has been around since 1993 and is known for
its stability, security, and reliability. Debian is widely used in servers, data
centers, and enterprise environments because it provides a robust base platform
with excellent documentation, tools, and community support.
### Key Features of Debian:
1. **History**:
- Debian started as a project called "Pop!!" (Pootle) led by Lars Knutsen, a
developer from Denmark.
- It was initially forked from the Linux distribution Red Hat's RH-Debian to
gain independence and reduce reliance on Red Hat.
2. **Use Cases**:
- Debian is commonly used as an operating system for servers due to its
reliability and ease of use in enterprise environments.
- It is also used for individual user machines (laptops, desktops) because it
offers a smooth and stable experience.
- Debian provides the foundation for many popular derivatives like Ubuntu,
which are designed for consumer use.
3. **Supported Architectures**:
- Debian supports all major Linux architectures, including 32-bit and 64-bit
systems, and runs on a wide range of hardware from old servers to modern
smartphones.
4. **Package Management**:
- Debian comes with a robust package management system called **Debian Policy
Language (DPL)**, which allows users to easily install, update, and manage
software packages.
- This makes it easy for users to keep their systems up to date without
needing to download multiple repositories or deal with version conflicts.
5. **Security Focus**:
- Debian is known for its strong security practices. It has a reputation for
being one of the most stable Linux distributions, making it a favorite
among organizations that prioritize security.
- The distribution regularly updates its software packages to patch
vulnerabilities and protect against common attacks.
6. **Compatibility with Servers**:
- Debian is widely used in data centers because it provides a consistent and
reliable environment for running servers and applications.
- Many open-source projects are compatible with Debian, making it an ideal
choice for developers who want to work on servers or distributed systems.
7. **Popularity**:
- Debian has a large and active community of users and contributors.
- It is one of the most widely used Linux distributions in the world, with
many enterprise organizations relying on it as their primary operating
system.
### Comparison with Other Distributions:
- **Ubuntu**: Ubuntu is based on Debian but adds graphical user interfaces (like
Gutsy Gibbon), modern tools (like GIMP and LibreOffice), and integrates with
Microsoft services.
- **RHEL (Red Hat Enterprise Linux)**: RHEL is a derivative of Red Hat's
distribution, which forked from Debian. It focuses more heavily on enterprise
security and support compared to Debian.
### Why Debian?
Debian is popular because:
- It is free and open-source, making it accessible to anyone.
- It provides excellent documentation and tools for system administrators.
- It has a strong focus on stability and reliability.
- It supports a wide range of hardware and software needs.
If you're looking for a reliable and flexible Linux distribution that works well
in both personal and enterprise environments, Debian is an excellent choice.

To test the coding models I went again to the select a model button and asked to download the
deepseek-coder and
deepseek-coder-v2 models (the default version of version one is said to
be really quick and small, while version two is supposed to be better but slower and bigger, so I decided to install
both for testing).

There are multiple neovim plugins that support ollama; for now I've installed and configured the
codecompanion plugin on my config.lua file using packer:
require('packer').startup(function()
  [...]
  -- Codecompanion plugin
  use {
    "olimorris/codecompanion.nvim",
    requires = {
      "nvim-lua/plenary.nvim",
      "nvim-treesitter/nvim-treesitter",
    }
  }
  [...]
end)
[...]
-- --------------------------------
-- BEG: Codecompanion configuration
-- --------------------------------
-- Module setup
local codecompanion = require('codecompanion').setup({
  adapters = {
    ollama = function()
      return require('codecompanion.adapters').extend('ollama', {
        schema = {
          model = {
            default = 'deepseek-coder-v2:latest',
          },
        },
      })
    end,
  },
  strategies = {
    chat = { adapter = 'ollama', },
    inline = { adapter = 'ollama', },
  },
})
-- --------------------------------
-- END: Codecompanion configuration
-- --------------------------------
Although I am a Debian Developer (not very active, BTW) I am using Ubuntu LTS (right now version 24.04.1) on my main
machine; it is my work laptop and I was told to keep using Ubuntu on it when it was assigned to me, although I don't
believe it is really necessary or justified (I don't need support, I don't provide support to others and I usually test
my shell scripts on multiple systems if needed anyway).
Initially I kept using Debian Sid on my personal laptop, but I gave it to my oldest son as the one he was using (an old
Dell XPS 13) was stolen from him a year ago.
I am still using Debian stable on my servers (one at home that also runs LXC containers and another one on an OVH VPS),
but I don't have a Debian Sid machine anymore and while I could reinstall my work machine, I've decided I'm going to try
to use a system container to run Debian Sid on it.
As I want to use a container instead of a VM I've narrowed my options to lxc or systemd-nspawn (I have docker and
podman installed, but I don't believe they are good options for running system containers).
As I will want to take snapshots of the container filesystem I've decided to try
incus instead of systemd-nspawn (I already have
experience with systemd-nspawn and while it works well it has fewer features than incus).
To install incus I added the zabbly repository running the following commands as root:
# Get the zabbly repository GPG key
curl -fsSL https://pkgs.zabbly.com/key.asc -o /etc/apt/keyrings/zabbly.asc
# Create the zabbly-incus-stable.sources file
sh -c 'cat <<EOF > /etc/apt/sources.list.d/zabbly-incus-stable.sources
Enabled: yes
Types: deb
URIs: https://pkgs.zabbly.com/incus/stable
Suites: $(. /etc/os-release && echo ${VERSION_CODENAME})
Components: main
Architectures: $(dpkg --print-architecture)
Signed-By: /etc/apt/keyrings/zabbly.asc
EOF'

For now I've installed the incus and the incus-extra packages, but
once things work I'll probably install the incus-ui-canonical package too, at least for testing it:
apt update
apt install incus incus-extra

The incus-admin group

To be able to run incus commands as my personal user I've added it to the incus-admin group:
sudo adduser "$(id -un)" incus-adminincus admin init command and
accepted the defaults for all the questions, as they are good enough for my current use case.debian/trixie image:
incus launch images:debian/trixie debiandebian using the default profile.
The exec command can be used to run a root login shell inside the container:
incus exec debian -- su -lexec we can use the shell alias:
incus shell debiansid changing the /etc/apt/sources.list file and using apt:
root@debian:~# echo "deb http://deb.debian.org/debian sid main contrib non-free" \
>/etc/apt/sources.list
root@debian:~# apt update
root@debian:~# apt dist-upgrade

On hosts with docker installed the apt update command fails because the network does not work; to fix it I've
executed the commands of the following section and re-run the apt update and apt dist-upgrade commands.

To fix the docker networking we have to add rules for the incusbr0 bridge to the DOCKER-USER chain as
follows:
sudo iptables -I DOCKER-USER -i incusbr0 -j ACCEPT
sudo iptables -I DOCKER-USER -o incusbr0 -m conntrack \
  --ctstate RELATED,ESTABLISHED -j ACCEPT

As suggested on the incus documentation I've installed the iptables-persistent package (my command also purges the
ufw package, as I was not using it) and saved the current rules when installing:
sudo apt install iptables-persistent --purge

To be able to resolve container names from the host I configured systemd-resolved to use the incus bridge DNS server running the following commands:

br="incusbr0";
br_ipv4="$(incus network get "$br" ipv4.address)";
br_domain="$(incus network get "$br" dns.domain)";
dns_address="$ br_ipv4%/* ";
dns_domain="$ br_domain:=incus ";
resolvectl dns "$br" "$ dns_address ";
resolvectl domain "$br" "~$ dns_domain ";
resolvectl dnssec "$br" off;
resolvectl dnsovertls "$br" off;sh -c "cat <<EOF sudo tee /etc/systemd/system/incus-dns-$ br .service
[Unit]
Description=Incus per-link DNS configuration for ${br}
BindsTo=sys-subsystem-net-devices-${br}.device
After=sys-subsystem-net-devices-${br}.device

[Service]
Type=oneshot
ExecStart=/usr/bin/resolvectl dns ${br} ${dns_address}
ExecStart=/usr/bin/resolvectl domain ${br} ~${dns_domain}
ExecStart=/usr/bin/resolvectl dnssec ${br} off
ExecStart=/usr/bin/resolvectl dnsovertls ${br} off
ExecStopPost=/usr/bin/resolvectl revert ${br}
RemainAfterExit=yes

[Install]
WantedBy=sys-subsystem-net-devices-${br}.device
EOF"

And reloaded systemd and enabled the service:

sudo systemctl daemon-reload
sudo systemctl enable --now incus-dns-${br}.service

Now the container name resolves from the host:

$ host debian.incus
debian.incus has address 10.149.225.121
debian.incus has IPv6 address fd42:1178:afd8:cc2c:216:3eff:fe2b:5cea

To be able to mount my home directory on the container with the right permissions I created a group and a user inside it matching my host IDs:

incus exec debian -- addgroup --gid "$(id --group)" --allow-bad-names \
"$(id --group --name)"incus exec debian -- adduser --uid "$(id --user)" --gid "$(id --group)" \
--comment "$(getent passwd "$(id --user -name)" cut -d ':' -f 5)" \
--no-create-home --disabled-password --allow-bad-names \
"$(id --user --name)"shift option to make the
container use the same UID and GID as we do on the host):
incus config device add debian home disk source=$HOME path=$HOME shift=trueshell alias to log with the root account, now we can add another one to log into the container using the
newly created user:
incus alias add ush "exec @ARGS@ -- su -l $(id --user --name)"

And use it to log in as our user:

incus ush debian

To be able to use sudo inside the container we could add our user to the sudo group:
incus exec debian -- adduser "$(id --user --name)" "sudo"

Instead I've created a file on the /etc/sudoers.d
directory to allow our user to run sudo without a password:
incus exec debian -- \
sh -c "echo '$(id --user --name) ALL = NOPASSWD: ALL' /etc/sudoers.d/user"openssh-server
and authorized my laptop public key to log into my laptop (as we are mounting the home directory from the host that
allows us to log in without password from the local machine).
Also, to be able to run X11 applications from the container I've adjusted the $HOME/.ssh/config file to always forward
X11 (option ForwardX11 yes for Host debian.incus) and installed the xauth package.
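The relevant ~/.ssh/config entry looks like this (a minimal sketch matching the description above):

```
Host debian.incus
  ForwardX11 yes
```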
After that I can log into the container running the command ssh debian.incus and start using it after installing other
interesting tools like neovim, rsync, tmux, etc.
Another interesting feature is the incus snapshot command; it can be especially
useful to take snapshots before doing a dist-upgrade so we can roll back if something goes wrong.
To work with container snapshots we use the incus snapshot command, i.e. to create a snapshot we use the create
subcommand:
incus snapshot create debian
Other snapshot subcommands include options to list the available snapshots, restore a snapshot, delete a snapshot, etc.
I'm currently running a tmux session on the Debian Sid container with multiple zsh windows open
(I've changed the prompt to be able to notice easily where I am) and it is working as expected.
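As a rough sketch of the pre-upgrade snapshot routine described above (assuming incus is installed and a container named debian exists; the guard makes the script a no-op elsewhere):

```shell
#!/bin/sh
# Hedged sketch: snapshot the "debian" container before a dist-upgrade.
# Assumes incus is installed and the container exists; otherwise it only
# prints what it would do.
set -e
ctr="debian"
snap="pre-upgrade"
if command -v incus >/dev/null 2>&1; then
    incus snapshot create "$ctr" "$snap"
    incus exec "$ctr" -- sh -c "apt-get update && apt-get dist-upgrade -y"
    # If the upgrade breaks something, roll back with:
    #   incus snapshot restore "$ctr" "$snap"
else
    echo "incus not found; would snapshot '$ctr' as '$snap' before upgrading"
fi
```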
My plan now is to add some packages and use the container for personal projects so I can work on a Debian Sid system
without having to reinstall my work machine.
I'll probably write more about it in the future, but for now, I'm happy with the results.
Becoming a Debian maintainer is a journey that combines technical expertise, community collaboration, and continuous learning. In this post, I'll share 10 key habits that will both help you navigate the complexities of Debian packaging without getting lost, and also enable you to contribute more effectively to one of the world's largest open source projects.
(gbp clone, gbp import-orig, gbp pq, gbp dch, gbp push). See also my post on Debian source package git branch and tags for easy-to-understand diagrams.
Extend your salsa-ci.yml to have more testing coverage.
Being able to read git log and git blame output is vital in Debian, where packages often have updates from multiple people spanning many years, even decades. Debian packagers likely spend more time than the average software developer reading git history.
Make sure you master git commands such as gitk --all, git citool --amend, git commit -a --fixup <commit id>, git rebase -i --autosquash <target branch>, git cherry-pick <commit id 1> <id 2> <id 3>, and git pull --rebase.
If rebasing is not done on your initiative, rest assured others will ask you to do it. Thus, if the commands above are familiar, rebasing will be quick and easy for you.
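To illustrate the --fixup / --autosquash pair mentioned above, here is a self-contained demo in a throwaway repository (all file names and identities are made up for the example):

```shell
#!/bin/sh
# Demo of git commit --fixup + git rebase -i --autosquash in a scratch repo.
set -e
repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo one > f.txt
git add f.txt
git commit -qm "add f.txt"
target="$(git rev-parse HEAD)"
echo aux > g.txt
git add g.txt
git commit -qm "add g.txt"
# Oops, f.txt needed a fix; attach the correction to the original commit:
echo one-fixed > f.txt
git commit -qa --fixup "$target"
# Autosquash folds the "fixup! add f.txt" commit into "add f.txt"
# without opening an editor:
GIT_SEQUENCE_EDITOR=: git rebase -i --autosquash --root
git log --oneline
```

After the rebase the history contains just the two original commits, with the fix absorbed into "add f.txt".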
Submit your patches upstream and record that in the Forwarded field of the file in debian/patches! As the person building and packaging something in Debian, you automatically become an authority on that software, and the upstream is likely glad to receive your improvements.
While submitting patches upstream is a bit of work initially, getting improvements merged upstream eventually saves time for everyone and makes packaging in Debian easier, as there will be fewer patches to maintain with each new upstream release.
For a long time I've been using the Terminator terminal emulator on Linux machines, but
last week I read a LWN article about a new emulator called
Ghostty that looked interesting and I decided to give it a try.
The author sells it as a fast, feature-rich and cross-platform terminal emulator that follows the zero configuration
philosophy.
My $HOME/.config/ghostty/config file is as simple as:
font-size=14
theme=/usr/share/ghostty/themes/iTerm2 Solarized Light
When I connected to remote machines using ssh the terminal variable was not known,
but on the help section of the project documentation there was an entry about how to fix
it by copying the terminfo configuration to remote machines; it is as simple
as running the following:
infocmp -x | ssh YOUR-SERVER -- tic -x -
The ghostty developer was already working on a fix for the way the terminal handles the keyboard
input on GTK, so I subscribed to the issue and stopped using ghostty until there was something new to try again (I use
a Spanish keyboard map and I can't use a terminal that does not support dead keys).
Yesterday I saw some messages about things being almost fixed, so I pulled the latest changes on my cloned repository,
compiled it, and now writing accented characters works. There is a small issue with the cursor (the dead key pressed
is left on the block cursor unless you change the window focus), but that is something manageable for me.
In conclusion, ghostty is a good terminal emulator and I'm going to keep using it on my laptop unless I find something
annoying that I can't work with (I hope that the cursor issue will be fixed soon and I can live with it, as the only
thing I need to do to recover from it is changing the window focus, and that can be done really quickly using keyboard
shortcuts).
As it is actively maintained and the developer seems to be quite active I don't expect problems, and it is nice to play
with new things from time to time.
With webwml, english/index.wml maps to /index.en.html (with a symlink from index.html to index.en.html) and french/index.wml to /index.fr.html. In contrast, debianhugo uses en/_index.md -> /index.html and fr/_index.md -> /fr/index.html.
Apache's multilingual content negotiation checks for index.<user preferred lang code>.html in the current directory, which works well with webwml since all related translations are generated in the same directory. However, with debianhugo using subdirectories for languages other than English, we had to set up aliases for every other language page to be generated in the frontmatter. For example, in fr/_index.md, we added this to the front matter:
...
aliases:
- /index.fr.html
...
With English as the preferred language, Apache first looks for /index.en.html. If it doesn't find it, it defaults to any other language-suffixed file, which can lead to unexpected behavior. For example, if English is set as the preferred language, accessing the site may serve /index.fr.html, which then redirects to /fr/index.html. This was a significant challenge, and you can see a demo of this hosted here.
If I were to start the project over, I would document every decision in the wiki as I made it, no matter how rough the documentation turned out. Waiting until the midpoint of the project to document was not a good idea.
As I move into the second half of my internship, the goals we've set include improving our project wiki documentation and continuing the migration process while enhancing the user experience of complicated sections. I'm looking forward to making even more progress and sharing my journey with you all. Happy coding!
I've always been a fan of template engines that work with text files, mainly to work with static site generators, but
also to generate code, configuration files, and other text-based files.
For my own web projects I used to go with Jinja2, as all my projects were written
in Python, while for static web sites I used the template engines included with the tools I was
using, i.e. Liquid with Jekyll and
Go Templates (based on the text/template
and the html/template go packages) for Hugo.
When I needed to generate code snippets or configuration files from shell scripts I used to go with
sed and/or
envsubst, but lately things got complicated and I started to use
a command line application called tmpl that uses the Go
Template Language with functions from the Sprig library.
I use it on CI/CD pipelines (i.e. gitlab-ci) to generate configuration files and code snippets because it uses the same syntax used by
helm (easier to use by other DevOps already familiar with the format) and the binary is small and
can be easily included into the docker images used by the pipeline jobs.
One interesting feature of the tmpl tool is that it can read values from command line arguments and from multiple
files in different formats (YAML, JSON, TOML, etc.) and merge them into a single object that can be used to render the
templates.
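That merge behaviour can be pictured with jq standing in for tmpl itself (the file names and values below are invented): later files override earlier ones recursively.

```shell
#!/bin/sh
# Illustration of deep-merging two value files, similar in spirit to how
# tmpl combines its inputs (jq stands in for tmpl here).
set -e
vals="$(mktemp -d)"
printf '{"app": {"name": "demo", "port": 8080}}\n' > "$vals/base.json"
printf '{"app": {"port": 9090}, "env": "prod"}\n' > "$vals/extra.json"
# jq's '*' operator merges objects recursively, with the right side winning:
jq -c -s '.[0] * .[1]' "$vals/base.json" "$vals/extra.json"
# -> {"app":{"name":"demo","port":9090},"env":"prod"}
```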
There are alternatives to the tmpl tool and I've looked at them (i.e. simple ones like
go-template-cli or complex ones like
gomplate), but I haven't found one that fits my needs.
For my next project I plan to evaluate a move to a different tool or template format, as tmpl is not being actively
maintained (as I said, I'm using my own fork) and it is not included in existing GNU/Linux distributions (I packaged it
for Debian and Alpine, but I don't want to maintain something like that without an active community and I'm not
interested in being the upstream myself, as I'm trying to move to Rust instead of
Go as the compiled programming language for my projects).