Nextcloud is a popular self-hosted solution for file sync and share, as well as cloud apps such as document editing, chat (Talk), calendar, and photo gallery. This guide walks you through setting up Nextcloud AIO using Docker Compose. This blog post would not have been possible without immense help from Sahil Dhiman a.k.a. sahilister. There are various ways the installation can be done; for our setup, here are the prerequisites:
A server with docker-compose installed
An existing setup of nginx reverse proxy
A domain name pointing to your server.
Step 1: The docker-compose file for Nextcloud AIO
The original compose.yml file is present in Nextcloud AIO's git repo here. Taking that file as a reference, we have our own compose.yml here:
services:
  nextcloud-aio-mastercontainer:
    image: nextcloud/all-in-one:latest
    init: true
    restart: always
    container_name: nextcloud-aio-mastercontainer # This line is not allowed to be changed as otherwise AIO will not work correctly
    volumes:
      - nextcloud_aio_mastercontainer:/mnt/docker-aio-config # This line is not allowed to be changed as otherwise the built-in backup solution will not work
      - /var/run/docker.sock:/var/run/docker.sock:ro # May be changed on macOS, Windows or docker rootless. See the applicable documentation. If adjusting, don't forget to also set 'WATCHTOWER_DOCKER_SOCKET_PATH'!
    ports:
      - 8080:8080
    environment: # Is needed when using any of the options below
      # - AIO_DISABLE_BACKUP_SECTION=false # Setting this to true allows to hide the backup section in the AIO interface. See https://github.com/nextcloud/all-in-one#how-to-disable-the-backup-section
      - APACHE_PORT=32323 # Is needed when running behind a web server or reverse proxy (like Apache, Nginx, Cloudflare Tunnel and else). See https://github.com/nextcloud/all-in-one/blob/main/reverse-proxy.md
      - APACHE_IP_BINDING=127.0.0.1 # Should be set when running behind a web server or reverse proxy (like Apache, Nginx, Cloudflare Tunnel and else) that is running on the same host. See https://github.com/nextcloud/all-in-one/blob/main/reverse-proxy.md
      # - BORG_RETENTION_POLICY=--keep-within=7d --keep-weekly=4 --keep-monthly=6 # Allows to adjust borg's retention policy. See https://github.com/nextcloud/all-in-one#how-to-adjust-borgs-retention-policy
      # - COLLABORA_SECCOMP_DISABLED=false # Setting this to true allows to disable Collabora's Seccomp feature. See https://github.com/nextcloud/all-in-one#how-to-disable-collaboras-seccomp-feature
      - NEXTCLOUD_DATADIR=/opt/docker/cloud.raju.dev/nextcloud # Allows to set the host directory for Nextcloud's datadir. Warning: do not set or adjust this value after the initial Nextcloud installation is done! See https://github.com/nextcloud/all-in-one#how-to-change-the-default-location-of-nextclouds-datadir
      # - NEXTCLOUD_MOUNT=/mnt/ # Allows the Nextcloud container to access the chosen directory on the host. See https://github.com/nextcloud/all-in-one#how-to-allow-the-nextcloud-container-to-access-directories-on-the-host
      # - NEXTCLOUD_UPLOAD_LIMIT=10G # Can be adjusted if you need more. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-upload-limit-for-nextcloud
      # - NEXTCLOUD_MAX_TIME=3600 # Can be adjusted if you need more. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-max-execution-time-for-nextcloud
      # - NEXTCLOUD_MEMORY_LIMIT=512M # Can be adjusted if you need more. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-php-memory-limit-for-nextcloud
      # - NEXTCLOUD_TRUSTED_CACERTS_DIR=/path/to/my/cacerts # CA certificates in this directory will be trusted by the OS of the Nextcloud container (useful e.g. for LDAPS). See https://github.com/nextcloud/all-in-one#how-to-trust-user-defined-certification-authorities-ca
      # - NEXTCLOUD_STARTUP_APPS=deck twofactor_totp tasks calendar contacts notes # Allows to modify the Nextcloud apps that are installed on starting AIO the first time. See https://github.com/nextcloud/all-in-one#how-to-change-the-nextcloud-apps-that-are-installed-on-the-first-startup
      # - NEXTCLOUD_ADDITIONAL_APKS=imagemagick # This allows to add additional packages to the Nextcloud container permanently. Default is imagemagick but can be overwritten by modifying this value. See https://github.com/nextcloud/all-in-one#how-to-add-os-packages-permanently-to-the-nextcloud-container
      # - NEXTCLOUD_ADDITIONAL_PHP_EXTENSIONS=imagick # This allows to add additional php extensions to the Nextcloud container permanently. Default is imagick but can be overwritten by modifying this value. See https://github.com/nextcloud/all-in-one#how-to-add-php-extensions-permanently-to-the-nextcloud-container
      # - NEXTCLOUD_ENABLE_DRI_DEVICE=true # This allows to enable the /dev/dri device in the Nextcloud container. Warning: this only works if the '/dev/dri' device is present on the host! If it does not exist on your host, don't set this to true as otherwise the Nextcloud container will fail to start! See https://github.com/nextcloud/all-in-one#how-to-enable-hardware-transcoding-for-nextcloud
      # - NEXTCLOUD_KEEP_DISABLED_APPS=false # Setting this to true will keep Nextcloud apps that are disabled in the AIO interface and not uninstall them if they should be installed. See https://github.com/nextcloud/all-in-one#how-to-keep-disabled-apps
      # - TALK_PORT=3478 # This allows to adjust the port that the talk container is using. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-talk-port
      # - WATCHTOWER_DOCKER_SOCKET_PATH=/var/run/docker.sock # Needs to be specified if the docker socket on the host is not located in the default '/var/run/docker.sock'. Otherwise mastercontainer updates will fail. For macOS it needs to be '/var/run/docker.sock'
      # - SKIP_DOMAIN_VALIDATION=true
    # networks: # Is needed when you want to create the nextcloud-aio network with ipv6-support using this file, see the network config at the bottom of the file
    #   - nextcloud-aio # Is needed when you want to create the nextcloud-aio network with ipv6-support using this file, see the network config at the bottom of the file
    # # Uncomment the following line when using SELinux
    # security_opt: ["label:disable"]

volumes: # If you want to store the data on a different drive, see https://github.com/nextcloud/all-in-one#how-to-store-the-filesinstallation-on-a-separate-drive
  nextcloud_aio_mastercontainer:
    name: nextcloud_aio_mastercontainer # This line is not allowed to be changed as otherwise the built-in backup solution will not work
I have kept many of the commented options in the compose file, in case I want to use them in the future. If you want a smaller, cleaner compose file without the extra options, you can refer to
I am using a separate directory to store Nextcloud data. As per the Nextcloud documentation you should use a separate partition for this feature; however, I did not have that option on my server, so I used a separate directory instead. We also use a custom port on which Nextcloud listens; we have set it to 32323 above, but you can use any port in the permissible range. Port 8080 is used for the AIO management interface. Neither 8080 nor the APACHE_PORT needs to be open on the host machine, as we will use an nginx reverse proxy to direct requests. Once you have your preferred compose.yml file, you can start the containers using
$ docker-compose -f compose.yml up -d
Creating network "clouddev_default" with the default driver
Creating volume "nextcloud_aio_mastercontainer" with default driver
Creating nextcloud-aio-mastercontainer ... done
Once your containers are running, we can do the nginx setup.
Step 2: Configuring the nginx reverse proxy for our domain on the host
A reference nginx configuration for Nextcloud AIO is given in the Nextcloud git repository here. You can modify the configuration file according to your needs and setup. Here is the configuration we are using:
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}
server {
    listen 80;
    #listen [::]:80; # comment to disable IPv6

    if ($scheme = "http") {
        return 301 https://$host$request_uri;
    }

    listen 443 ssl http2; # for nginx versions below v1.25.1
    #listen [::]:443 ssl http2; # for nginx versions below v1.25.1 - comment to disable IPv6
    # listen 443 ssl; # for nginx v1.25.1+
    # listen [::]:443 ssl; # for nginx v1.25.1+ - keep comment to disable IPv6

    # http2 on; # uncomment to enable HTTP/2 - supported on nginx v1.25.1+
    # http3 on; # uncomment to enable HTTP/3 / QUIC - supported on nginx v1.25.0+
    # quic_retry on; # uncomment to enable HTTP/3 / QUIC - supported on nginx v1.25.0+
    # add_header Alt-Svc 'h3=":443"; ma=86400'; # uncomment to enable HTTP/3 / QUIC - supported on nginx v1.25.0+
    # listen 443 quic reuseport; # uncomment to enable HTTP/3 / QUIC - supported on nginx v1.25.0+ - please remove "reuseport" if there is already another quic listener on port 443 with enabled reuseport
    # listen [::]:443 quic reuseport; # uncomment to enable HTTP/3 / QUIC - supported on nginx v1.25.0+ - please remove "reuseport" if there is already another quic listener on port 443 with enabled reuseport - keep comment to disable IPv6

    server_name cloud.example.com;

    location / {
        proxy_pass http://127.0.0.1:32323$request_uri;

        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Scheme $scheme;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Accept-Encoding "";
        proxy_set_header Host $host;

        client_body_buffer_size 512k;
        proxy_read_timeout 86400s;
        client_max_body_size 0;

        # Websocket
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }

    ssl_certificate /etc/letsencrypt/live/cloud.example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/cloud.example.com/privkey.pem; # managed by Certbot

    ssl_session_timeout 1d;
    ssl_session_cache shared:MozSSL:10m; # about 40000 sessions
    ssl_session_tickets off;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305;
    ssl_prefer_server_ciphers on;

    # Optional settings:

    # OCSP stapling
    # ssl_stapling on;
    # ssl_stapling_verify on;
    # ssl_trusted_certificate /etc/letsencrypt/live/<your-nc-domain>/chain.pem;

    # replace with the IP address of your resolver
    # resolver 127.0.0.1; # needed for OCSP stapling: e.g. use 94.140.15.15 for AdGuard / 1.1.1.1 for Cloudflare or 8.8.8.8 for Google - you can use the same nameserver as listed in your /etc/resolv.conf file
}
Please note that you need valid SSL certificates for your domain for this configuration to work. Steps for obtaining valid SSL certificates are beyond the scope of this article; a web search on getting SSL certificates with Let's Encrypt will turn up several resources, or I may write a separate blog post on it in the future. Once your nginx configuration is done, you can test it using
$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
and then reload nginx with
$ sudo nginx -s reload
Step 3: Setting up Nextcloud AIO from the browser
To set up Nextcloud AIO, we need to access it in a web browser at domain.tld:8080. However, we do not want to open port 8080 publicly to do this, so to complete the setup, here is a neat hack from sahilister:
ssh -L 8080:127.0.0.1:8080 username@<server-ip>
You can bind port 8080 of your server to port 8080 on your localhost using TCP port forwarding over SSH. The port forwarding only lasts for the duration of your SSH session; if the SSH session breaks, your port forwarding will too. Once you have the port forwarded, you can open the Nextcloud AIO instance in your web browser at 127.0.0.1:8080.

You will get a certificate error because you are trying to access a page on localhost over HTTPS. You can click on Advanced and then continue to proceed to the next page. Your data is still encrypted for this session, as we are tunnelling the port over SSH. Depending on your choice of browser, the above page might look different.

Once you have proceeded, the Nextcloud AIO interface will open and look something like this: Nextcloud AIO initial screen with capsicums as password. It will show an auto-generated passphrase; you need to save this passphrase and make sure not to lose it. For security purposes, I have masked the passwords with capsicums.

Once you have noted down your password, you can proceed to the Nextcloud AIO login, enter your password, and log in. After login you will be greeted with a screen like this. Now you can put the domain that you want to use in the Submit domain field. Once the domain check is done, you will proceed to the next step and see another screen, where you can select any optional containers for the features you might want.

IMPORTANT: Please make sure to also change the time zone at the bottom of the page to the time zone you wish to operate in. The timezone setting matters because the database gets initialized according to the set time zone; a wrong setting can result in a wrongly initialized database and leave Nextcloud in a startup loop. I faced this issue and could only resolve it after getting help from sahilister.
Once you are done changing the timezone and selecting any additional features you want, you can click on Download and start the containers. It will take some time for this process to finish; take a break, look at the farthest object in your room, and take a sip of water. Once the process has finished you will see a page similar to the following one. Wait patiently for everything to turn green.

Once all the containers have started properly, you can open the Nextcloud login interface on your configured domain. The initial login details are auto-generated, as you can see from the above screenshot. Again you will see a password that you need to note down or save to enter the Nextcloud interface. Capsicums will not work as passwords; I have masked the auto-generated passwords using capsicums.

Now you can click on the Open your Nextcloud button or go to your configured domain to access the login screen. You can use the login details from the previous step to log in to the administrator account of your Nextcloud instance. There you have it, your very own cloud!
Additional Notes:
How to properly reset the Nextcloud setup?
While following the above steps, or steps from some other tutorial, you may have made a mistake and want to start again from scratch. The instructions for this are present in the Nextcloud documentation here. Here is the TL;DR for a docker-compose setup. These steps will delete all data; do not use them on an existing Nextcloud setup unless you know what you are doing.
Stop your master container.
docker-compose -f compose.yml down -v
The above command will also remove the volume associated with the master container
Stop all the child containers that have been started by the master container.
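A hedged sketch of how that could look with plain docker commands — the name filter is an assumption (AIO child containers are normally named nextcloud-aio-*); verify the list before removing anything:

```shell
# list every container whose name starts with nextcloud-aio (check this first!)
docker ps --all --filter "name=nextcloud-aio" --format '{{.Names}}'

# then stop and remove them
docker stop $(docker ps --all --quiet --filter "name=nextcloud-aio")
docker rm $(docker ps --all --quiet --filter "name=nextcloud-aio")
```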
dkg's New OpenPGP certificate in December 2023
In December of 2023, I'm moving to a new OpenPGP certificate.
You might know my old OpenPGP certificate, which had a fingerprint of
C29F8A0C01F35E34D816AA5CE092EB3A5CA10DBA.
My new OpenPGP certificate has a fingerprint of:
D477040C70C2156A5C298549BB7E9101495E6BF7.
Both certificates have the same set of User IDs:
-----BEGIN PGP PUBLIC KEY BLOCK-----xjMEZXEJyxYJKwYBBAHaRw8BAQdA5BpbW0bpl5qCng/RiqwhQINrplDMSS5JsO/YO+5Zi7HCwAsEHxYKAH0FgmVxCcsDCwkHCRC7fpEBSV5r90cUAAAAAAAeACBzYWx0QG5vdGF0aW9ucy5zZXF1b2lhLXBncC5vcmfUAgfN9tyTSxpxhmHA1r63GiI4v6NQmrrWVLOBRJYuhQMVCggCmwECHgEWIQTUdwQMcMIValwphUm7fpEBSV5r9wAAmaEA/3MvYJMxQdLhIG4UDNMVd2bsovwdcTrReJhLYyFulBrwAQD/j/RS+AXQIVtkcO9bl6zZTAO9x6yfkOZbv0g3eNyrAs0QPGRrZ0BkZWJpYW4ub3JnPsLACwQTFgoAfQWCZXEJywMLCQcJELt+kQFJXmv3RxQAAAAAAB4AIHNhbHRAbm90YXRpb25zLnNlcXVvaWEtcGdwLm9yZ4l+Z3i19Uwjw3CfTNFCDjRsoufMoPOM7vM8HoOEdn/vAxUKCAKbAQIeARYhBNR3BAxwwhVqXCmFSbt+kQFJXmv3AAALZQEAhJsgouepQVV98BHUH6SvWvcKrb8dQEZOvHFbZQQPNWgA/A/DHkjYKnUkCg8Zc+FonqOS/35sHhNA8CwqSQFrtN4KzRc8ZGtnQGZpZnRoaG9yc2VtYW4ubmV0PsLACgQTFgoAfQWCZXEJywMLCQcJELt+kQFJXmv3RxQAAAAAAB4AIHNhbHRAbm90YXRpb25zLnNlcXVvaWEtcGdwLm9yZxLvwkgnslsAuo+IoSa9rv8+nXpbBdab2Ft7n4H9S+d/AxUKCAKbAQIeARYhBNR3BAxwwhVqXCmFSbt+kQFJXmv3AAAtFgD4wqcUfQl7nGLQOcAEHhx8V0Bg8v9ov8GsY1ei1BEFwAD/cxmxmDSO0/tA+x4pd5yIvzgfGYHSTxKS0Ww3hzjuZA7NE0RhbmllbCBLYWhuIEdpbGxtb3LCwA4EExYKAIAFgmVxCcsDCwkHCRC7fpEBSV5r90cUAAAAAAAeACBzYWx0QG5vdGF0aW9ucy5zZXF1b2lhLXBncC5vcmd7X4TgiINwnzh4jar0Pf/b5hgxFPngCFxJSmtr/f0YiQMVCggCmQECmwECHgEWIQTUdwQMcMIValwphUm7fpEBSV5r9wAAMuwBAPtMonKbhGOhOy+8miAb/knJ1cIPBjLupJbjM+NUE1WyAQD1nyGW+XwwMrprMwc320mdJH9B0jdokJZBiN7++0NoBM4zBGVxCcsWCSsGAQQB2kcPAQEHQI19uRatkPSFBXh8usgciEDwZxTnnRZYrhIgiFMybBDQwsC/BBgWCgExBYJlcQnLCRC7fpEBSV5r90cUAAAAAAAeACBzYWx0QG5vdGF0aW9ucy5zZXF1b2lhLXBncC5vcmfCopazDnq6hZUsgVyztl5wmDCmxI169YLNu+IpDzJEtQKbAr6gBBkWCgBvBYJlcQnLCRB3LRYeNc1LgUcUAAAAAAAeACBzYWx0QG5vdGF0aW9ucy5zZXF1b2lhLXBncC5vcmcQglI7G7DbL9QmaDkzcEuk3QliM4NmleIRUW7VvIBHMxYhBHS8BMQ9hghL6GcsBnctFh41zUuBAACwfwEAqDULksr8PulKRcIP6N9NI/4KoznyIcuOHi8qGk4qxMkBAIeV20SPEnWSw9MWAb0eKEcfupzr/C+8vDvsRMynCWsDFiEE1HcEDHDCFWpcKYVJu36RAUlea/cAAFD1AP0YsE3Eeig1tkWaeyrvvMf5Kl1tt2LekTNWDnB+FUG9SgD+Ka8vfPR8wuV8D3y5Y9Qq9xGO+QkEBCW0U1qNypg65QHOOARlcQnLEgorBgEEAZdVAQUBAQdAWTLEa0WmnhUmDBdWXX0ZlYAa4g1CK/fXg0NPOQSteA4DAQgHwsAABBgWCgByBYJlcQnLCRC7fpEBSV5r90cUAAAAAAAe
ACBzYWx0QG5vdGF0aW9ucy5zZXF1b2lhLXBncC5vcmexrMBZe0QdQ+ZJOZxFkAiwCw2I7yTSF2Ox9GVFWKmAmAKbDBYhBNR3BAxwwhVqXCmFSbt+kQFJXmv3AABcJQD/f4ltpSvLBOBEh/C2dIYadgSuqkCqq0B4WOhFRkWJZlcA/AxqLWG4o8UrrmwrmM42FhgxKtEXwCSHE00u8wR4Up8G=9Yc8-----END PGP PUBLIC KEY BLOCK-----
When I have some reasonable number of certifications, I'll update the
certificate associated with my e-mail addresses on
https://keys.openpgp.org, in DANE, and in WKD. Until then, those
lookups should continue to provide the old certificate.
This is a short announcement to say that I have changed my main
OpenPGP key. A signed statement is available with the
cryptographic details but, in short, the reason is that I stopped
using my old YubiKey NEO that I have worn on my keyring since
2015.
I now have a YubiKey 5 which supports ED25519 which features much
shorter keys and faster decryption. It allowed me to move all my
secret subkeys on the key (including encryption keys) while retaining
reasonable performance.
I have written extensive documentation on how to do that OpenPGP key
rotation and also YubiKey OpenPGP operations.
Warning on storing encryption keys on a YubiKey
People wishing to move their private encryption keys to such a
security token should be very careful as there are special
precautions to take for disaster recovery.
I am toying with the idea of writing an article specifically about
disaster recovery for secrets and backups, dealing specifically with
cases of death or disabilities.
Autocrypt changes
One nice change is the impact on Autocrypt headers, which are
considerably shorter.
Before, the header didn't even fit on a single line in an email, it
overflowed to five lines:
Note that I have implemented my own kind of ridiculous Autocrypt
support for the NotmuchEmacs email client I use, see this
elisp code. To import keys, I pipe the message into this
script which is basically just:
sq autocrypt decode | gpg --import
... thanks to Sequoia's best-of-class Autocrypt support.
Note on OpenPGP usage
While some have claimed OpenPGP's death, I believe those are
overstated. Maybe it's just me, but I still use OpenPGP for my
password management, to authenticate users and messages, and it's the
interface to my YubiKey for authenticating with SSH servers.
I understand people feel that OpenPGP is possibly insecure,
counter-intuitive and full of problems, but I think most of those
problems should instead be attributed to its current flagship
implementation, GnuPG. I have tried to work with GnuPG for years, and
it keeps surprising me with evilness and oddities.
I have high hopes that the Sequoia project can bring some sanity
into this space, and I also hope that RFC4880bis can eventually
get somewhere so we have a more solid specification with more robust
crypto. It's kind of a shame that this has dragged on for so
long (update: there's a separate draft called
openpgp-crypto-refresh that might actually be adopted as the
"OpenPGP RFC" soon!), but
it doesn't keep real work from happening in Sequoia and other
implementations. Thunderbird rewrote their OpenPGP implementation with
RNP (which was, granted, a bumpy road because it lost
compatibility with GnuPG) and Sequoia now has a certificate store
with trust management (but still no secret storage), preliminary
OpenPGP card support and even a basic GnuPG compatibility
layer. I'm also curious to try out the OpenPGP CA
capabilities.
So maybe it's just because I'm becoming an old fart that doesn't want
to change tools, but so far I haven't seen a good incentive in
switching away from OpenPGP, and haven't found a good set of tools
that completely replace it. Maybe OpenSSH's keys and CA can eventually
replace it, but I suspect they will end up rebuilding most of OpenPGP
anyway, just more slowly. If they do, let's hope they avoid the
mistakes our community has done in the past at least...
While running CI tests for an application that is implemented in C and Java,
some configuration scripts set the current timezone. The parts implemented in C
pick up the change just fine, but the Java-related parts still report the
image's default timezone.
A simple example:
import java.util.*;
import java.text.*;

class simpleTest {
    public static void main(String args[]) {
        Calendar cal = Calendar.getInstance();
        System.out.println("TIME ZONE :" + cal.getTimeZone().getDisplayName());
    }
}
Result:
vagrant@vm:~$ sudo timedatectl set-timezone America/Aruba
vagrant@vm:~$ timedatectl
[..]
Time zone: America/Aruba (AST, -0400)[..]
vagrant@vm:~$ java test.java
TIME ZONE :Central European Standard Time
vagrant@vm:~$ ls -alh /etc/localtime
lrwxrwxrwx 1 root root 35 Jul 10 14:41 /etc/localtime -> ../usr/share/zoneinfo/America/Aruba
It appears the Java implementation uses /etc/timezone as source, not
/etc/localtime.
vagrant@vm:~$ echo America/Aruba | sudo tee /etc/timezone
America/Aruba
vagrant@vm:~$ java test.java
TIME ZONE :Atlantic Standard Time
dpkg-reconfigure tzdata updates this file as well, so using timedatectl
alone won't be enough (at least not on Debian-based systems which run
Java-based applications).
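Since /etc/localtime is a symlink into /usr/share/zoneinfo, one way to keep /etc/timezone in sync is to derive the zone name from the link target. This is a sketch with a hypothetical helper, not part of any standard tooling:

```shell
# Hypothetical helper: derive the IANA zone name from a localtime-style
# symlink target (pass the output of `readlink /etc/localtime`).
tz_from_link() {
    # the zone name is everything after the "zoneinfo/" path component
    printf '%s\n' "${1#*zoneinfo/}"
}

tz_from_link "../usr/share/zoneinfo/America/Aruba"   # prints America/Aruba
```

On a real system you would then pipe that result into sudo tee /etc/timezone, as shown above.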
Tobias Frost
did 16.0h (out of 15.0h assigned and 1.0h from previous period).
Utkarsh Gupta
did 5.5h (out of 5.0h assigned and 26.0h from previous period), thus carrying over 25.5h to the next month.
Evolution of the situation
In May, we have released 34 DLAs.
Several of the DLAs constituted notable security updates to LTS during the month of May. Of particular note were the linux (4.19) and linux-5.10 packages, both of which addressed a considerable number of CVEs. Additionally, the postgresql-11 package was updated by synchronizing it with the 11.20 release from upstream.
Notable non-security updates were made to the distro-info-data database and the timezone database. The distro-info-data package was updated with the final expected release date of Debian 12, made aware of Debian 14 and Ubuntu 23.10, and was updated with the latest EOL dates for Ubuntu releases. The tzdata and libdatetime-timezone-perl packages were updated with the 2023c timezone database. The changes in these packages ensure that in addition to the latest security updates LTS users also have the latest information concerning Debian and Ubuntu support windows, as well as the latest timezone data for accurate worldwide timekeeping.
LTS contributor Anton implemented an improvement to the Debian Security Tracker's "Unfixed vulnerabilities in unstable without a filed bug" view, allowing for more effective management of CVEs which do not yet have a corresponding bug entry in the Debian BTS.
LTS contributor Sylvain concluded an audit of obsolete packages still supported in LTS to ensure that new CVEs are properly associated. In this case, a package being obsolete means that it is no longer associated with a Debian release for which the Debian Security Team has direct responsibility. When this occurs, it is the responsibility of the LTS team to ensure that incoming CVEs are properly associated to packages which exist only in LTS.
Finally, LTS contributors also contributed several updates to packages in unstable/testing/stable to fix CVEs. This helps package maintainers, addresses CVEs in current and future Debian releases, and ensures that the CVEs do not remain open for an extended period of time only for the LTS team to be required to deal with them much later in the future.
Thanks to our sponsors
Sponsors that joined recently are in bold.
I know a few people hold on to the exFAT fuse implementation due to the
support for timezone offsets, so here is a small update for you.
Andrew released 1.4.0, which includes the timezone offset support, which
was so far only part of the git master branch. It also fixes a
(from my point of view very minor) security issue,
CVE-2022-29973.
In addition to that it's the first build with fuse3 support. If you
still use this driver, pick it up in experimental (we're in the bookworm freeze
right now), and give it a try. I'm personally not using it anymore beyond a very
basic "does it mount" test.
I recently got a new NVME drive. My plan was to create a fresh Debian install on an F2FS root partition with compression for maximum performance. As it turns out, this is not entirely trivial to accomplish.
For one, the Debian installer does not support F2FS (here is my attempt to add it from 2021).
And even if it did, grub does not support F2FS with the extra_attr flag that is required for compression support (at least as of grub 2.06).
Luckily, we can install Debian anyway with all these shiny new features when we go the manual road with debootstrap, using systemd-boot as the bootloader.
We can break down the process into several steps:
Warning: Playing around with partitions can easily result in data loss if you mess up! Make sure to double-check your commands and create a data backup if you don't feel confident about the process.
Creating the partition table
The first step is to create the GPT partition table on the new drive. There are several tools to do this, I recommend the ArchWiki page on this topic for details.
For simplicity I just went with GParted since it has an easy GUI, but feel free to use any other tool.
The layout should look like this:
Type Partition Suggested size
EFI /dev/nvme0n1p1 512MiB
Linux swap /dev/nvme0n1p2 1GiB
Linux fs /dev/nvme0n1p3 remainder
Notes:
The disk names are just an example and have to be adjusted for your system.
Don't set disk labels; they don't appear on the new install anyway, and some UEFIs might not like them on your boot partition.
The size of the EFI partition can be smaller; in practice it's unlikely that you need more than 300 MiB. However, some UEFIs might be buggy, and if you ever want to install an additional kernel or something like memtest86+ you will be happy to have the extra space.
The swap partition can be omitted, it is not strictly needed. If you need more swap for some reason you can also add more using a swap file later (see ArchWiki page). If you know you want to use suspend-to-RAM, you want to increase the size to something more than the size of your memory.
If you used GParted, create the EFI partition as FAT32 and set the esp flag. For the root partition use ext4 or F2FS if available.
Creating and mounting the root partition
To create the root partition, we need to install the f2fs-tools first:
sudo apt install f2fs-tools
Now we can create the file system with the correct flags and then bootstrap Debian into it with debootstrap. The important options are:
--arch sets the CPU architecture (see Debian Wiki).
--components sets the archive components; if you don't want non-free packages you might want to remove some entries here.
unstable is the Debian release, you might want to change that to testing or bookworm.
$DFS points to the mounting point of the root partition.
http://deb.debian.org/debian is the Debian mirror; you might want to set that to http://ftp.de.debian.org/debian or similar if you have a fast mirror in your area.
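Putting the pieces together, the commands these options belong to might look roughly like this. This is a sketch based on the option descriptions above and the F2FS tooling; the mkfs feature list and mount options are assumptions, not the exact original invocation:

```shell
# create an F2FS file system with compression support (requires extra_attr)
sudo mkfs.f2fs -O extra_attr,inode_checksum,sb_checksum,compression /dev/nvme0n1p3

# mount it and bootstrap Debian into it
DFS=/mnt/debinst
sudo mkdir -p "$DFS"
sudo mount -o compress_algorithm=zstd:6,compress_chksum,atgc,gc_merge,lazytime \
    /dev/nvme0n1p3 "$DFS"
sudo debootstrap --arch=amd64 --components=main,contrib,non-free \
    unstable "$DFS" http://deb.debian.org/debian
```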
Chrooting into the system
Before we can chroot into the newly created system, we need to prepare and mount virtual kernel file systems. First create the directories:
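For example — the directory set is inferred from the bind mounts below, and a temporary directory stands in for the real mount point here:

```shell
# create the mount points for the virtual kernel file systems
DFS=$(mktemp -d)   # stand-in for the real root partition mount point
mkdir -p "$DFS/dev/pts" "$DFS/proc" "$DFS/sys/firmware/efi/efivars" \
         "$DFS/run" "$DFS/boot/efi"
```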
Then bind-mount the directories from your system to the mount point of the new system:
sudo mount -v -B /dev $DFS/dev
sudo mount -v -B /dev/pts $DFS/dev/pts
sudo mount -v -B /proc $DFS/proc
sudo mount -v -B /sys $DFS/sys
sudo mount -v -B /run $DFS/run
sudo mount -v -B /sys/firmware/efi/efivars $DFS/sys/firmware/efi/efivars
As a last step, we need to mount the EFI partition:
sudo mount -v /dev/nvme0n1p1 $DFS/boot/efi
Now we can chroot into new system:
sudo chroot $DFS /bin/bash
Configure the base system
The first step in the chroot is setting the locales. We need this since we might leak the locales from our base system into the chroot and if this happens we get a lot of annoying warnings.
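A minimal way to do that inside the chroot could be the following — a sketch assuming standard Debian tooling, not the author's exact steps:

```shell
# inside the chroot: install locale support and pick the locales to generate
apt install locales
dpkg-reconfigure locales

# quick alternative for the current session only
export LC_ALL=C.UTF-8
```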
Now you have a fully functional Debian chroot! However, it is not bootable yet, so let's fix that.
Define static file system information
The first step is to make sure the system mounts all partitions on startup with the correct mount flags.
This is done in /etc/fstab (see ArchWiki page).
Open the file and change its content to:
# file system mount point type options dump pass
# NVME efi partition
UUID=XXXX-XXXX /boot/efi vfat umask=0077 0 0
# NVME swap
UUID=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX none swap sw 0 0
# NVME main partition
UUID=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX / f2fs compress_algorithm=zstd:6,compress_chksum,atgc,gc_merge,lazytime 0 1
You need to fill in the UUIDs for the partitions. You can use
ls -lAph /dev/disk/by-uuid/
to match the UUIDs to the more readable disk name under /dev.
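As an illustration, a small awk filter can turn that listing into plain "UUID -> device" pairs; the sample line below is made up for the example:

```shell
# print "UUID -> device" pairs from `ls -l /dev/disk/by-uuid` style output
uuid_map() {
    awk '$(NF-1) == "->" { n = split($NF, p, "/"); print $(NF-2), "->", p[n] }'
}

printf 'lrwxrwxrwx 1 root root 10 Jan  1 00:00 XXXX-XXXX -> ../../nvme0n1p1\n' | uuid_map
```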
Installing the kernel and bootloader
First install the systemd-boot and efibootmgr packages:
apt install systemd-boot efibootmgr
Now we can install the bootloader:
bootctl install --path=/boot/efi
You can verify the procedure worked with
efibootmgr -v
The next step is to install the kernel, you can find a fitting image with:
apt search linux-image-*
In my case:
apt install linux-image-amd64
After the installation of the kernel, apt will add an entry for systemd-boot automatically. Neat!
However, since we are in a chroot the current settings are not bootable.
The first reason is the boot partition, which will likely be the one from your current system.
To change that, navigate to /boot/efi/loader/entries, it should contain one config file.
When you open this file, it should look something like this:
title Debian GNU/Linux bookworm/sid
version 6.1.0-3-amd64
machine-id 2967cafb6420ce7a2b99030163e2ee6a
sort-key debian
options root=PARTUUID=f81d4fae-7dec-11d0-a765-00a0c91e6bf6 ro systemd.machine_id=2967cafb6420ce7a2b99030163e2ee6a
linux /2967cafb6420ce7a2b99030163e2ee6a/6.1.0-3-amd64/linux
initrd /2967cafb6420ce7a2b99030163e2ee6a/6.1.0-3-amd64/initrd.img-6.1.0-3-amd64
The PARTUUID needs to point to the partition equivalent to /dev/nvme0n1p3 on your system. You can use
ls -lAph /dev/disk/by-partuuid/
to match the PARTUUIDs to the more readable disk name under /dev.
The second problem is the ro flag in options, which tells the kernel to boot in read-only mode.
The default is rw, so you can just remove the ro flag.
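If you prefer doing this non-interactively, a sed filter can flip the flag; the entry content below is just an example:

```shell
# replace the standalone "ro" flag with "rw" on an options line
fix_ro() {
    sed -E 's/^(options .*)\bro\b/\1rw/'
}

printf 'options root=PARTUUID=f81d4fae-7dec-11d0-a765-00a0c91e6bf6 ro\n' | fix_ro
```

Run it with sed -i on the loader entry file to edit it in place.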
Once this is fixed, the new system should be bootable. You can change the boot order with:
efibootmgr --bootorder XXXX,YYYY
where XXXX and YYYY are the boot entry numbers shown by efibootmgr -v.
However, before we reboot we might as well add a user and install some basic software.
The local government here has all the schools use an iCalendar feed for things like when school terms start and stop and when other school events occur. The department's website also has events like public holidays. The issue is that none of them are marked as all-day events; they are timed events that happen at midnight, or one minute past midnight.
The events synchronise fine, though Google's calendar is known for synchronising when it feels like it, not at any particular time you would like it to.
Even though a public holiday lasts all day, it is sent as an appointment for midnight.
That means on my phone all the events are these tiny bars that appear right up the top of the screen and are easily missed, especially when the focus of the calendar is during the day.
On the phone, you can see the tiny purple bar at midnight. This is how the events appear. It's not the calendar's fault; as far as it knows, the school events are happening at midnight.
You can also see that Lunar New Year and Australia Day appear in the all-day part of the calendar and don't scroll away. That's where these events should be.
Why are all the events appearing at midnight? The reason is that the feed is incorrectly set up and includes a time. The events are sent in iCalendar format and a typical event looks like this:
BEGIN:VEVENT
DTSTART;TZID=Australia/Sydney:20230206T000000
DTEND;TZID=Australia/Sydney:20230206T000000
SUMMARY:School Term starts
END:VEVENT
The event's start and end date and time are given by the DTSTART and DTEND lines. Both of them have a date of 20230206 (6th February 2023) and a time of 000000, or midnight. So the calendar is doing the right thing; we need to fix the feed!
The Fix
I wrote a quick and dirty PHP script to download the feed from the real site, change the DTSTART and DTEND lines to all-day events and leave the rest of it alone.
It's pretty quick and nasty but gets the job done. So what is it doing?
Lines 2-10: Check the given variable s and match it to either site1 or site2 to obtain the URL. If you only had one site to fix, you could just set the REMOTE_URL variable.
Lines 12-15: A typical fopen() and some nasty error handling.
Line 16: Set the content type to a calendar.
Line 17: A while loop to read the contents of the remote site line by line.
Lines 18-21: This is where the magic happens; preg_replace is a Perl-compatible regular expression replacement. The PCRE is:
Finding lines starting with DTSTART or DTEND and storing that in capture 1.
Skipping everything that isn't a colon. This is the timezone information. I wasn't sure if it was needed or how to combine it, so I took it out. All the all-day events I saw don't have a time zone.
Finding 8 numerics (this is for YYYYMMDD) and storing them in capture 2.
Scanning the time part: a literal T then HHMMSS. Some sites use midnight, some use one minute past, so it covers both.
Replacing the line with either DTSTART or DTEND (capture 1), setting the value type to DATE (as the default is date/time) and printing the date (capture 2).
Line 22: Print either the modified or original line.
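The regex described above can be tried outside PHP too; here is a hedged one-liner sketch of the same rewrite using GNU sed (the sample input line is the one from the event shown earlier):

```shell
# Equivalent of the preg_replace described above: keep DTSTART/DTEND
# (capture 1), skip the timezone up to the colon, keep the YYYYMMDD date
# (capture 2), and drop the T000000/T000100 time part.
printf 'DTSTART;TZID=Australia/Sydney:20230206T000000\n' |
  sed -E 's/^(DTSTART|DTEND)[^:]*:([0-9]{8})T000[01]00/\1;VALUE=DATE:\2/'
```

Piping a downloaded copy of the feed through the same sed expression is a quick way to sanity-check the pattern before wiring it into the script.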
You need to save the script on your web server somewhere, possibly with an alias command.
The whole point of this is to change the type from a date/time to a date-only event and only print the date part of it for the start and end of it. The resulting iCalendar event looks like this:
BEGIN:VEVENT
DTSTART;VALUE=DATE:20230206
DTEND;VALUE=DATE:20230206
SUMMARY:School Term starts
END:VEVENT
The calendar then shows it properly as an all-day event. I would check that the script works before doing the next step. You can use tools like curl or wget to download it; if you use a normal browser, it will probably just download the translated file.
If you're not seeing the right thing, then it's probably the PCRE failing. You can check it online with a regex checker such as https://regex101.com. The site has saved my PCRE and a sample match, so you've got something to start with.
Calendar settings
The last thing to do is to change the URL in your calendar settings. Each calendar system has a different way of doing it. For Google Calendar they provide instructions; you want to follow the section titled "Use a link to add a public calendar".
The URL here is not the actual site's URL (which you would have put into the REMOTE_URL variable before) but the URL of your script plus the ?s=site1 part. So if your script is aliased to /myical.php, the site ID is site1 and your website is www.example.com, the URL would be https://www.example.com/myical.php?s=site1.
You should then see the events appear as all-day events on your calendar.
I bought a Thinkpad E470 laptop back in 2018, which was lying unused for
quite some time. Recently, when I wanted to use it, I found that the keyboard
was not working, especially some keys, and after some time the laptop would
hang at the Lenovo boot screen. I came back to Bangalore almost 2 years after
leaving for my hometown (WFH due to Covid) and thought it was the right time
to get my laptop back to a normal working state. After getting the keyboard
replaced I noticed that the 1TB HDD is no longer fast enough for my taste! I
have to admit I never thought I would start disliking HDDs so quickly, thanks
to modern SSD-based work laptops. So as a second upgrade I got the HDD removed
from my laptop and got a 240G SSD. Yeah, I know it's a reduction from my
original size, but I intend to continue using my old HDD via a USB SATA
enclosure as an external drive to house the extra data I need to save.
So now that I have an SSD I needed to install Debian Unstable on it again, and
this is where I tried something new. My colleague (name redacted on request)
suggested I use the GRML live CD and install Debian via debootstrap, and after
giving it some thought I decided to try this out. Some reasons for going ahead
with this are listed below:
Debian Installer does not support a proper BTRFS-based root file system. It
allows btrfs as root, but with no subvolume support. I'm also not sure about
LUKS support with btrfs as root.
I also wanted to give systemd-boot a try, as my laptop is UEFI capable and
I've slowly started disliking GRUB.
I really hate installing task-kde-desktop (yeah, you read that right, I've
been a KDE user for quite some time now), which pulls in tons of unwanted
stuff and bloat. Well, it's not just task-kde-desktop; any other task-desktop
package does the same, and I don't want too much unused stuff and too many
services running.
Disk Preparation
As a first step I went to the GRML website and downloaded the current
pre-release. Frankly, I was using GRML for the first time and was not sure
what to expect. When I booted it up I was a bit taken aback to see it is
console based, and since I did not have a wired LAN, just a plain wireless
dongle (JioFi device), I was wondering what it would take to connect. But
surprisingly, the curses-based UI was pretty straightforward and allowed me to
connect to the Wifi AP. Another thing worth noting: the rescue CD had non-free
firmware, which mattered because the laptop uses an ath10k device that needs
non-free blobs to operate.
Once I got a shell prompt in the rescue CD, the first thing I did was
reconfigure console-setup to increase the font size, which was very, very
small on default boot.
Once that was done, I did the following to create a 1G (FAT32) partition for EFI:
parted -a optimal -s /dev/sda mklabel gpt
parted -a optimal -s /dev/sda mkpart primary fat32 0% 1G
parted -a optimal -s /dev/sda set 1 esp on
mkfs.vfat -n boot_disk -F 32 /dev/sda1
So here is what I did: created a 1G vfat type partition and set the esp flag on
it. This will be mounted to /boot/efi for systemd-boot. Next I created a single
partition on the rest of the available free disk which will be used as the root
file system.
Next I encrypted the root partition using LUKS and then created the BTRFS
file system on top of it.
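The post does not list the exact commands for this step; a minimal sketch could look like the following. The partition name /dev/sda2, the ENC mapping name and the root_disk label are assumptions inferred from the commands and kernel command line shown elsewhere in this post.

```shell
# Sketch only (device, mapping name and label are assumed, not from the post):
# encrypt the second partition with LUKS2 and create btrfs on top of it.
cryptsetup luksFormat --type luks2 /dev/sda2   # prompts for a passphrase
cryptsetup open /dev/sda2 ENC                  # creates /dev/mapper/ENC
mkfs.btrfs -L root_disk /dev/mapper/ENC
```

The root_disk label matters later, since the kernel command line in this post refers to root=LABEL=root_disk.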
Next was to create subvolumes in BTRFS. I followed my colleague's suggestion
and created a top-level @ subvolume, below which I created @/home, @/var/log
and @/opt. I also enabled compression with zstd at level 1 to avoid battery
drain. Finally I marked @ as the default subvolume to avoid having to add it
to the fstab entry.
mount -o compress=zstd:1 /dev/mapper/ENC /mnt
btrfs subvol create /mnt/@
cd /mnt/@
btrfs subvol create ./home
btrfs subvol create ./opt
mkdir -p var
btrfs subvol create ./var/log
btrfs subvol set-default /mnt/@
Bootstrapping Debian
Now that the root disk was prepared, the next step was to bootstrap the root
file system. I used debootstrap for this job. One thing I missed here compared
to the installer was the ability to preseed. I tried looking around to figure
out whether debootstrap can be preseeded but did not find much. If you know
the procedure, please point me to it.
cd /mnt/
debootstrap --include=dbus,locales,tzdata unstable @/ http://deb.debian.org/debian
Well, this just gets a bare minimal installation of Debian; I need to install
the rest of the things after this step manually by chrooting into the target
folder @/.
I like the grml-chroot command for this purpose; it does most of the job of
mounting all the required directories like /dev, /proc, /sys etc. But before
entering the chroot I need to mount the ESP partition we created to /boot/efi
so that I can finalize the installation of the kernel and systemd-boot.
umount /mnt
mount -o compress=zstd:1 /dev/mapper/ENC /mnt
mkdir -p /mnt/boot/efi
mount /dev/sda1 /mnt/boot/efi
grml-chroot /mnt /bin/bash
I remounted the root subvolume @ directly to /mnt now (remember, I made @ the
default subvolume before). I also mounted the ESP partition with its FAT32
file system to /boot/efi. Finally I used grml-chroot to get into a chroot of
the newly bootstrapped file system.
Now I will install the kernel and a minimal KDE desktop, and configure locales
and time zone data for the new system. I wanted to use dracut instead of the
default initramfs-tools for the initrd. I also need to install cryptsetup and
btrfs-progs so I can decrypt and actually boot into my new system.
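For reference, a /etc/crypttab entry matching this setup would look something like the following sketch (the UUID is a placeholder for the LUKS partition's UUID, and the option list is an assumption):

```
ENC UUID=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX none luks,discard
```

Here ENC is the mapping name used when opening the LUKS device, so the initrd knows to unlock it at boot.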
I've not written the actual UUID above; this is just to show the content of
/etc/crypttab. Once these entries are added we need to recreate the initrd. I
just reconfigured the installed kernel package to retrigger the recreation of
the initrd using dracut.
Reconfiguration of locales is done by editing /etc/locale.gen to uncomment
en_US.UTF-8 and writing Asia/Kolkata to /etc/timezone. I used
DEBIAN_FRONTEND=noninteractive to avoid another prompt asking for locale and
timezone information.
I added my user with the adduser command and also set the root password. I
added my user to the sudo group so I can use sudo to elevate privileges.
Setting up systemd-boot
So now a basic usable system is ready; the last part is enabling the
systemd-boot configuration, as I'm not going to use grub. I did the following
to install systemd-boot. Frankly, I'm no expert at this; it was my colleague's
suggestion.
Before installing systemd-boot I had to set up the kernel command line. This
can be done by writing the command line to /etc/kernel/cmdline with the
following contents:
systemd.gpt_auto=no quiet root=LABEL=root_disk
I'm disabling the systemd GPT auto-generator to avoid a race condition between
the crypttab entry and the entry auto-generated by systemd. I faced this
mainly because of my stupidity of initially not adding the
root=LABEL=root_disk entry.
Finally exit from the chroot and reboot into the freshly installed system.
systemd-boot already ships a hook file, zz-systemd-boot, under /etc/kernel,
so it's pretty much usable without any manual intervention. Previously, after
a kernel installation, we had to manually update the kernel image in the EFI
partition using bootctl.
Conclusion
Though installing from a live image is not new, and debian-installer also does
the same, the difference is more control over the installation and being able
to do things the installer does not let you do (or should I say are not part
of the default installation?). If properly automated using scripts, we can
leverage this to do custom installations in large-scale environments. I know
there is FAI, but I've not explored it and felt there was too much to set up
for a simple installation with specific requirements.
So finally I have a system with Debian which differs from the default Debian
installation :-). I should thank my colleague for rekindling the nerd inside
me, who had stopped experimenting quite a long time back.
A minor routine update 0.0.5 of gettz arrived on CRAN overnight.
gettz provides a possible fallback in situations where Sys.timezone() fails to determine the system timezone. That happened when e.g. the file /etc/localtime somehow is not a link into the corresponding file with zoneinfo data in, say, /usr/share/zoneinfo. Since the package was written (in the fall of 2016), R added a similar extended heuristic approach itself.
This release updates a function signature to satisfy the more stringent tests by clang-15, updates the GitHub Actions checkout code to suppress a nag, and changes a few remaining http documentation links to https. As with previous releases: no functional changes, no new code, and no new features.
Courtesy of my CRANberries, there is a comparison to the previous release.
More information is on the gettz page. For questions or comments use the issue tracker off the GitHub repo.
If you like this or other open-source work I do, you can now sponsor me at GitHub.
I self-host some services on virtual machines (VMs), and I'm currently using Debian 11.x as the host machine, relying on the libvirt infrastructure to manage QEMU/KVM machines. While everything has worked fine for years (including on Debian 10.x), there has always been one issue causing a one-minute delay every time I install a new VM: the default images run a DHCP client that never succeeds in my environment. I never found a way to disable DHCP in the image, and none of the documented cloud-init approaches that I tried worked. A couple of days ago, after reading the AlmaLinux wiki, I found a solution that works with Debian.
The following commands create a Debian VM with a static network configuration, without the annoying one-minute DHCP delay. The three essential cloud-init keywords are the NoCloud meta-data parameters dsmode: local and the static network-interfaces setting, combined with the user-data bootcmd keyword. I'm using a Raptor CS Talos II ppc64el machine, so replace the image link with a genericcloud amd64 image if you are using x86.
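The exact seed files are not reproduced here; a minimal sketch of NoCloud meta-data and user-data implementing those three keywords might look like the following. The interface name, addresses and the bootcmd contents are illustrative assumptions, not the post's exact configuration.

```yaml
# --- meta-data (NoCloud) ---
# dsmode: local makes cloud-init run in local mode so the static config
# is applied before networking comes up; addresses here are examples.
instance-id: debian-vm-1
dsmode: local
network-interfaces: |
  auto enp0s1
  iface enp0s1 inet static
    address 192.168.122.10
    netmask 255.255.255.0
    gateway 192.168.122.1

# --- user-data ---
# bootcmd runs early on every boot; re-applying the interface config here
# is one plausible use, the original post's exact command may differ.
#cloud-config
bootcmd:
  - ifdown --force -a || true
  - ifup -a || true
```

These two files would then be passed to virt-install's --cloud-init option as the meta-data and user-data parameters.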
Unfortunately virt-install from Debian 11 does not support the cloud-init network-config parameter, so if you want to use a version 2 network configuration with cloud-init (to specify IPv6 addresses, for example) you need to replace the final virt-install command with the following.
There are still some warnings like the following, but it does not seem to cause any problem:
[FAILED] Failed to start Initial cloud-init job (pre-networking).
Finally, if you do not want the cloud-init tools installed in your VMs, I found the following set of additional user-data commands helpful. Cloud-init will not be enabled on first boot and a cron job will be added that purges some unwanted packages.
As part of debugging an upstream connection problem I've been seeing
recently, I wanted to be able to monitor the logs from my Turris
Omnia router. Here's how I
configured it to send its logs to a server I already had on the local
network.
Server setup
The first thing I did was to open up my server's
rsyslog (Debian's default syslog server) to
remote connections since it's going to be the destination host for the
router's log messages.
I added the following to /etc/rsyslog.d/router.conf:
module(load="imtcp")
input(type="imtcp" port="514")
if $fromhost-ip == '192.168.1.1' then {
    if $syslogseverity <= 5 then {
        action(type="omfile" file="/var/log/router.log")
    }
    stop
}
This is using the latest rsyslog configuration method: a handy scripting
language called
RainerScript.
Severity level 5
maps to "notice" which consists of unusual non-error conditions, and
192.168.1.1 is of course the IP address of the router on the LAN side.
With this, I'm directing all router log messages to a separate file,
filtering out anything less important than severity 5.
In order for rsyslog to pick up this new configuration file, I restarted it:
systemctl restart rsyslog.service
and checked that it was running correctly (e.g. no syntax errors in the new
config file) using:
systemctl status rsyslog.service
Since I added a new log file, I also set up log rotation for it by putting
the following in /etc/logrotate.d/router:
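The stanza itself is not reproduced above; a typical one would look like the following sketch (the rotation frequency and options here are assumptions, not the post's exact file):

```
/var/log/router.log {
        rotate 4
        weekly
        missingok
        notifempty
        compress
        delaycompress
}
```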
In addition, since I use
logcheck to monitor my server
logs and email me errors, I had to add /var/log/router.log to
/etc/logcheck/logcheck.logfiles.
Finally I opened the rsyslog port to the router in my server's firewall by
adding the following to /etc/network/iptables.up.rules:
# Allow logs from the router
-A INPUT -s 192.168.1.1 -p tcp --dport 514 -j ACCEPT
and ran iptables-apply.
With all of this in place, it was time to get the router to send messages.
Router setup
As suggested on the Turris
forum, I
ssh'ed into my router and added this in /etc/syslog-ng.d/remote.conf:
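The snippet itself is not reproduced above; a sketch of what such a remote.conf might contain is shown below. The destination name and the src source name are assumptions based on a stock Turris/OpenWrt syslog-ng setup, and 192.168.1.200 is the log server's address from this post.

```
# Forward router logs to the rsyslog server over TCP port 514
destination d_loghost {
        tcp("192.168.1.200" port(514));
};
log {
        source(src);
        destination(d_loghost);
};
```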
Setting the timezone to the same as my server was needed because the router
messages were otherwise sent with UTC timestamps.
To ensure that the destination host always gets the same IP address
(192.168.1.200), I went to the advanced DHCP configuration
page and added a
static lease for the server's MAC address so that it always gets assigned
192.168.1.200. If that wasn't already the server's IP address, you'll have
to restart it for this to take effect.
Finally, I restarted the syslog-ng daemon on the router to pick up the new
config file:
/etc/init.d/syslog-ng restart
Testing
In order to test this configuration, I opened three terminal windows:
tail -f /var/log/syslog on the server
tail -f /var/log/router.log on the server
tail -f /var/log/messages on the router
I immediately started to see messages from the router in the third window
and some of these, not all because of my severity-5 filter, were flowing to
the second window as well. Also important is that none of the messages make
it to the first window, otherwise log messages from the router would be mixed
in with the server's own logs. That's the purpose of the stop command in
/etc/rsyslog.d/router.conf.
To force a log message to be emitted by the router, simply ssh into it and
issue the following command:
issue the following command:
logger Test
It should show up in the second and third windows immediately if you've got
everything set up correctly.
As promised, in this post I'm going to explain how I've configured this blog
using hugo, asciidoctor and the papermod theme, how I publish it using
nginx, how I've integrated the remark42 comment system and how I've
automated its publication using gitea and json2file-go.
It is a long post, but I hope that at least parts of it can be interesting for
some; feel free to ignore it if that is not your case.
Hugo Configuration
Theme settings
The site is using the PaperMod theme, and as I'm using asciidoctor to publish
my content I've adjusted the settings to improve how things are shown with it.
The current config.yml file is the one shown below (probably some of the
settings are not required nor being used right now, but I'm including the
current file, so this post will always have the latest version of it):
config.yml
disableHLJS and assets.disableHLJS are set to true; we plan to use
rouge on adoc and the inclusion of the hljs assets adds styles that
collide with the ones used by rouge.
ShowToc is set to true and TocOpen is set to false to make the ToC appear
collapsed initially. My plan was to use the asciidoctor ToC, but after trying
it I believe the theme's one looks nice and I don't need to adjust styles,
although it has some issues with the html5s processor (the admonition titles
use <h6> and are shown on the ToC, which is weird); to fix it I've copied
layouts/partial/toc.html to my site repository and replaced the range of
headings to end at 5 instead of 6 (in fact 5 still seems a lot, but as I
don't think I'll use that heading level in posts it doesn't really matter).
params.profileMode values are adjusted, but for now I've left it disabled,
setting params.profileMode.enabled to false, and I've set the homeInfoParams
to show more or less the same content with the latest posts under it (I've
added some styles to my custom.css style sheet to center the text and image
of the first post to match the look and feel of the profile).
In the asciidocExt section I've adjusted the backend to use html5s, added the
asciidoctor-html5s and asciidoctor-diagram extensions to asciidoctor and set
workingFolderCurrent to true to make asciidoctor-diagram work right (I
haven't tested it yet).
Theme customisations
To write in asciidoctor using the html5s processor I've added some files to
the assets/css/extended directory:
As said before, I've added the file assets/css/extended/custom.css to make
the homeInfoParams look like the profile page, and I've also changed some
theme styles a bit to make things look better with the html5s output:
custom.css
/* Fix first entry alignment to make it look like the profile */
.first-entry {
  text-align: center;
}
.first-entry img {
  display: inline;
}
/**
 * Remove margin for .post-content code and reduce padding to make it look
 * better with the asciidoctor html5s output.
 **/
.post-content code {
  margin: auto 0;
  padding: 4px;
}
I've also added the file assets/css/extended/adoc.css with some styles taken
from asciidoctor-default.css; see this blog post about the original file.
Mine is the same after formatting it with css-beautify and editing it to use
variables for the colors to support light and dark themes:
adoc.css
The previous file uses variables from a partial copy of the theme-vars.css
file that changes the highlighted code background color and adds the color
definitions used by the admonitions:
theme-vars.css
The previous styles use font-awesome, so I've downloaded its resources for
version 4.7.0 (the one used by asciidoctor), storing font-awesome.css in the
assets/css/extended dir (that way it is merged with the rest of the .css
files) and copying the fonts to the static/assets/fonts/ dir (they will be
served directly):
FA_BASE_URL="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0"
curl "$FA_BASE_URL/css/font-awesome.css" \
  > assets/css/extended/font-awesome.css
for f in FontAwesome.otf fontawesome-webfont.eot \
         fontawesome-webfont.svg fontawesome-webfont.ttf \
         fontawesome-webfont.woff fontawesome-webfont.woff2; do
  curl "$FA_BASE_URL/fonts/$f" > "static/assets/fonts/$f"
done
As already said, the default highlighter is disabled (it provided a css
compatible with rouge), so we need a css to do the highlight styling; as
rouge provides a way to export them, I've created the
assets/css/extended/rouge.css file with the thankful_eyes theme:
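Rouge ships a rougify command that can export a theme as a stylesheet, so, assuming the rouge gem is installed, generating the file described above could be done with something like:

```shell
# Export the thankful_eyes rouge theme as CSS (requires the rouge gem)
rougify style thankful_eyes > assets/css/extended/rouge.css
```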
To support the use of the html5s backend with admonitions I've added a
variation of the example found on this blog post to
assets/js/adoc-admonitions.js:
adoc-admonitions.js
// replace the default admonitions block with a table that uses a format
// similar to the standard asciidoctor one ... as we are using fa-icons here
// there is no need to add the icons: font entry on the document.
window.addEventListener('load', function () {
  const admonitions = document.getElementsByClassName('admonition-block')
  for (let i = admonitions.length - 1; i >= 0; i--) {
    const elm = admonitions[i]
    const type = elm.classList[1]
    const title = elm.getElementsByClassName('block-title')[0];
    const label = title.getElementsByClassName('title-label')[0]
      .innerHTML.slice(0, -1);
    elm.removeChild(elm.getElementsByClassName('block-title')[0]);
    const text = elm.innerHTML
    const parent = elm.parentNode
    const tempDiv = document.createElement('div')
    tempDiv.innerHTML = `<div class="admonitionblock ${type}">
    <table>
      <tbody>
        <tr>
          <td class="icon">
            <i class="fa icon-${type}" title="${label}"></i>
          </td>
          <td class="content">
            ${text}
          </td>
        </tr>
      </tbody>
    </table>
    </div>`
    const input = tempDiv.childNodes[0]
    parent.replaceChild(input, elm)
  }
})
and enabled its minified use in the layouts/partials/extend_footer.html file
by adding the following lines to it:
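The lines are not included above; a sketch using Hugo's asset pipes (the variable name is an assumption, not the post's exact code) could look like:

```
{{ $admonitions := resources.Get "js/adoc-admonitions.js" | minify | fingerprint }}
<script defer src="{{ $admonitions.RelPermalink }}"></script>
```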
Remark42 configuration
To integrate Remark42 with the PaperMod theme I've created the file
layouts/partials/comments.html with the following content, based on the
remark42 documentation, including extra code to sync the dark/light setting
with the one set on the site:
comments.html
<div id="remark42"></div>
<script>
  var remark_config = {
    host: "{{ .Site.Params.remark42Url }}",
    site_id: "{{ .Site.Params.remark42SiteID }}",
    url: "{{ .Permalink }}",
    locale: "{{ .Site.Language.Lang }}"
  };
  (function(c) {
    /* Adjust the theme using the local-storage pref-theme if set */
    if (localStorage.getItem("pref-theme") === "dark") {
      remark_config.theme = "dark";
    } else if (localStorage.getItem("pref-theme") === "light") {
      remark_config.theme = "light";
    }
    /* Add remark42 widget */
    for (var i = 0; i < c.length; i++) {
      var d = document, s = d.createElement('script');
      s.src = remark_config.host + '/web/' + c[i] + '.js';
      s.defer = true;
      (d.head || d.body).appendChild(s);
    }
  })(remark_config.components || ['embed']);
</script>
In development I use it with anonymous comments enabled, but to avoid SPAM
the production site uses social logins (for now I've only enabled GitHub
& Google; if someone requests additional services I'll check them, but those
were the easy ones for me initially).
To support theme switching with remark42 I've also added the following inside
the layouts/partials/extend_footer.html file:
{{- if (not site.Params.disableThemeToggle) }}
<script>
  /* Change the remark42 theme when the toggle button is pressed */
  document.getElementById("theme-toggle").addEventListener("click", () => {
    if (typeof window.REMARK42 != "undefined") {
      if (document.body.className.includes('dark')) {
        window.REMARK42.changeTheme('light');
      } else {
        window.REMARK42.changeTheme('dark');
      }
    }
  });
</script>
{{- end }}
With this code, if the theme-toggle button is pressed we change the remark42
theme before the PaperMod one (that's needed here only; on page load the
remark42 theme is synced with the main one using the code from
layouts/partials/comments.html shown earlier).
Development setup
To preview the site on my laptop I'm using docker-compose with the following
configuration:
docker-compose.yaml
To run it properly we have to create the .env file with the current user and
group IDs in the APP_UID and APP_GID variables (if we don't, the files can
end up owned by a user that is not the same as the one running the services):
$ echo "APP_UID=$(id -u)\nAPP_GID=$(id -g)" > .env
The Dockerfile used to generate the sto/hugo-adoc is:
Dockerfile
FROM asciidoctor/docker-asciidoctor:latest
RUN gem install --no-document asciidoctor-html5s &&\
  apk update && apk add --no-cache curl libc6-compat &&\
  repo_path="gohugoio/hugo" &&\
  api_url="https://api.github.com/repos/$repo_path/releases/latest" &&\
  download_url="$(\
    curl -sL "$api_url" |\
    sed -n "s/^.*download_url\": \"\\(.*.extended.*Linux-64bit.tar.gz\)\"/\1/p"\
  )" &&\
  curl -sL "$download_url" -o /tmp/hugo.tgz &&\
  tar xf /tmp/hugo.tgz hugo &&\
  install hugo /usr/bin/ &&\
  rm -f hugo /tmp/hugo.tgz &&\
  /usr/bin/hugo version &&\
  apk del curl && rm -rf /var/cache/apk/*
# Expose port for live server
EXPOSE 1313
ENTRYPOINT ["/usr/bin/hugo"]
CMD [""]
If you review it you will see that I'm using the docker-asciidoctor image as
the base; the idea is that this image has all I need to work with asciidoctor,
and to use hugo I only need to download the binary from their latest release
at github (as we are using an image based on alpine we also need to install
the libc6-compat package, but once that is done things have been working fine
for me so far).
The image does not launch the server by default because I don't want it to;
in fact I use the same docker-compose.yml file to publish the site in
production, simply calling the container without the arguments passed in the
docker-compose.yml file (see later).
When running the containers with docker-compose up (or docker compose up if
you have the docker-compose-plugin package installed) we also launch a nginx
container and the remark42 service so we can test everything together.
The Dockerfile for the remark42 image is the original one with an updated
version of the init.sh script:
Dockerfile
FROM umputun/remark42:latest
COPY init.sh /init.sh
The updated init.sh is similar to the original, but allows us to use an
APP_GID variable and updates the /etc/group file of the container so the
files get the right user and group (with the original script the group is
always 1001):
init.sh
#!/sbin/dinit /bin/sh

uid="$(id -u)"

if [ "${uid}" -eq "0" ]; then
  echo "init container"

  # set container's time zone
  cp "/usr/share/zoneinfo/${TIME_ZONE}" /etc/localtime
  echo "${TIME_ZONE}" > /etc/timezone
  echo "set timezone ${TIME_ZONE} ($(date))"

  # set UID & GID for the app
  if [ "${APP_UID}" ] || [ "${APP_GID}" ]; then
    [ "${APP_UID}" ] || APP_UID="1001"
    [ "${APP_GID}" ] || APP_GID="${APP_UID}"
    echo "set custom APP_UID=${APP_UID} & APP_GID=${APP_GID}"
    sed -i "s/^app:x:1001:1001:/app:x:${APP_UID}:${APP_GID}:/" /etc/passwd
    sed -i "s/^app:x:1001:/app:x:${APP_GID}:/" /etc/group
  else
    echo "custom APP_UID and/or APP_GID not defined, using 1001:1001"
  fi
  chown -R app:app /srv /home/app
fi

echo "prepare environment"

# replace %%REMARK_URL%% by content of REMARK_URL variable
find /srv -regex '.*\.\(html\|js\|mjs\)$' -print \
  -exec sed -i "s|%%REMARK_URL%%|${REMARK_URL}|g" {} \;

if [ -n "${SITE_ID}" ]; then
  # replace "site_id: 'remark'" by SITE_ID
  sed -i "s|'remark'|'${SITE_ID}'|g" /srv/web/*.html
fi

echo "execute \"$*\""
if [ "${uid}" -eq "0" ]; then
  exec su-exec app "$@"
else
  exec "$@"
fi
The environment file used with remark42 for development is quite minimal:
env.dev
Production setup
The VM where I'm publishing the blog runs Debian GNU/Linux and uses binaries
from local packages and applications packaged inside containers.
To run the containers I'm using docker-ce (I could have used podman instead,
but I already had it installed on the machine, so I stayed with it).
The binaries used on this project are included on the following packages from
the main Debian repository:
git to clone & pull the repository,
jq to parse json files from shell scripts,
json2file-go to save the webhook messages to files,
inotify-tools to detect when new files are stored by json2file-go and
launch scripts to process them,
nginx to publish the site using HTTPS and work as proxy for
json2file-go and remark42 (I run it using a container),
task-spooler to queue the scripts that update the deployment.
And I'm using docker and docker compose from the packages in the docker
repository:
docker-ce to run the containers,
docker-compose-plugin to run docker compose (it is a plugin, so no - in
the name).
Repository checkout
To manage the git repository I've created a deploy key, added it to gitea
and cloned the project to the /srv/blogops path (that route is owned by a
regular user that has permissions to run docker, as I said before).
Compiling the site with hugo
To compile the site we are using the docker-compose.yml file seen before; to
be able to run it we first build the container images and, once we have them,
we launch hugo using docker compose run:
$ cd /srv/blogops
$ git pull
$ docker compose build
$ if [ -d "./public" ]; then rm -rf ./public; fi
$ docker compose run hugo --
The compilation leaves the static HTML in /srv/blogops/public (we remove the
directory first because hugo does not clean the destination folder as
jekyll does).
The deploy script re-generates the site as described and moves the public
directory to its final place for publishing.
Running remark42 with docker
In the /srv/blogops/remark42 folder I have the following docker-compose.yml:
The ../.env file is loaded to get the APP_UID and APP_GID variables that
are used by my version of the init.sh script to adjust file permissions, and
the env.prod file contains the rest of the settings for remark42, including
the social network tokens (see the remark42 documentation for the available
parameters; I don't include my configuration here because some of them are
secrets).
Nginx configuration
The nginx configuration for the blogops.mixinet.net site is as simple as:
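The configuration itself is not reproduced here; a hedged sketch consistent with the description that follows (the certificate paths are assumed to be the certbot defaults, not taken from the post) could be:

```nginx
server {
    listen 443 ssl;
    server_name blogops.mixinet.net;
    ssl_certificate /etc/letsencrypt/live/blogops.mixinet.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/blogops.mixinet.net/privkey.pem;
    root /srv/blogops/nginx/public_html;
    index index.html;
}
```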
On this configuration the certificates are managed by
certbot and the server root directory is on
/srv/blogops/nginx/public_html and not on /srv/blogops/public; the reason
for that is that I want to be able to compile without affecting the running
site, the deployment script generates the site on /srv/blogops/public and if
all works well we rename folders to do the switch, making the change feel almost
atomic.
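The rename-based switch described above can be sketched with a couple of mv commands. This is a simplified, self-contained sketch (it uses a temporary directory instead of the real /srv/blogops paths so it can run anywhere; the actual deployment script shown later adds logging and error handling):

```shell
# Simplified sketch of the rename-based site switch. The real paths are
# /srv/blogops/public (build output) and /srv/blogops/nginx/public_html
# (the directory served by nginx); here we use a temporary directory.
BASE_DIR="$(mktemp -d)"
PUBLIC_DIR="$BASE_DIR/public"
PUBLIC_HTML_DIR="$BASE_DIR/nginx/public_html"
mkdir -p "$PUBLIC_DIR" "$PUBLIC_HTML_DIR"
echo "new" >"$PUBLIC_DIR/index.html"
echo "old" >"$PUBLIC_HTML_DIR/index.html"
TS="$(date +%Y%m%d-%H%M%S)"
# Keep the previous copy around with a timestamp suffix ...
[ -d "$PUBLIC_HTML_DIR" ] && mv "$PUBLIC_HTML_DIR" "$PUBLIC_HTML_DIR-$TS"
# ... and rename the fresh build into place; a rename inside the same
# filesystem is a single fast operation, so the switch feels almost atomic.
mv "$PUBLIC_DIR" "$PUBLIC_HTML_DIR"
cat "$PUBLIC_HTML_DIR/index.html"
```

Because both directories live on the same filesystem the final mv never copies data, which is what keeps the visible switch window tiny.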
json2file-go configuration
As I have a working WireGuard VPN between the
machine running gitea at my home and the VM where the blog is served, I'm
going to configure json2file-go to listen for connections on a high port
using a self-signed certificate, listening only on IP addresses reachable
through the VPN.
To do it we create a systemd socket to run json2file-go and adjust its
configuration to listen on a private IP (we use the FreeBind option on its
definition to be able to launch the service even when the IP is not available,
that is, when the VPN is down).
The following script can be used to set up the json2file-go configuration:
setup-json2file.sh
#!/bin/sh
set -e
# ---------
# VARIABLES
# ---------
BASE_DIR="/srv/blogops/webhook"
J2F_DIR="$BASE_DIR/json2file"
TLS_DIR="$BASE_DIR/tls"
J2F_SERVICE_NAME="json2file-go"
J2F_SERVICE_DIR="/etc/systemd/system/json2file-go.service.d"
J2F_SERVICE_OVERRIDE="$J2F_SERVICE_DIR/override.conf"
J2F_SOCKET_DIR="/etc/systemd/system/json2file-go.socket.d"
J2F_SOCKET_OVERRIDE="$J2F_SOCKET_DIR/override.conf"
J2F_BASEDIR_FILE="/etc/json2file-go/basedir"
J2F_DIRLIST_FILE="/etc/json2file-go/dirlist"
J2F_CRT_FILE="/etc/json2file-go/certfile"
J2F_KEY_FILE="/etc/json2file-go/keyfile"
J2F_CRT_PATH="$TLS_DIR/crt.pem"
J2F_KEY_PATH="$TLS_DIR/key.pem"
# ----
# MAIN
# ----
# Install packages used with json2file for the blogops site
sudo apt update
sudo apt install -y json2file-go uuid
if [ -z "$(type mkcert)" ]; then
  sudo apt install -y mkcert
fi
sudo apt clean
# Configuration file values
J2F_USER="$(id -u)"
J2F_GROUP="$(id -g)"
J2F_DIRLIST="blogops:$(uuid)"
J2F_LISTEN_STREAM="172.31.31.1:4443"
# Configure json2file
[ -d "$J2F_DIR" ] || mkdir "$J2F_DIR"
sudo sh -c "echo '$J2F_DIR' >'$J2F_BASEDIR_FILE'"
[ -d "$TLS_DIR" ] || mkdir "$TLS_DIR"
if [ ! -f "$J2F_CRT_PATH" ] || [ ! -f "$J2F_KEY_PATH" ]; then
  mkcert -cert-file "$J2F_CRT_PATH" -key-file "$J2F_KEY_PATH" "$(hostname -f)"
fi
sudo sh -c "echo '$J2F_CRT_PATH' >'$J2F_CRT_FILE'"
sudo sh -c "echo '$J2F_KEY_PATH' >'$J2F_KEY_FILE'"
sudo sh -c "cat >'$J2F_DIRLIST_FILE'" <<EOF
$(echo "$J2F_DIRLIST" | tr ';' '\n')
EOF
# Service override
[ -d "$J2F_SERVICE_DIR" ] || sudo mkdir "$J2F_SERVICE_DIR"
sudo sh -c "cat >'$J2F_SERVICE_OVERRIDE'" <<EOF
[Service]
User=$J2F_USER
Group=$J2F_GROUP
EOF
# Socket override
[ -d "$J2F_SOCKET_DIR" ] || sudo mkdir "$J2F_SOCKET_DIR"
sudo sh -c "cat >'$J2F_SOCKET_OVERRIDE'" <<EOF
[Socket]
# Set FreeBind to listen on missing addresses (the VPN can be down sometimes)
FreeBind=true
# Set ListenStream to nothing to clear its value and add the new value later
ListenStream=
ListenStream=$J2F_LISTEN_STREAM
EOF
# Restart and enable service
sudo systemctl daemon-reload
sudo systemctl stop "$J2F_SERVICE_NAME"
sudo systemctl start "$J2F_SERVICE_NAME"
sudo systemctl enable "$J2F_SERVICE_NAME"
# ----
# vim: ts=2:sw=2:et:ai:sts=2
Warning: The script uses mkcert to create the temporary certificates, to install the
package on bullseye the backports repository must be available.
Gitea configuration
To make gitea use our json2file-go server we go to the project and enter into
the hooks/gitea/new page; once there we create a new webhook of type gitea,
set the target URL to https://172.31.31.1:4443/blogops and on the secret
field we put the token generated with uuid by the setup script:
sed -n -e 's/blogops://p' /etc/json2file-go/dirlist
The rest of the settings can be left as they are:
Trigger on: Push events
Branch filter: *
Warning: We are using an internal IP and a self-signed certificate, which means we
have to check that the webhook section of the app.ini of our gitea
server allows us to call the IP and skips the TLS verification (you can see the
available options on the
gitea
documentation).
The [webhook] section of my server looks like this:
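My exact settings are not reproduced here, but a minimal sketch of such a [webhook] section, using the VPN address from this post, could look like this (ALLOWED_HOST_LIST and SKIP_TLS_VERIFY are the relevant gitea options):

```ini
[webhook]
; Allow webhook deliveries to the private VPN address used by json2file-go
ALLOWED_HOST_LIST = 172.31.31.1
; The certificate is self-signed, so TLS verification must be skipped
SKIP_TLS_VERIFY = true
```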
Once we have the webhook configured we can try it and if it works our
json2file server will store the file on the
/srv/blogops/webhook/json2file/blogops/ folder.
The json2file spooler script
With the previous configuration our system is ready to receive webhook calls
from gitea and store the messages on files, but we have to do something to
process those files once they are saved in our machine.
An option could be to use a cronjob to look for new files, but we can do
better on Linux using inotify: we will use the inotifywait command from
inotify-tools to watch the json2file output directory and execute a script
each time a new file is moved inside it or closed after writing
(IN_CLOSE_WRITE and IN_MOVED_TO events).
To avoid concurrency problems we are going to use task-spooler to launch the
scripts that process the webhooks using a queue of length 1, so they are
executed one by one in a FIFO queue.
The spooler script is this:
blogops-spooler.sh
#!/bin/sh
set -e
# ---------
# VARIABLES
# ---------
BASE_DIR="/srv/blogops/webhook"
BIN_DIR="$BASE_DIR/bin"
TSP_DIR="$BASE_DIR/tsp"
WEBHOOK_COMMAND="$BIN_DIR/blogops-webhook.sh"
# ---------
# FUNCTIONS
# ---------
queue_job() {
  echo "Queuing job to process file '$1'"
  TMPDIR="$TSP_DIR" TS_SLOTS="1" TS_MAXFINISHED="10" \
    tsp -n "$WEBHOOK_COMMAND" "$1"
}
# ----
# MAIN
# ----
INPUT_DIR="$1"
if [ ! -d "$INPUT_DIR" ]; then
  echo "Input directory '$INPUT_DIR' does not exist, aborting!"
  exit 1
fi
[ -d "$TSP_DIR" ] || mkdir "$TSP_DIR"
echo "Processing existing files under '$INPUT_DIR'"
find "$INPUT_DIR" -type f | sort | while read -r _filename; do
  queue_job "$_filename"
done
# Use inotifywait to process new files
echo "Watching for new files under '$INPUT_DIR'"
inotifywait -q -m -e close_write,moved_to --format "%w%f" -r "$INPUT_DIR" |
  while read -r _filename; do
    queue_job "$_filename"
  done
# ----
# vim: ts=2:sw=2:et:ai:sts=2
To run it as a daemon we install it as a systemd service using the following
script:
setup-spooler.sh
#!/bin/sh
set -e
# ---------
# VARIABLES
# ---------
BASE_DIR="/srv/blogops/webhook"
BIN_DIR="$BASE_DIR/bin"
J2F_DIR="$BASE_DIR/json2file"
SPOOLER_COMMAND="$BIN_DIR/blogops-spooler.sh '$J2F_DIR'"
SPOOLER_SERVICE_NAME="blogops-j2f-spooler"
SPOOLER_SERVICE_FILE="/etc/systemd/system/$SPOOLER_SERVICE_NAME.service"
# Configuration file values
J2F_USER="$(id -u)"
J2F_GROUP="$(id -g)"
# ----
# MAIN
# ----
# Install packages used with the webhook processor
sudo apt update
sudo apt install -y inotify-tools jq task-spooler
sudo apt clean
# Configure process service
sudo sh -c "cat > $SPOOLER_SERVICE_FILE" <<EOF
[Install]
WantedBy=multi-user.target
[Unit]
Description=json2file processor for $J2F_USER
After=docker.service
[Service]
Type=simple
User=$J2F_USER
Group=$J2F_GROUP
ExecStart=$SPOOLER_COMMAND
EOF
# Restart and enable service
sudo systemctl daemon-reload
sudo systemctl stop "$SPOOLER_SERVICE_NAME" || true
sudo systemctl start "$SPOOLER_SERVICE_NAME"
sudo systemctl enable "$SPOOLER_SERVICE_NAME"
# ----
# vim: ts=2:sw=2:et:ai:sts=2
The gitea webhook processor
Finally, the script that processes the JSON files does the following:
First, it checks if the repository and branch are right,
Then, it fetches and checks out the commit referenced on the JSON file,
Once the files are updated, compiles the site using hugo with docker
compose,
If the compilation succeeds the script renames directories to swap the old
version of the site with the new one.
If there is a failure the script aborts, but before doing so, or if the swap
succeeded, the system sends an email to the configured address and/or the user
that pushed updates to the repository with a log of what happened.
The current script is this one:
blogops-webhook.sh
#!/bin/sh
set -e
# ---------
# VARIABLES
# ---------
# Values
REPO_REF="refs/heads/main"
REPO_CLONE_URL="https://gitea.mixinet.net/mixinet/blogops.git"
MAIL_PREFIX="[BLOGOPS-WEBHOOK] "
# Address that gets all messages, leave it empty if not wanted
MAIL_TO_ADDR="blogops@mixinet.net"
# If the following variable is set to 'true' the pusher gets mail on failures
MAIL_ERRFILE="false"
# If the following variable is set to 'true' the pusher gets mail on success
MAIL_LOGFILE="false"
# gitea's conf/app.ini value of NO_REPLY_ADDRESS, it is used for email domains
# when the KeepEmailPrivate option is enabled for a user
NO_REPLY_ADDRESS="noreply.example.org"
# Directories
BASE_DIR="/srv/blogops"
PUBLIC_DIR="$BASE_DIR/public"
NGINX_BASE_DIR="$BASE_DIR/nginx"
PUBLIC_HTML_DIR="$NGINX_BASE_DIR/public_html"
WEBHOOK_BASE_DIR="$BASE_DIR/webhook"
WEBHOOK_SPOOL_DIR="$WEBHOOK_BASE_DIR/spool"
WEBHOOK_ACCEPTED="$WEBHOOK_SPOOL_DIR/accepted"
WEBHOOK_DEPLOYED="$WEBHOOK_SPOOL_DIR/deployed"
WEBHOOK_REJECTED="$WEBHOOK_SPOOL_DIR/rejected"
WEBHOOK_TROUBLED="$WEBHOOK_SPOOL_DIR/troubled"
WEBHOOK_LOG_DIR="$WEBHOOK_SPOOL_DIR/log"
# Files
TODAY="$(date +%Y%m%d)"
OUTPUT_BASENAME="$(date +%Y%m%d-%H%M%S.%N)"
WEBHOOK_LOGFILE_PATH="$WEBHOOK_LOG_DIR/$OUTPUT_BASENAME.log"
WEBHOOK_ACCEPTED_JSON="$WEBHOOK_ACCEPTED/$OUTPUT_BASENAME.json"
WEBHOOK_ACCEPTED_LOGF="$WEBHOOK_ACCEPTED/$OUTPUT_BASENAME.log"
WEBHOOK_REJECTED_TODAY="$WEBHOOK_REJECTED/$TODAY"
WEBHOOK_REJECTED_JSON="$WEBHOOK_REJECTED_TODAY/$OUTPUT_BASENAME.json"
WEBHOOK_REJECTED_LOGF="$WEBHOOK_REJECTED_TODAY/$OUTPUT_BASENAME.log"
WEBHOOK_DEPLOYED_TODAY="$WEBHOOK_DEPLOYED/$TODAY"
WEBHOOK_DEPLOYED_JSON="$WEBHOOK_DEPLOYED_TODAY/$OUTPUT_BASENAME.json"
WEBHOOK_DEPLOYED_LOGF="$WEBHOOK_DEPLOYED_TODAY/$OUTPUT_BASENAME.log"
WEBHOOK_TROUBLED_TODAY="$WEBHOOK_TROUBLED/$TODAY"
WEBHOOK_TROUBLED_JSON="$WEBHOOK_TROUBLED_TODAY/$OUTPUT_BASENAME.json"
WEBHOOK_TROUBLED_LOGF="$WEBHOOK_TROUBLED_TODAY/$OUTPUT_BASENAME.log"
# Query to get variables from a gitea webhook json
ENV_VARS_QUERY="$(
  printf "%s" \
    '(. | @sh "gt_ref=\(.ref);"),' \
    '(. | @sh "gt_after=\(.after);"),' \
    '(.repository | @sh "gt_repo_clone_url=\(.clone_url);"),' \
    '(.repository | @sh "gt_repo_name=\(.name);"),' \
    '(.pusher | @sh "gt_pusher_full_name=\(.full_name);"),' \
    '(.pusher | @sh "gt_pusher_email=\(.email);")'
)"
# ---------
# Functions
# ---------
webhook_log() {
  echo "$(date -R) $*" >>"$WEBHOOK_LOGFILE_PATH"
}
webhook_check_directories() {
  for _d in "$WEBHOOK_SPOOL_DIR" "$WEBHOOK_ACCEPTED" "$WEBHOOK_DEPLOYED" \
    "$WEBHOOK_REJECTED" "$WEBHOOK_TROUBLED" "$WEBHOOK_LOG_DIR"; do
    [ -d "$_d" ] || mkdir "$_d"
  done
}
webhook_clean_directories() {
  # Try to remove empty dirs
  for _d in "$WEBHOOK_ACCEPTED" "$WEBHOOK_DEPLOYED" "$WEBHOOK_REJECTED" \
    "$WEBHOOK_TROUBLED" "$WEBHOOK_LOG_DIR" "$WEBHOOK_SPOOL_DIR"; do
    if [ -d "$_d" ]; then
      rmdir "$_d" 2>/dev/null || true
    fi
  done
}
webhook_accept() {
  webhook_log "Accepted: $*"
  mv "$WEBHOOK_JSON_INPUT_FILE" "$WEBHOOK_ACCEPTED_JSON"
  mv "$WEBHOOK_LOGFILE_PATH" "$WEBHOOK_ACCEPTED_LOGF"
  WEBHOOK_LOGFILE_PATH="$WEBHOOK_ACCEPTED_LOGF"
}
webhook_reject() {
  [ -d "$WEBHOOK_REJECTED_TODAY" ] || mkdir "$WEBHOOK_REJECTED_TODAY"
  webhook_log "Rejected: $*"
  if [ -f "$WEBHOOK_JSON_INPUT_FILE" ]; then
    mv "$WEBHOOK_JSON_INPUT_FILE" "$WEBHOOK_REJECTED_JSON"
  fi
  mv "$WEBHOOK_LOGFILE_PATH" "$WEBHOOK_REJECTED_LOGF"
  exit 0
}
webhook_deployed() {
  [ -d "$WEBHOOK_DEPLOYED_TODAY" ] || mkdir "$WEBHOOK_DEPLOYED_TODAY"
  webhook_log "Deployed: $*"
  mv "$WEBHOOK_ACCEPTED_JSON" "$WEBHOOK_DEPLOYED_JSON"
  mv "$WEBHOOK_ACCEPTED_LOGF" "$WEBHOOK_DEPLOYED_LOGF"
  WEBHOOK_LOGFILE_PATH="$WEBHOOK_DEPLOYED_LOGF"
}
webhook_troubled() {
  [ -d "$WEBHOOK_TROUBLED_TODAY" ] || mkdir "$WEBHOOK_TROUBLED_TODAY"
  webhook_log "Troubled: $*"
  mv "$WEBHOOK_ACCEPTED_JSON" "$WEBHOOK_TROUBLED_JSON"
  mv "$WEBHOOK_ACCEPTED_LOGF" "$WEBHOOK_TROUBLED_LOGF"
  WEBHOOK_LOGFILE_PATH="$WEBHOOK_TROUBLED_LOGF"
}
print_mailto() {
  _addr="$1"
  _user_email=""
  # Add the pusher email address unless it is from the domain NO_REPLY_ADDRESS,
  # which should match the value of that variable on the gitea 'app.ini' (it
  # is the domain used for emails when the user hides it).
  # shellcheck disable=SC2154
  if [ -n "${gt_pusher_email##*@"${NO_REPLY_ADDRESS}"}" ] &&
    [ -z "${gt_pusher_email##*@*}" ]; then
    _user_email="\"$gt_pusher_full_name <$gt_pusher_email>\""
  fi
  if [ "$_addr" ] && [ "$_user_email" ]; then
    echo "$_addr,$_user_email"
  elif [ "$_user_email" ]; then
    echo "$_user_email"
  elif [ "$_addr" ]; then
    echo "$_addr"
  fi
}
mail_success() {
  to_addr="$MAIL_TO_ADDR"
  if [ "$MAIL_LOGFILE" = "true" ]; then
    to_addr="$(print_mailto "$to_addr")"
  fi
  if [ "$to_addr" ]; then
    # shellcheck disable=SC2154
    subject="OK - $gt_repo_name updated to commit '$gt_after'"
    mail -s "${MAIL_PREFIX}${subject}" "$to_addr" \
      <"$WEBHOOK_LOGFILE_PATH"
  fi
}
mail_failure() {
  to_addr="$MAIL_TO_ADDR"
  if [ "$MAIL_ERRFILE" = true ]; then
    to_addr="$(print_mailto "$to_addr")"
  fi
  if [ "$to_addr" ]; then
    # shellcheck disable=SC2154
    subject="KO - $gt_repo_name update FAILED for commit '$gt_after'"
    mail -s "${MAIL_PREFIX}${subject}" "$to_addr" \
      <"$WEBHOOK_LOGFILE_PATH"
  fi
}
# ----
# MAIN
# ----
# Check directories
webhook_check_directories
# Go to the base directory
cd "$BASE_DIR"
# Check if the file exists
WEBHOOK_JSON_INPUT_FILE="$1"
if [ ! -f "$WEBHOOK_JSON_INPUT_FILE" ]; then
  webhook_reject "Input arg '$1' is not a file, aborting"
fi
# Parse the file
webhook_log "Processing file '$WEBHOOK_JSON_INPUT_FILE'"
eval "$(jq -r "$ENV_VARS_QUERY" "$WEBHOOK_JSON_INPUT_FILE")"
# Check that the repository clone url is right
# shellcheck disable=SC2154
if [ "$gt_repo_clone_url" != "$REPO_CLONE_URL" ]; then
  webhook_reject "Wrong repository: '$gt_repo_clone_url'"
fi
# Check that the branch is the right one
# shellcheck disable=SC2154
if [ "$gt_ref" != "$REPO_REF" ]; then
  webhook_reject "Wrong repository ref: '$gt_ref'"
fi
# Accept the file
# shellcheck disable=SC2154
webhook_accept "Processing '$gt_repo_name'"
# Update the checkout
ret="0"
git fetch >>"$WEBHOOK_LOGFILE_PATH" 2>&1 || ret="$?"
if [ "$ret" -ne "0" ]; then
  webhook_troubled "Repository fetch failed"
  mail_failure
fi
# shellcheck disable=SC2154
git checkout "$gt_after" >>"$WEBHOOK_LOGFILE_PATH" 2>&1 || ret="$?"
if [ "$ret" -ne "0" ]; then
  webhook_troubled "Repository checkout failed"
  mail_failure
fi
# Remove the build dir if present
if [ -d "$PUBLIC_DIR" ]; then
  rm -rf "$PUBLIC_DIR"
fi
# Build site
docker compose run hugo -- >>"$WEBHOOK_LOGFILE_PATH" 2>&1 || ret="$?"
# go back to the main branch
git switch main && git pull
# Fail if public dir was missing
if [ "$ret" -ne "0" ] || [ ! -d "$PUBLIC_DIR" ]; then
  webhook_troubled "Site build failed"
  mail_failure
fi
# Remove old public_html copies
webhook_log 'Removing old site versions, if present'
find "$NGINX_BASE_DIR" -mindepth 1 -maxdepth 1 -name 'public_html-*' -type d \
  -exec rm -rf {} \; >>"$WEBHOOK_LOGFILE_PATH" 2>&1 || ret="$?"
if [ "$ret" -ne "0" ]; then
  webhook_troubled "Removal of old site versions failed"
  mail_failure
fi
# Switch site directory
TS="$(date +%Y%m%d-%H%M%S)"
if [ -d "$PUBLIC_HTML_DIR" ]; then
  webhook_log "Moving '$PUBLIC_HTML_DIR' to '$PUBLIC_HTML_DIR-$TS'"
  mv "$PUBLIC_HTML_DIR" "$PUBLIC_HTML_DIR-$TS" >>"$WEBHOOK_LOGFILE_PATH" 2>&1 ||
    ret="$?"
fi
if [ "$ret" -eq "0" ]; then
  webhook_log "Moving '$PUBLIC_DIR' to '$PUBLIC_HTML_DIR'"
  mv "$PUBLIC_DIR" "$PUBLIC_HTML_DIR" >>"$WEBHOOK_LOGFILE_PATH" 2>&1 ||
    ret="$?"
fi
if [ "$ret" -ne "0" ]; then
  webhook_troubled "Site switch failed"
  mail_failure
else
  webhook_deployed "Site deployed successfully"
  mail_success
fi
# ----
# vim: ts=2:sw=2:et:ai:sts=2
Welcome to the February 2022 report from the Reproducible Builds project. In these reports, we try to round-up the important things we and others have been up to over the past month. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website.
Jiawen Xiong, Yong Shi, Boyuan Chen, Filipe R. Cogo and Zhen Ming Jiang have published a new paper titled Towards Build Verifiability for Java-based Systems (PDF). The abstract of the paper contains the following:
Various efforts towards build verifiability have been made to C/C++-based systems, yet the techniques for Java-based systems are not systematic and are often specific to a particular build tool (eg. Maven). In this study, we present a systematic approach towards build verifiability on Java-based systems.
We first define the problem, and then provide insight into the challenges of making real-world software build in a reproducible manner, that is, when every build generates bit-for-bit identical results. Through the experience of the Reproducible Builds project making the Debian Linux distribution reproducible, we also describe the affinity between reproducibility and quality assurance (QA).
In openSUSE, Bernhard M. Wiedemann posted his monthly reproducible builds status report.
On our mailing list this month, Thomas Schmitt started a thread around the SOURCE_DATE_EPOCH specification related to formats that cannot help embedding potentially timezone-specific timestamps. (Full thread index.)
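For context, the point of SOURCE_DATE_EPOCH is that a build honouring it clamps any embedded timestamp to that value and formats it in UTC, so the output does not depend on the build machine's clock or timezone. A minimal sketch of a build script honouring it:

```shell
# Use SOURCE_DATE_EPOCH (seconds since the Unix epoch) when set, so two
# builds of the same source embed the same timestamp; fall back to "now".
SOURCE_DATE_EPOCH="${SOURCE_DATE_EPOCH:-$(date +%s)}"
# Always format in UTC (-u): local timezones would make builds differ.
BUILD_DATE="$(date -u -d "@$SOURCE_DATE_EPOCH" +%Y-%m-%dT%H:%M:%SZ)"
echo "$BUILD_DATE"
```

Distributions typically export SOURCE_DATE_EPOCH from the last changelog entry or the last commit date, so the value is a property of the source, not of the build run.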
The Yocto Project is pleased to report that its core metadata (OpenEmbedded-Core) is now reproducible for all recipes (100% coverage) after issues with newer languages such as Golang were resolved. This was announced in their recent Year in Review publication. It is of particular interest for security updates, as systems can have specific components updated while reducing the risk of other unintended changes, making the sections of the system that change very clear for audit.
The project is now also making heavy use of equivalence of build output to determine whether further items in builds need to be rebuilt or whether cached previously built items can be used. As mentioned in the article above, there are now public servers sharing this equivalence information. Reproducibility is key in making this possible and effective to reduce build times/costs/resource usage.
diffoscope
diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 203, 204, 205 and 206 to Debian unstable, as well as made the following changes to the code itself:
Bug fixes:
Fix a file(1)-related regression where Debian .changes files that contained non-ASCII text were not identified as such, therefore resulting in seemingly arbitrary packages not actually comparing the nested files themselves. The non-ASCII parts were typically in the Maintainer or in the changelog text. [][]
Fix a regression when comparing directories against non-directories. [][]
If we fail to scan using binwalk, return False from BinwalkFile.recognizes. []
If we fail to import binwalk, don't report that we are missing the Python rpm module! []
Testsuite improvements:
Add a test for recent file(1) issue regarding .changes files. []
Use our assert_diff utility where we can within the test_directory.py set of tests. []
Don't run our binwalk-related tests as root or fakeroot. The latest version of binwalk has some new security protection against this. []
Codebase improvements:
Drop the _PATH suffix from module-level globals that are not paths. []
Tidy some control flow in Difference._reverse_self. []
Don't print a warning to the console regarding NT_GNU_BUILD_ID changes. []
In addition, Mattia Rizzolo updated the Debian packaging to ensure that diffoscope and diffoscope-minimal packages have the same version. []
Move the contributors.sh Bash/shell script into a Python script. [][][]
Daniel Shahaf:
Try a different Markdown footnote content syntax to work around a rendering issue. [][][]
Holger Levsen:
Make a huge number of changes to the Who is involved? page, including pre-populating a large number of contributors who cannot be identified from the metadata of the website itself. [][][][][]
Improve linking to sponsors in sidebar navigation. []
Drop the sponsors paragraph as the navigation is clearer now. []
Upstream patches
The Reproducible Builds project attempts to fix as many currently-unreproducible packages as possible. February s patches included the following:
Testing framework
The Reproducible Builds project runs a significant testing framework at tests.reproducible-builds.org, to check packages and other artifacts for reproducibility. This month, the following changes were made:
Daniel Golle:
Update the OpenWrt configuration to not depend on the host LLVM, adding lines to the .config seed to build LLVM for eBPF from source. []
Preserve more OpenWrt-related build artifacts. []
Holger Levsen:
Temporarily use a different Git tree when building OpenWrt as our tests had been broken since September 2020. This was reverted after the patch in question was accepted by Paul Spooren into the canonical openwrt.git repository the next day.
Various improvements to debugging OpenWrt reproducibility. [][][][][]
Ignore useradd warnings when building packages. []
Update the script to powercycle armhf architecture nodes to add a hint about where the nodes named virt-* are. []
Update the node health check to also fix failed logrotate and man-db services. []
Mattia Rizzolo:
Update the website job after contributors.sh script was rewritten in Python. []
Make sure to set the DIFFOSCOPE environment variable when available. []
Vagrant Cascadian:
Various updates to the diffoscope timeouts. [][][]
Node maintenance was also performed by Holger Levsen [] and Vagrant Cascadian [].
Finally
If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:
I am going to start a new Linux focused FOSS online meeting for people in Australia and nearby areas. People can join from anywhere but the aim will be to support people in nearby areas.
To cover the time zone range for Australia this requires a meeting on a weekend. I'm thinking of the first Saturday of the month at 1PM Melbourne/Sydney time, that would be 10AM in WA and 3PM in NZ. We may have corner cases of daylight savings starting and ending on different days, but that shouldn't be a big deal as I think those times can vary by an hour either way without being too inconvenient for anyone.
Note that I describe the meeting as Linux focused because my plans include having a meeting dedicated to different versions of BSD Unix and a meeting dedicated to the HURD. But those meetings will be mainly for Linux people to learn about other Unix-like OSs.
One focus I want to have for the meetings is hands-on work, live demonstrations, and short, highly time-relevant talks. There are more lectures on YouTube than anyone could watch in a lifetime (see the Linux.conf.au channel for some good ones [1]). So I want to run events that give benefits people can't gain from watching YouTube on their own.
Russell Stuart and I have been kicking around ideas for this for a while. I think that the solution is to just do it. I know that Saturday won't work for everyone (no day will) but it will work for many people. I am happy to discuss changing the start time by an hour or two if that seems likely to get more people. But I'm not particularly interested in trying to make it convenient for people in Hawaii or India; my idea is for an Australia/NZ focused event. I would be more than happy to share lecture notes etc. with people in other countries who run similar events. As an aside, I'd be happy to give a talk for an online meeting at a Hawaiian LUG as the timezone is good for me.
Please pencil in 1PM Melbourne time on the 5th of Feb for the first meeting. The meeting requirements will be a PC with good Internet access running a recent web browser and an ssh client for the hands-on stuff. A microphone or webcam is NOT required; any questions you wish to ask can be done with text if that's what you prefer.
Suggestions for the name of the group are welcome.
A friend recently reminded me of the existence of chrony, a
"versatile implementation of the Network Time Protocol (NTP)". The
excellent introduction is worth quoting in full:
It can synchronise the system clock with NTP servers, reference
clocks (e.g. GPS receiver), and manual input using wristwatch and
keyboard. It can also operate as an NTPv4 (RFC 5905) server and peer
to provide a time service to other computers in the network.
It is designed to perform well in a wide range of conditions,
including intermittent network connections, heavily congested
networks, changing temperatures (ordinary computer clocks are
sensitive to temperature), and systems that do not run continuously,
or run on a virtual machine.
Typical accuracy between two machines synchronised over the Internet
is within a few milliseconds; on a LAN, accuracy is typically in
tens of microseconds. With hardware timestamping, or a hardware
reference clock, sub-microsecond accuracy may be possible.
Now that's already great documentation right there. What it is, why
it's good, and what to expect from it. I want more. They have a very
handy comparison table between chrony, ntp and
openntpd.
My problem with OpenNTPd
Following concerns surrounding the security (and complexity) of the
venerable ntp program, I have, a long time ago, switched to using
openntpd on all my computers. I hadn't thought about it until I
recently noticed a lot of noise on one of my servers:
jan 18 10:09:49 curie ntpd[1069]: adjusting local clock by -1.604366s
jan 18 10:08:18 curie ntpd[1069]: adjusting local clock by -1.577608s
jan 18 10:05:02 curie ntpd[1069]: adjusting local clock by -1.574683s
jan 18 10:04:00 curie ntpd[1069]: adjusting local clock by -1.573240s
jan 18 10:02:26 curie ntpd[1069]: adjusting local clock by -1.569592s
You read that right, openntpd was constantly rewinding the clock,
sometimes in less than two minutes. The above log was taken while
doing diagnostics, looking at the last 30 minutes of logs. So, on
average, one 1.5-second rewind every 6 minutes!
That might be due to a dying real time clock (RTC) or some other
hardware problem. I know for a fact that the CMOS battery on that
computer (curie) died and I wasn't able to replace
it (!). So that's partly garbage-in, garbage-out here. But still, I
was curious to see how chrony would behave... (Spoiler: much better.)
But I also had trouble on another workstation, that one a much more
recent machine (angela). First, it seems OpenNTPd
would just fail at boot time:
anarcat@angela:~(main)$ sudo systemctl status openntpd
openntpd.service - OpenNTPd Network Time Protocol
Loaded: loaded (/lib/systemd/system/openntpd.service; enabled; vendor pres>
Active: inactive (dead) since Sun 2022-01-23 09:54:03 EST; 6h ago
Docs: man:openntpd(8)
Process: 3291 ExecStartPre=/usr/sbin/ntpd -n $DAEMON_OPTS (code=exited, sta>
Process: 3294 ExecStart=/usr/sbin/ntpd $DAEMON_OPTS (code=exited, status=0/>
Main PID: 3298 (code=exited, status=0/SUCCESS)
CPU: 34ms
jan 23 09:54:03 angela systemd[1]: Starting OpenNTPd Network Time Protocol...
jan 23 09:54:03 angela ntpd[3291]: configuration OK
jan 23 09:54:03 angela ntpd[3297]: ntp engine ready
jan 23 09:54:03 angela ntpd[3297]: ntp: recvfrom: Permission denied
jan 23 09:54:03 angela ntpd[3294]: Terminating
jan 23 09:54:03 angela systemd[1]: Started OpenNTPd Network Time Protocol.
jan 23 09:54:03 angela systemd[1]: openntpd.service: Succeeded.
After a restart, somehow it worked, but it took a long time to sync
the clock. At first, it would just not consider any peer at all:
anarcat@angela:~(main)$ sudo ntpctl -s all
0/20 peers valid, clock unsynced
peer
wt tl st next poll offset delay jitter
159.203.8.72 from pool 0.debian.pool.ntp.org
1 5 2 6s 6s ---- peer not valid ----
138.197.135.239 from pool 0.debian.pool.ntp.org
1 5 2 6s 7s ---- peer not valid ----
216.197.156.83 from pool 0.debian.pool.ntp.org
1 4 1 2s 9s ---- peer not valid ----
142.114.187.107 from pool 0.debian.pool.ntp.org
1 5 2 5s 6s ---- peer not valid ----
216.6.2.70 from pool 1.debian.pool.ntp.org
1 4 2 2s 8s ---- peer not valid ----
207.34.49.172 from pool 1.debian.pool.ntp.org
1 4 2 0s 5s ---- peer not valid ----
198.27.76.102 from pool 1.debian.pool.ntp.org
1 5 2 5s 5s ---- peer not valid ----
158.69.254.196 from pool 1.debian.pool.ntp.org
1 4 3 1s 6s ---- peer not valid ----
149.56.121.16 from pool 2.debian.pool.ntp.org
1 4 2 5s 9s ---- peer not valid ----
162.159.200.123 from pool 2.debian.pool.ntp.org
1 4 3 1s 6s ---- peer not valid ----
206.108.0.131 from pool 2.debian.pool.ntp.org
1 4 1 6s 9s ---- peer not valid ----
205.206.70.40 from pool 2.debian.pool.ntp.org
1 5 2 8s 9s ---- peer not valid ----
2001:678:8::123 from pool 2.debian.pool.ntp.org
1 4 2 5s 9s ---- peer not valid ----
2606:4700:f1::1 from pool 2.debian.pool.ntp.org
1 4 3 2s 6s ---- peer not valid ----
2607:5300:205:200::1991 from pool 2.debian.pool.ntp.org
1 4 2 5s 9s ---- peer not valid ----
2607:5300:201:3100::345c from pool 2.debian.pool.ntp.org
1 4 4 1s 6s ---- peer not valid ----
209.115.181.110 from pool 3.debian.pool.ntp.org
1 5 2 5s 6s ---- peer not valid ----
205.206.70.42 from pool 3.debian.pool.ntp.org
1 4 2 0s 6s ---- peer not valid ----
68.69.221.61 from pool 3.debian.pool.ntp.org
1 4 1 2s 9s ---- peer not valid ----
162.159.200.1 from pool 3.debian.pool.ntp.org
1 4 3 4s 7s ---- peer not valid ----
Then it would accept them, but still wouldn't sync the clock:
anarcat@angela:~(main)$ sudo ntpctl -s all
20/20 peers valid, clock unsynced
peer
wt tl st next poll offset delay jitter
159.203.8.72 from pool 0.debian.pool.ntp.org
1 8 2 5s 6s 0.672ms 13.507ms 0.442ms
138.197.135.239 from pool 0.debian.pool.ntp.org
1 7 2 4s 8s 1.260ms 13.388ms 0.494ms
216.197.156.83 from pool 0.debian.pool.ntp.org
1 7 1 3s 5s -0.390ms 47.641ms 1.537ms
142.114.187.107 from pool 0.debian.pool.ntp.org
1 7 2 1s 6s -0.573ms 15.012ms 1.845ms
216.6.2.70 from pool 1.debian.pool.ntp.org
1 7 2 3s 8s -0.178ms 21.691ms 1.807ms
207.34.49.172 from pool 1.debian.pool.ntp.org
1 7 2 4s 8s -5.742ms 70.040ms 1.656ms
198.27.76.102 from pool 1.debian.pool.ntp.org
1 7 2 0s 7s 0.170ms 21.035ms 1.914ms
158.69.254.196 from pool 1.debian.pool.ntp.org
1 7 3 5s 8s -2.626ms 20.862ms 2.032ms
149.56.121.16 from pool 2.debian.pool.ntp.org
1 7 2 6s 8s 0.123ms 20.758ms 2.248ms
162.159.200.123 from pool 2.debian.pool.ntp.org
1 8 3 4s 5s 2.043ms 14.138ms 1.675ms
206.108.0.131 from pool 2.debian.pool.ntp.org
1 6 1 0s 7s -0.027ms 14.189ms 2.206ms
205.206.70.40 from pool 2.debian.pool.ntp.org
1 7 2 1s 5s -1.777ms 53.459ms 1.865ms
2001:678:8::123 from pool 2.debian.pool.ntp.org
1 6 2 1s 8s 0.195ms 14.572ms 2.624ms
2606:4700:f1::1 from pool 2.debian.pool.ntp.org
1 7 3 6s 9s 2.068ms 14.102ms 1.767ms
2607:5300:205:200::1991 from pool 2.debian.pool.ntp.org
1 6 2 4s 9s 0.254ms 21.471ms 2.120ms
2607:5300:201:3100::345c from pool 2.debian.pool.ntp.org
1 7 4 5s 9s -1.706ms 21.030ms 1.849ms
209.115.181.110 from pool 3.debian.pool.ntp.org
1 7 2 0s 7s 8.907ms 75.070ms 2.095ms
205.206.70.42 from pool 3.debian.pool.ntp.org
1 7 2 6s 9s -1.729ms 53.823ms 2.193ms
68.69.221.61 from pool 3.debian.pool.ntp.org
1 7 1 1s 7s -1.265ms 46.355ms 4.171ms
162.159.200.1 from pool 3.debian.pool.ntp.org
1 7 3 4s 8s 1.732ms 35.792ms 2.228ms
It took a solid five minutes to sync the clock, even though the
peers were considered valid within a few seconds:
jan 23 15:58:41 angela systemd[1]: Started OpenNTPd Network Time Protocol.
jan 23 15:58:58 angela ntpd[84086]: peer 142.114.187.107 now valid
jan 23 15:58:58 angela ntpd[84086]: peer 198.27.76.102 now valid
jan 23 15:58:58 angela ntpd[84086]: peer 207.34.49.172 now valid
jan 23 15:58:58 angela ntpd[84086]: peer 209.115.181.110 now valid
jan 23 15:58:59 angela ntpd[84086]: peer 159.203.8.72 now valid
jan 23 15:58:59 angela ntpd[84086]: peer 138.197.135.239 now valid
jan 23 15:58:59 angela ntpd[84086]: peer 162.159.200.123 now valid
jan 23 15:58:59 angela ntpd[84086]: peer 2607:5300:201:3100::345c now valid
jan 23 15:59:00 angela ntpd[84086]: peer 2606:4700:f1::1 now valid
jan 23 15:59:00 angela ntpd[84086]: peer 158.69.254.196 now valid
jan 23 15:59:01 angela ntpd[84086]: peer 216.6.2.70 now valid
jan 23 15:59:01 angela ntpd[84086]: peer 68.69.221.61 now valid
jan 23 15:59:01 angela ntpd[84086]: peer 205.206.70.40 now valid
jan 23 15:59:01 angela ntpd[84086]: peer 205.206.70.42 now valid
jan 23 15:59:02 angela ntpd[84086]: peer 162.159.200.1 now valid
jan 23 15:59:04 angela ntpd[84086]: peer 216.197.156.83 now valid
jan 23 15:59:05 angela ntpd[84086]: peer 206.108.0.131 now valid
jan 23 15:59:05 angela ntpd[84086]: peer 2001:678:8::123 now valid
jan 23 15:59:05 angela ntpd[84086]: peer 149.56.121.16 now valid
jan 23 15:59:07 angela ntpd[84086]: peer 2607:5300:205:200::1991 now valid
jan 23 16:03:47 angela ntpd[84086]: clock is now synced
That seems kind of odd. It was also frustrating to have very little
information from ntpctl about the state of the daemon. I understand
it's designed to be minimal, but it could inform me of its known
offset, for example. It does tell me about the offset with the
different peers, but not as clearly as one would expect. It's also
unclear how it disciplines the RTC at all.
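For reference, the per-peer offset and delay numbers that ntpctl reports come from the standard four-timestamp (S)NTP exchange. Here is a minimal sketch of that calculation as specified in RFC 5905 (this is my own illustration, not openntpd's actual code; the timestamps in the example are made up):

```python
# Minimal sketch of the standard (S)NTP offset/delay calculation
# (RFC 5905). Not openntpd's implementation; timestamps are invented.

def ntp_offset_delay(t1, t2, t3, t4):
    """t1: client send, t2: server receive,
    t3: server send, t4: client receive (all in seconds)."""
    offset = ((t2 - t1) + (t3 - t4)) / 2  # how far our clock is off
    delay = (t4 - t1) - (t3 - t2)         # round-trip network delay
    return offset, delay

# Example: server clock ~2.5 ms ahead, ~30 ms round trip
offset, delay = ntp_offset_delay(100.000, 100.0175, 100.0185, 100.031)
# offset ≈ 0.0025 s, delay ≈ 0.030 s
```

The daemon then feeds a stream of such per-peer offsets into its clock discipline loop, which is the part the two daemons handle very differently.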
Compared to chrony
Now compare with chrony:
jan 23 16:07:16 angela systemd[1]: Starting chrony, an NTP client/server...
jan 23 16:07:16 angela chronyd[87765]: chronyd version 4.0 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
jan 23 16:07:16 angela chronyd[87765]: Initial frequency 3.814 ppm
jan 23 16:07:16 angela chronyd[87765]: Using right/UTC timezone to obtain leap second data
jan 23 16:07:16 angela chronyd[87765]: Loaded seccomp filter
jan 23 16:07:16 angela systemd[1]: Started chrony, an NTP client/server.
jan 23 16:07:21 angela chronyd[87765]: Selected source 206.108.0.131 (2.debian.pool.ntp.org)
jan 23 16:07:21 angela chronyd[87765]: System clock TAI offset set to 37 seconds
First, you'll notice there's none of that "clock synced" nonsense, it
picks a source, and then... it's just done. Because the clock on this
computer is not drifting that much, and openntpd had (presumably) just
sync'd it anyways. And indeed, if we look at detailed stats from the
powerful chronyc client:
anarcat@angela:~(main)$ sudo chronyc tracking
Reference ID : CE6C0083 (ntp1.torix.ca)
Stratum : 2
Ref time (UTC) : Sun Jan 23 21:07:21 2022
System time : 0.000000311 seconds slow of NTP time
Last offset : +0.000807989 seconds
RMS offset : 0.000807989 seconds
Frequency : 3.814 ppm fast
Residual freq : -24.434 ppm
Skew : 1000000.000 ppm
Root delay : 0.013200894 seconds
Root dispersion : 65.357254028 seconds
Update interval : 1.4 seconds
Leap status : Normal
We see that we are nanoseconds away from NTP time. That was run very
quickly after starting the server (literally in the same second as
chrony picked a source), so stats are a bit weird (e.g. the Skew is
huge). After a minute or two, it looks more reasonable:
Reference ID : CE6C0083 (ntp1.torix.ca)
Stratum : 2
Ref time (UTC) : Sun Jan 23 21:09:32 2022
System time : 0.000487002 seconds slow of NTP time
Last offset : -0.000332960 seconds
RMS offset : 0.000751204 seconds
Frequency : 3.536 ppm fast
Residual freq : +0.016 ppm
Skew : 3.707 ppm
Root delay : 0.013363549 seconds
Root dispersion : 0.000324015 seconds
Update interval : 65.0 seconds
Leap status : Normal
Now it's learning how good or bad the RTC clock is ("Frequency"), and
is smoothly adjusting the System time to follow the average offset
(RMS offset, more or less). You'll also notice the Update interval
has risen, and will keep expanding as chrony learns more about the
internal clock, so it doesn't need to constantly poll the NTP servers
to sync the clock. In the above, we're 487 microseconds (less than a
millisecond!) away from NTP time.
(People interested in the explanation of every single one of those
fields can read the excellent chronyc manpage. That thing made me
want to nerd out on NTP again!)
On the machine with the bad clock, chrony also did a 1.5 second
adjustment, but just once, at startup:
jan 18 11:54:33 curie chronyd[2148399]: Selected source 206.108.0.133 (2.debian.pool.ntp.org)
jan 18 11:54:33 curie chronyd[2148399]: System clock wrong by -1.606546 seconds
jan 18 11:54:31 curie chronyd[2148399]: System clock was stepped by -1.606546 seconds
jan 18 11:54:31 curie chronyd[2148399]: System clock TAI offset set to 37 seconds
Then it would still struggle to keep the clock in sync, but not as
badly as openntpd. Here's the offset a few minutes after that above
startup:
System time : 0.000375352 seconds slow of NTP time
And again a few seconds later:
System time : 0.001793046 seconds slow of NTP time
I don't currently have access to that machine, and will update this
post with the latest status, but so far I've had a very good
experience with chrony on that machine, which is a testament to its
resilience, and it also just works on my other machines as well.
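That one-time 1.5 second step on curie illustrates chrony's makestep policy: step the clock outright only when the offset is large and only during the first few updates, then slew gradually afterwards so time stays monotonic. A toy sketch of that decision (the "1 second / first 3 updates" values mirror the makestep 1 3 directive I believe Debian ships by default; the function itself is my simplification, not chrony's code):

```python
# Toy model of chrony's startup stepping policy ("makestep 1 3"):
# step the clock when the measured offset exceeds 1 second during the
# first 3 clock updates, otherwise slew. A sketch, not chrony's code.

def discipline(offset_s, update_count, threshold=1.0, limit=3):
    """Decide whether to step or slew for a measured offset (seconds)."""
    if abs(offset_s) > threshold and update_count < limit:
        return "step"  # jump the clock outright, as in curie's startup log
    return "slew"      # nudge the clock rate so time stays monotonic

# curie's -1.606546 s offset at startup triggers a step; the sub-ms
# offsets seen minutes later are slewed.
```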
Extras
On top of "just working" (as demonstrated above), I feel that
chrony's feature set is so much superior... Here's an excerpt of the
extras in chrony, taken from the comparison table:
source frequency tracking
source state restore from file
temperature compensation
ready for next NTP era (year 2036)
replace unreachable / falseticker servers
aware of jitter
RTC drift tracking
RTC trimming
Restore time from file w/o RTC
leap seconds correction, in slew mode
drops root privileges
I even understand some of that stuff. I think.
So kudos to the chrony folks, I'm switching.
Caveats
One thing to keep in mind in the above, however, is that it's quite
possible chrony does as bad a job as openntpd on that old
machine and just doesn't tell me about it. For example, here's
another log sample from another server
(marcos):
jan 23 11:13:25 marcos ntpd[1976694]: adjusting clock frequency by 0.451035 to -16.420273ppm
I get those basically every day, which seems to show that it's at
least trying to keep track of the hardware clock.
In other words, it's quite possible I have no idea what I'm talking
about and you definitely need to take this article with a grain of
salt. I'm not an NTP expert.
Update: I should also mention that I haven't evaluated
systemd-timesyncd, for a few reasons:
I have enough things running under systemd
I wasn't aware of it when I started writing this
I couldn't find good documentation on it... later I found the
above manpage and of course the Arch Wiki but that is very
minimal
therefore I can't tell how it compares with chrony or (open)ntpd,
so I don't see an enticing reason to switch
It has a few things going for it though:
it's likely shipped with your distribution already
it drops privileges (possibly like chrony, unclear if it also has
seccomp filters)
it's minimalist: it only does SNTP so not the server side
the status command is good enough that you can tell the clock
frequency, precision, and so on (especially when compared to
openntpd's ntpctl)
So I'm reserving judgement over it, but I'd certainly note that I'm
always a little wary of trusting systemd daemons with the network,
and would prefer to keep that attack surface to a minimum. Diversity
is a good thing, in general, so I'll keep chrony for now.
It would certainly be nice to see it added to chrony's comparison table.
Switching to chrony
Because the default configuration in chrony (at least as shipped in
Debian) is sane (good default peers, no open network by default),
installing it is as simple as:
apt install chrony
And because it conflicts with openntpd, installing it also takes care
of removing that cruft.
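For the curious, Debian's shipped /etc/chrony/chrony.conf already does roughly the following. This is an illustrative fragment, not a verbatim copy of the packaged file (the directive names are real chrony directives, but the exact values Debian ships may differ):

```
# Illustrative /etc/chrony/chrony.conf fragment
pool 2.debian.pool.ntp.org iburst       # sane default peers, quick initial sync
driftfile /var/lib/chrony/chrony.drift  # remember the learned clock frequency
makestep 1 3        # step the clock only during the first 3 updates
rtcsync             # periodically copy the system clock to the RTC
leapsectz right/UTC # leap second data from the tz database
```

Notably there is no `allow` directive, so chrony does not serve time to the network by default.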
Update: Debian defaults
So it seems like I managed to write this entire blog post without
putting it in relation with the original reason I had to think about
this in the first place, which is odd and should be corrected.
This conversation came about on an IRC channel that mentioned that the
ntp package (and upstream) is in bad shape in Debian. In that
discussion, chrony and ntpsec were discussed as possible replacements,
but when we had the discussion on chat, I mentioned I was using
openntpd, and promptly realized I was actually unhappy with it. A
friend suggested chrony, I tried it, and it worked amazingly, I
switched, wrote this blog post, end of story.
Except today (2022-02-07, two weeks later), I actually read that
thread and realized that something happened in Debian I wasn't
actually aware of. In bullseye, systemd-timesyncd was not only
shipped, but it was installed by default, as it was marked as a hard
dependency of systemd. That was "fixed" in systemd 247.9-2 (see
bug 986651), but only by making the dependency a Recommends and marking it as Priority: important.
So in effect, systemd-timesyncd became the default NTP daemon in
Debian in bullseye, which I find somewhat surprising. timesyncd has
many things going for it (as mentioned above), but I do find it a bit
annoying that systemd is replacing all those utilities in such a
way. I also wonder what is going to happen on upgrades. This is all a
little frustrating too because there is no good comparison between the
other NTP daemons and timesyncd anywhere. The chrony comparison
table doesn't mention it, and an audit by the Core Infrastructure
Initiative from 2017 doesn't mention it either, even though
timesyncd was announced in 2014. (Same with this blog post from Facebook.)
Several issues were brought before the Debian Community team regarding responsiveness, tone, and needed software updates to forums.debian.net. The question was asked: who's in charge?
Over the course of the discussion several Debian Developers volunteered to help by providing a presence on the forums from Debian and to assist with the necessary changes to keep the service up and running.
We are happy to announce the following changes to the (NEW!) forums.debian.net, which have and should address most of the prior concerns with accountability, tone, use, and reliability:
Debian Developers: Paulo Henrique de Lima Santana (phls), Felix Lechner (lechner), and Donald Norwood (donald) have been added to the forum's Server and Administration teams.
The server instance is now running directly within Debian's infrastructure.
The forum software and back-end have been updated to the most recent versions where applicable.
DNS resolves for both IPv4 and IPv6.
SSL/HTTPS are enabled. (It's 2021!)
New Captcha and Anti-spam systems are in place to thwart spammers, bots, and to make it easier for humans to register.
New Administrators and Moderation staff were added to provide additional coverage across the hours and to combine years of experience with forum operation and Debian usage.
New viewing styles are available for users to choose from, some of which are ideal for mobile/tablet viewing.
We inadvertently fixed the time issue that the prior forum had of running 11 minutes fast. :)
We have clarified staff roles and staff visibility.
Responsiveness to users on the forums has increased.
Email addresses for mods/admins have been updated and checked for validity; they have already seen direct use and response.
The guidelines for forum use by users and staff have been updated.
The Debian COC has been made into a Global Announcement as an accompaniment to the newly updated guidelines, giving the moderators/administrators an additional rule-set for unruly or unbecoming behavior.
Some of the discussion areas have been renamed and refocused, along with the movement of multiple threads to make indexing and searching of the forums easier.
Many (New!) features and extensions have been added to the forum for ease of use and modernization, such as a user thanks system and thread hover previews.
Some server administrative tasks were upgraded as well which don't belong on a public list, but we are backing up regularly and the instance is secure. :)
We have a few minor details here and there to attend to and the work is ongoing.
Many Thanks and Appreciation to the Debian System Administrators (DSA) and Ganneff who took the time to coordinate and assist with the instance, DNS, and network and server administration minutiae, our helpful DPL Jonathan Carter, many thanks to the current and prior forum moderators and administrators: Mez, sunrat, 4D696B65, arochester, and cds60601 for helping with the modifications and transition, and to the forum users who participated in lots of the tweaking. All in all this was a large community task and everyone did a significant part. Thank you!
Tonight I'm provisioning a new virtual machine at
Hetzner and I wanted to share how
Consfigurator is helping
with that. Hetzner have a Debian buster image you can start with, as you'd
expect, but it comes with things like cloud-init, preconfiguration to use
Hetzner's apt mirror which doesn't serve source packages(!), and perhaps other
things I haven't discovered. It's a fine place to begin, but I want all the
configuration for this server to be explicit in my Consfigurator consfig, so
it is good to start with pristine upstream Debian. I could boot one of
Hetzner's installation ISOs but that's slow and manual. Consfigurator can
replace the OS in the VM's root filesystem and reboot for me, and we're ready
to go.
Here's the configuration:
(defhost foo.silentflame.com (:deploy ((:ssh :user "root") :sbcl))
(os:debian-stable "buster" :amd64)
;; Hetzner's Debian 10 image comes with a three-partition layout and boots
;; with traditional BIOS.
(disk:has-volumes
(physical-disk
:device-file "/dev/sda" :boots-with '(grub:grub :target "i386-pc")))
(on-change (installer:cleanly-installed-once
nil
;; This is a specification of the OS Hetzner's image has, so
;; Consfigurator knows how to install SBCL and debootstrap(8).
;; In this case it's the same Debian release as the replacement.
'(os:debian-stable "buster" :amd64))
;; Clear out the old OS's EFI system partition contents, in case we can
;; switch to booting with EFI at some point (if we wanted we could specify
;; an additional x86_64-efi target above, and grub-install would get run
;; to repopulate /boot/efi, but I don't think Hetzner can boot from it yet).
(file:directory-does-not-exist "/boot/efi/EFI")
(apt:installed "linux-image-amd64")
(installer:bootloaders-installed)
(fstab:entries-for-volumes
(disk:volumes
(mounted-ext4-filesystem :mount-point "/")
(partition
(mounted-fat32-filesystem
:mount-options '("umask=0077") :mount-point "/boot/efi"))))
(file:lacks-lines "/etc/fstab" "# UNCONFIGURED FSTAB FOR BASE SYSTEM")
(file:is-copy-of "/etc/resolv.conf" "/old-os/etc/resolv.conf")
(mount:unmounted-below-and-removed "/old-os"))
(apt:mirror "http://ftp.de.debian.org/debian")
(apt:no-pdiffs)
(apt:standard-sources.list)
(sshd:installed)
(as "root" (ssh:authorized-keys +spwsshkey+))
(sshd:no-passwords)
(timezone:configured "Etc/UTC")
(swap:has-swap-file "2G")
(network:clean-/etc/network/interfaces)
(network:static "enp1s0" "xxx.xxx.xxx.xxx" "xxx.xxx.1.1" "255.255.255.255"))
Here the :HOP parameter specifies the IP address of the new machine, as
DNS hasn't been updated yet. Consfigurator installs SBCL and debootstrap(8),
prepares a minimal system, replaces the contents of /, gets to work
applying the other properties, and then reboots. This gets us a properly
populated fstab:
(slightly doctored for more readable alignment)
There's ordering logic so that the swapfile will end up after whatever
filesystem contains it; a UUID is used for ext4 filesystems, but for fat32
filesystems, to be safe, a PARTUUID is used.
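The original listing is not reproduced here, but an fstab following those rules would look something like this (the UUID and PARTUUID values below are made up for illustration):

```
# Illustrative fstab matching the rules described above
UUID=11111111-2222-3333-4444-555555555555  /          ext4  rw           0  1
PARTUUID=aaaaaaaa-01                       /boot/efi  vfat  umask=0077   0  2
/var/lib/swapfile                          none       swap  sw           0  0
```

Note the swapfile entry comes after the root filesystem that contains it, per the ordering logic.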
The application of (INSTALLER:BOOTLOADERS-INSTALLED) handles calling both
update-grub(8) and grub-install(8), relying on the metadata specified
about /dev/sda. Next time we execute Consfigurator against the machine,
it'll ignore all the property applications attached to the application of
(INSTALLER:CLEANLY-INSTALLED-ONCE) with ON-CHANGE, and just apply
everything following that block.
There are a few things I don't have good solutions for. When you boot
Hetzner's image the primary network interface is eth0, but then for a
freshly debootstrapped Debian you get enp1s0, and I haven't got a good way
of knowing what it'll be (if you know it'll have the same name, you can use
(NETWORK:PRESERVE-STATIC-ONCE) to create a file in
/etc/network/interfaces.d based on the current default route and corresponding
interface).
Another tricky thing is SSH host keys. It's easy to use Consfigurator to add
host keys to your laptop's ~/.ssh/known_hosts, but in this case the host
key changes back and forth from whatever the Hetzner image has and the newly
generated key you get afterwards. One option might be to copy the old host
keys out of /old-os before it gets deleted, like how /etc/resolv.conf
is copied.
This work is based on Propellor's equivalent
functionality.
I think my approach to handling /etc/fstab and bootloader installation is an
improvement on what Joey does.
You know that you've had the same server too long when they discontinue the
entire class of servers you were using and you need to migrate to a new
instance. And you know you've not done anything with that server (and the blog
running on it) for too long when you have no idea how that thing is actually
working.
It's a good opportunity to start over from scratch, and a good motivation to do
the new thing as simply as humanly possible, or even simpler.
So I am switching to a statically generated blog as well. Not sure what took me
so long, but thank goodness the tooling has really improved since the last time I
looked.
It was as simple as picking Nikola, finding its import_feed plugin, changing the
BLOG_RSS_LIMIT in my Django Mezzanine blog to a thousand (to export all posts
via RSS/ATOM feed), fixing some bugs
in the import_feed plugin, waiting a few minutes for the full feed to generate
and to be imported, adjusting the config of the resulting site, posting that to
git and writing a simple shell script
to pull that repo periodically and call nikola build on it, as well as config
to serve the result via nginx. Done.
After that creating a new blog post is just nikola new_post and editing it in
vim and pushing to git. I prefer Markdown, but it supports all kinds of formats.
And the old posts are just stored as HTML. Really simple.
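The periodic pull-and-build job described above might look roughly like this. It's a sketch under assumptions: the paths are hypothetical, and the exact script is not reproduced in the post (nikola build and the rsync-to-webroot pattern are standard, though):

```shell
#!/bin/sh
# Hypothetical periodic build job for the Nikola blog; paths are made up.
# Run from cron, e.g.: */15 * * * * /usr/local/bin/blog-build
set -eu
cd /srv/blog/source
git pull --ff-only                            # fetch posts pushed from the laptop
nikola build                                  # regenerate the static site into output/
rsync -a --delete output/ /srv/blog/public/   # directory nginx serves
```

nginx then only needs a plain `root /srv/blog/public;` static-file server block, with no application runtime behind it.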
I think I will spend more time fighting with Google to allow me to forward email
from my domain to my GMail postbox without it refusing all of it as spam.
It's been a long while since I last hosted a BSP, but 'tis the season.
Kubernetes SIG Node will be holding a bug scrub on June 24-25, and this is a
great opportunity for you to get involved if you're interested in contributing
to Kubernetes or SIG Node!
We will be hosting a global event with region captains for all timezones. I am
one of the NASA captains (~17:00-01:00 UTC) and I'll be leading the kickoff. We
will be working on Slack and Zoom. I hope you'll be able to drop in!
Details
Date: Thursday, June 24 through Friday, June 25
Start: 00:00 UTC June 24
End: 23:59 UTC June 25
Sign up: Participants do not have to sign up, just show up!
I'm an existing contributor, what should I work on?
Work on triaging and closing SIG Node bugs. We have a lot of bugs!!
The goal of our event is to categorize, clean up, and resolve some of the 450+
issues in k/k for SIG Node.
Check out the event docs for more instructions.
I'm a new contributor and want to join but I have no idea what I'm doing!
At some point, that was all of us!
This is a great opportunity to get involved if you've never contributed to
Kubernetes. We'll have dedicated mentors available to coordinate and help out
new contributors.
If you've never contributed to Kubernetes before, I recommend you check out the
Getting Started
and Contributor Guide
resources in advance of the event. You will want to ensure you've signed the
contributor license agreement (CLA).
Remember, you don't have to code to make valuable contributions! Triaging
the bug tracker is a great example of this.
See you there!
Happy hacking.