Search Results: "tina"

1 May 2025

Ian Jackson: Free Software, internal politics, and governance

There is a thread of opinion in some Free Software communities that we shouldn't be doing "politics", and should instead just focus on technology. But that's impossible. This approach is naive, harmful, and, ultimately, self-defeating, even on its own narrow terms.

Today I'm talking about small-p politics

In this article I'm using "politics" in the very wide sense: us humans managing our disagreements with each other. I'm not going to talk about culture wars, woke, racism, trans rights, and so on. I am not going to talk about how Free Software has always had explicitly political goals; or how it's impossible to be neutral because choosing not to take a stand is itself to take a stand. Those issues are all important and Free Software definitely must engage with them. Many of the points I make are applicable there too. But those are not my focus today. Today I'm talking in more general terms about politics, power, and governance.

Many people working together always entails politics

Computers are incredibly complicated nowadays. Making software is a joint enterprise. Even if an individual program has only a single maintainer, it fits into an ecosystem of other software, maintained by countless other developers. Larger projects can have thousands of maintainers and hundreds of thousands of contributors. Humans don't always agree about everything. This is natural. Indeed, it's healthy: to write the best code, we need a wide range of knowledge and experience. When we can't come to agreement, we need a way to deal with that: a way that lets us still make progress, but also leaves us able to work together afterwards. A way that feels OK for everyone. Providing a framework for disagreement is the job of a governance system. The rules say which people make which decisions, who must be consulted, how the decisions are made, and how, if at all, they can be reviewed. This is all politics.

Consensus is great but always requiring it is harmful

Ideally a discussion will converge to a synthesis that satisfies everyone, or at least a consensus. When consensus can't be achieved, we can hope for compromise: something everyone can live with. Compromise is achieved through negotiation. If every decision requires consensus, then the proponents of any wide-ranging improvement have an almost insurmountable hurdle: those who are favoured by the status quo and find it convenient can always object. So there will never be consensus for change. If there is any objection at all, no matter how ill-founded, the status quo will always win. This is where governance comes in.

Governance is like backups: we need to practice it

Governance processes are the backstop for when discussions, and then negotiations, fail, and people still don't see eye to eye. In a healthy community, everyone needs to know how the governance works and what the rules are. The participants need to accept the system's legitimacy. Everyone, including the losing side, must be prepared to accept and implement (or, at least, not obstruct) whatever the decision is, and hopefully live with it and stay around. That means we need to practice our governance processes. We can't just leave them for the day we have a huge and controversial decision to make. If we do that, then when it comes to the crunch we'll have toxic rows where no-one can agree the rules; where determined people bend the rules to fit their outcome; and where afterwards people feel like the whole thing was horrible and unfair.

So our decisionmaking bodies and roles need to be making decisions, as a matter of routine, and we need to get used to that. First-line decisionmaking bodies should be making decisions frequently. Last-line appeal mechanisms (large-scale votes, for example) are naturally going to be exercised more rarely, but they must happen, be seen as legitimate, and their outcomes must be implemented in full.

Governance should usually be routine and boring

When governance is working well it's quite boring. People offer their input, and are heard. Angles are debated, and concerns are addressed. If agreement still isn't reached, the committee, or elected leader, makes a decision. Hopefully everyone thinks the leadership is legitimate, and that it properly considered and heard their arguments, and made the decision for good reasons. Hopefully the losing side can still get their work done (and make their own computer work the way they want); so while they will be disappointed, they can live with the outcome. Many human institutions manage this most of the time. It does take some knowledge about principles of governance, and ideally some experience.

Governance means deciding, not just mediating

By making decisions I mean exercising their authority to rule on an actual disagreement: one that wasn't resolved by debate or negotiation. Governance processes by definition involve deciding, not just mediating. It's not governance if we're advising or cajoling: in that case, we're back to demanding consensus. Governance is necessary precisely when consensus is not achieved. If the governance systems are to mean anything, they must be able to (over)rule; that means (over)ruling must be normal and accepted. Otherwise, when we need to overrule, we'll find that we can't, because we lack the collective practice. To be legitimate (and seen as legitimate) decisions must usually be made based on the merits, not on participants' status, and not only on process questions.

On the autonomy of the programmer

Many programmers seem to find the very concept of governance, and binding decisionmaking, deeply uncomfortable. Ultimately, it means sometimes overruling someone's technical decision. As programmers and maintainers we naturally see how this erodes our autonomy. But we have all seen projects where the maintainers are unpleasant, obstinate, or destructive. We have all found this frustrating. Software is all interconnected, and one programmer's bad decisions can cause problems for many of the rest of us. We exasperate: "why won't they just do the right thing?". This is futile. People have never "just"-ed and they're not going to start "just"-ing now. So often the boot is on the other foot. More broadly, as software developers, we have a responsibility to our users, and a duty to write code that does good rather than ill in the world. We ought to be accountable. (And not just to capitalist bosses!) Governance mechanisms are the answer. (No, forking anything but the smallest project is very rarely a practical answer.)

Mitigate the consequences of decisions: retain flexibility

In software, it is often possible to soften the bad social effects of a controversial decision by retaining flexibility. With a bit of extra work, we can often provide hooks, non-default configuration options, or plugin arrangements. If we can convert the question from "how will the software always behave" into merely "what should the default be", we can often save ourselves a lot of drama.

So it is often worth keeping even suboptimal or untidy features or options, if people want to use them and are willing to maintain them. There is a tradeoff here, of course. But Free Software projects often significantly under-value the social benefits of keeping everyone happy. Wrestling software, even crusty or buggy software, is a lot more fun than having unpleasant arguments.

But don't do decisionmaking like a corporation

Many programmers' experience of formal decisionmaking is from their boss at work. But corporations are often a very bad example. They typically don't have as much trouble actually making decisions, but the actual decisions are often terrible, and not just because corporations' goals are often bad. You get to be a decisionmaker in a corporation by spouting plausible nonsense, sounding confident, buttering up the even-more-vacuous people further up the chain, and sometimes by sabotaging your rivals. Corporate senior managers are hardly ever held accountable: typically the effects of their tenure are only properly felt well after they've left to mess up somewhere else. We should select our leaders more wisely, and base decisions on substance.

If you won't do politics, politics will do you

As a participant in a project, or a society, you can of course opt out of getting involved in politics. You can opt out of learning how to do politics generally, and opt out of understanding your project's governance structures. You can opt out of making judgements about disputed questions, and tell yourself "there's merit on both sides". You can hate politicians indiscriminately, and criticise anyone you see doing politics. If you do this, then you are abdicating your decisionmaking authority to those who are the most effective manipulators, or the most committed to getting their way. You're tacitly supporting the existing power bases. You're ceding power to the best liars, to those with the least scruples, and to the people who are most motivated by dominance. This is precisely the opposite of what you wanted. If enough people won't do politics, and hate anyone who does, your discussion spaces will be reduced to a battleground of only the hardiest and the most toxic.

If you don't see the politics, it's still happening

If your governance systems don't work, then there is no effective redress against bad or even malicious decisions. Your roleholders and subteams are unaccountable power centres. Power radically distorts every human relationship, and it takes great strength of character for an unaccountable power centre not to eventually become an unaccountable toxic cabal. So if you have a reasonably sized community, but don't see your formal governance systems working (people debating things, votes, leadership making explicit decisions), that doesn't mean everything is fine, all the decisions are great, and there's no politics happening. It just means that most of your community have given up on the official process. It also probably means that some parts of your project have formed toxic and unaccountable cabals. Those who won't put up with that will leave. The same is true if the only governance actions that ever happen are massive drama. That means that only the most determined victim of a bad decision will even consider using such a process.

Conclusions


27 April 2025

Valhalla's Things: Stickerses

Posted on April 27, 2025
Tags: madeof:atoms, madeof:bits, craft:graphics, topic:stickers
After just a few years of procrastination, I've given a wash of git-filter-repo to the repository where I keep my hexagonal sticker designs, removed a few failed experiments and stuff with dubious licensing, and was finally able to publish it among my public git repositories. This repo includes the template I'm using, most of the stickers I've had printed, some that have been published elsewhere and have been printed by other people, as well as some that have never been printed and I may or may not print in the future. The licensing details are in the metadata of each file, and mostly depend on the logos or cliparts used. Most, but not all, are under a free culture license. My server is not set up to correctly serve the SVG files yet: downloading them (from the plain links) should work, but I need to fix the content type that is provided. I will probably procrastinate doing it for quite some time, but eventually it will be done. Of course cloning the repository from the public https URL also works. BRB, need to add MOAR stickerses.
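For that content-type fix, the usual approach is to map the .svg extension to the image/svg+xml MIME type in the web server configuration; a minimal sketch, assuming an nginx-style server (the post does not name the actual server software):

types {
    image/svg+xml svg svgz;
}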

25 April 2025

Simon Josefsson: GitLab Runner with Rootless Privilege-less Capability-less Podman on riscv64

I host my own GitLab CI/CD runners, and find that having coverage on the riscv64 CPU architecture is useful for testing things. The HiFive Premier P550 seems to be a common hardware choice. The P550 can be purchased online. You also need a (mini-)ATX chassis, power supply (~500W is more than sufficient), PCI-to-M2 converter and a NVMe storage device. Total cost per machine was around $8k/€8k for me. Assembly was simple: bolt everything, connect ATX power, connect cables for the front panel, USB and audio. Be sure to toggle the physical power switch on the P550 before you close the box. The front-panel power button will start your machine. There is a P550 user manual available. Below I will guide you to install the GitLab Runner on the pre-installed Ubuntu 24.04 that ships with the P550, and configure it to use Podman in root-less mode and without the --privileged flag, without any additional capabilities like SYS_ADMIN. Presumably you want to migrate to some other OS instead; hey Trisquel 13 riscv64, I'm waiting for you! I wouldn't recommend using this machine for anything sensitive: there is an awful lot of non-free and/or vendor-specific software installed, and the hardware itself is young. I am not aware of any riscv64 hardware that can run a libre OS; all of them appear to require non-free blobs and usually a non-mainline kernel.
apt-get install minicom
minicom -o -D /dev/ttyUSB3
#cmd: ifconfig
inet 192.168.0.2 netmask: 255.255.240.0
gatway 192.168.0.1
SOM_Mac0: 8c:00:00:00:00:00
SOM_Mac1: 8c:00:00:00:00:00
MCU_Mac: 8c:00:00:00:00:00
#cmd: setmac 0 CA:FE:42:17:23:00
The MAC setting will be valid after rebooting the carrier board!!!
MAC[0] addr set to CA:FE:42:17:23:00(ca:fe:42:17:23:0)
#cmd: setmac 1 CA:FE:42:17:23:01
The MAC setting will be valid after rebooting the carrier board!!!
MAC[1] addr set to CA:FE:42:17:23:01(ca:fe:42:17:23:1)
#cmd: setmac 2 CA:FE:42:17:23:02
The MAC setting will be valid after rebooting the carrier board!!!
MAC[2] addr set to CA:FE:42:17:23:02(ca:fe:42:17:23:2)
#cmd:
apt-get install openocd
wget https://raw.githubusercontent.com/sifiveinc/hifive-premier-p550-tools/refs/heads/master/mcu-firmware/stm32_openocd.cfg
echo 'acc115d283ff8533d6ae5226565478d0128923c8a479a768d806487378c5f6c3 stm32_openocd.cfg' | sha256sum -c
openocd -f stm32_openocd.cfg &
telnet localhost 4444
...
echo 'ssh-ed25519 AAA...' > ~/.ssh/authorized_keys
sed -i 's;^#PasswordAuthentication.*;PasswordAuthentication no;' /etc/ssh/sshd_config
service ssh restart
parted /dev/nvme0n1 print
blkdiscard /dev/nvme0n1
parted /dev/nvme0n1 mklabel gpt
parted /dev/nvme0n1 mkpart jas-p550-nvm-02 ext2 1MiB 100% align-check optimal 1
parted /dev/nvme0n1 set 1 lvm on
partprobe /dev/nvme0n1
pvcreate /dev/nvme0n1p1
vgcreate vg0 /dev/nvme0n1p1
lvcreate -L 400G -n glr vg0
mkfs.ext4 -L glr /dev/mapper/vg0-glr
Now with a reasonable setup ready, let's install the GitLab Runner. The following is adapted from gitlab-runner's official installation documentation. The normal installation flow doesn't work because they don't publish riscv64 apt repositories, so you will have to perform upgrades manually.
# wget https://s3.dualstack.us-east-1.amazonaws.com/gitlab-runner-downloads/latest/deb/gitlab-runner_riscv64.deb
# wget https://s3.dualstack.us-east-1.amazonaws.com/gitlab-runner-downloads/latest/deb/gitlab-runner-helper-images.deb
wget https://gitlab-runner-downloads.s3.amazonaws.com/v17.11.0/deb/gitlab-runner_riscv64.deb
wget https://gitlab-runner-downloads.s3.amazonaws.com/v17.11.0/deb/gitlab-runner-helper-images.deb
echo '68a4c2a4b5988a5a5bae019c8b82b6e340376c1b2190228df657164c534bc3c3 gitlab-runner-helper-images.deb' | sha256sum -c
echo 'ee37dc76d3c5b52e4ba35cf8703813f54f536f75cfc208387f5aa1686add7a8c gitlab-runner_riscv64.deb' | sha256sum -c
dpkg -i gitlab-runner-helper-images.deb gitlab-runner_riscv64.deb
Remember the NVMe device? Let's not forget to use it, to avoid wear and tear of the internal MMC root disk. Do this now, before any files appear in /home/gitlab-runner, or you will have to move them manually.
gitlab-runner stop
echo 'LABEL=glr /home/gitlab-runner ext4 defaults,noatime 0 1' >> /etc/fstab
systemctl daemon-reload
mount /home/gitlab-runner
Next, register and configure gitlab-runner. Replace the token glrt-REPLACEME below with the registration token you get from your GitLab project's Settings -> CI/CD -> Runners -> New project runner. I used the tag riscv64 and the hostname as the runner description.
gitlab-runner register --non-interactive --url https://gitlab.com --token glrt-REPLACEME --name $(hostname) --executor docker --docker-image debian:stable
We install podman and configure gitlab-runner to use it, as a non-root user.
apt-get install podman
gitlab-runner stop
usermod --add-subuids 100000-165535 --add-subgids 100000-165535 gitlab-runner
You need to run some commands as the gitlab-runner user, but unfortunately some interaction between sudo/su and pam_systemd makes this harder than it should be. So you have to set up SSH for the user and log in via SSH to run the commands. Does anyone know of a better way to do this?
# on the p550:
cp -a /root/.ssh/ /home/gitlab-runner/
chown -R gitlab-runner:gitlab-runner /home/gitlab-runner/.ssh/
# on your laptop:
ssh gitlab-runner@jas-p550-01
systemctl --user --now enable podman.socket
systemctl --user --now start podman.socket
loginctl enable-linger gitlab-runner
systemctl status --user podman.socket
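A possible alternative to the SSH dance, untested in this setup, is machinectl from the systemd-container package, which allocates a proper pam_systemd login session for the target user directly from a root shell:

apt-get install systemd-container
machinectl shell gitlab-runner@.host
# then, inside that session:
systemctl --user --now enable podman.socket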
We modify /etc/gitlab-runner/config.toml as follows; replace 997 with the user id shown by systemctl status above. See the feature flags documentation for more details.
...
[[runners]]
  environment = ["FF_NETWORK_PER_BUILD=1", "FF_USE_FASTZIP=1"]
...
  [runners.docker]
    host = "unix:///run/user/997/podman/podman.sock"
Note that unlike the documentation I do not add the privileged = true parameter here; I will come back to this later. Restart the system to confirm that pushing a .gitlab-ci.yml with a job that uses the riscv64 tag, like the following, works properly.
dump-env-details-riscv64:
  stage: build
  image: riscv64/debian:testing
  tags: [ riscv64 ]
  script:
    - set
Your gitlab-runner should now be receiving jobs and running them in rootless podman. You may view the log using journalctl as follows:
journalctl --follow _SYSTEMD_UNIT=gitlab-runner.service
To stop the graphical environment and disable some unnecessary services, you can use:
systemctl set-default multi-user.target
systemctl disable openvpn cups cups-browsed sssd colord
At this point, things were working fine and I was running many successful builds. Now starts the fun part with the operational aspects! I had a problem when running buildah to build a new container from within a job, and noticed that aardvark-dns was crashing. You can use the Debian aardvark-dns binary instead.
wget http://ftp.de.debian.org/debian/pool/main/a/aardvark-dns/aardvark-dns_1.14.0-3_riscv64.deb
echo 'df33117b6069ac84d3e97dba2c59ba53775207dbaa1b123c3f87b3f312d2f87a aardvark-dns_1.14.0-3_riscv64.deb' | sha256sum -c
mkdir t
cd t
dpkg -x ../aardvark-dns_1.14.0-3_riscv64.deb .
mv /usr/lib/podman/aardvark-dns /usr/lib/podman/aardvark-dns.ubuntu
mv usr/lib/podman/aardvark-dns /usr/lib/podman/aardvark-dns.debian
My setup uses podman in rootless mode without passing the privileged parameter or any add-cap parameters to add non-default capabilities. This is sufficient for most builds. However, if you try to create a container using buildah from within a job, you may see errors like this:
Writing manifest to image destination
Error: mounting new container: mounting build container "8bf1ec03d967eae87095906d8544f51309363ddf28c60462d16d73a0a7279ce1": creating overlay mount to /var/lib/containers/storage/overlay/23785e20a8bac468dbf028bf524274c91fbd70dae195a6cdb10241c345346e6f/merged, mount_data="lowerdir=/var/lib/containers/storage/overlay/l/I3TWYVYTRZ4KVYCT6FJKHR3WHW,upperdir=/var/lib/containers/storage/overlay/23785e20a8bac468dbf028bf524274c91fbd70dae195a6cdb10241c345346e6f/diff,workdir=/var/lib/containers/storage/overlay/23785e20a8bac468dbf028bf524274c91fbd70dae195a6cdb10241c345346e6f/work,volatile": using mount program /usr/bin/fuse-overlayfs: unknown argument ignored: lazytime
fuse: device not found, try 'modprobe fuse' first
fuse-overlayfs: cannot mount: No such file or directory
: exit status 1
According to the GitLab runner security considerations, you should not enable the privileged = true parameter, and the alternative appears to be running Podman as root with privileged=false. Indeed, setting privileged=true as in the following example solves the problem, and I suppose running podman as root would too.
[[runners]]
  [runners.docker]
    privileged = true
Can we do better? After some experimentation, and reading open issues with suggested capabilities and configuration snippets, I ended up with the following configuration. It runs podman in rootless mode (as the gitlab-runner user) without --privileged, but adds the CAP_SYS_ADMIN capability and exposes the /dev/fuse device. Still, this is running as a non-root user on the machine, so I think it is an improvement compared to using --privileged, and also compared to running podman as root.
[[runners]]
  [runners.docker]
    privileged = false
    cap_add = ["SYS_ADMIN"]
    devices = ["/dev/fuse"]
Still, I worry about the security properties of such a setup, so I only enable these settings for a separately configured runner instance that I use when I need this docker-in-docker (oh, I meant buildah-in-podman) functionality. I found one article discussing Rootless Podman without the privileged flag that suggests isolation=chroot, but I have yet to make this work. Suggestions for improvement are welcome. Happy Riscv64 Building! Update 2025-05-05: I was able to make it work without the SYS_ADMIN capability too, with a GitLab /etc/gitlab-runner/config.toml like the following:
[[runners]]
  [runners.docker]
    privileged = false
    devices = ["/dev/fuse"]
And passing --isolation chroot to Buildah like this:
buildah build --isolation chroot -t $CI_REGISTRY_IMAGE:name image/
I've updated the blog title to add the word capability-less as well. I've confirmed that the same recipe works with podman on a ppc64el platform too. The remaining loopholes are escaping from the chroot into the non-root gitlab-runner user, and escalating that privilege to root. The /dev/fuse and sub-uid/gid settings may be privilege escalation vectors here; otherwise I believe you've found a serious software security issue rather than a configuration mistake.

12 April 2025

Kalyani Kenekar: Nextcloud Installation HowTo: Secure Your Data with a Private Cloud

Nextcloud is an open-source software suite that enables you to set up and manage your own cloud storage and collaboration platform. It offers a range of features similar to popular cloud services like Google Drive or Dropbox, but with the added benefit of complete control over your data and the server where it is hosted. I wanted to have a look at Nextcloud and the steps to set up my own instance, with a PostgreSQL-based database together with NGinx as the webserver to serve the WebUI. Before doing a full productive setup I wanted to play around locally with all the needed steps, and worked them all out within a KVM machine. While doing this I wrote down some notes, mostly to document for myself what I need to do to get a Nextcloud installation running and usable. So this manual describes how to set up a Nextcloud installation on Debian 12 Bookworm, based on NGinx and PostgreSQL.

Nextcloud Installation

Install PHP and PHP extensions for Nextcloud

Nextcloud is basically a PHP application, so we need to install PHP packages to get it working in the end. The following steps are based on the upstream documentation about how to install your own Nextcloud instance. Installing the virtual package php on a Debian Bookworm system would pull in the dependent meta package php8.2. This package itself would then also pull in the package libapache2-mod-php8.2 as a dependency, which in turn would pull in the apache2 webserver as a dependent package. This is something I didn't want to have, as I want to use the NGinx that is already installed on the system instead. To get this we need to explicitly exclude the package libapache2-mod-php8.2 from the list of packages which we want to install: appending a hyphen - at the end of a package name tells apt to ignore that package as a dependency, so we need to use libapache2-mod-php8.2- within the package list. I ended up with this call to get all needed dependencies installed.
$ sudo apt install php php-cli php-fpm php-json php-common php-zip \
  php-gd php-intl php-curl php-xml php-mbstring php-bcmath php-gmp \
  php-pgsql libapache2-mod-php8.2-
  • Check php version (optional step) $ php -v
PHP 8.2.28 (cli) (built: Mar 13 2025 18:21:38) (NTS)
Copyright (c) The PHP Group
Zend Engine v4.2.28, Copyright (c) Zend Technologies
    with Zend OPcache v8.2.28, Copyright (c), by Zend Technologies
  • After installing all the packages, edit the php.ini file: $ sudo vi /etc/php/8.2/fpm/php.ini
  • Change the following settings per your requirements:
max_execution_time = 300
memory_limit = 512M
post_max_size = 128M
upload_max_filesize = 128M
  • To make these settings effective, restart the php-fpm service $ sudo systemctl restart php8.2-fpm

Install PostgreSQL, create a database and user

This manual assumes we will use a PostgreSQL server on localhost; if you have a server instance on some remote site you can skip the installation step here. $ sudo apt install postgresql postgresql-contrib postgresql-client
  • Check version after installation (optional step): $ sudo -i -u postgres $ psql --version
  • This output will be seen: psql (15.12 (Debian 15.12-0+deb12u2))
  • Exit the PSQL shell by using the command \q. postgres=# \q
  • Exit the CLI of the postgres user: postgres@host:~$ exit

Create a PostgreSQL Database and User:
  1. Create a new PostgreSQL user (Use a strong password!): $ sudo -u postgres psql -c "CREATE USER nextcloud_user PASSWORD '1234';"
  2. Create new database and grant access: $ sudo -u postgres psql -c "CREATE DATABASE nextcloud_db WITH OWNER nextcloud_user ENCODING=UTF8;"
  3. (Optional) Check if we now can connect to the database server and the database in detail (you will get a question about the password for the database user!). If this is not working it makes no sense to proceed further; we need to fix the access first! $ psql -h localhost -U nextcloud_user -d nextcloud_db or $ psql -h 127.0.0.1 -U nextcloud_user -d nextcloud_db
  • Log out from postgres shell using the command \q.

Download and install Nextcloud
  • Use the following command to download the latest version of Nextcloud: $ wget https://download.nextcloud.com/server/releases/latest.zip
  • Extract file into the folder /var/www/html with the following command: $ sudo unzip latest.zip -d /var/www/html
  • Change ownership of the /var/www/html/nextcloud directory to www-data. $ sudo chown -R www-data:www-data /var/www/html/nextcloud

Configure NGinx for Nextcloud to use a certificate

In case you want to use a self-signed certificate, e.g. if you are playing around to set up Nextcloud locally for testing purposes, you can do the following steps.
  • Generate the private key and certificate: $ sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout nextcloud.key -out nextcloud.crt $ sudo cp nextcloud.crt /etc/ssl/certs/ && sudo cp nextcloud.key /etc/ssl/private/
  • If you want or need to use the service of Let's Encrypt (or similar), drop the step above and create your required key data by using this command: $ sudo certbot --nginx -d nextcloud.your-domain.com You will need to adjust the path to the key and certificate in the next step!
  • Change the NGinx configuration: $ sudo vi /etc/nginx/sites-available/nextcloud.conf
  • Add the following snippet into the file and save it.
# /etc/nginx/sites-available/nextcloud.conf
upstream php-handler {
    #server 127.0.0.1:9000;
    server unix:/run/php/php8.2-fpm.sock;
}

# Set the `immutable` cache control options only for assets with a cache
# busting `v` argument

map $arg_v $asset_immutable {
    "" "";
    default ", immutable";
}

server {
    listen 80;
    listen [::]:80;
    # Adjust this to the correct server name!
    server_name nextcloud.local;

    # Prevent NGinx HTTP Server Detection
    server_tokens off;

    # Enforce HTTPS
    return 301 https://$server_name$request_uri;
}

server {
    listen 443      ssl http2;
    listen [::]:443 ssl http2;
    # Adjust this to the correct server name!
    server_name nextcloud.local;

    # Path to the root of your installation
    root /var/www/html/nextcloud;

    # Use Mozilla's guidelines for SSL/TLS settings
    # https://mozilla.github.io/server-side-tls/ssl-config-generator/
    # Adjust the usage and paths of the correct key data! E.g. if you want to use Let's Encrypt key material!
    ssl_certificate /etc/ssl/certs/nextcloud.crt;
    ssl_certificate_key /etc/ssl/private/nextcloud.key;
    # ssl_certificate /etc/letsencrypt/live/nextcloud.your-domain.com/fullchain.pem;
    # ssl_certificate_key /etc/letsencrypt/live/nextcloud.your-domain.com/privkey.pem;

    # Prevent NGinx HTTP Server Detection
    server_tokens off;

    # HSTS settings
    # WARNING: Only add the preload option once you read about
    # the consequences in https://hstspreload.org/. This option
    # will add the domain to a hardcoded list that is shipped
    # in all major browsers and getting removed from this list
    # could take several months.
    #add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload" always;

    # set max upload size and increase upload timeout:
    client_max_body_size 512M;
    client_body_timeout 300s;
    fastcgi_buffers 64 4K;

    # Enable gzip but do not remove ETag headers
    gzip on;
    gzip_vary on;
    gzip_comp_level 4;
    gzip_min_length 256;
    gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
    gzip_types application/atom+xml text/javascript application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/wasm application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;

    # Pagespeed is not supported by Nextcloud, so if your server is built
    # with the `ngx_pagespeed` module, uncomment this line to disable it.
    #pagespeed off;

    # This setting allows you to optimize the HTTP2 bandwidth.
    # See https://blog.cloudflare.com/delivering-http-2-upload-speed-improvements/
    # for tuning hints
    client_body_buffer_size 512k;

    # HTTP response headers borrowed from Nextcloud `.htaccess`
    add_header Referrer-Policy                   "no-referrer"       always;
    add_header X-Content-Type-Options            "nosniff"           always;
    add_header X-Frame-Options                   "SAMEORIGIN"        always;
    add_header X-Permitted-Cross-Domain-Policies "none"              always;
    add_header X-Robots-Tag                      "noindex, nofollow" always;
    add_header X-XSS-Protection                  "1; mode=block"     always;

    # Remove X-Powered-By, which is an information leak
    fastcgi_hide_header X-Powered-By;

    # Set .mjs and .wasm MIME types
    # Either include it in the default mime.types list
    # and include that list explicitly or add the file extension
    # only for Nextcloud like below:
    include mime.types;
    types {
        text/javascript js mjs;
        application/wasm wasm;
    }

    # Specify how to handle directories -- specifying `/index.php$request_uri`
    # here as the fallback means that NGinx always exhibits the desired behaviour
    # when a client requests a path that corresponds to a directory that exists
    # on the server. In particular, if that directory contains an index.php file,
    # that file is correctly served; if it doesn't, then the request is passed to
    # the front-end controller. This consistent behaviour means that we don't need
    # to specify custom rules for certain paths (e.g. images and other assets,
    # `/updater`, `/ocs-provider`), and thus
    # `try_files $uri $uri/ /index.php$request_uri`
    # always provides the desired behaviour.
    index index.php index.html /index.php$request_uri;

    # Rule borrowed from `.htaccess` to handle Microsoft DAV clients
    location = / {
        if ( $http_user_agent ~ ^DavClnt ) {
            return 302 /remote.php/webdav/$is_args$args;
        }
    }

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    # Make a regex exception for `/.well-known` so that clients can still
    # access it despite the existence of the regex rule
    # `location ~ /(\.|autotest|...)` which would otherwise handle requests
    # for `/.well-known`.
    location ^~ /.well-known {
        # The rules in this block are an adaptation of the rules
        # in `.htaccess` that concern `/.well-known`.

        location = /.well-known/carddav { return 301 /remote.php/dav/; }
        location = /.well-known/caldav  { return 301 /remote.php/dav/; }

        location /.well-known/acme-challenge { try_files $uri $uri/ =404; }
        location /.well-known/pki-validation { try_files $uri $uri/ =404; }

        # Let Nextcloud's API for `/.well-known` URIs handle all other
        # requests by passing them to the front-end controller.
        return 301 /index.php$request_uri;
    }

    # Rules borrowed from `.htaccess` to hide certain paths from clients
    location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)(?:$|/)  { return 404; }
    location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console)                { return 404; }

    # Ensure this block, which passes PHP files to the PHP process, is above the blocks
    # which handle static assets (as seen below). If this block is not declared first,
    # then NGinx will encounter an infinite rewriting loop when it prepends `/index.php`
    # to the URI, resulting in an HTTP 500 error response.
    location ~ \.php(?:$|/) {
        # Required for legacy support
        rewrite ^/(?!index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|ocs-provider\/.+|.+\/richdocumentscode(_arm64)?\/proxy) /index.php$request_uri;

        fastcgi_split_path_info ^(.+?\.php)(/.*)$;
        set $path_info $fastcgi_path_info;

        try_files $fastcgi_script_name =404;

        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $path_info;
        fastcgi_param HTTPS on;

        fastcgi_param modHeadersAvailable true;         # Avoid sending the security headers twice
        fastcgi_param front_controller_active true;     # Enable pretty urls
        fastcgi_pass php-handler;

        fastcgi_intercept_errors on;
        fastcgi_request_buffering off;

        fastcgi_max_temp_file_size 0;
    }

    # Serve static files
    location ~ \.(?:css|js|mjs|svg|gif|png|jpg|ico|wasm|tflite|map|ogg|flac)$ {
        try_files $uri /index.php$request_uri;
        # HTTP response headers borrowed from Nextcloud `.htaccess`
        add_header Cache-Control                     "public, max-age=15778463$asset_immutable";
        add_header Referrer-Policy                   "no-referrer"       always;
        add_header X-Content-Type-Options            "nosniff"           always;
        add_header X-Frame-Options                   "SAMEORIGIN"        always;
        add_header X-Permitted-Cross-Domain-Policies "none"              always;
        add_header X-Robots-Tag                      "noindex, nofollow" always;
        add_header X-XSS-Protection                  "1; mode=block"     always;
        access_log off;     # Optional: Don't log access to assets
    }

    location ~ \.woff2?$ {
        try_files $uri /index.php$request_uri;
        expires 7d;         # Cache-Control policy borrowed from `.htaccess`
        access_log off;     # Optional: Don't log access to assets
    }

    # Rule borrowed from `.htaccess`
    location /remote {
        return 301 /remote.php$request_uri;
    }

    location / {
        try_files $uri $uri/ /index.php$request_uri;
    }
}
  • Symlink the configuration from sites-available to sites-enabled. $ ln -s /etc/nginx/sites-available/nextcloud.conf /etc/nginx/sites-enabled/
  • Restart NGinx and access the URI in the browser.
  • Go through the installation of Nextcloud.
  • The user data in the installation dialog should point to e.g. administrator or similar; that user will be given administrative access rights in Nextcloud!
  • To adjust the database connection details you have to edit the file $install_folder/config/config.php. Here in the example within this post, that means you would need to modify /var/www/html/nextcloud/config/config.php to control or change the database connection.
---%<---
    'dbname' => 'nextcloud_db',
    'dbhost' => 'localhost', #(Or your remote PostgreSQL server address if you have.)
    'dbport' => '',
    'dbtableprefix' => 'oc_',
    'dbuser' => 'nextcloud_user',
    'dbpassword' => '1234', #(The password you set for database user.)
--->%---
After the installation and setup of the Nextcloud PHP application there are more steps to be done. Have a look into the WebUI for what you will need to do as additional steps, like creating a cronjob or tuning some more PHP configuration. If you've done everything correctly you should see a login page similar to this: Login Page of your Nextcloud instance
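For the cronjob part, the background-jobs entry recommended by the Nextcloud admin documentation looks like the following; a sketch using the install path from this post, to be verified against what the WebUI itself suggests:

$ sudo crontab -u www-data -e
# run Nextcloud background jobs every 5 minutes as the www-data user
*/5 * * * * php -f /var/www/html/nextcloud/cron.php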

Optional other steps for more enhanced configuration modifications

Move the data folder to somewhere else

The data folder is the root folder for all user content. By default it is located in $install_folder/data, so in our case here it is in /var/www/html/nextcloud/data.
  • Move the data directory outside the web server document root. $ sudo mv /var/www/html/nextcloud/data /var/nextcloud_data
  • Ensure the access permissions (mostly not needed if you just moved the folder). $ sudo chown -R www-data:www-data /var/nextcloud_data $ sudo chown -R www-data:www-data /var/www/html/nextcloud/
  • Update the Nextcloud configuration:
    1. Open the config/config.php file of your Nextcloud installation. $ sudo vi /var/www/html/nextcloud/config/config.php
    2. Update the datadirectory parameter to point to the new location of your data directory.
  ---%<---
     'datadirectory' => '/var/nextcloud_data'
  --->%---
  • Restart NGinx service: $ sudo systemctl restart nginx

Make the installation available for multiple FQDNs on the same server
  • Adjust the Nextcloud configuration to listen and accept requests for different domain names. Configure and adjust the key trusted_domains accordingly. $ sudo vi /var/www/html/nextcloud/config/config.php
  ---%<---
    'trusted_domains' => 
    array (
      0 => 'domain.your-domain.com',
      1 => 'domain.other-domain.com',
    ),
  --->%---
  • Create and adjust the needed site configurations for the webserver (see the sketch after this list).
  • Restart the NGinx unit.
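For the webserver part mentioned above, a minimal sketch for NGinx is to list all names in the server_name directives of the site configuration from earlier (using the hypothetical domains of this example), or to copy the site file once per domain with adjusted names and certificates:

server_name domain.your-domain.com domain.other-domain.com;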

An error message about .ocdata might occur
  • .ocdata is not found inside the data directory
    • Create the file using touch and set the necessary permissions. $ sudo touch /var/nextcloud_data/.ocdata $ sudo chown -R www-data:www-data /var/nextcloud_data/

The password for the administrator user is unknown
  1. Log in to your server:
    • SSH into the server where your PostgreSQL database is hosted.
  2. Switch to the PostgreSQL user:
    • $ sudo -i -u postgres
  3. Access the PostgreSQL command line
    • psql
  4. List the databases: (If you're unsure which database is being used by Nextcloud, you can list all the databases with the list command.)
    • \l
  5. Switch to the Nextcloud database:
    • Switch to the specific database that Nextcloud is using.
    • \c nextcloud_db
  6. Reset the password for the Nextcloud database user:
    • ALTER USER nextcloud_user WITH PASSWORD 'new_password';
  7. Exit the PostgreSQL command line:
    • \q
  8. Verify Database Configuration:
    • Check the database connection details in the config.php file to ensure they are correct. $ sudo vi /var/www/html/nextcloud/config/config.php
    • Replace nextcloud_db, nextcloud_user, and your_password with your actual database name, user, and password.
---%<---
    'dbname' => 'nextcloud_db',
    'dbhost' => 'localhost', #(or your PostgreSQL server address)
    'dbport' => '',
    'dbtableprefix' => 'oc_',
    'dbuser' => 'nextcloud_user',
    'dbpassword' => '1234', #(The password you set for nextcloud_user.)
--->%---
  9. Restart NGinx and access the UI through the browser.
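Note that the steps above reset the password of the database user, not of the Nextcloud account itself. If it is really the Nextcloud administrator account whose password is lost, Nextcloud's occ tool can reset it directly; a sketch, assuming the account was named administrator as suggested during the installation dialog:

$ sudo -u www-data php /var/www/html/nextcloud/occ user:resetpassword administrator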

11 April 2025

Gunnar Wolf: Culture as a positive freedom

This post is an unpublished review for La cultura libre como libertad positiva
Please note: This review is not meant to be part of my usual contributions to ACM's Computing Reviews. I do want, though, to share it with people that follow my general interests and such stuff.
This article was published almost a year ago, and I read it just after relocating from Argentina back to Mexico. I came from a country starting to realize the shock it meant to be ruled by an autocratic, extreme right-wing president willing to overrun its Legislative and bent on destroying the State itself, not too different from what we are now witnessing on a global level. I have been a strong proponent and defender of Free Software and of Free Culture throughout my adult life. And I have been a Socialist since my early teenage years. I cannot say there is a strict correlation between them, but there is a big intersection of people and organizations who align with both, and Ártica (and Mariana Fossatti) are clearly among them. Freedom is a word that has brought us many misunderstandings throughout the past many decades. We will say that Freedom can only be brought hand-in-hand with Equality, Fairness and Tolerance. But the extreme right wing (is it still bordering Fascism, or has it finally embraced it as its true self?) that has grown so much in many countries over the last years also seems to have appropriated the term, even taking it as their definition. In English (particularly, in USA English), liberty is a more patriotic term, and freedom is more personal (although the term used for the market is free market); in Spanish, we conflate them both under libre. Mariana refers to a third blog, by Rolando Astarita, where the author introduces the concepts of positive and negative freedom/liberties. Astarita characterizes negative freedom as an individual's possibility to act without interference or coercion, limited by other people's freedom, while positive freedom is the real capacity to exercise one's autonomy and achieve self-realization; this does not depend on a person on their own, but on different social conditions. Astarita understands the Marxist tradition to emphasize positive freedom. Mariana brings this definition to our usual discussion on licensing: if we follow negative freedom, we will understand free licenses as the idea of access without interference to cultural or information goods, as long as it's legal (in order not to infringe others' property rights). Licensing is seen as a private contract, and each individual can grant access and use of their works at will. The previous definition might be enough for many but, she says, it is missing something important: the practical effect of many individuals renouncing a bit of control over their property rights produces, collectively, the common goods. They constitute a pool of knowledge or culture that is no longer an individual, contractual issue, but grows and becomes social, collective. Negative freedom does not go further, but positive liberty allows broadening the horizon, and takes us to a notion of free culture that, by strengthening the commons, widens social rights. She closes the article by stating (and I'll happily sign as if they were my own words) that we are Free Culture militants "not only because it affirms the individual sovereignty to deliver and receive cultural resources, in an intellectual property framework guaranteed by the state. Our militancy is of widening the cultural enjoyment and participation to the collective through the defense of common cultural goods (…) We want to build Free Culture for a Free Society. But a Free Society is not a society of free owners, but a society emancipated from the structures of economic power and social privilege that block this potential collective".

4 April 2025

Gunnar Wolf: Naming things revisited

How long has it been since you last saw a conversation over different blogs syndicated at the same planet? Well, it's one of the good memories of the early 2010s. And there is an opportunity to re-engage! I came across Evgeni's post "naming things is hard" in Planet Debian. So, what names have I given my computers? I have had many since the mid-1990s. I also had several during the decade before that, but before Linux, my computers didn't have a formal name. Naming my computers is something nice that Linux gave me. I have forgotten many. Some of the names I have used:

9 March 2025

Lisandro Damián Nicanor Pérez Meyer: Bahía Blanca floods - Mother nature says: no Nuremberg for you today

Update 20250309 13:20-03:00 - How to help: A friend of mine living in the USA sent me this link to help the flood victims: Support Bahía Blanca (Argentina) Flood Victims

Original blog post

These are not good news. In fact, much the contrary. Compared to the real issue, the fact that I'm not able to attend Embedded World at Nuremberg is, well, a detail. Or at least that's what I'm forcing myself to believe, as I REALLY wanted to be there. But mother nature said otherwise. Park "Dr. Alberto Martinelli", Las Cañitas, Bahía Blanca (Google Maps). Bahía Blanca, the city I live in, has received a lot of rainfall. Really, a lot. Let me introduce the number like this: the previous highest recorded measurement was 170mm (6.69 inch)... in a month. Yesterday, Friday 07, we had more than 400mm (15.75 inch) in 9 hours. But those are just numbers. Some things are better seen in images. I'll start with some soft ones: street sinks in Fournier street near Cambaceres (Google Maps). I also happen to do figure skating in the same school as the 4-times world champions (where "world" means the whole world), the Roller Dreams precision skating team, from Club El Nacional. Our skating rink got severely damaged with the hail we had like 3 weeks ago (yes, we had hail too!!!). Now it's just impossible: Roller Dreams CEN skating rink.

The "real" thing

Let's get to the heavy, heartbreaking part. I did go to downtown Bahía Blanca, but during the night, so let me share some links, most of them in Spanish, but images are images: my alma mater, Universidad Nacional del Sur, lost its main library, a great part of the Physics department and a lot of labs :-( A nearby town, General Cerri, had even worse luck. Bahía Blanca, a city of 300k+ people, has around 400 evacuated people. General Cerri, a town of 3000? people, had at least 800.

Bahía Blanca, devil's land

Every place has its legends. We do too. This land was called "Huecuvú Mapú", something like "Devil's land", by the original inhabitants of the zone, due to its harsh climate: strong winters and hot summers, coupled with fierce wind. But back in 1855 the Cacique (chief) José María Bulnes Yanquetruz had a peace agreement with commander Nicanor Otamendi. But a battle ensued, which Yanquetruz won. At this point history differs depending upon who tells it. Some say Yanquetruz was assigned a military grade as Captain of the indigenous auxiliary forces and provided a military suit, some say he stole it, some say this was a setup by another chief wanting to disrupt the peace. But what is known is that Yanquetruz was killed, and his wife, the "machi" (sorceress), issued a curse over the land that would last 1000 years, and the curse was on the climate.

Aftermath

No, we are not there yet. This has just happened. The third violent climate occurrence in 15 months. The city needs to mourn and start healing itself. Time will tell.

9 February 2025

Philipp Kern: 20 years

20 years ago, I got my Debian Developer account. I was 18 at the time, it was Shrove Tuesday and - as is customary - I was drunk when I got the email. There was so much that I did not know - which is also why the process took 1.5 years from the time I applied. I mostly only maintained a package or two. I'm still amazed that Christian Perrier and Joerg Jaspert put sufficient trust in me at that time. Nevertheless now feels like a good time for a personal reflection of my involvement in Debian.
During my studies I took on more things. In January 2008 I joined the Release Team as an assistant, which taught me a lot of code review. I have been an Application Manager on the side.
Going to my first Debconf was really a turning point. My first one was Mar del Plata in Argentina in August 2008, when I was 21. That was quite an excitement, traveling that far from Germany for the first time. The personal connections I made there made quite the difference. It was also a big boost for motivation. I attended 8 (Argentina), 9 (Spain), 10 (New York), 11 (Bosnia and Herzegovina), 12 (Nicaragua), 13 (Switzerland), 14 (Portland), 15 (Germany), 16 (South Africa), and hopefully I'll make it to this year's in Brest. At all of them I did not see much of the countries as I prioritized all of my time focused on Debian, even skipping some of the day trips in favor of team meetings. Yet I am very grateful to the project (and to my employer) for shipping me there.

I ended up as Stable Release Manager for a while, from August 2008 - when Martin Zobel-Helas moved into DSA - until I got dropped in March 2020. I think my biggest achievements were pushing for the creation of -updates in favor of a separate volatile archive and a change of the update policy to allow for more common sense updates in the main archive vs. the very strict "breakage or security" policy we had previously. I definitely need to call out Adam D. Barratt for being the partner in crime, holding up the fort for even longer.

In 2009 I got too annoyed at the existing wanna-build team not being responsive anymore and pushed for the system to be given to a new team. I did not build it and significant contributions were done by other people (like Andreas Barth and Joachim Breitner, and later Aurelien Jarno). I mostly reworked the way the system was triggered, investigated when it broke and was around when people wanted things merged.
In the meantime I worked sys/netadmin jobs while at university, both paid and as a volunteer with the students' council. For a year or two I was the administrator of a System z mainframe IBM donated to my university. We had a mainframe course and I attended two related conferences. That's where my s390(x) interest came from, although credit for the port needs to go to Aurelien Jarno.
Since completing university in 2013 I have been working for a company for almost 12 years. Debian experience was very relevant to the job and I went on maintaining a Linux distro or two at work - before venturing off into security hardening. People in megacorps - in my humble opinion - disappear from the volunteer projects because a) they might previously have been studying and thus had a lot more time on their hands and b) the job is too similar to the volunteer work and thus the same brain cells used for work are exhausted and can't be easily reused for volunteer work. I kept maintaining a couple of things (buildds, some packages) - mostly because of a sense of commitment and responsibility, but otherwise kind of scaled down my involvement. I also felt less connected as I dropped off IRC.

Last year I finally made it to Debian events again: MiniDebconf in Berlin, where we discussed the aftermath of the xz incident, and the Debian BSP in Salzburg. I rejoined IRC using the Matrix bridge. That also rekindled my involvement, with me guiding a new DD through NM and ending up in DSA. To be honest, only in the last two or three years I felt like a (more) mature old-timer.
I have a new gig at work lined up to start soon and next to that I have sysadmining for Debian. It is pretty motivating to me that I can just get things done - something that is much harder to achieve at work due to organizational complexities. It balances out some frustration I'd otherwise have. The work is different enough to be enjoyable and the people I work with are great.

The future
I still think the work we do in Debian is important, as much as I see a lack of appreciation in a world full of containers. We are reaping most of the benefits of standing on the shoulders of giants and of great decisions made in the past (e.g. the excellent Debian policy, but also the organizational model) that made Debian what it is today.

Given the increase in size and complexity of what Debian ships - and the somewhat dwindling resource of developer time, it would benefit us to have better processes for large-scale changes across all packages. I greatly respect the horizontal effects that are currently being driven and that suck up a lot of energy.

A lot of our infrastructure is also aging and not super well maintained. Many take it for granted that the services we have keep existing, but most are only maintained by a person or two, if even. Software stacks are aging and it is even a struggle to have all necessary packages in the next release.

Hopefully I can contribute a bit or two to these efforts in the future.

Antoine Beaupré: Qalculate hacks

This is going to be a controversial statement because some people are absolute nerds about this, but, I need to say it. Qalculate is the best calculator that has ever been made. I am not going to try to convince you of this, I just wanted to put my bias out there before writing down those notes. I am a total fan. This page will collect my notes of cool hacks I do with Qalculate. Most examples are copy-pasted from the command-line interface (qalc(1)), but I typically use the graphical interface as it's slightly better at displaying complex formulas. Discoverability is obviously also better for the cornucopia of features this fantastic application ships.

Qalc commandline primer

On Debian, Qalculate's CLI interface can be installed with:
apt install qalc
Then you start it with the qalc command, and end up on a prompt:
anarcat@angela:~$ qalc
> 
Then it's a normal calculator:
anarcat@angela:~$ qalc
> 1+1
  1 + 1 = 2
> 1/7
  1 / 7 ≈ 0.1429
> pi
  pi ≈ 3.142
> 
There's a bunch of variables to control display, approximation, and so on:
> set precision 6
> 1/7
  1 / 7 ≈ 0.142857
> set precision 20
> pi
  pi ≈ 3.1415926535897932385
When I need more, I typically browse around the menus. One big issue I have with Qalculate is there are a lot of menus and features. I had to fiddle quite a bit to figure out that set precision command above. I might add more examples here as I find them.

Bandwidth estimates

I often use the data units to estimate bandwidths. For example, here's what 1 megabit per second is over a month ("about 300 GiB"):
> 1 megabit/s * 30 day to gibibyte 
  (1 megabit/second) × (30 days) ≈ 301.7 GiB
Or, "how long will it take to download X", in this case, 1GiB over a 100 mbps link:
> 1GiB/(100 megabit/s)
  (1 gibibyte) / (100 megabits/second) ≈ 1 min + 25.90 s

Password entropy

To calculate how much entropy (in bits) a given password structure has, you count the number of possibilities in each entry (say, [a-z] is 26 possibilities, "one word in an 8k dictionary" is 8000), take the base-2 logarithm, and multiply by the number of entries. For example, an alphabetic 14-character password is:
> log2(26*2)*14
  log₂(26 × 2) × 14 ≈ 79.81
... 80 bits of entropy. To get the equivalent in a Diceware password with an 8000-word dictionary, you would need:
> log2(8k)*x = 80
  (log₂(8 × 1000) × x) = 80 →
  x ≈ 6.170
... about 6 words, which gives you:
> log2(8k)*6
  log₂(8 × 1000) × 6 ≈ 77.79
78 bits of entropy.

Exchange rates

You can convert between currencies!
> 1 EUR to USD
  1 EUR ≈ 1.038 USD
Even fake ones!
> 1 BTC to USD
  1 BTC ≈ 96712 USD
This relies on a database pulled from the internet (typically the European Central Bank rates, see the source). It will prompt you if it's too old:
It has been 256 days since the exchange rates last were updated.
Do you wish to update the exchange rates now? y
As a reader pointed out, you can set the refresh rate for currencies, as some countries will require way more frequent exchange rates. The graphical version has a little graphical indicator that, when you mouse over, tells you where the rate comes from.

Other conversions

Here are other neat conversions extracted from my history:
> teaspoon to ml
  teaspoon = 5 mL
> tablespoon to ml
  tablespoon = 15 mL
> 1 cup to ml
  1 cup ≈ 236.6 mL
> 6 L/100km to mpg
  (6 liters) / (100 kilometers) ≈ 39.20 mpg
> 100 kph to mph
  100 kph ≈ 62.14 mph
> (108km - 72km) / 110km/h
  ((108 kilometers) − (72 kilometers)) / (110 kilometers/hour) ≈
  19 min + 38.18 s

Completion time estimates This is a more involved example I often do.

Background Say you have started a long-running copy job and you don't have the luxury of a pipe you can insert pv(1) into to get a nice progress bar. For example, rsync or cp -R can have that problem (but not tar!). (Yes, you can use --info=progress2 in rsync, but that estimate is incremental and therefore inaccurate unless you disable the incremental mode with --no-inc-recursive, but then you pay a huge up-front wait cost while the entire directory gets crawled.)
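For comparison, when you do control the pipeline, the classic trick is to push a tar stream through pv, giving it the total size up front so the ETA is meaningful. A minimal sketch, with placeholder paths:
# pipe a tar stream through pv, telling it the total size in bytes so the ETA is meaningful
tar -C /srv -cf - . | pv -s "$(du -sb /srv | cut -f1)" | tar -C /srv-zfs -xf -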

Extracting a process start time The first step is to gather data: find the process start time. If you were unfortunate enough to forget to run date --iso-8601=seconds before starting, you can get a similar timestamp with stat(1) on the process directory in /proc:
$ stat /proc/11232
  File: /proc/11232
  Size: 0               Blocks: 0          IO Block: 1024   directory
Device: 0,21    Inode: 57021       Links: 9
Access: (0555/dr-xr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2025-02-07 15:50:25.287220819 -0500
Modify: 2025-02-07 15:50:25.287220819 -0500
Change: 2025-02-07 15:50:25.287220819 -0500
 Birth: -
So our start time is 2025-02-07 15:50:25; we shave off the nanoseconds, as they're below our precision noise floor. If you're not dealing with an actual UNIX process, you need to figure out a start time some other way: a SQL query, a network request, whatever; exercise for the reader.
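Alternatively, standard procps ps can print the start time directly (an equivalent trick, not what the notes above used; output reconstructed):
$ ps -o lstart= -p 11232
Fri Feb  7 15:50:25 2025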

Saving a variable This is optional, but for the sake of demonstration, let's save this as a variable:
> start="2025-02-07 15:50:25"
  save("2025-02-07T15:50:25"; start; Temporary; ; 1) =
  "2025-02-07T15:50:25"

Estimating data size Next, estimate your data size. That will vary wildly with the job you're running: it can be the number of files, documents being processed, rows to be destroyed in a database, whatever. In this case, rsync tells me how many bytes it has transferred so far:
# rsync -ASHaXx --info=progress2 /srv/ /srv-zfs/
2.968.252.503.968  94%    7,63MB/s    6:04:58  xfr#464440, ir-chk=1000/982266) 
Strip off the weird separator dots in there, because they will confuse Qalculate, which would count this as:
  2.968252503968 bytes ≈ 2.968 B
Or, essentially, three bytes. We actually transferred almost 3TB here:
  2968252503968 bytes ≈ 2.968 TB
So let's use that. If you had the misfortune of making rsync silent, but were lucky enough to transfer entire partitions, you can use df (without -h! we want to be more precise here), in my case:
Filesystem              1K-blocks       Used  Available Use% Mounted on
/dev/mapper/vg_hdd-srv 7512681384 7258298036  179205040  98% /srv
tank/srv               7667173248 2870444032 4796729216  38% /srv-zfs
(Otherwise, of course, you use du -sh $DIRECTORY.)

Digression over bytes Those are 1K blocks, which are actually (and rather unfortunately) Ki, or "kibibytes" (1024 bytes), not "kilobytes" (1000 bytes). Ugh.
> 2870444032 KiB
  2870444032 kibibytes ≈ 2.939 TB
> 2870444032 kB
  2870444032 kilobytes ≈ 2.870 TB
At this scale, those details matter quite a bit: we're talking about a 69GB (64GiB) difference here:
> 2870444032 KiB - 2870444032 kB
  (2870444032 kibibytes) − (2870444032 kilobytes) ≈ 68.89 GB
Anyway. Let's take 2968252503968 bytes as our current progress. Our entire dataset is 7258298064 KiB, as seen above.
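As a sanity check, the same conversion trick from above gives the full dataset in SI units (output reconstructed rather than copy-pasted):
> 7258298064 KiB to TB
  7258298064 kibibytes ≈ 7.432 TB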

Solving a cross-multiplication We have three out of four variables of our equation here, so we can already solve:
> (now-start)/x = (2996538438607 bytes)/(7258298064 KiB) to h
  ((actual − start) / x) = ((2996538438607 bytes) / (7258298064
  kibibytes))
  x ≈ 59.24 h
The entire transfer will take about 60 hours to complete! Note that's not the time left, that is the total time. To break this down step by step, we could calculate how long it has taken so far:
> now-start
  now − start ≈ 23 h + 53 min + 6.762 s
> now-start to s
  now − start ≈ 85987 s
... and do the cross-multiplication manually, it's basically:
x/(now-start) = (total/current)
so:
x = (total/current) * (now-start)
or, in Qalc:
> ((7258298064  kibibytes) / ( 2996538438607 bytes) ) *  85987 s
  ((7258298064 kibibytes) / (2996538438607 bytes)) × (85987 secondes) ≈
  2 d + 11 h + 14 min + 38.81 s
It's interesting it gives us different units here! Not sure why.

Now and built-in variables The now here is actually a built-in variable:
> now
  now ≈ "2025-02-08T22:25:25"
There is a bewildering list of such variables, for example:
> uptime
  uptime = 5 d + 6 h + 34 min + 12.11 s
> golden
  golden ≈ 1.618
> exact
  golden = (√(5) + 1) / 2

Computing dates In any case, yay! We know the transfer is going to take roughly 60 hours total, and we've already spent around 24h of that, so we have ~36h left. But I did all that in my head; we can ask more of Qalc yet! Let's make another variable for that total estimated time:
> total=(now-start)/x = (2996538438607 bytes)/(7258298064 KiB)
  save(((now − start) / x) = ((2996538438607 bytes) / (7258298064
  kibibytes)); total; Temporary; ; 1) ≈
  2 d + 11 h + 14 min + 38.22 s
And we can plug that into another formula with our start time to figure out when we'll be done!
> start+total
  start + total ≈ "2025-02-10T03:28:52"
> start+total-now
  start + total − now ≈ 1 d + 11 h + 34 min + 48.52 s
> start+total-now to h
  start + total − now ≈ 35 h + 34 min + 32.01 s
That transfer has ~1d left, or 35h34m32s, and should complete around 4 in the morning on February 10th. But that's icing on top: I typically only do the cross-multiplication and calculate the remaining time in my head. I mostly did the last bit to show that Qalculate can compute dates and time differences, as long as you use ISO timestamps. Although it can also convert to and from UNIX timestamps, it cannot parse arbitrary date strings (yet?).

Other functionality Qalculate can:
  • Plot graphs;
  • Use RPN input;
  • Do all sorts of algebraic, calculus, matrix, statistics, trigonometry functions (and more!);
  • ... and so much more!
I have a hard time finding things it cannot do. When I get there, I typically resort to writing code in Python or using a spreadsheet; others will turn to more complete engines like Maple, Mathematica or R. But for daily use, Qalculate is just fantastic. And it's pink! Use it!

Further reading and installation This is just scratching the surface; the fine manual has more information, including more examples. There is also of course a qalc(1) manual page which ships an excellent EXAMPLES section. Qalculate is packaged for over 30 Linux distributions, and also ships packages for Windows and macOS. There are third-party derivatives as well, including a web version and an Android app.

Updates Colin Watson liked this blog post and was inspired to write his own hacks, similar to what's here but with extras; check it out!

8 February 2025

Thorsten Alteholz: My Debian Activities in January 2025

Debian LTS This was my one-hundred-twenty-seventh month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time, as new CVEs for ffmpeg appeared, I started working again on an update of this package. Last but not least, I did a week of FD this month and attended the monthly LTS/ELTS meeting.

Debian ELTS This month was the seventy-eighth ELTS month. During my allocated time, as new CVEs for ffmpeg appeared, I started working again on an update of this package. Last but not least, I did a week of FD this month and attended the monthly LTS/ELTS meeting.

Debian Printing This month I uploaded new packages or new upstream or bugfix versions of several packages. This work is generously funded by Freexian!

Debian Matomo This month I uploaded new packages or new upstream or bugfix versions of several packages. This work is generously funded by Freexian!

Debian Astro This month I uploaded new packages or new upstream or bugfix versions of several packages. Patrick, our Outreachy intern for the Debian Astro project, is doing very well and deals with task after task. He is working on automatic updates of the indi 3rd-party drivers, and maybe the results of his work will already be part of Trixie.

Debian IoT Unfortunately I didn't find any time to work on this topic.

Debian Mobcom This month I uploaded new packages or new upstream or bugfix versions of several packages.

misc This month I uploaded new upstream or bugfix versions of several packages.

FTP master This month I accepted 385 and rejected 37 packages. The overall number of packages that got accepted was 402.

26 November 2024

Sandro Knauß: Akademy 2024 in Würzburg

In order to prepare for Akademy I started some days before to give my Librem 5 (an open hardware phone) another try, and ended up with a non-starting Plasma 6. This issue was actually already known but hadn't been addressed. In the end I reached Akademy with my Librem 5 running phosh (which is GNOME based), in order to have something working. I met Bhushan and Bart, who took care of it, and the issue was fixed two days later, so I could finally install Plasma 6 on it. The last time I tested my Librem 5 with Plasma 5 it felt sluggish and did not work well. But this time I was impressed how well the system reacts. Sure, there are some rough edges here and there, but in the bigger picture it is quite usable. One annoying issue is that the camera only works with one app, and the other issue is the battery capacity: you have to charge it once a day. Because of the missing QR reader that can use the camera, getting data onto the phone was quite challenging. Unfortunately the conference WiFi isolated the devices from each other, so I couldn't use KDE Connect to transfer data. In the end the only way to import data was taking five photos of the QR code to import my D-Ticket into Itinerary. With a device running Plasma Mobile at hand, it was directly used for an experiment: how well does Dolphin work on a Plasma Mobile device? Together with Felix Ernst we tried it out and were quite impressed that Dolphin works very well on Plasma Mobile, after some simple modifications of the UI. That resulted in a patch to add a mobile UI for Dolphin: !826. With more time to play with my Librem 5 I also found a bug in KWeather, which is missing a refresh option when used in a Plasma Mobile environment: #493656. Akademy is a good place to identify and solve such issues. It is always like that: you chat with someone, they can tell you who to ask to answer your concrete question, and in the end you can solve things that seemed unsolvable in the beginning. There was also time to look into the travelling app Itinerary. A lot of people are faced with a lot of real-world issues when not in their home town. Itinerary is the best travelling app I know about. It can import nearly every ticket you have, can get location information from restaurant websites, and allows routing to that place. It adds a lot of useful information while travelling, like current delays, platform changes, live updates for elevators, weather information at the destination, a station map, and all those features with a strong focus on privacy. In detail I found some small things to improve. I additionally learned that it has a lot of details that help people who have special needs. That is the reason why Daniel Kraut wants to get Itinerary available for iOS. As Daniel spoke out about wanting to reach this goal, others already started to implement the first steps towards building apps for iOS. This year I was volunteering to help out at Akademy. For me it was a lot of fun to meet everyone at the info desk or help the speakers set up the beamer and microphone. It is also a good opportunity to meet many new faces and get in contact with them. I also see room for improvement: as we were quite busy at the Welcome Event getting the badges out to everyone, I couldn't answer the questions from newcomers, as the queue was too long. I propose that some people volunteer to be available for questions from newcomers. Often it is hard for newcomers to get their first contact(s) in a new community. There is a lot of space for improvement to make it easier for newcomers to join.
Some ideas in my head are: make an event for the newcomers to give them some links into the community and show that everyone is friendly. The tables at the BoFs should form a circle, so everyone can see each other; it was also hard for me to understand everyone, as they mostly spoke towards the front. And then BoFs are sometimes full of very specific vocabulary, and if you are not already deep into the topic you are lost. I can see the problem: on the one side, BoFs are also the place where the people who already know the topic want to get things done; on the other side, newcomers join BoFs, are overwhelmed by too many new words, get frustrated and think that they are not welcome. Maybe at least everyone should introduce themselves by name and ask new faces why they joined the BoF, to help them join in. I'm happy that the food provided for the attendees was very delicious, and that I'm not the only mostly-vegetarian attendee, with a sizeable number even being vegan. At the conference, the KDE Eco initiative really caught my attention, as I see a lot of new possibilities in giving more reasons to switch to an Open Source system. The talk from Natalie was great: seeing how pupils get excited about Open Source and even help their grandparents move to a Linux system. As I will also start to work as a teacher, I got real ideas about what I can do at school. Together with Joseph and Nicole, we finally started to think about how to run an exploration of what kind of old hardware KDE software still runs on. The ones with the oldest hardware will get an old KDE shirt. For more information see #40. The conference was very motivating for me; I even had energy left in the evenings to do some Debian packaging, and finally pushed kweathercore to Debian and started to work on KWeather. Now I'm even more interested in the KDE apps targeting the mobile world, as I now have hardware that can actually use those apps. I really enjoyed the workshop on how to contribute to Qt by Volker Hilsheimer, especially the way Volker explained things in a very friendly manner, answered every question, and sometimes postponed questions but came back to them later. All in all I now have a good overview of how Qt development works and how I can fix bugs. The day trip to Rothenburg ob der Tauber was very interesting for me. It was the first time I visited the village, but in my memory it feels like I knew it already. I grew up reading a lot of comic albums, including the good sci-fi comic album series "Yoko Tsuno" created by the Belgian writer Roger Leloup. Yoko Tsuno is an electronics engineer, raised in Japan but now living in Belgium. In "On the edge of life" she helps her friend Ingrid, who actually lives in Rothenburg. Leloup invested a lot of time travelling to make his drawings as accurate as possible. a comic page with Yoko Tsuno in Rothenburg ob der Tauber In order to not have a hard cut from Akademy to normal life, I had lunch with Carlos to discuss KDE Neon and how we can improve the interaction with Debian. In the future this should have less friction and make both communities work together more smoothly. Additionally, as I used to develop on KDEPIM with the help of Docker images based on Neon, I asked for a kf6 dev meta package. That should help to get rid of most hand-written lists of dev packages in the Dockerfile, in order to make it simpler for new contributors to start hacking on KDEPIM.
The rest of the day I finally found time to do the normal tourist stuff: going to the Wine bridge and having a walk up to the castle of Würzburg. Unfortunately you hear a lot of car noise up there, but I could finally relax in a Japanese-designed garden. Finally, on Saturday I started my trip back. The trains towards Eberswalde were disrupted and I needed to find an alternative route. I got a little bit nervous, as it was the first time I travelled with my Librem 5 and Itinerary only, and needed to reach the next train in less than two minutes. With the indoor maps provided, I could prepare my run through the train station, and I successfully reached my next train. By the way, even if you only use KDE software, I would recommend everyone to join Akademy ;)

2 November 2024

Russell Coker: More About the Yoga Gen3

Two months ago I bought a Thinkpad X1 Yoga Gen3 [1]. I'm still very happy with it; the screen is a great improvement over the FullHD screen on my previous Thinkpad. I have yet to discover what's the best resolution to have on a laptop if price isn't an issue, but it's at least 1440p for a 14″ display: that's 210DPI. The latest Thinkpad X1 Yoga is the 7th gen and has up to 3840*2400 resolution on the internal display, for 323DPI. Apple apparently uses the term Retina Display to mean something in the range of 250DPI to 300DPI, so my current laptop is below Retina while the most expensive new Thinkpads are above it. I did some tests on external displays and found that this Thinkpad, along with a Dell Latitude of the same form factor and about the same age, can only handle one 4K display on a Thunderbolt dock and one on HDMI. On Reddit, u/Carlioso1234 pointed out this specs page, which says it supports a maximum of 3 displays including the built-in TFT [2]. The Thunderbolt/USB-C connection has a maximum resolution of 5120*2880 and the HDMI port has a maximum of 4K. The latest Yoga can support four displays in total, which means 2*5K over Thunderbolt and one 4K over HDMI. It would be nice if someone made an 8000*2880 ultrawide display that looked like 2*5K displays when connected via Thunderbolt. It would also be nice if someone made a 32″ 5K display; currently they all seem to be 27″, and I've found that even for 4K resolution 32″ is better than 27″. With the typical configuration of Linux and the BIOS, the Yoga Gen3 will have its touch screen stop working after suspend. I have confirmed this for stylus use, but as the finger-touch functionality is broken I couldn't confirm that. On r/thinkpad, u/p9k told me how to fix this problem [3]. I had to set the BIOS to Win 10 Sleep aka Hybrid sleep and then put the following in /etc/systemd/system/thinkpad-wakeup-config.service:
# https://www.reddit.com/r/thinkpad/comments/1blpy20/comment/kw7se2l/?context=3
[Unit]
Description=Workarounds for sleep wakeup source for Thinkpad X1 Yoga 3
After=sysinit.target
After=systemd-modules-load.service
[Service]
Type=oneshot
ExecStart=/bin/sh -c "echo 'enabled' > /sys/devices/platform/i8042/serio0/power/wakeup"
ExecStart=/bin/sh -c "echo 'enabled' > /sys/devices/platform/i8042/serio1/power/wakeup"
ExecStart=/bin/sh -c "echo 'LID' > /proc/acpi/wakeup"
[Install]
WantedBy=multi-user.target
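To activate it, the standard systemd steps apply (not spelled out in the original post):
systemctl daemon-reload
systemctl enable --now thinkpad-wakeup-config.service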
Now it works fine, for stylus at least. I still get kernel error messages like the following, which don't seem to cause problems:
wacom 0003:056A:5146.0005: wacom_idleprox_timeout: tool appears to be hung in-prox. forcing it out.
When it wasn t working I got the above but also kernel error messages like:
wacom 0003:056A:5146.0005: wacom_wac_queue_insert: kfifo has filled, starting to drop events
This change affected the way suspend etc. operate. Now when I connect the laptop to power it will leave suspend mode. I've configured KDE to suspend when the lid is closed and there's no monitor connected.

10 October 2024

Gunnar Wolf: Started a guide to writing FUSE filesystems in Python

As DebConf22 was coming to an end, in Kosovo, talking with Eeveelweezel they invited me to prepare a talk to give for the Chicago Python User Group. I replied that I'm not really that much of a Python guy... but would think about a topic. Two years passed. I met Eeveelweezel again at DebConf24 in Busan, South Korea. And the topic came up again. I had thought of some ideas, but none really pleased me. Again, I do write some Python when needed, and I teach using Python, as it's the language I find my students can best cope with. But delivering a talk to ChiPy? On the other hand, I have long used a very simplistic and limited filesystem I've designed as an implementation project at class: FIUnamFS (for Facultad de Ingeniería, Universidad Nacional Autónoma de México: the Engineering Faculty of Mexico's National University, where I teach. Sorry, the link is in Spanish, but you will find several implementations of it from the students). It is a toy filesystem, with as many bad characteristics as you can think of, but easy to specify and implement. It is based on contiguous file allocation, has no support for sub-directories, and is often limited to the size of a 1.44MB floppy disk. As I give this filesystem as a project to my students (and not as mere homework), I always ask them to try and provide a good, polished, professional interface, not just the simplistic menu I often get. And I tell them the best possible interface would be if they provide support for FIUnamFS transparently, usable by the user without thinking too much about it. With high probability, that would mean: use FUSE. Python FUSE But, in the six semesters I've used this project (with 30-40 students per semester group), only one student has bitten the bullet and presented a FUSE implementation. Maybe this is because it's not easy to understand how to build a FUSE-based filesystem from a high-level language such as Python? Yes, I've seen several implementation examples and even nice web pages (i.e. the examples shipped with the python-fuse module, Stavros' passthrough filesystem, Dave's filesystem based upon, and further explaining, Stavros', and several others) explaining how to provide basic functionality. I found a particularly useful presentation by Matteo Bertozzi presented ~15 years ago at PyCon4... but none of those is IMO followable enough by itself. Also, most of them are very old (maybe the world is telling me something that I refuse to understand?). And of course, there isn't a single interface to work from. In Python alone, we can find python-fuse, Pyfuse, Fusepy... where to start from? So I set out to try and help. Over the past couple of weeks, I have been slowly working on my own version, presenting it as a progressive set of tasks, adding filesystem calls, and being careful to thoroughly document what I write (but maybe my documentation ends up obfuscating the intent? I hope not and, read on, I've provided some remediation). I registered a GitLab project for a hand-holding guide to writing FUSE-based filesystems in Python. This is a project where I present several working FUSE filesystem implementations, some of them RAM-based, some passthrough-based, and I intend to add to this also filesystems backed by pseudo-block-devices (for implementations such as my FIUnamFS). So far, I have added five stepwise pieces, starting from the barest possible empty filesystem, and adding system calls (and functionality) until (so far) either a read-write filesystem in RAM with basic stat() support or a read-only passthrough filesystem.
I think providing fun or useful examples is also a good way to get students to use what I'm teaching, so I've added some ideas I've had: a DNS filesystem, an on-the-fly markdown-compiling filesystem, an unzip filesystem and an uncomment filesystem. They all provide something that could be seen as useful, in a way that's easy to teach, in just some tens of lines. And, in case my comments/documentation are too long to read, uncommentfs will happily strip all comments and whitespace automatically! So I will be delivering my talk tomorrow (2024.10.10, 18:30 GMT-6) at ChiPy (virtually). I am also presenting this talk virtually at Jornadas Regionales de Software Libre in Santa Fe, Argentina, next week. And also in November, in person, at nerdear.la, which will be held in Mexico City for the first time. Of course, I will also share this project with my students in the next couple of weeks... and hope it manages to lure them into implementing FUSE in Python. At some point, I shall report! Update: After delivering my ChiPy talk, I have uploaded it to YouTube: A hand-holding guide to writing FUSE-based filesystems in Python, and after presenting at Jornadas Regionales, I present you the video in Spanish here: Aprendiendo y enseñando a escribir sistemas de archivo en espacio de usuario con FUSE y Python.

21 September 2024

Jamie McClelland: How do I warm up an IP Address?

After years on the waiting list, May First was just given a /24 block of IP addresses. Excellent. Now we want to start using them for, among other things, sending email. I haven't added a new IP address to our mail relays in a while, and things seem to change regularly in the world of email, so I'm curious: what's the best 2024 way to warm up IP addresses, particularly using postfix? SendGrid has a nice page on the topic. It establishes the number of messages to send per day. But I'm not entirely sure how to fit messages per day into our setup. We use round-robin DNS to direct email to one of several dozen email relay servers running postfix. And unfortunately our DNS software (knot) doesn't have a way to add weights to ensure some IPs show up more often than others (much less limit the specific number of messages a given relay should get). Postfix has some nice knobs for rate limiting, particularly default_destination_recipient_limit and default_destination_rate_delay. If default_destination_recipient_limit is over 1, then default_destination_rate_delay is equal to the minimum delay between sending email to the same domain. So, I'm starting our IP addresses out at 30m, which prevents any single domain from receiving more than 2 messages per hour. Sadly, there are a lot of different domain names that deliver to the same set of popular corporate MX servers, so I am not sure I can accurately control how many messages a given provider sees coming from a given IP address. But it's a start. A bigger problem is that messages that exceed the limit hang out in the active queue until they can be sent without violating the rate limit. Since I can't fully control the number of messages a given queue receives (due to my inability to control the DNS round-robin weights), a lot of messages are going to be severely delayed, especially ones with an @gmail.com domain name. I know I can temporarily set relayhost to a different queue and flush deferred messages; however, as far as I can tell, that doesn't work with active messages. To help mitigate the problem I'm only using our bulk mail queue to warm up IPs, but really, this is not ideal. Suggestions welcome!
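For concreteness, the knob in main.cf looks like this (an illustrative sketch, not our actual config):
# minimum delay between deliveries to the same destination domain
default_destination_rate_delay = 30m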

Update #1 If you are running postfix in a multi-instance setup and you have instances that are already warmed up, you can move active messages between queues with these steps:
# Put the message on hold in the warming up instance
postsuper -c /etc/postfix-warmingup -h $queueid
# Copy to a warmed up instance
cp --preserve=mode,ownership,timestamp /var/spool/postfix-warmingup/hold/$queueid /var/spool/postfix-warmedup/incoming/
# Queue the message
postqueue -c /etc/postfix-warmedup -i $queueid
# Delete from the original queue.
postsuper -c /etc/postfix-warmingup -d $queueid
After just 12 hours we had thousands of messages piling up. This warm-up method was never going to work without the ability to move them to a faster queue. [Additional update: be sure to reload the postfix instance after flushing the queue so messages are drained from the active queue on the correct schedule. See update #4.]

Update #2 After 24 hours, most email is being accepted as far as I can tell. I am still getting a small percentage of email deferred by Yahoo with:
421 4.7.0 [TSS04] Messages from 204.19.241.9 temporarily deferred due to unexpected volume or user complaints - 4.16.55.1; see https://postmaster.yahooinc.com/error-codes (in reply
So I will keep it as 30m for another 24 hours or so and then move to 15m. Now that I can flush the backlog of active messages I am in less of a hurry.

Update #3 Well, this doesn't seem to be working the way I want it to. When a message arrives faster than the designated rate limit, it remains in the active queue. I'm not entirely sure how the timing is supposed to work, but at this point I'm down to a 5m rate delay, and the active messages are just hanging out for a lot longer than 5m. I tried flushing the queue, but that only seems to affect the deferred messages. I finally got them re-tried with systemctl reload. I wonder if there is a setting to control this retry? Or better yet, why can't these messages that exceed the rate delay be deferred instead?

Update #4 I think I see why I was confused in update #3 about the timing. I suspect that when I move messages out of the active queue it screws up the timer, and reloading the instance resets the timer. Every time you muck with active messages, you should reload.
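For the record, reloading a specific instance can be done with the standard postfix multi-instance invocation:
postfix -c /etc/postfix-warmingup reload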

30 August 2024

Sahil Dhiman: Debconf24 Busan

DebConf24 was held in Busan, South Korea, from July 28th to August 4th 2024, preceded by DebCamp from July 21st to July 27th. This was my second in-person DebConf (DC) and fourth one in total. I started in Debian with a DebConf, so it's always an occasion when one happens. This year again, I worked in the fundraising team, working to raise funds from international sponsors. We did manage to raise good enough funding, albeit less than budgeted, though the local Korean team was able to connect with and gather many governmental sponsors, which was quite surprising to me. I wasn't seriously considering attending DebConf until I discussed it with Nilesh. More or less, his efforts helped push me through the whole process. Thanks, Nilesh, for this. In March, I got my passport and started preparing documents for a South Korean visa. It did require quite a lot of paperwork, and seeing South Korea's visa rejection rate for fresh passports, I had doubts about acceptance. The visa finally got approved, which could be attributed to great documentation and help from the DebConf visa team. This was also my first trip outside India, and it being to DebConf made many things easy. Most things were documented on the DebConf website and wiki, and asking any query got immediate responses from someone in the DebConf channels. We then booked a direct flight from Delhi, reaching Seoul in the morning. With good directions from Sourab TK, who had reached Seoul a few hours earlier, we quickly got Korean Won, a local SIM and a T-money card (transportation card) and headed towards Seoul on AREX, the airport metro. We spent the next two days exploring Seoul, which is huge. It probably has the highest number of skyscrapers I have ever seen. The city has a good mix of modern and ancient culture. We explored various places in Seoul, including Gyeongbokgung Palace, the Statue of King Sejong, Bukchon Hanok village, N Seoul Tower and various food markets, which were amazing.
A Street in Seoul
Next, we headed to Busan for DebConf using KTX (Korean high-speed rail). (Fun fact: the slogan of the City of Busan is "Busan is Good".) South Korea has a good network of frequently running high-speed trains. We had pre-booked our tickets because, despite the frequency, trains were sold out most of the time. The KTX ride was quite smooth, despite travelling at 300 km/h at times through the Korean countryside and long mountain tunnels.
PKNU Entrance
The venue for DebConf was Pukyong National University (PKNU), Daeyeon Campus. PKNU has two campuses in Busan, and some folks ended up at the wrong one. With good help and guidance from the front desk, we got our dormitory rooms assigned. Dorms here were quite different:
View from Dorm Room
Settling in was easy. We started meeting familiar folks after almost a year, and the long conversations started again. Everyone was excited for DebConf. Like every time, the first day was full of action (and chaos): meet and greet, volunteer check-in, the video team running around and fixing stuff, and things working (or not). There were some interesting talks and sponsor stalls. After day one, things more or less settled down. I again volunteered for video team stuff and helped with camera operation and talk direction, which is always fun. As tradition dictates, I also watched a few talks live on the stream while sitting in the dorm room during the conf, which is always fun when you're too tired to get ready and go out.
From Talk Director's chair
DebConf takes good care of the food needs of vegan/vegetarian folks, of which I'm one. I got to try different food items, which was quite an experience. I tried using chopsticks again, which didn't work; I later figured out that the metal ones were harder to handle. We had late-night ramen, and wooden chopsticks worked perfectly. One of the days, we even went out to an Indian restaurant to have some desi aloo paratha, paneer dishes, samosas and chai (milk tea). I wasn't particularly craving desi food, but I wasn't able to find something to my taste, so I went there.
As usual Bits from DPL talk was packed
For the day trip, I went to Ulsan. "San" means mountain in Korean. Ulsan is a port city with many industries, including a Hyundai car factory, petrochemical industry, paint industry, shipbuilding and more. We saw a bamboo forest, Ulsan tower (quite a view towards Ulsan port), a whale village, the Ulsan Onggi Museum and the sea (which was beautiful).
The beautiful sea

View from Ulsan Bridge Observatory
Amongst the sponsors, I was most interested in our network sponsors, the folks who were National Research and Education Networks (NRENs) here. We had two network sponsors, KOREN and KREONET, thanks to efforts by the local team. Initially it was discussed that they'd provide a 20G uplink each, so 40G in total, which was whopping, but by the time the closing talk happened, we got to know we had a 200G uplink to the Internet. This was a massive upgrade from last year, when we had a 1G main and 100M backup link. 200G was more than what was required, but it was massive capacity, and IIRC from the talk we peaked at around 500M in usage; still, it's always fun to have an astronomical amount of bandwidth for bragging rights ;)
Various mascots in attendance

Video and Network stats. Screengrab from closing ceremony
Now let's talk about things I found interesting about South Korea in general:
Grand Gyeongbokgung Palace, Seoul

Starfield Library, Seoul
If one wants the whole DebConf experience, it's better to attend DebCamp as well, because that's when you can sit and interact with everyone more easily. Once DebConf starts, everyone gets busy with various talks and events and things pick up pace; DebConf days literally fly. This year, attending DebConf in person was a different experience. Attending without any organizational work or stress was better, and I was able to understand the workings of different Debian teams and workflows better, while also identifying a few I would like to join and help. A general conclusion was that almost all Debian teams need more folks to help out. So if someone wants to join, they can probably reach out to the team and would be able to onboard, though this would require some patience. Kudos to the Korean team, who were able to pull off this event under a tight timeline, and thanks for all the hospitality.
DebConf24 Group Photo. Click to enlarge.
Credits - Aigars Mahinovs
This whole experience expanded my world view. There's so much to see, explore and understand. Looking forward to DebConf25 in Brest, France. PS - Shoutout to abbyck (aka hamCK)!

10 August 2024

Bits from Debian: DebConf24 closes in Busan and DebConf25 dates announced

DebConf24 group photo - click to enlarge On Saturday 3 August 2024, the annual Debian Developers and Contributors Conference came to a close. Over 339 attendees representing 48 countries from around the world came together for a combined 108 events made up of more than 50 talks and discussions, 37 Birds of a Feather (BoF, informal meetings between developers and users) sessions, 12 workshops, and activities in support of furthering our distribution and free software (25 patches submitted to the Linux kernel), learning from our mentors and peers, building our community, and having a bit of fun. The conference was preceded by the annual DebCamp hacking session, held July 21st through July 27th, where Debian Developers and Contributors convened to focus on their individual Debian-related projects or work in team sprints geared toward in-person collaboration in developing Debian. This year featured a BootCamp held for newcomers, with a GPG workshop and a focus on an introduction to creating .deb files (Debian packaging), staged by a team of dedicated mentors who shared hands-on experience in Debian and offered a deeper understanding of how to work in and contribute to the community. The actual Debian Developers Conference started on Sunday July 28 2024. In addition to the traditional 'Bits from the DPL' talk, the continuous key-signing party, lightning talks and the announcement of next year's DebConf25, there were several update sessions shared by internal projects and teams. Many of the hosted discussion sessions were presented by our technical core teams, with the usual and useful 'meet the Technical Committee' and ftpteam sessions, and a set of BoFs about packaging policy and Debian infrastructure, including talks about APT and the Debian Installer and an overview of the first eleven years of Reproducible Builds. Internationalization and localization were the subject of several talks. The Python, Perl, Ruby, and Go programming language teams, as well as the Med team, also shared updates on their work and efforts. More than fifteen BoFs and talks about community, diversity and local outreach highlighted the work of the various teams involved in the social aspects of our community. This year again, Debian Brazil shared strategies and actions to attract and retain new contributors and members, and opportunities both in Debian and F/OSS. The schedule was updated each day with planned and ad-hoc activities introduced by attendees over the course of the conference. Several traditional activities took place: a job fair, a poetry performance, the traditional Cheese and Wine party, the group photos and the day trips. For those who were not able to attend, most of the talks and sessions were broadcast live and recorded, and the videos made available through a link in their summary in the schedule. Almost all of the sessions facilitated remote participation via IRC messaging apps or online collaborative text documents, which allowed remote attendees to 'be in the room' to ask questions or share comments with the speaker or assembled audience. DebConf24 saw over 6.8 TiB (4.3 TiB in 2023) of data streamed, 91.25 hours (55 in 2023) of scheduled talks, 20 network access points, 1.6 km of fiber (1 broken fiber...) and 2.2 km of UTP cable deployed, more than 20 country GeoIP viewers, 354 T-shirts, 3 day trips, and up to 200 meals planned per day.
All of these events, activities, conversations, and streams, coupled with our love of, interest in, and participation in Debian and F/OSS, certainly made this conference an overall success both here in Busan, South Korea and online around the world. The DebConf24 website will remain active for archival purposes and will continue to offer links to the presentations and videos of talks and events. Next year, DebConf25 will be held in Brest, France, from Monday, July 7 to Monday, July 21, 2025. As tradition follows, before the next DebConf the local organizers in France will start the conference activities with DebCamp, with a particular focus on individual and team work towards improving the distribution. DebConf is committed to a safe and welcoming environment for all participants. See the web page about the Code of Conduct on the DebConf24 website for more details on this. Debian thanks the numerous sponsors committed to supporting DebConf24, particularly our Platinum Sponsors: Infomaniak, Proxmox, and Wind River. We also wish to thank our Video and Infrastructure teams, the DebConf24 and DebConf committees, our host nation of South Korea, and each and every person who helped contribute to this event and to Debian overall. Thank you all for your work in helping Debian continue to be "The Universal Operating System". See you next year! About Debian The Debian Project was founded in 1993 by Ian Murdock to be a truly free community project. Since then the project has grown to be one of the largest and most influential open source projects. Thousands of volunteers from all over the world work together to create and maintain Debian software. Available in 70 languages, and supporting a huge range of computer types, Debian calls itself the universal operating system. About DebConf DebConf is the Debian Project's developer conference. In addition to a full schedule of technical, social and policy talks, DebConf provides an opportunity for developers, contributors and other interested people to meet in person and work together more closely. It has taken place annually since 2000 in locations as varied as Scotland, Argentina, Bosnia and Herzegovina, and India. More information about DebConf is available from https://debconf.org/. About Infomaniak Infomaniak is an independent cloud service provider recognized throughout Europe for its commitment to privacy, the local economy and the environment. Recording growth of 18% in 2023, the company is developing a suite of online collaborative tools and cloud hosting, streaming, marketing and events solutions. Infomaniak uses exclusively renewable energy, builds its own data centers and develops its solutions in Switzerland, without relocating. The company powers the website of the Belgian radio and TV service (RTBF) and provides streaming for more than 3,000 TV and radio stations in Europe. About Proxmox Proxmox provides powerful and user-friendly Open Source server software. Enterprises of all sizes and industries use Proxmox solutions to deploy efficient and simplified IT infrastructures, minimize total cost of ownership, and avoid vendor lock-in. Proxmox also offers commercial support, training services, and an extensive partner ecosystem to ensure business continuity for its customers. Proxmox Server Solutions GmbH was established in 2005 and is headquartered in Vienna, Austria. Proxmox builds its product offerings on top of the Debian operating system.
About Wind River For nearly 20 years, Wind River has led in commercial Open Source Linux solutions for mission-critical enterprise edge computing. With expertise across aerospace, automotive, industrial, telecom, and more, the company is committed to Open Source through initiatives like eLxr, Yocto, Zephyr, and StarlingX. Contact Information For further information, please visit the DebConf24 web page at https://debconf24.debconf.org/ or send mail to press@debian.org.

9 August 2024

Kalyani Kenekar: One Backpack, One Passport: My First Solo Trip

Planning A Self-Organized Solo Trip You know the movie Queen? In it, the actor Kangana Ranaut plays the role of Rani Mehra, a 24-year-old Punjabi woman: a simple, homely girl who was always reliant on her family. Similar to Rani, I too rarely ventured out without my parents and often needed my younger sibling by my side. Inspired by her transformation, I decided it was time to take control of my own story and discover who I truly am. Queen movie picture of Kangana

Trip Requirements

My First Passport The journey began with a significant first step: obtaining my first passport. Never having had one before, I scheduled the nearest available interview date, June 29 2022. This meant traveling to Solapur, a city 309 km from my hometown, accompanied by my father. After successfully completing the interview, I received my passport on July 14 2022.

Select A Country, Booking Flights And Accommodation Excited and ready to embark on my adventure, I planned a trip to Albania and booked the flight tickets. Why? I had heard from friends that it was a beautiful European country with beaches and other attractions, and importantly, it didn't require a visa for Indian citizens and was more affordable than other European destinations. Before heading to Albania, I planned an overnight stop in Abu Dhabi with a transit visa, thanks to a friend who knew the process for obtaining it. Some of my friends were also travelling to Europe at about the same time, quite close to my plans, but I only realized that later during the trip.

Day 1, Starting The Experience On July 20, 2022, I started my journey by traveling from Pune, Maharashtra, to Delhi, where my brother lives. He came to see me off at the airport, adding a touch of warmth and support to the beginning of my solo adventure. Upon arriving in Delhi, with my next flight scheduled for July 21, I stayed at a backpacker hostel named Zostel, Paharganj, Delhi to rest. During my stay, I noticed that many travelers at the hostel carried rucksacks, which sparked a desire in me to get one for my own trip to Europe. Up until then, I had always shopped with my mom and had never bought anything on my own. Inspired by the travelers, I set out to find a suitable rucksack. I traveled alone by metro from Paharganj to Rohini to visit a Decathlon store, where I purchased a 50-liter rucksack. This was a significant step in preparing for my European adventure and marked a milestone in my journey of self-reliance. Rucksack description tag; Kalyani's backpack

Day 2, Flying To Abu Dhabi The following day, July 21 2022, I had a flight to Abu Dhabi. I spent the night at the hostel to rest before my journey. On the day of the flight, I needed to reach the airport by 3 PM, and a friend kindly came to drop me off. With my rucksack packed and excitement building, I was ready for the next leg of my adventure. When we arrived at the airport, my friend saw me off, marking the start of my international journey. With mom-made spices, chutneys, and chilli flakes packed for comfort, I completed my immigration process in about two and a half hours. I then settled at the gate for my flight, feeling a mix of excitement and anxiety as thoughts raced through my mind. Mom-made spices; passport and boarding pass To ease my nerves, I struck up a conversation with a man seated nearby who was also traveling to Abu Dhabi for work. He provided helpful information about safety and transportation in Abu Dhabi, which reassured me. With the boarding process complete and my anxiety somewhat eased, I found my window seat on the flight and settled in, excited for the journey ahead. Next to me was a young man from Ranchi (Jharkhand, India), heading to Abu Dhabi to work at a mining factory. We had an engaging conversation about work culture in Abu Dhabi and recruitment from India. Upon arriving in Abu Dhabi, I completed my transit, collected my luggage, and began finding my way to the hotel, Premier Inn Abu Dhabi, which was in the airport area. To my surprise, I ran into the same man from the flight, now in a cab. He kindly offered to drop me at my hotel, which I gladly accepted, since navigating an unfamiliar city with a brief acquaintance felt safer. At the hotel gate, he asked if I had local currency (dirhams) for payment, as online transactions can sometimes fail. That hadn't crossed my mind, and I realized I might be left stranded if a transaction failed. Recognizing his help as a godsend, I asked if he could lend me some dirhams, promising to transfer the amount back to him later. He kindly agreed, telling me to pay him back once I reached my room. With that relief, I checked into the hotel, feeling deeply grateful for the unexpected assistance, and transferred the money to him after getting to my room. Dirham notes; Kalyani in the hotel room

Day 3, Flying To And Arriving In Tirana Once in the hotel room, I found it hard to sleep, anxious about waking up on time for my flight. I set an alarm to wake up early, but my subconscious kept me alert, and I woke up before it went off. I freshened up and went down for breakfast, where I found some vegetarian options like idli-sambar and bread with butter, along with some morning tea. After breakfast, I headed back to the airport, ready to catch my flight to my final destination: Tirana, Albania. Breakfast at the hotel; the airport area I reached Tirana, Albania after a six-hour flight, feeling exhausted and suffering from a headache. The air pressure had blocked my ears, and jet lag added to my fatigue. After collecting my checked luggage, I headed to the first ATM machine at the airport. Struggling to insert my card, I asked a nearby gentleman for help. He tried his best, but my card got stuck inside the machine. Panic set in as I worried about how I would survive without money. Taking a deep breath, I found an airport employee and explained the situation. The gentleman stayed with me, offering support and repeatedly apologizing for his mistake. However, it wasn't his fault: the ATM was out of order, which I hadn't noticed. My focus was solely on retrieving my ATM card. The airport employee worked diligently, using a hairpin to carefully extract my card. Finally, the card was freed, and I felt an immense sense of relief, grateful for the help of these kind strangers. I used another ATM, successfully withdrew money, and then went to an airport mobile SIM shop to buy a new SIM card for local internet and connectivity. SIM plans

Day 4, Arriving In Tirana, Facing Challenges In A Foreign Country I had booked a stay at a backpacker hostel near the city center of Tirana. After sorting out the ATM and SIM card issues, I searched for a bus or any transport to get there. It was quite late, around 8:30 PM, and being in a new city, I was in a hurry. I saw a bus nearly leaving the airport, stopped it, and asked if it went to the city center. They gave me the green flag, so I boarded the airport service bus and reached the city center. Feeling very tired, I discovered that the hostel was about an hour and a half away on foot. Deciding to take a cab, I faced a challenge as the driver couldn't understand my English or my accent. Using a mobile translator to convert my address from English to Albanian, I finally communicated my destination to him. With that sorted out, I headed to the Blue Door Backpacker Hostel and arrived around 9 PM, relieved to have finally reached my destination, and checked in. Hostel gate; a street in Tirana I found my top bunk bed, only to realize I had booked a mixed-gender dormitory. This detail had completely escaped my notice during the booking process, and I felt unsure about how to handle the situation. Coincidentally, my experience mirrored what Kangana faced in the movie Queen. Feeling acidic due to an empty stomach and the exhaustion of heavy traveling, I wasn't up to cooking in the hostel's kitchen, so I asked the front desk about the nearest restaurant. It was nearly 9:30 PM, and the streets were deserted. To avoid any mishaps like in the movie Queen, I kept my passport securely locked in my bag, ensuring it wouldn't be a victim of theft. Venturing out for dinner, I felt uneasy on the quiet streets. I eventually found a restaurant recommended by the hostel, but the menu was almost entirely non-vegetarian. I struggled to ask about vegetarian options and was uncertain if any dishes contained eggs, as some people consider eggs to be vegetarian. Feeling frustrated and unsure, I left the restaurant without eating. I noticed a nearby grocery store that was about to close and managed to get a few extra minutes to shop. I bought some snacks, wafers, milk, and tea bags (though I couldn't find tea powder to make Indian-style tea). Returning to the hostel, I made do with wafers, cookies, and milk for dinner. That day was incredibly tough for me; filled with exhaustion and the struggle of being in a new country, I was on the verge of tears. I made a video call home before going to sleep on the top bunk bed. It was a new experience for me, sharing a room with unknown men and women. I kept my passport safe inside my purse and under my pillow while sleeping, staying very conscious of its security.

Day 5, Exploring Nearby Places I woke up the next day at noon. After I had some coffee, the girl managing the hostel asked if I wanted breakfast. She offered curd with cornflakes, which I declined because I don't like curd. Instead, I ordered a pizza from a vegetarian pizza place with her help, and I started feeling better. I met some people in the hostel, some from Syria and others from Italy. I struggled to understand their accents but kept pushing myself to get involved in their discussions. Despite the challenges, I felt more at ease and was slowly adapting to my new environment. In the evening, I went out from the hostel to buy some vegetables to cook something. I searched for shops and found some potatoes, tomatoes, and rice. I decided to cook khichdi, an Indian dish made with rice, and added some chili flakes I had brought from home. After preparing my dinner, I ate and went to sleep again. Vegetable shop; cooking in the kitchen

Day 6, Tirana's Recent History The next day, I planned to explore the city and visited Bunkart-1, a fascinating museum in a massive underground bunker from the communist era. Originally built as a shelter for Albania's political and military elite, it now offers a unique glimpse into the country's history under Enver Hoxha's oppressive regime. The museum's exhibits include historical artifacts, photographs, and multimedia displays that detail the lives of Albanians during that time. Walking through the dimly lit corridors, I felt the weight of history and gained a deeper understanding of Albania's past. Photos from Bunkart-1

Day 7-8, Meeting Friends From India The next day, I happened to meet Chirag, who was returning from the Debian Conference 2022 held in Prizren, Kosovo, and staying at the same hostel. When I encountered him, he was talking on the phone, and I recognized from his accent that he was Indian. I introduced myself, and we discovered we had some mutual friends. Chirag told me that our common friend, Raju, was also coming to stay at the hostel the next day. This news made me feel relaxed and happy to have people I knew around. When Raju arrived, the three of us, Chirag, Raju, and I, planned to have dinner at an Indian restaurant and explore Tirana city. I had a great time talking with them and enjoying their company. Friends on the street

Day 9-10, Meeting More Friends Raju had a ticket to leave soon, so Chirag and I made a plan to visit Shkodër and the nearby Komani Lake for kayaking. We started our journey early in the morning by bus and reached Shkodër. There, we met new friends from the conference, Pavit and Abraham, who were already there. We had dinner together and enjoyed an ice cream treat from Chirag. Friends at dinner

Day 12, Kayaking And Saying Goodbye To Friends The next day, Pavit and Abraham had a flight back to India, so Chirag and I went to Komani Lake. We had an adventurous time kayaking, even though neither of us knew how to swim. We took a ferry through the backwaters to the island on Komani Lake and enjoyed a fantastic adventure together. After our trip, Chirag returned to Tirana for his flight back to India, leaving me to continue my journey alone. Lake with mountains; kayak

Day 13, Climbing Rozafa Castle Stopping in Shkodër, I visited Rozafa Castle. Despite the language barrier, as most locals only spoke Albanian, people around me guided me correctly on how to get there. At times, I used applications like Google Translate to communicate. To read signs or hotel menus, I used the language converter in Google Photos, and I even used the audio converter to understand and speak some basic Albanian phrases. View from the top of Rozafa Castle I took a bus from Shkodër to the southern part of Albania, heading to Sarandë. The journey lasted about five to six hours, and I had booked a stay at Mona's Hostel. Upon arrival, I met Eliza from America, and we went together to Ksamil Beach, spending a wonderful day there.

Day 14, Vlora Beach: Beachside Cycling Next, I traveled to Vlorë, where I stayed for one day. During my time there, I enjoyed beachside cycling on a bicycle provided by the hostel owner and spent some time feeding fish. I also met a fellow traveler from Delhi who had brought along some preserved Indian curry. He kindly shared it with me, which was a welcome change after nearly 15 days without authentic Indian cuisine, except for what I had cooked myself in various hostels. Sunset on the beach; beachside cycling

Day 15-16, Visiting Durrës, Travelling Back To Tirana I then visited Durrës, exploring its beautiful beaches, before heading back to Tirana one day before my flight home. On the day of my flight, my alarm didn't go off, and I woke up late at the hostel. In a frantic rush, I packed everything in just five minutes and dashed toward the city center to catch the bus to the airport. Had I been just five minutes later, I would have missed it; thankfully, I managed to stop the bus just in time and reached the airport with time to spare. After clearing immigration, I boarded my flight, which had a layover in Warsaw, Poland. The journey from Tirana to Warsaw took about two and a half hours, followed by a seven-to-eight-hour flight from Poland back to India. Once I arrived in Delhi, I returned to Zostel and booked a train ticket to Aurangabad for three days later.

Looking Back This trip was an incredible adventure for me. I never imagined I could accomplish something like this, but I did. Meeting diverse people, experiencing different cultures, and learning so much made this journey truly unforgettable. In retrospect, I realize how much I've grown from this experience. Although I may have more opportunities to travel abroad in the future, this trip will always hold a special place in my heart. The memories I made and the incredible people I met along the way are irreplaceable. This experience goes beyond what I can express in a blog post; every moment of this journey is etched in my memory, and I am grateful for every part of it.

17 July 2024

Gunnar Wolf: Script for weather reporting in Waybar

While I was living in Argentina, we (my family) found ourselves checking weather forecasts almost constantly; weather there can be quite unexpected, much more so than here in Mexico. So it took me a bit of tinkering to come up with a couple of simple scripts to show the weather forecast as part of my Waybar setup. I haven't cared to share them with anybody, as I believe them to be quite trivial and quite dirty. But today, Víctor was asking for some slightly-related things, so here I go. Please do remember I warned: Dirty. Forecast I am using OpenWeather's open API. I had to register to get an APPID, which allows me up to 1,000 API calls per day, more than plenty for my uses, even if I am logged in at my desktops on three different computers (not an uncommon situation). Having that, I set up a file named /etc/get_weather, which currently reads:
# Home, Mexico City
LAT=19.3364
LONG=-99.1819
# # Home, Paraná, Argentina
# LAT=-31.7208
# LONG=-60.5317
# # PKNU, Busan, South Korea
# LAT=35.1339
# LONG=129.1055
APPID=SomeLongRandomStringIAmNotSharing
Then, I have a simple script, /usr/local/bin/get_weather, that fetches the current weather and the forecast, and stores them as /run/weather.json and /run/forecast.json:
#!/usr/bin/bash
CONF_FILE=/etc/get_weather
if [ -e "$CONF_FILE" ]; then
    . "$CONF_FILE"
else
    echo "Configuration file $CONF_FILE not found"
    exit 1
fi
if [ -z "$LAT" -o -z "$LONG" -o -z "$APPID" ]; then
    echo "Configuration file must declare latitude (LAT), longitude (LONG) "
    echo "and app ID (APPID)."
    exit 1
fi
CURRENT=/run/weather.json
FORECAST=/run/forecast.json
wget -q "https://api.openweathermap.org/data/2.5/weather?lat=$ LAT &lon=$ LONG &units=metric&appid=$ APPID " -O "$ CURRENT "
wget -q "https://api.openweathermap.org/data/2.5/forecast?lat=$ LAT &lon=$ LONG &units=metric&appid=$ APPID " -O "$ FORECAST "
This script is called by the corresponding systemd service unit, found at /etc/systemd/system/get_weather.service:
[Unit]
Description=Get the current weather
[Service]
Type=oneshot
ExecStart=/usr/local/bin/get_weather
And it is run every 15 minutes via the following systemd timer unit, /etc/systemd/system/get_weather.timer:
[Unit]
Description=Get the current weather every 15 minutes
[Timer]
OnCalendar=*:00/15:00
Unit=get_weather.service
[Install]
WantedBy=multi-user.target
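Neither unit does anything until the timer is enabled and started; the usual incantation would be something like the following, with systemctl list-timers confirming when it will next fire:
sudo systemctl daemon-reload
sudo systemctl enable --now get_weather.timer
systemctl list-timers get_weather.timer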
(yes, it runs even if I'm not logged in, wasting some of my free API calls but within reason) Then, I declare a "custom/weather" module in the desired position of my ~/.config/waybar/waybar.config, and define it as:
"custom/weather":  
    "exec": "while true;do /home/gwolf/bin/parse_weather.rb;sleep 10; done",
"return-type": "json",
 ,
This script basically morphs a generic weather JSON description into another set of JSON bits that display my weather the way I prefer to have it displayed:
#!/usr/bin/ruby
require 'json'
Sources = { :weather  => '/run/weather.json',
            :forecast => '/run/forecast.json'
          }
Icons = { '01d' => ' ', # d: day
          '01n' => ' ', # n: night
          '02d' => ' ',
          '02n' => ' ',
          '03d' => ' ',
          '03n' => ' ',
          '04d' => ' ',
          '04n' => ' ',
          '09d' => ' ',
          '10n' => ' ',
          '10d' => ' ',
          '13d' => ' ',
          '50d' => ' '
        }
ret = { 'text' => nil, 'tooltip' => nil, 'class' => 'weather', 'percentage' => 100 }
# Current weather report: Main text of the module
begin
  weather = JSON.parse(open(Sources[:weather], 'r').read)
  loc_name = weather['name']
  icon = Icons[weather['weather'][0]['icon']] || ' ' + weather['weather'][0]['icon'] + weather['weather'][0]['main']
  temp = weather['main']['temp']
  sens = weather['main']['feels_like']
  hum = weather['main']['humidity']
  wind_vel = weather['wind']['speed']
  wind_dir = weather['wind']['deg']
  portions = {}
  portions[:loc] = loc_name
  portions[:temp] = '%s %2.2f°C (%2.2f)' % [icon, temp, sens]
  portions[:hum] = '%2d%%' % hum
  portions[:wind] = '%2.2fm/s %d°' % [wind_vel, wind_dir]
  ret['text'] = [:loc, :temp, :hum, :wind].map { |p| portions[p] }.join(' ')
rescue => err
  ret['text'] = 'Could not process weather file (%s - %s: %s)' % [Sources[:weather], err.class, err.to_s]
end
# Weather forecast for the following hours/days
begin
  cast = []
  forecast = JSON.parse(open(Sources[:forecast], 'r').read)
  min = ''
  max = ''
  day = Time.now.strftime('%Y.%m.%d')
  by_day = {}
  forecast['list'].each_with_index do |f, i|
    by_day[day] ||= []
    time = Time.at(f['dt'])
    time_lbl = '%02d:%02d' % [time.hour, time.min]
    icon = Icons[f['weather'][0]['icon']] || ' ' + f['weather'][0]['icon'] + f['weather'][0]['main']
    by_day[day] << f['main']['temp']
    if time.hour == 0
      min = '%2.2f' % by_day[day].min
      max = '%2.2f' % by_day[day].max
      cast << '          min: <b>%s°C</b> max: <b>%s°C</b>' % [min, max]
      day = time.strftime('%Y.%m.%d')
      cast << '        <b>%04d.%02d.%02d</b>' %
              [time.year, time.month, time.day]
    end
    cast << '%s %2.2f°C %2d%% %s %s' % [time_lbl,
                                        f['main']['temp'],
                                        f['main']['humidity'],
                                        icon,
                                        f['weather'][0]['description']
                                       ]
  end
  cast << '          min: <b>%s°C</b> max: <b>%s°C</b>' % [min, max]
  ret['tooltip'] = cast.join("\n")
rescue => err
  ret['tooltip'] = 'Could not process forecast file (%s - %s: %s)' % [Sources[:forecast], err.class, err.to_s]
end
# Print out the result for Waybar to process
puts ret.to_json
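If you are curious what Waybar actually receives, just run the parser by hand; it prints a single JSON object per run, roughly of this shape (the values here are illustrative, not real output):
$ /home/gwolf/bin/parse_weather.rb
{"text":"Ciudad Universitaria 19.87°C (19.20) 52% 2.30m/s 140°","tooltip":"...","class":"weather","percentage":100}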
The end result? Nothing too stunning, but definitely something I find useful and even nicely laid out. [Screenshot] Do note that OpenWeather seems to return the name of the closest meteorological station with (most?) recent data: for my home, I often get Ciudad Universitaria, but sometimes Coyoacán or even San Ángel Inn.

9 July 2024

Russ Allbery: Review: Raising Steam

Review: Raising Steam, by Terry Pratchett
Series: Discworld #40
Publisher: Anchor Books
Copyright: 2013
Printing: October 2014
ISBN: 0-8041-6920-9
Format: Trade paperback
Pages: 365
Raising Steam is the 40th Discworld novel and the third Moist von Lipwig novel, following Making Money. This is not a good place to start reading the series. Dick Simnel is a tinkerer from a line of tinkerers. He has been obsessed with mastering the power of steam since the age of ten, when his father died in a steam accident. That pursuit took him deeper into mathematics and precision, calculations and experiments, until he built Iron Girder: Discworld's first steam-powered locomotive. His early funding came from some convenient family pirate treasure, but turning his prototype into something more will require significantly more resources. That is how he ends up in the office of Harry King, Ankh-Morpork's sanitation magnate. Simnel's steam locomotive has the potential to solve some obvious logistical problems, such as getting fish from the docks of Quirm to the streets of Ankh-Morpork before it stops being vaguely edible. That's not what makes railways catch fire, however. As soon as Iron Girder is huffing and puffing its way around King's compound, it becomes the most popular attraction in the city. People stand in line for hours to ride it over and over again for reasons that they cannot entirely explain. There is something wild and uncontrollable going on. Vetinari is not sure he likes wild and uncontrollable, but he knows the lap into which such problems can be dumped: Moist von Lipwig, who is already getting bored with being a figurehead for the city's banking system. The setup for Raising Steam reminds me more of Moving Pictures than the other Moist von Lipwig novels. Simnel himself is a relentlessly practical engineer, but the trains themselves have tapped some sort of primal magic. Unlike Moving Pictures, Pratchett doesn't provide an explicit fantasy explanation involving intruding powers from another world. It might have been a more interesting book if he had. Instead, this book expects the reader to believe there is something inherently appealing and fascinating about trains, without providing much logic or underlying justification. I think some readers will be willing to go along with this, and others (myself included) will be left wishing the story had more world-building and fewer exclamation points. That's not the real problem with this book, though. Sadly, its true downfall is that Pratchett's writing ability had almost completely collapsed by the time he wrote it. As mentioned in my review of Snuff, we're now well into the period where Pratchett was suffering the effects of early-onset Alzheimer's. In that book, his health issues mostly affected the dialogue near the end of the novel. In this book, published two years later, it's pervasive and much worse. Here's a typical passage from early in the book:
It is said that a soft answer turneth away wrath, but this assertion has a lot to do with hope and was now turning out to be patently inaccurate, since even a well-spoken and thoughtful soft answer could actually drive the wrong kind of person into a state of fury if wrath was what they had in mind, and that was the state the elderly dwarf was now enjoying.
One of the best things about Discworld is Pratchett's ability to drop unexpected bits of wisdom in a sentence or two, or twist a verbal knife in an unexpected and surprising direction. Raising Steam still shows flashes of that ability, but it's buried in run-on sentences, drowned in clichés and repetition, and often left behind as the containing sentence meanders off into the weeds and sputters to a confused halt. The idea is still there; the delivery, sadly, is not. This is the first Discworld novel that I found mentally taxing to read. Sentences are often so overpacked that they require real effort to untangle, and the untangled meaning rarely feels worth the effort. The individual voice of the characters is almost gone. Vetinari's monologues, rather than being a rare event with dangerous layers, are frequent, rambling, and indecisive, often sounding like an entirely different character than the Vetinari we know. The constant repetition of the name of the person any given character is speaking to was impossible for me to ignore. And the momentum of the story feels wrong; rather than constructing the events of the story in a way that sweeps the reader along, it felt like Pratchett was constantly pushing, trying to convince the reader that trains were the most exciting thing to ever happen to Discworld. The bones of a good story are here, including further development of dwarf politics from The Fifth Elephant and Thud! and the further fallout of the events of Snuff. There are also glimmers of Pratchett's typically sharp observations and turns of phrase that could have been unearthed and polished. But at the very least this book needed way more editing and a lot of rewriting. I suspect it could have dropped thirty pages just by tightening the dialogue and removing some of the repetition. I'm afraid I did not enjoy this. I am a bit of a hard sell for the magic fascination of trains: I love trains, but my model railroad days are behind me and I'm now more interested in them as part of urban transportation policy. Previous Discworld books on technology and social systems did more of the work of drawing the reader in, providing character hooks and additional complexity, and building a firmer foundation than "trains are awesome." The main problem, though, was the quality of the writing, particularly when compared to the previous novels with the same characters. I dragged myself through this book out of a sense of completionism and obligation, and was relieved when I finished it. This is the first Discworld novel that I don't recommend. I think the only reason to read it is if you want to have read all of Discworld. Otherwise, consider stopping with Snuff and letting it be the send-off for the Ankh-Morpork characters. Followed by The Shepherd's Crown, a Tiffany Aching story and the last Discworld novel. Rating: 3 out of 10

1 July 2024

Russ Allbery: Review: Snuff

Review: Snuff, by Terry Pratchett
Series: Discworld #39
Publisher: Harper
Copyright: October 2011
Printing: January 2013
ISBN: 0-06-221886-7
Format: Mass market
Pages: 470
Snuff is the 39th Discworld novel and the 8th (and last) Watch novel. This is not a good place to start reading. Sam Vimes has been talked, cajoled, and coerced into taking a vacation. Since he is now the Duke of Ankh, he has a country estate that he's never visited. Lady Sybil is insistent on remedying this, as is Vetinari. Both of them may have ulterior motives. They may also be colluding. It does not take long for Vimes to realize that something is amiss in the countryside. It's not that the servants are uncomfortable with him talking to them, that the senior servants are annoyed he talks to the wrong servants, or that the maids turn to face the wall at the sight of him. Those are just the strange customs of the aristocracy, for which he has little understanding and even less patience. There's something else going on. The nobility is wary, the town blacksmith is angry about something more than disliking the nobles, and the bartender doesn't want to get involved. Vimes smells something suspicious. When he's framed for a murder, the suspicions seem justified. It takes some time before the reader learns what the local nobility are squirming about, so I won't spoil it. What I will say is that Snuff is Pratchett hammering away at one of his favorite targets: prejudice, cruelty, and treating people like things. Vimes, with his uncompromising morality, is one of the first to realize the depth of the problem. It takes most of the rest longer to come around, even Sybil. It's both painful, and painfully accurate, to contemplate how often recognition of other people's worth only comes once they do something that you recognize as valuable. This is one of the better-plotted Discworld novels. Vimes starts out with nothing but suspicions and stubbornness, and manages to turn Snuff into a mystery novel through dogged persistence. The story is one continuous plot arc with the normal Pratchett color (Young Sam's obsession with types of poo, for example) but without extended digressions. It also has considerably better villains than most Pratchett novels: layers of foot soldiers and plotters, each of whom has to be dealt with in a suitable way. Even the concluding action sequences worked for me, which is not always a given in Discworld. The problem, unfortunately, is that the writing is getting a bit wobbly. Pratchett died of early-onset Alzheimer's in 2015, four years after this book was first published, and this is the first novel where I can see some early effects. It mostly shows up in the dialogue: it's just a bit flabby and a bit repetitive, and the characters, particularly towards the end of the book, start repeating the name of the person they're talking to every other line. Once I saw it, I couldn't unsee it, and it was annoying enough to rob a bit of enjoyment from the end of the book. That aside, though, this was a solid Discworld novel. Vimes testing his moral certainty against the world and forcing it into a more ethical shape is always rewarding, and here he takes more risks, with better justification, than in most of the Watch novels. We also find out that Vimes has a legacy from the events of Thud!, which has interesting implications that I wish Pratchett had more time to explore.
I think the best part of this book is how it models the process of social change through archetypes: the campaigner who knew the right choice early on, the person who formed their opinion the first time they saw injustice, the person who gets there through a more explicit moral code, the ones who have to be pushed by someone who was a bit faster, the ones who have to be convinced but then work to convince others, and of course the person who is willing to take on the unfair and far-too-heavy burden of being exceptional enough that they can be used as a tool to force other people to acknowledge them as a person. And, since this is Discworld, Vetinari is lurking in the scenery pulling strings, balancing threats, navigating politics, and giving Vimes just enough leeway to try to change the world without abusing his power. I love that the amount of leeway Vimes gets depends on how egregious the offense is, and Vetinari calibrates this quite carefully without ever saying so openly. Recommended, and as much as I don't want to see this series end, this is not a bad entry for the Watch novels to end on. Followed in publication order by Raising Steam. Rating: 8 out of 10
