Search Results: "tina"

25 August 2022

Jonathan Dowland: dues (or blues)

After I wrote about hledger, I got some good feedback, both from a friend in person and also on Twitter. My in-person friend asked, frankly, whether I really try to manage money like this: tracking every single expense. That affirms my suspicion that many people don't, and that it perhaps isn't essential to do so. Combined with the details below, 3/4 of the way through my experiment with using hledger, I'm not convinced that it has been a good idea. I'm quoting my Twitter feedback here in order to respond. The context is handling the case where I have used the "wrong" card to pay for something: a card affiliated with my family expenses for something personal, or vice versa. With double-entry book-keeping and one pair of postings, the destination account can either record the expense category:
2022-08-20  coffee
    family:liabilities:creditcard     -3
    jon:expenses:coffee                3
or the fact that it was paid for on the wrong card:
2022-08-20  coffee
    family:liabilities:creditcard     -3
    family:liabilities:jon             3 ; jon owes family
but not easily both. https://twitter.com/pranesh/status/1516819846431789058:
When you accidentally use the family CC for personal expenses, credit the account "family:liabilities:creditcard:jon" instead of "family:liabilities:creditcard". That'll allow you to track w/ 2 postings.
This is an interesting idea: create a sub-account underneath the credit card, which would then carry a separate balance representing the money I owed. Before:
$ hledger bal -t
              -3  family:liabilities:creditcard
               3  jon:expenses:coffee
The transaction would instead become:
2022-08-20  coffee
    family:liabilities:creditcard:jon     -3
    jon:expenses:coffee                    3
Corresponding balances:
$ hledger bal -t
              -3  family:liabilities:creditcard
              -3    jon
               3  jon:expenses:coffee
Great. However, what process would clear the balance on that sub-account? In practice, I don't make a separate, explicit payment to the credit card from my personal accounts. It's paid off in full by direct debit from the family shared account. In practice, such dues are accumulated and settled with one-off bank transfers, now and then. Since the sub-account is still part of the credit card hierarchy, I can't just use a set of virtual postings to consolidate that value with other liabilities, or to cover it. Any transaction in there which did not correspond to a real transaction on the credit card would make the balance drift away from the real-world credit card statements. The only way I could see this working would be if the direct debit that settles the credit card were artificially split to clear the sub-account, and then the amount owed would be lost. https://twitter.com/pranesh/status/1516819846431789058:
Else, add: family:assets:receivable:jon $3
jon:liabilities:family:cc $-3
A "receivable" account would function like the "dues" accounts I described in my earlier hledger post (except that "receivable" is an established account type in double-entry book-keeping). Here I think Pranesh is proposing using these two accounts in addition to the others in the transaction, e.g.:
2022-08-20  coffee
    family:liabilities:creditcard     -3
    jon:expenses:coffee                3
    family:assets:receivable:jon       3
    jon:liabilities:family            -3
This balances, and we end up with two other accounts which are tracking exactly the same thing. I only owe 3, but if you didn't know that the two accounts were "views" onto the same thing, you could mistakenly think I owed 6. I can't see the advantage of this over just using a virtual, unbalanced posting.

Dues, Liabilities

I'd invented accounts called "dues" to track moneys owed. The more correct term for this in accounting parlance would be "accounts receivable", as in one of the examples above. I could instead track moneys due, which is a classic liability. Liabilities have negative balances:
jon:liabilities:family    -3
This means that I owe the family 3. Liability accounts like that are essentially identical to my "dues" accounts. A positive balance in a liability is a counter-intuitive way of describing moneys owed to me, rather than by me. And, reviewing a lot of the coding I did this year, I've got myself hopelessly confused with the signs and made lots of errors. Crucially, double-entry has not protected me from making those mistakes: of course, I'm circumventing it by using unbalanced virtual postings in many cases (although I was not consistent about where I did this), but even if I used a pair of accounts as in the last example above, I could still screw it up.
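For illustration, here is a minimal sketch of the unbalanced-virtual-posting approach mentioned above. In hledger, a posting whose account is wrapped in parentheses is a virtual posting and is excluded from the transaction-balancing check. The settlement transaction and the bank account names are hypothetical, not taken from my actual journal:

2022-08-20  coffee
    family:liabilities:creditcard     -3
    jon:expenses:coffee                3
    (jon:liabilities:family)          -3  ; virtual posting: I owe the family 3

2022-09-01  settle dues
    jon:assets:bank                   -3
    family:assets:bank                 3
    (jon:liabilities:family)           3  ; virtual posting: the debt is cleared

The real postings in each transaction still balance on their own, and the one-off bank transfer that settles the dues clears the virtual liability balance without touching the credit card hierarchy.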

5 August 2022

Thorsten Alteholz: My Debian Activities in July 2022

FTP master

This month I accepted 420 and rejected 44 packages. The overall number of packages that got accepted was 422. I am sad to write the following lines, but unfortunately there are people who would rather take advantage of others than do proper maintenance of their packages. So, in order to find time slots for as many packages in NEW as possible, I no longer write a debian/copyright for others. I know it is a boring task to collect the copyright information, but our policy still requires it. Of course nobody is perfect, and one or another license or copyright holder can certainly be overlooked. Luckily most contributors maintain their debian/copyright very thoroughly, with terrific results. On the other hand, some contributors upload only some crap and demand that I list exactly what is missing. I am no longer willing to do this. I am going to stop processing after I find a few missing things and reject the package. When I repeatedly see uploads containing improvements only for the things I pointed out, I will process such a package only after all others in NEW are done.

Debian LTS

This was my ninety-seventh month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. This month my overall workload was 35.75h. Unfortunately Stretch LTS has moved to Stretch ELTS and Buster LTS was not yet opened in July, so I think this is the first month I did not work all my assigned hours. Besides things on security-master, I only worked 20h, on moving the LTS documentation to its new destination. At the moment the documentation is spread over several locations. As searching over all those locations is not possible, it shall be collected in one place.

Debian ELTS

This month was the forty-eighth ELTS month. During my allocated time I uploaded: I also started to work on mod-wsgi, and my patch was already approved by the maintainer. Now I am waiting for the security team to decide whether it will be uploaded as a DSA or via PU. Last but not least I did some days of frontdesk duties.

Other stuff

This month I uploaded new upstream versions or improved packaging of:

4 August 2022

Abhijith PA: Trip to misty mountains in Munnar

Munnar is a hill station in the Idukki district of Kerala, India, and home to the second largest tea plantation in the country. Lots of people visit in summer, and in winter as well. I live in the district neighboring Munnar, though I had never made a visit. In my mind I had pictured Munnar as a tourist trap with lots of garbage lying around. I recently made a visit, and it changed my perception of this place.

Little background

I never liked tea much, and I am not a coffee person either. But if I had to choose between the two, it would be coffee, because of the strong aroma. When visiting relatives' houses, hot tea is always offered by default. I always find it difficult to say no to this friendly gesture, but I hate tea. The generation before me drinks a lot of tea here where I live. You can see tea stalls on every corner and people sipping tea. I always wondered why people drink so much tea in a hot country like India. The book I am currently trying to read has a chapter about Munnar and how it became a tea plantation under British rule. Around the same time, I watched a documentary program about tea and Munnar.

Munnar

With all this input, I decided to make a visit. I took a motorbike and started the journey to Munnar. Due to COVID restrictions there weren't many tourists, which was to my advantage. There are many waterfalls on the way to Munnar; some are very close to the road, and some are far away but can still be spotted. Munnar travel is not just about the destination, because it has never been a single spot: it is about enjoying the journey the ride has to offer. I stayed at a hotel a little far away from town, though I would never recommend hotels in Munnar. Try to find homestays and small establishments away from the town. There are British-era bungalows inside the plantations, still maintained in good condition, which can be booked per room or as an entire property. The lush greenery on the mountains of tea plantations is very refreshing and a feast for the eyes. The mornings and evenings of Munnar are something to watch: mountains wrapped in mist, slowly uncovering with sunlight and slipping back into mist by dark evening. I planned to visit only places which are less explored by tourists. People here live a simple life; most of them are plantation workers. The native people of Munnar are actually tribal folks, but since the plantation boom many people from Tamil Nadu (a neighboring state) and other parts of Kerala have settled here. The houses of these plantation workers resemble Hobbit homes in the Shire from Lord of the Rings, as they are built into the hill slopes. The Kannan Devan hills, the biggest hills in the area, cover more than half of Munnar. Two famous tea companies from Munnar are Tata Tea and KDHP (Kanan Devan Hills Plantations Company (P) Limited) tea. KDHP is actually an employee-owned tea company, i.e. a good share of the company is owned by the employees working there. This was interesting to me, so I bought a bag of speciality tea from a KDHP store on my return. I don't drink tea on a daily basis, but I will try it on special occasions.

31 July 2022

Russell Coker: Links July 2022

Darren Hayes wrote an interesting article about his battle with depression and his journey to accepting being gay [1]. Savage Garden had some great songs; Affirmation is relevant to this topic. Rorodi wrote an interesting article about the biggest crypto lending company being a Ponzi scheme [2]. One thing I find particularly noteworthy is how obviously scammy it is, even to the extent of having an ex porn star as an executive! Celsius is now in the process of going bankrupt, 7 months after that article was published. Quora has an interesting discussion about the different type casts in C++ [3]. C-style casts shouldn't be used! MamaMia has an interesting article about Action Faking, which means procrastination by doing tasks marginally related to the end goal [4]. This can include excessive study of the topic, excessive planning of the work, and doing things that aren't on the critical path first (e.g. thinking of a name for a project). Apple has a new Lockdown Mode to run an iPhone in a more secure configuration [5]. It would be good if more operating systems had a feature like this. Here is an informative article about the energy use of different organs [6]. The highest metabolic rates (in kcal/kg/day) are for the heart and kidneys. The brain is third on the list, and as it's significantly more massive than the heart and kidneys it uses more energy; however, this research was done on people who were at rest. Scientific American has an interesting article about brain energy use and exhaustion from mental effort [7]. Apparently it's doing things that aren't fun that causes exhaustion; mental effort that's fun can be refreshing.

25 July 2022

Bits from Debian: DebConf22 closes in Prizren and DebConf23 dates announced

DebConf22 group photo

On Sunday 24 July 2022, the annual Debian Developers and Contributors Conference came to a close. Hosting 260 attendees from 38 different countries over a combined 91 event talks, discussion sessions, Birds of a Feather (BoF) gatherings, workshops, and activities, DebConf22 was a large success. The conference was preceded by the annual DebCamp, held 10 July to 16 July, which focused on individual work and team sprints for in-person collaboration towards developing Debian. In particular, this year there were sprints to advance development of Mobian/Debian on mobile, reproducible builds and Python in Debian, and a BootCamp for newcomers to get introduced to Debian and have some hands-on experience with using it and contributing to the community.

The actual Debian Developers Conference started on Sunday 17 July 2022. Together with activities such as the traditional 'Bits from the DPL' talk, the continuous key-signing party, lightning talks and the announcement of next year's DebConf (DebConf23 in Kochi, India), there were several sessions related to programming language teams such as Python, Perl and Ruby, as well as news updates on several projects and internal Debian teams, discussion sessions (BoFs) from many technical teams (Long Term Support, Android tools, Debian Derivatives, Debian Installer and Images team, Debian Science...) and local communities (Debian Brasil, Debian India, the Debian Local Teams), along with many other events of interest regarding Debian and free software. The schedule was updated each day with planned and ad-hoc activities introduced by attendees over the course of the entire conference. Several activities that couldn't be organized in past years due to the COVID pandemic returned to the conference's schedule: a job fair, open-mic and poetry night, the traditional Cheese and Wine party, the group photos and the Day Trip.

For those who were not able to attend, most of the talks and sessions were recorded and live-streamed, with videos made available through the Debian meetings archive website. Almost all of the sessions facilitated remote participation via IRC, messaging apps, or online collaborative text documents. The DebConf22 website will remain active for archival purposes and will continue to offer links to the presentations and videos of talks and events.

Next year, DebConf23 will be held in Kochi, India, from September 10 to September 16, 2023. As tradition dictates, before the next DebConf the local organizers in India will start the conference activities with DebCamp (September 03 to September 09, 2023), with a particular focus on individual and team work towards improving the distribution. DebConf is committed to a safe and welcoming environment for all participants. See the web page about the Code of Conduct on the DebConf22 website for more details.

Debian thanks the commitment of numerous sponsors who supported DebConf22, particularly our Platinum Sponsors: Lenovo, Infomaniak, ITP Prizren and Google.

About Debian

The Debian Project was founded in 1993 by Ian Murdock to be a truly free community project. Since then the project has grown to be one of the largest and most influential open source projects. Thousands of volunteers from all over the world work together to create and maintain Debian software. Available in 70 languages, and supporting a huge range of computer types, Debian calls itself the universal operating system.

About DebConf

DebConf is the Debian Project's developer conference.
In addition to a full schedule of technical, social and policy talks, DebConf provides an opportunity for developers, contributors and other interested people to meet in person and work together more closely. It has taken place annually since 2000 in locations as varied as Scotland, Argentina, and Bosnia and Herzegovina. More information about DebConf is available from https://debconf.org/.

About Lenovo

As a global technology leader manufacturing a wide portfolio of connected products, including smartphones, tablets, PCs and workstations as well as AR/VR devices, smart home/office and data center solutions, Lenovo understands how critical open systems and platforms are to a connected world.

About Infomaniak

Infomaniak is Switzerland's largest web-hosting company, also offering backup and storage services, solutions for event organizers, and live-streaming and video-on-demand services. It wholly owns its datacenters and all elements critical to the functioning of the services and products it provides (both software and hardware).

About ITP Prizren

Innovation and Training Park Prizren intends to be a driving and boosting element in the areas of ICT, agro-food and creative industries, through the creation and management of a favourable environment and efficient services for SMEs, exploiting different kinds of innovations that can help Kosovo improve its level of development in industry and research, bringing benefits to the economy and society of the country as a whole.

About Google

Google is one of the largest technology companies in the world, providing a wide range of Internet-related services and products such as online advertising technologies, search, cloud computing, software, and hardware. Google has been supporting Debian by sponsoring DebConf for more than ten years, and is also a Debian partner sponsoring parts of Salsa's continuous integration infrastructure within Google Cloud Platform.

Contact Information

For further information, please visit the DebConf22 web page at https://debconf22.debconf.org/ or send mail to press@debian.org.

5 July 2022

Alberto García: Running the Steam Deck's OS in a virtual machine using QEMU

SteamOS desktop

Introduction

The Steam Deck is a handheld gaming computer that runs a Linux-based operating system called SteamOS. The machine comes with SteamOS 3 (code name "holo"), which is in turn based on Arch Linux. Although there is no SteamOS 3 installer for a generic PC (yet), it is very easy to install on a virtual machine using QEMU. This post explains how to do it.

The goal of this VM is not to play games (you can already install Steam on your computer, after all) but to use SteamOS in desktop mode. The Gamescope mode (the console-like interface you normally see when you use the machine) requires additional development to make it work with QEMU and will not work with these instructions. A SteamOS VM can be useful for debugging, development, and generally playing and tinkering with the OS without risking breaking the Steam Deck. Running the SteamOS desktop in a virtual machine only requires QEMU and the OVMF UEFI firmware, and should work on any relatively recent distribution. In this post I'm using QEMU directly, but you can also use virt-manager or some other tool if you prefer; we're emulating a standard x86_64 machine here.

General concepts

SteamOS is a single-user operating system and it uses an A/B partition scheme, which means that there are two sets of partitions and two copies of the operating system. The root filesystem is read-only and system updates happen on the partition set that is not active. This allows for safer updates, among other things. There is one single /home partition, shared by both partition sets. It contains the games, user files, and anything that the user wants to install there. Although the user can trivially become root, make the root filesystem read-write and install or change anything (the pacman package manager is available), this is not recommended. A simple way for the user to install additional software that survives OS updates and doesn't touch the root filesystem is Flatpak. It comes preinstalled with the OS and is integrated with the KDE Discover app.

Preparing all necessary files

The first thing that we need is the installer. For that we have to download the Steam Deck recovery image from here: https://store.steampowered.com/steamos/download/?ver=steamdeck&snr=

Once the file has been downloaded, we can uncompress it and we'll get a raw disk image called steamdeck-recovery-4.img (the number may vary). Note that the recovery image is already SteamOS (just not the most up-to-date version). If you simply want to have a quick look you can play a bit with it and skip the installation step. In this case I recommend that you extend the image before using it, for example with truncate -s 64G steamdeck-recovery-4.img or, better, create a qcow2 overlay file and leave the original raw image unmodified: qemu-img create -f qcow2 -F raw -b steamdeck-recovery-4.img steamdeck-recovery-extended.qcow2 64G

But here we want to perform the actual installation, so we need a destination image. Let's create one:

$ qemu-img create -f qcow2 steamos.qcow2 64G

Installing SteamOS

Now that we have all the files we can start the virtual machine:
$ qemu-system-x86_64 -enable-kvm -smp cores=4 -m 8G \
    -device usb-ehci -device usb-tablet \
    -device intel-hda -device hda-duplex \
    -device VGA,xres=1280,yres=800 \
    -drive if=pflash,format=raw,readonly=on,file=/usr/share/ovmf/OVMF.fd \
    -drive if=virtio,file=steamdeck-recovery-4.img,format=raw \
    -device nvme,drive=drive0,serial=badbeef \
    -drive if=none,id=drive0,file=steamos.qcow2
Note that we're emulating an NVMe drive for steamos.qcow2 because that's what the installer script expects. This is not strictly necessary, but it makes things a bit easier. If you don't want to do that you'll have to edit ~/tools/repair_device.sh and change DISK and DISK_SUFFIX.

SteamOS installer shortcuts

Once the system has booted we'll see a KDE Plasma session with a few tools on the desktop. If we select "Reimage Steam Deck" and click "Proceed" on the confirmation dialog then SteamOS will be installed on the destination drive. This process should not take a long time. Now, once the operation finishes a new confirmation dialog will ask if we want to reboot the Steam Deck, but here we have to choose "Cancel". We cannot use the new image yet because it would try to boot into the Gamescope session, which won't work, so we need to change the default desktop session. SteamOS comes with a helper script that allows us to enter a chroot after automatically mounting all SteamOS partitions, so let's open a Konsole and make the Plasma session the default one in both partition sets:
$ sudo steamos-chroot --disk /dev/nvme0n1 --partset A
# steamos-readonly disable
# echo '[Autologin]' > /etc/sddm.conf.d/zz-steamos-autologin.conf
# echo 'Session=plasma.desktop' >> /etc/sddm.conf.d/zz-steamos-autologin.conf
# steamos-readonly enable
# exit
$ sudo steamos-chroot --disk /dev/nvme0n1 --partset B
# steamos-readonly disable
# echo '[Autologin]' > /etc/sddm.conf.d/zz-steamos-autologin.conf
# echo 'Session=plasma.desktop' >> /etc/sddm.conf.d/zz-steamos-autologin.conf
# steamos-readonly enable
# exit
After this we can shut down the virtual machine. Our new SteamOS drive is ready to be used. We can discard the recovery image now if we want.

Booting SteamOS and first steps

To boot SteamOS we can use a QEMU line similar to the one used during the installation. This time we're not emulating an NVMe drive because it's no longer necessary.
$ cp /usr/share/OVMF/OVMF_VARS.fd .
$ qemu-system-x86_64 -enable-kvm -smp cores=4 -m 8G \
   -device usb-ehci -device usb-tablet \
   -device intel-hda -device hda-duplex \
   -device VGA,xres=1280,yres=800 \
   -drive if=pflash,format=raw,readonly=on,file=/usr/share/ovmf/OVMF.fd \
   -drive if=pflash,format=raw,file=OVMF_VARS.fd \
   -drive if=virtio,file=steamos.qcow2 \
   -device virtio-net-pci,netdev=net0 \
   -netdev user,id=net0,hostfwd=tcp::2222-:22
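The hostfwd option in the last line forwards host TCP port 2222 to the guest's SSH port 22. As a small usage sketch, assuming you have first set a password for the default deck user and enabled the SSH service inside the guest (e.g. with passwd and sudo systemctl enable --now sshd; these steps are my assumptions, not part of the original instructions), you could then connect with:

$ ssh -p 2222 deck@localhost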
(If you don't want SSH access to the VM you can omit the last two lines, and the example above.) If everything went fine, you should see KDE Plasma again, this time with a desktop icon to launch Steam and another one to "Return to Gaming Mode" (which we should not use because it won't work). See the screenshot that opens this post. Congratulations, you're running SteamOS now. Here are some things that you probably want to do.

Updating the OS to the latest version

The Steam Deck recovery image doesn't install the most recent version of SteamOS, so now we should probably do a software update with the steamos-update tool. Note: if that step fails after reaching 100% with a post-install handler error, go to Connections in the system settings, rename "Wired Connection 1" to something else (anything, the name doesn't matter), click "Apply" and run steamos-update again. This works around a bug in the update process. Recent images fix this, and with them the workaround is not necessary. As we did with the recovery image, before rebooting we should ensure that the new update boots into the Plasma session, otherwise it won't work:
$ sudo steamos-chroot --partset other
# steamos-readonly disable
# echo '[Autologin]' > /etc/sddm.conf.d/zz-steamos-autologin.conf
# echo 'Session=plasma.desktop' >> /etc/sddm.conf.d/zz-steamos-autologin.conf
# steamos-readonly enable
# exit
After this we can restart the system. If everything went fine we should be running the latest SteamOS release. Enjoy!

Reporting bugs

SteamOS is under active development. If you find problems or want to request improvements please go to the SteamOS community tracker.

Edit 06 Jul 2022: Small fixes; mention how to install the OS without using NVMe.

2 July 2022

François Marier: Remote logging of Turris Omnia log messages using syslog-ng and rsyslog

As part of debugging an upstream connection problem I've been seeing recently, I wanted to be able to monitor the logs from my Turris Omnia router. Here's how I configured it to send its logs to a server I already had on the local network.

Server setup

The first thing I did was to open up my server's rsyslog (Debian's default syslog server) to remote connections, since it's going to be the destination host for the router's log messages. I added the following to /etc/rsyslog.d/router.conf:
module(load="imtcp")
input(type="imtcp" port="514")
if $fromhost-ip == '192.168.1.1' then {
    if $syslogseverity <= 5 then {
        action(type="omfile" file="/var/log/router.log")
    }
    stop
}
This is using the latest rsyslog configuration method: a handy scripting language called RainerScript. Severity level 5 maps to "notice" which consists of unusual non-error conditions, and 192.168.1.1 is of course the IP address of the router on the LAN side. With this, I'm directing all router log messages to a separate file, filtering out anything less important than severity 5. In order for rsyslog to pick up this new configuration file, I restarted it:
systemctl restart rsyslog.service
and checked that it was running correctly (e.g. no syntax errors in the new config file) using:
systemctl status rsyslog.service
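Alternatively, the configuration file can be validated without restarting the daemon, using rsyslog's built-in config check:
rsyslogd -N1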
Since I added a new log file, I also set up log rotation for it by putting the following in /etc/logrotate.d/router:
/var/log/router.log {
    rotate 4
    weekly
    missingok
    notifempty
    compress
    delaycompress
    sharedscripts
    postrotate
        /usr/lib/rsyslog/rsyslog-rotate
    endscript
}
In addition, since I use logcheck to monitor my server logs and email me errors, I had to add /var/log/router.log to /etc/logcheck/logcheck.logfiles. Finally I opened the rsyslog port to the router in my server's firewall by adding the following to /etc/network/iptables.up.rules:
# Allow logs from the router
-A INPUT -s 192.168.1.1 -p tcp --dport 514 -j ACCEPT
and ran iptables-apply. With all of this in place, it was time to get the router to send messages.
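Before moving on, one can optionally confirm that the server is listening for remote connections on TCP port 514 (a quick sanity check; any equivalent tool works):
ss -tlnp | grep 514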

Router setup

As suggested on the Turris forum, I ssh'ed into my router and added this to /etc/syslog-ng.d/remote.conf:
destination d_loghost {
        network("192.168.1.200" time-zone("America/Vancouver"));
};
source dns {
        file("/var/log/resolver");
};
log {
        source(src);
        source(net);
        source(kernel);
        source(dns);
        destination(d_loghost);
};
Setting the timezone to the same as my server was needed because the router messages were otherwise sent with UTC timestamps. To ensure that the destination host always gets the same IP address (192.168.1.200), I went to the advanced DHCP configuration page and added a static lease for the server's MAC address so that it always gets assigned 192.168.1.200. If that wasn't already the server's IP address, you'll have to restart it for this to take effect. Finally, I restarted the syslog-ng daemon on the router to pick up the new config file:
/etc/init.d/syslog-ng restart
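If the daemon fails to come back up, the configuration can be checked for syntax errors using syslog-ng's built-in check:
syslog-ng --syntax-only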

Testing

In order to test this configuration, I opened three terminal windows:
  1. tail -f /var/log/syslog on the server
  2. tail -f /var/log/router.log on the server
  3. tail -f /var/log/messages on the router
I immediately started to see messages from the router in the third window, and some of these (not all, because of my severity-5 filter) were flowing into the second window as well. Also important is that none of the messages made it to the first window; otherwise log messages from the router would be mixed in with the server's own logs. That's the purpose of the stop command in /etc/rsyslog.d/router.conf. To force a log message to be emitted by the router, simply ssh into it and issue the following command:
logger Test
It should show up in the second and third windows immediately if you've got everything set up correctly.

29 June 2022

Aigars Mahinovs: Long travel in an electric car

Since the first week of April 2022 I have (finally!) changed my company car from a plug-in hybrid to a fully electric car. My new ride, for the next two years, is a BMW i4 M50 in Aventurine Red metallic. An elegant car with a very deep and memorable color, insanely powerful (544 hp/795 Nm), sub-4 second 0-100 km/h, a large 84 kWh battery (80 kWh usable), charging up to 210 kW, a top speed of 225 km/h and also very efficient (which came out best in this trip), with a WLTP range of 510 km and an EVDB real range of 435 km. The car also has performance tyres (Hankook Ventus S1 evo3 245/45R18 100Y XL in front and 255/45R18 103Y XL in the rear, all at the recommended 2.5 bar) that reduce efficiency. So I wanted to document and describe what it was like for me to travel ~2000 km (one way) with this electric car from the south of Germany to the north of Latvia. I have done this trip many times before, since I live in Germany now and travel back to my relatives in Latvia 1-2 times per year. This was the first time I made the trip in an electric car. And as this trip includes both travelling in Germany (where BEV infrastructure is the best in the world) and across Eastern/Northern Europe, I believe that this can be interesting to a few people out there.

Normally when I travelled this route with a gasoline/diesel car I would drive for two days with an intermediate stop somewhere around Warsaw, with about 12 hours of travel time each day. This would normally include a couple of bathroom stops each day, at least one longer lunch stop and 3-4 refueling stops on top of that. Normally this would use at least 6 liters of fuel per 100 km on average, with a total usage of about 270 liters for the whole trip (or about 540€ just in fuel costs, nowadays). My (personal) quirk is that both fuel and recharging of my (business) car inside Germany is actually paid by my employer, so it is useful for me to charge up (or fill up) at the last station in Germany before driving on.

The plan for this trip was made in a similar way as when travelling with a gasoline car: travel as fast as possible on the German Autobahn network to the last charging stop on the A4 near Görlitz, charge up there as much as reasonable, then travel to a hotel in Warsaw, charge there overnight, and travel north towards the Ionity chargers in Lithuania, from where reaching the final target in the north of Latvia should be possible. How did this plan meet reality?

Travelling inside Germany with an electric car was basically perfect. The most efficient way would involve driving fast and hard, with a top speed of even 180 km/h (where possible due to speed limits and traffic). The BMW i4 is very efficient at high speeds, with consumption maxing out at 28 kWh/100km when you actually drive at this speed all the time. In the real situation of this trip we saw consumption of 20.8-22.2 kWh/100km in the first legs of the trip. The more traffic there is, and the more speed limits and roadworks, the lower the average speed and also the lower the consumption. With this kind of consumption we could comfortably drive 2 hours as fast as we could and then pick any fast charger along the route, and in 26 minutes at a charger (50 kWh charged in total) we'd be ready to drive for another 2 hours. This lines up very well with recommended rest stops for biological reasons (bathroom, water or coffee, a bit of movement to get blood circulating) and very close to what I had to do anyway with a gasoline car. With a gasoline car I had to refuel first, then park, then go to the bathroom and so on.
With an electric car I can do all of that while the car is charging, and in the end the total time for a stop is very similar. Also note that there was a crazy heat wave going on, with the temperature outside at about 34°C minimum the whole day and hitting 40°C at one point of the trip, so a lot of power was used for cooling. The car has a heat pump as standard, but it still was working hard to keep us cool in the sun.

The car was able to plan a charging route with all the required charging stops, and had all the good options (like multiple intermediate stops) that many other cars (hi Tesla) and mobile apps (hi Google and Apple) do not have yet. There are a couple of bugs with the charging route and the display of current route guidance; those are already fixed and will be delivered over the air with the July 2022 update. Another good alternative is ABRP (A Better Route Planner), which was specifically designed for routing electric cars along the best route for charging. Most phone apps (like Google Maps) have no idea about your specific electric car: they know nothing about the battery capacity or the charging curve, and are missing key live data as well, such as the current consumption and the remaining energy in the battery. ABRP is different: it has data and profiles for almost all electric cars and can also be linked to live vehicle data, either via an OBD dongle or via the new Tronity cloud service. Tronity reads data from a vehicle-specific cloud service, such as the MyBMW service, saves it, tracks history and also re-transmits it to ABRP for live navigation planning. ABRP allows for options and settings that no car or app offers, for example saying that you want to stop at a particular place for an hour or until the battery is charged to 90%, or saying that you have specific charging cards and would only want to stop at chargers that support them. Both the car and ABRP also support alternate routes, even with multiple intermediate stops. In comparison, route planning by Google Maps or Apple Maps or Waze or even Tesla does not really come close.

After charging up at the last German fast charger, a more interesting part of the trip started. In Poland the density of high power chargers (HPC) is much lower than in Germany. There are many chargers (west of Warsaw), but the vast majority of them are (relatively) slow 50 kW chargers. And that is the difference between putting 50 kWh into the car in 23-26 minutes or in 60 minutes. It does not seem like much, but the key bit here is that for the first 20 minutes it is easy to find stuff that should be done anyway; after that you are done and you are just waiting for the car, and whether that takes 4 more minutes or 40 more minutes is a big, perceptual, difference. So using HPCs is much, much preferable. So we put the Ionity charger near Lodz in as our intermediate target, and the car suggested an intermediate stop at a GreenWay charger by Katy Wroclawskie. The location is a bit weird: it has 4 charging stations of 150 kW each. The weird bits are that each station has two CCS connectors but only one parking place (and the connectors share power, so if two cars were to connect, each would get half power). Also, from the front of the location one can only see two stations; the other two are semi-hidden around a corner. We actually missed them on the way to Latvia, and one person waited for the charger behind us for about 10 minutes. We only discovered the other two stations on the way back.
With the slower speeds in Poland the consumption goes down to 18 kWh/100km, which now translates to up to 3 hours of driving between stops. At the end of the first day we drove, starting from Ulm at 9:30 in the morning until about 23:00 in the evening, a total distance of about 1100 km with 5 charging stops, starting with 92% battery and charging for 26 min (50 kWh), 33 min (57 kWh + lunch), 17 min (23 kWh), 12 min (17 kWh) and 13 min (37 kWh). In the last two chargers you can see the difference between a good and fast 150 kW charger at a high battery charge level and a really fast Ionity charger at a low battery charge level, which makes charging faster still. We arrived at the hotel with 23% battery. Overnight the car charged from a Porsche Destination Charger to 87% (57 kWh). That was a bit less than I would expect from a full power 11 kW charger, but good enough. Hotels should really install 11 kW Type2 chargers for their guests; it is a really significant bonus that drives more clients to you.

The road between Warsaw and Kaunas is the most difficult part of the trip, both for the driving itself and for charging. For driving the problem is that there will be a new highway going from Warsaw to the Lithuanian border, but it is actually not fully ready yet. So parts of the way one drives on the new, great and wide highway, and parts of the way on temporary roads or on old single-lane undivided roads. And the most annoying part is navigating between the parts, as signs are not always clear and the maps are either too old or too new. Some maps do not have the new roads, and others have roads that have not actually been built or opened to traffic yet. It's really easy to lose one's way and take a significant detour.

As far as charging goes, there are basically only slow 50 kW chargers between Warsaw and Kaunas (for now). We chose to charge at the last charger in Poland, by the Suwalki Kaufland. That was not a good idea: there is only one 50 kW CCS connector and many people decide the same, so there can be a wait. We had to wait 17 minutes before we could charge for 30 more minutes just to get 18 kWh into the battery. Not the best use of time. On the way back we chose a different charger, in Lomza, where we could have a relaxed dinner while the car was charging. That was far more relaxing and a better use of time.

We also tried charging at an Orlen charger that was not recommended by our car, and we found out why. Unlike all the other chargers during our entire trip, this charger did not accept our universal BMW Charging RFID card. Instead it demanded that we download their own Orlen app and register there. The app is only available in some countries (and not in others), and on iPhone it is only available in Polish. That is a bad exception to the rule and a bad example. This is also how most charging works in the USA. Here in Europe that is not normal; the norm is to use a charging card, either provided by the car maker or by another supplier (like PlugSurfing or Maingau Energy). The providers then make roaming arrangements with all the charging networks, so the cards just work everywhere. In the end the user gets the prices and the bills from their card provider as a single monthly bill. This also saves any credit card fees for the user. Having a clear, separate RFID card also means that one can easily choose how to pay for each charging session.
For example, I have a corporate RFID card that my company pays for (for charging in Germany) and a private BMW Charging card that I pay for myself (for charging abroad). Having the car itself authenticate directly with the charger (like Tesla does) removes the option of choosing how to pay. Having each charging network require its own app or token brings too much chaos and takes too much setup. The optimum is having one card that works everywhere, with the option of additional cards for specific purposes.

Reaching the Ionity chargers in Lithuania is again a breath of fresh air: 20-24 minutes to charge 50 kWh is as expected. One can charge at the first Ionity just enough to reach the next one, and at the second charger one can charge up enough to reach either the Ionity charger in Adazi or the final target in Latvia. There is a huge number of chargers managed by CSDD (Road Traffic and Safety Directorate) all over Latvia, but they are 50 kW chargers. Good enough for local travel, but not great for long distance trips. The BMW i4 charges at over 50 kW on an HPC even at over 90% battery state of charge (SoC). This means that it is always faster to charge up at an HPC than at a 50 kW charger, if that is at all possible. We also tested the CSDD chargers and they worked without any issues. One could pay with the BMW Charging RFID card, one could use the CSDD e-mobi app or token, and one could also use Mobilly, an app that you can use in Latvia for everything from parking to public transport tickets, museums or car washes.

We managed to reach our final destination near Aluksne with 17% range remaining after just 3 charging stops: 17+30 min (18 kWh), 24 min (48 kWh), 28 min (36 kWh). At the last stop we charged to 90%, which took a few extra minutes more than would have been optimal.

For travel around Latvia we were charging at our target farmhouse from a normal 3 kW Schuko EU socket. That is very slow. We charged for 33 hours and went from 17% to 94%, so not really full. That was perfectly fine for our purposes. We easily reached Riga, drove to the sea and then back to Aluksne with 8% still in reserve, and started charging again for the next trip. If we had needed to drive around more and charge faster, we could have used the normal 3-phase 440V connection in the farmhouse to have a red CEE 16A plug installed (the same one people use for welders). The BMW i4 comes standard with a new BMW Flexible Fast Charger that has changeable socket adapters. It comes by default with a Schuko connector in Europe, but for 90€ one can buy an adapter for the blue CEE plug (3.7 kW) or the red CEE 16A or 32A plugs (11 kW). Some public charging stations in France actually use the blue CEE plugs instead of the more common Type2 electric car charging stations. The CEE plugs are also common in camping parking places.

On the way back, long distance BEV travel was already well understood and did not cause us any problems. From our destination we could easily reach the first Ionity in Lithuania, on the Panevezhis bypass road, where in just 8 minutes we got 19 kWh and were ready to drive on to Kaunas; there we made a longer 32 minute stop before the charging desert of the Suwalki Gap, which gave us 52 kWh, up to 90%. That brought us to a shopping mall in Lomza where we had some food and charged up 39 kWh in a lazy 50 minutes.
That was enough to bring us to our return hotel for the night - Hotel 500W in Strykow by Lodz, which has a 50 kW charger on site. While we were having a late dinner and preparing for sleep, the car easily recharged to full (71 kWh in 95 minutes), so I just moved it from the charger to a parking spot before going to sleep. A really easy and smooth-flowing day. The second day back went even better, as we just needed an 18 minute stop at the same Katy Wroclawskie charger as before to get 22 kWh, and that was enough to get back to Germany. After that we were again flying on the Autobahn and charging as needed: 15 min (31 kWh), 23 min (48 kWh) and 31 min (54 kWh + food). We started the day at about 9:40 and were home at 21:40, after driving just over 1000 km that day. So less than 12 hours for 1000 km travelled, including all charging, bio stops, food and some traffic jams as well. Not bad.

Now let's take a look at all the apps and data connections that a technically minded customer can have for their car. Architecturally the car is a network of computers by itself, but it is very secured and normally people do not have any direct access. However, once you log in to the car with your BMW account, the car gets your profile info and preferences (seat settings, navigation favorites, ...) and the car can then also start sending information about its status to the BMW backend. This information is then available to the user over multiple different channels. There is no separate channel for each of those data flows; the data only goes once to the backend, and all other communication of apps happens with the backend.

First of all, the MyBMW app. This is the go-to for everything about the car: seeing its current status and location (when not driving), sending commands to the car (lock, unlock, flash lights, pre-condition, ...) and also monitoring and controlling charging processes. You can also plan a route or destination in the app in advance and then just send it over to the car, so it already knows where to drive to when you get to the car. This can also integrate with calendar entries, if you have locations for appointments, for example. The app also shows the full charging history and allows a very easy export of that data; here I exported all charging sessions from June and then trimmed it back to only the sessions relevant to the trip and cut off some design elements to make the data more visible. So one can very easily see when and where we were charging, how much power we got at each spot, and (if you set prices for locations) it can even show costs.

I've already mentioned the Tronity service and its ABRP integration, but it also saves the information that it gets from the car and gathers that data over time. It has nice aspects, like showing the driven routes on a map, having ways to do business trip accounting and having a good calendar view. Sadly it does not correctly capture the data for charging sessions (the amounts are incorrect). Update: after talking to Tronity support, it looks like the bug was an incorrect value for the usable battery capacity of my car. They will look into getting the right values there by default, but as a workaround one can edit their car in the system (after at least one charging session) and directly set the expected (usable) battery capacity in the car properties on the Tronity web portal settings.

One other fun way to see data from your BMW is using the BMW integration in Home Assistant. This brings the car in as a device in your own smart home.
You can read all the variables from the car's current status (and Home Assistant makes cute historical charts) and you can even see interesting trends: for example, the remaining range shows a much higher value in Latvia, as its prediction is adapted to Latvian road speeds; during the trip it adapts to Polish and then to German road speeds, and thus to higher consumption and a lower maximum predicted remaining range. Having the car attached to Home Assistant also allows you to use the car in automations, both as a data and event source (like detecting when the car enters the "Home" zone) and also as a target, so you could flash the car's lights or even unlock or lock it when certain conditions are met.

So, what in the end was the most important thing: the cost of the trip? In total we charged up 863 kWh, which would normally cost about 290€, close to half of what this trip would have cost with a gasoline car. Out of that, 279 kWh was charged in Germany (paid by my employer) and 154 kWh at the farmhouse (paid by our wonderful relatives :D), so in the end the charging that I actually needed to pay for adds up to 430 kWh, or about 150€. Typically it took about 400€ in fuel that I had to pay to get to Latvia and back. The difference is really nice!

In the end I believe that there are three different kinds of charging:
  • incidental charging - this is the vast majority of charging in normal day-to-day life. The car gets charged when and where it is convenient to do so along the way. If we go to a movie or a shop and there is a chance to leave the car at a charger, then it can charge up. This works really well and does not take any extra charging time from us.
  • fast charging - charging up at an HPC under optimal charging conditions: from a relatively low level to no more than 70-80%, while you are still doing all the normal things one would do in a quick stop in a long travel process: bio things, cleaning the windscreen, getting a coffee or a snack.
  • necessary charging - charging from whatever charger is available just enough to be able to reach the next destination or the next fast charger.
The last category is the only one that is really annoying and should be avoided at all costs, even by shifting your plans so that you find something else useful to do while the necessary charging is happening, thus at least partially shifting it over to the incidental charging category. Then you are no longer just waiting for the car; you are doing something else and the car is magically charged up again. And when one does that, travelling with an electric car becomes no more annoying than travelling with a gasoline car. Having more breaks in a trip is a good thing and makes the trips actually easier and less stressful: I was more relaxed during and after this trip than during previous trips. Having the car's air conditioning always on, even when stopped, was a godsend in the insane heat wave of 30°C-38°C that we were driving through.

Final stats: 4425 km driven in the trip. Average consumption: 18.7 kWh/100km. Time driving: 2 days and 3 hours. The car regenerated 152 kWh. Charging stations recharged 863 kWh. Questions? You can use this i4talk forum thread or this Twitter thread to ask them.

27 June 2022

Russ Allbery: Review: Light from Uncommon Stars

Review: Light from Uncommon Stars, by Ryka Aoki
Publisher: Tor
Copyright: 2021
ISBN: 1-250-78907-9
Format: Kindle
Pages: 371
Katrina Nguyen is a young, abused transgender woman. As the story opens, she's preparing to run away from home. Her escape bag is packed with meds, clothes, her papers, and her violin. The note she is leaving for her parents says that she's going to San Francisco, a plausible lie. Her actual destination is Los Angeles, specifically the San Gabriel Valley, where a man she met at a queer youth conference said he'd give her a place to sleep.

Shizuka Satomi is the Queen of Hell, the legendary, uncompromising violin teacher responsible for six previous superstars, at least within the limited world of classical music. She's wealthy, abrasive, demanding, and intimidating, and unbeknownst to the rest of the world she has made a literal bargain with Hell. She has to deliver seven souls: seven violin players who want something badly enough that they'll bargain with Hell to get it. Six have already been delivered in spectacular fashion, but she's running out of time to deliver the seventh before her own soul is forfeit. Tamiko Grohl, an up-and-coming violinist from her native Los Angeles, will hopefully be the seventh.

Lan Tran is a refugee and matriarch of a family who runs Starrgate Donut. She and her family didn't flee another unstable or inhospitable country. They fled the collapsing Galactic Empire, securing their travel authorization by promising to set up a tourism attraction. Meanwhile, she's careful to give cops free donuts and to keep their advanced technology carefully concealed.

The opening of this book is unlikely to be a surprise in general shape. Most readers would expect Katrina to end up as Satomi's student rather than Tamiko, and indeed she does, although not before Katrina has a very difficult time. Near the start of the novel, I thought "oh, this is going to be hurt/comfort without a romantic relationship," and it is. But it then goes beyond that start into a multifaceted story about complexity, resilience, and how people support each other. It is also a fantastic look at the nuance and intricacies of being or supporting a transgender person, vividly illustrated by a story full of characters the reader cares about and without the academic abstruseness that often gets in the way. The problems with gender-blindness, the limitations of honoring someone's gender without understanding how other people do not, the trickiness of privilege, gender policing as a distraction and alienation from the rest of one's life, the complications of real human bodies and dysmorphia, the importance of listening to another person rather than to one's assumptions about how that person feels: it's all in here, flowing naturally from the story, specific to the characters involved, and never belabored. I cannot express how well-handled this is. It was a delight to read.

The other wonderful thing Aoki does is set Satomi up as the almost supernaturally competent teacher who in a sense "rescues" Katrina, and then invert the trope, showing the limits of Satomi's expertise, the places where she desperately needs human connection for herself, and her struggle to understand Katrina well enough to teach her at the level Satomi expects of herself. Teaching is not one thing to everyone; it's about listening, and Katrina is nothing like Satomi's other students. This novel is full of people thinking they finally understand each other and realizing there is still more depth that they had missed, and then talking through the gap like adults.

As you can tell from any summary of this book, it's an odd genre mash-up.
The fantasy part is a classic "magician sells her soul to Hell" story; there are a few twists, but it largely follows genre expectations. The science fiction part involving Lan is unfortunately weaker and feels more like a random assortment of borrowed Star Trek tropes than coherent world-building. Genre readers should not come to this story expecting a well-thought-out science fiction universe or a serious attempt to reconcile metaphysics between the fantasy and science fiction backgrounds. It's a quirky assortment of parts that don't normally go together, defy easy classification, and are often unexplained. I suspect this was intentional on Aoki's part, given how deeply this book is about the experience of being transgender.

Of the three primary viewpoint characters, I thought Lan's perspective was the weakest, and not just because of her somewhat generic SF background. Aoki uses her as a way to talk about the refugee experience, describing her as a woman who brings her family out of danger to build a new life. This mostly works, but Lan has vastly more power and capabilities than a refugee would normally have. Rather than the typical Asian refugee experience in the San Gabriel Valley, Lan is more akin to a US multimillionaire who for some reason fled to Vietnam (relative to those around her, Lan is arguably even more wealthy than that). This is also a refugee experience, but it is an incredibly privileged one, in a way that partly undermines the role that she plays in the story.

Another false note bothered me more: I thought Tamiko was treated horribly in this story. She plays a quite minor role, sidelined early in the novel and appearing only briefly near the climax, and she's portrayed quite negatively, but she's clearly hurting as deeply as the protagonists of this novel. Aoki gives her a moment of redemption, but Tamiko gets nothing from it. Unlike every other injured and abused character in this story, no one is there for Tamiko and no one ever attempts to understand her. I found that profoundly sad. She's not an admirable character, but neither is Satomi at the start of the book. At least a gesture at a future for Tamiko would have been appreciated.

Those two complaints aside, though, I could not put this book down. I was able to predict the broad outline of the plot, but the specifics were so good and so true to the characters. Both the primary and supporting cast are unique, unpredictable, and memorable.

Light from Uncommon Stars has a complex relationship with genre. It is squarely in the speculative fiction genre; the plot would not work without the fantasy and (more arguably) the science fiction elements. Music is magical in a way that goes beyond what can be attributed to metaphor and subjectivity. But it's also primarily a character story, deeply rooted in the specific location of the San Gabriel Valley east of Los Angeles, full of vivid descriptions (particularly of food) and day-to-day life. As with the fantasy and science fiction elements, Aoki does not try to meld the genre elements into a coherent whole. She lets them sit side by side and be awkward and charming and uneven and chaotic. If you're the sort of SF reader who likes building a coherent theory of world-building rules, you may have to turn that desire off to fully enjoy this book.

I thought this book was great. It's not flawless, but like its characters it's not trying to be flawless. In places it is deeply insightful and heartbreakingly emotional; in others, it's a glorious mess.
It's full of cooking and food, YouTube fame, the disappointments of replicators, video game music, meet-cutes over donuts, found family, and classical music drama. I wish we'd gotten way more about the violin repair shop and a bit less warmed-over Star Trek, but I also loved it exactly the way it was. Definitely the best of the 2022 Hugo nominees that I've read so far.

Content warning for child abuse, rape, self-harm, and somewhat explicit sex work. The start of the book is rather heavy and horrific, although the author advertises fairly clearly (and accurately) that things will get better.

Rating: 9 out of 10

26 June 2022

Russ Allbery: Review: Feet of Clay

Review: Feet of Clay, by Terry Pratchett
Series: Discworld #19
Publisher: Harper
Copyright: October 1996
Printing: February 2014
ISBN: 0-06-227551-8
Format: Mass market
Pages: 392
Feet of Clay is the 19th Discworld novel, the third Watch novel, and probably not the best place to start. You could read only Guards! Guards! and Men at Arms before this one, though, if you wanted.

This story opens with a golem selling another golem to a factory owner, obviously not caring about the price. This is followed by two murders: an elderly priest, and the curator of a dwarven bread museum. (Dwarf bread is a much-feared weapon of war.) Meanwhile, assassins are still trying to kill Watch Commander Vimes, who has an appointment to get a coat of arms. A dwarf named Cheery Littlebottom is joining the Watch. And Lord Vetinari, the ruler of Ankh-Morpork, has been poisoned.

There's a lot going on in this book, and while it's all in some sense related, it's more interwoven than part of a single story. The result felt to me like a day-in-the-life episode of a cop show: a lot of character development, a few largely separate plot lines so that the characters have something to do, and the development of a few long-running themes that are neither started nor concluded in this book. We check in on all the individual Watch members we've met to date, add new ones, and at the end of the book everyone is roughly back to where they were when the book started.

This is, to be clear, not a bad thing for a book to do. It relies on the reader already caring about the characters and being invested in the long arc of the series, but both of those are true of me, so it worked. Cheery is a good addition, giving Pratchett an opportunity to explore gender nonconformity with a twist (all dwarfs are expected to act the same way regardless of gender, which doesn't work for Cheery) and, even better, giving Angua more scenes. Angua is among my favorite Watch characters, although I wish she'd gotten more of a resolution for her relationship anxiety in this book.

The primary plot is about golems, which on Discworld are used in factories because they work nonstop, have no other needs, and do whatever they're told. Nearly everyone in Ankh-Morpork considers them machinery. If you've read any Discworld books before, you will find it unsurprising that Pratchett calls that belief into question, but the ways he gets there, and the links between the golem plot and the other plot threads, have a few good twists and turns.

Reading this, I was reminded vividly of Orwell's discussion of Charles Dickens:
It seems that in every attack Dickens makes upon society he is always pointing to a change of spirit rather than a change of structure. It is hopeless to try and pin him down to any definite remedy, still more to any political doctrine. His approach is always along the moral plane, and his attitude is sufficiently summed up in that remark about Strong's school being as different from Creakle's "as good is from evil." Two things can be very much alike and yet abysmally different. Heaven and Hell are in the same place. Useless to change institutions without a "change of heart" that, essentially, is what he is always saying. If that were all, he might be no more than a cheer-up writer, a reactionary humbug. A "change of heart" is in fact the alibi of people who do not wish to endanger the status quo. But Dickens is not a humbug, except in minor matters, and the strongest single impression one carries away from his books is that of a hatred of tyranny.
and later:
His radicalism is of the vaguest kind, and yet one always knows that it is there. That is the difference between being a moralist and a politician. He has no constructive suggestions, not even a clear grasp of the nature of the society he is attacking, only an emotional perception that something is wrong, all he can finally say is, "Behave decently," which, as I suggested earlier, is not necessarily so shallow as it sounds. Most revolutionaries are potential Tories, because they imagine that everything can be put right by altering the shape of society; once that change is effected, as it sometimes is, they see no need for any other. Dickens has not this kind of mental coarseness. The vagueness of his discontent is the mark of its permanence. What he is out against is not this or that institution, but, as Chesterton put it, "an expression on the human face."
I think Pratchett is, in that sense, a Dickensian writer, and it shows all through Discworld. He does write political crises (there is one in this book), but the crises are moral or personal, not ideological or structural. The Watch novels are often concerned with systems of government, but focus primarily on the popular appeal of kings, the skill of the Patrician, and the greed of those who would maneuver for power. Pratchett does not write (at least so far) about the proper role of government, the impact of Vetinari's policies (or even what those policies may be), or political theory in any deep sense. What he does write about, at great length, is morality, fairness, and a deeply generous humanism, all of which are central to the golem plot.

Vimes is a great protagonist for this type of story. He's grumpy, cynical, stubborn, and prejudiced, and we learn in this book that he's a descendant of the Discworld version of Oliver Cromwell. He can be reflexively self-centered, and he has no clear idea how to use his newfound resources. But he behaves decently towards people, in both big and small things, for reasons that the reader feels he could never adequately explain, but which are rooted in empathy and an instinctual sense of fairness. It's fun to watch him grumble his way through the plot while making snide comments about mysteries and detectives.

I do have to complain a bit about one of those mysteries, though. I would have enjoyed the plot around Vetinari's poisoning more if Pratchett hadn't mercilessly teased readers who know a bit about French history. An allusion or two would have been fun, but he kept dropping references while having Vimes ignore them, and I found the overall effect both frustrating and irritating. That and a few other bits, like Angua's uncommunicative angst, fell flat for me. Thankfully, several other excellent scenes made up for them, such as Nobby's high society party and everything about the College of Heralds. Also, Vimes's impish PDA (smartphone without the phone, for those younger than I am) remains absurdly good commentary on the annoyances of portable digital devices despite an original publication date of 1996.

Feet of Clay is less focused than the previous Watch novels and more of a series book than most Discworld novels. You're reading about characters introduced in previous books with problems that will continue into subsequent books. The plot and the mysteries are there to drive the story but seem relatively incidental to the characterization. This isn't a complaint; at this point in the series, I'm in it for the long haul, and I liked the variation. As usual, Pratchett is stronger for me when he's not overly focused on parody. His own characters are as good as the material he's been parodying, and I'm happy to see them get a book that's not overshadowed by the other material. If you've read this far in the series, or even in just the Watch novels, recommended.

Followed by Hogfather in publication order and, thematically, by Jingo.

Rating: 8 out of 10

29 May 2022

John Goerzen: Fast, Ordered Unixy Queues over NNCP and Syncthing with Filespooler

It seems that lately I've written several shell implementations of a simple queue that enforces ordered execution of jobs that may arrive out of order. After writing this for the nth time in bash, I decided it was time to do it properly. But first, a word on the why of it all.

Why did I bother?

My needs arose primarily from handling Backups over Asynchronous Communication methods, in this case NNCP. When backups contain incrementals that are unpacked on the destination, they must be applied in the correct order. In some cases, like ZFS, the receiving side will detect an out-of-order backup file and exit with an error. In those cases, processing in random order is acceptable but can be slow if, say, hundreds or thousands of hourly backups have stacked up over a period of time. The same goes for using gitsync-nncp to synchronize git repositories. In both cases, a best effort based on creation date is sufficient to produce a significant performance improvement. With other cases, such as tar or dar backups, the receiving side cannot detect out-of-order incrementals. In those situations, the incrementals absolutely must be applied with strict ordering. There are many other situations with these needs as well. Filespooler is the answer to them.

Existing Work

Before writing my own program, I of course looked at what was out there already. I looked at celery, gearman, nq, rq, cctools work queue, ts/tsp (task spooler), filequeue, dramatiq, GNU parallel, and so forth. Unfortunately, none of these met my needs at all. They all tended to have properties like:
  • An extremely complicated client/server system that was incompatible with piping data over existing asynchronous tools
  • A large bias to processing of small web requests, resulting in terrible inefficiency or outright incompatibility with jobs in the TB range
  • An inability to enforce strict ordering of jobs, especially if they arrive in a different order from how they were queued
Many also lacked some nice-to-haves that I implemented for Filespooler:
  • Support for the encryption and cryptographic authentication of jobs, including metadata
  • First-class support for arbitrary compressors
  • Ability to use both stream transports (pipes) and filesystem-like transports (e.g., rclone mount, S3, Syncthing, or Dropbox)

Introducing Filespooler

Filespooler is a tool in the Unix tradition: that is, do one thing well, and integrate nicely with other tools using the fundamental Unix building blocks of files and pipes. Filespooler itself doesn't provide transport for jobs, but instead is designed to cooperate extremely easily with transports that can be written to as a filesystem or piped to, which is to say, almost anything of interest. Filespooler is written in Rust and has an extensive Filespooler Reference as well as many tutorials on its homepage.

Basics of How it Works

Filespooler is intentionally simple:
  • The sender maintains a sequence file that includes a number for the next job packet to be created.
  • The receiver also maintains a sequence file that includes a number for the next job to be processed.
  • fspl prepare creates a Filespooler job packet and emits it to stdout. It includes a small header (<100 bytes in most cases) with the sequence number, creation timestamp, and some other useful metadata.
  • You get to transport this job packet to the receiver in any of many simple ways, which may or may not involve Filespooler's assistance.
  • On the receiver, Filespooler (when running in the default strict ordering mode) will simply look at the sequence file and process jobs in incremental order until it runs out of jobs to process.
Job files on disk are named to match a pattern for identification, but the names themselves are not significant; the sequence number and other metadata live in the job header. You can send job data in three ways:
  1. By piping it to fspl prepare
  2. By setting certain environment variables when calling fspl prepare
  3. By passing additional command-line arguments to fspl prepare, which can optionally be passed to the processing command at the receiver.
Data piped in is added to the job "payload", while environment variables and command-line parameters are encoded in the header.
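To make the flow concrete, here is a minimal single-machine sketch of the whole cycle, using only subcommands that appear in this post; the /tmp paths and the choice of cat as the processing command are examples, not anything Filespooler requires:

$ fspl queue-init -q /tmp/demo-queue
$ echo Hi | fspl prepare -s /tmp/demo-seq -i - | fspl queue-write -q /tmp/demo-queue
$ fspl queue-process -q /tmp/demo-queue cat
Hi

Here cat receives the payload (Hi plus a newline) on stdin and copies it to stdout; in real use, any transport, from Syncthing to NNCP, sits between the queue-write and queue-process steps.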

Basic usage

Here I will excerpt part of the Using Filespooler over Syncthing tutorial; consult it for further detail. As a bit of background, Syncthing is a FLOSS decentralized directory synchronization tool akin to Dropbox (but with a much richer feature set in many ways).

Preparation

First, on the receiver, you create the queue (passing the directory name to -q):
receiver$ fspl queue-init -q ~/sync/b64queue
Now, we can send a job like this:
sender$ echo Hi | fspl prepare -s ~/b64seq -i - | fspl queue-write -q ~/sync/b64queue
Let's break that down:
  • First, we pipe Hi to fspl prepare.
  • fspl prepare takes two parameters:
    • -s seqfile gives the path to a sequence file used on the sender side. This file has a simple number in it that increments a unique counter for every generated job file. It is matched with the nextseq file within the queue to make sure that the receiver processes jobs in the correct order. It MUST be separate from the file that is in the queue and should NOT be placed within the queue. There is no need to sync this file, and it would be ideal to not sync it.
    • The -i option tells fspl prepare to read a file for the packet payload. -i - tells it to read stdin for this purpose. So, the payload will consist of three bytes: Hi\n (that is, including the terminating newline that echo wrote)
  • Now, fspl prepare writes the packet to its stdout. We pipe that into fspl queue-write:
    • fspl queue-write reads stdin and writes it to a file in the queue directory in a safe manner. The file will ultimately match the fspl-*.fspl pattern and have a random string in the middle.
At this point, wait a few seconds (or however long it takes) for the queue files to be synced over to the recipient. On the receiver, we can see if any jobs have arrived yet:
receiver$ fspl queue-ls -q ~/sync/b64queue
ID                   creation timestamp          filename
1                    2022-05-16T20:29:32-05:00   fspl-7b85df4e-4df9-448d-9437-5a24b92904a4.fspl
Let's say we'd like some information about the job. Try this:
receiver$ fspl queue-info -q ~/sync/b64queue -j 1
FSPL_SEQ=1
FSPL_CTIME_SECS=1652940172
FSPL_CTIME_NANOS=94106744
FSPL_CTIME_RFC3339_UTC=2022-05-17T01:29:32Z
FSPL_CTIME_RFC3339_LOCAL=2022-05-16T20:29:32-05:00
FSPL_JOB_FILENAME=fspl-7b85df4e-4df9-448d-9437-5a24b92904a4.fspl
FSPL_JOB_QUEUEDIR=/home/jgoerzen/sync/b64queue
FSPL_JOB_FULLPATH=/home/jgoerzen/sync/b64queue/jobs/fspl-7b85df4e-4df9-448d-9437-5a24b92904a4.fspl
This information is intentionally emitted in a format convenient for parsing. Now let's run the job!
receiver$ fspl queue-process -q ~/sync/b64queue --allow-job-params base64
SGkK
There are two new parameters here:
  • --allow-job-params says that the sender is trusted to supply additional parameters for the command we will be running.
  • base64 is the name of the command that we will run for every job. It will:
    • Have environment variables set as we just saw in queue-info
    • Have the text we previously prepared ("Hi\n") piped to it
By default, fspl queue-process doesn't do anything special with the output; see Handling Filespooler Command Output for details on other options. So, the base64-encoded version of our string is "SGkK". We successfully sent a packet using Syncthing as a transport mechanism! At this point, if you do a fspl queue-ls again, you'll see the queue is empty. By default, fspl queue-process deletes jobs that have been successfully processed.

For more

See the Filespooler homepage.
This blog post is also available as a permanent, periodically-updated page.

26 May 2022

Sergio Talens-Oliag: New Blog Config

As promised, in this post I'm going to explain how I've configured this blog using hugo, asciidoctor and the papermod theme, how I publish it using nginx, how I've integrated the remark42 comment system and how I've automated its publication using gitea and json2file-go. It is a long post, but I hope that at least parts of it can be interesting for some; feel free to ignore it if that is not your case.

Hugo Configuration

Theme settings

The site is using the PaperMod theme and as I'm using asciidoctor to publish my content I've adjusted the settings to improve how things are shown with it. The current config.yml file is the one shown below (probably some of the settings are not required or even used right now, but I'm including the current file, so this post will always have its latest version):
config.yml
baseURL: https://blogops.mixinet.net/
title: Mixinet BlogOps
paginate: 5
theme: PaperMod
destination: public/
enableInlineShortcodes: true
enableRobotsTXT: true
buildDrafts: false
buildFuture: false
buildExpired: false
enableEmoji: true
pygmentsUseClasses: true
minify:
  disableXML: true
  minifyOutput: true
languages:
  en:
    languageName: "English"
    description: "Mixinet BlogOps - https://blogops.mixinet.net/"
    author: "Sergio Talens-Oliag"
    weight: 1
    title: Mixinet BlogOps
    homeInfoParams:
      Title: "Sergio Talens-Oliag Technical Blog"
      Content: >
        ![Mixinet BlogOps](/images/mixinet-blogops.png)
    taxonomies:
      category: categories
      tag: tags
      series: series
    menu:
      main:
        - name: Archive
          url: archives
          weight: 5
        - name: Categories
          url: categories/
          weight: 10
        - name: Tags
          url: tags/
          weight: 10
        - name: Search
          url: search/
          weight: 15
outputs:
  home:
    - HTML
    - RSS
    - JSON
params:
  env: production
  defaultTheme: light
  disableThemeToggle: false
  ShowShareButtons: true
  ShowReadingTime: true
  disableSpecial1stPost: true
  disableHLJS: true
  displayFullLangName: true
  ShowPostNavLinks: true
  ShowBreadCrumbs: true
  ShowCodeCopyButtons: true
  ShowRssButtonInSectionTermList: true
  ShowFullTextinRSS: true
  ShowToc: true
  TocOpen: false
  comments: true
  remark42SiteID: "blogops"
  remark42Url: "/remark42"
  profileMode:
    enabled: false
    title: Sergio Talens-Oliag Technical Blog
    imageUrl: "/images/mixinet-blogops.png"
    imageTitle: Mixinet BlogOps
    buttons:
      - name: Archives
        url: archives
      - name: Categories
        url: categories
      - name: Tags
        url: tags
  socialIcons:
    - name: CV
      url: "https://www.uv.es/~sto/cv/"
    - name: Debian
      url: "https://people.debian.org/~sto/"
    - name: GitHub
      url: "https://github.com/sto/"
    - name: GitLab
      url: "https://gitlab.com/stalens/"
    - name: Linkedin
      url: "https://www.linkedin.com/in/sergio-talens-oliag/"
    - name: RSS
      url: "index.xml"
  assets:
    disableHLJS: true
    favicon: "/favicon.ico"
    favicon16x16:  "/favicon-16x16.png"
    favicon32x32:  "/favicon-32x32.png"
    apple_touch_icon:  "/apple-touch-icon.png"
    safari_pinned_tab:  "/safari-pinned-tab.svg"
  fuseOpts:
    isCaseSensitive: false
    shouldSort: true
    location: 0
    distance: 1000
    threshold: 0.4
    minMatchCharLength: 0
    keys: ["title", "permalink", "summary", "content"]
markup:
  asciidocExt:
    attributes: {}
    backend: html5s
    extensions: ['asciidoctor-html5s','asciidoctor-diagram']
    failureLevel: fatal
    noHeaderOrFooter: true
    preserveTOC: false
    safeMode: unsafe
    sectionNumbers: false
    trace: false
    verbose: false
    workingFolderCurrent: true
privacy:
  vimeo:
    disabled: false
    simple: true
  twitter:
    disabled: false
    enableDNT: true
    simple: true
  instagram:
    disabled: false
    simple: true
  youtube:
    disabled: false
    privacyEnhanced: true
services:
  instagram:
    disableInlineCSS: true
  twitter:
    disableInlineCSS: true
security:
  exec:
    allow:
      - '^asciidoctor$'
      - '^dart-sass-embedded$'
      - '^go$'
      - '^npx$'
      - '^postcss$'
Some notes about the settings:
  • disableHLJS and assets.disableHLJS are set to true; we plan to use rouge on adoc and the inclusion of the hljs assets adds styles that collide with the ones used by rouge.
  • ShowToc is set to true and the TocOpen setting is set to false to make the ToC appear collapsed initially. My plan was to use the asciidoctor ToC, but after trying it I believe that the theme one looks nice and I don't need to adjust styles, although it has some issues with the html5s processor (the admonition titles use <h6> and they are shown on the ToC, which is weird); to fix it I've copied the layouts/partials/toc.html to my site repository and replaced the range of headings to end at 5 instead of 6 (in fact 5 still seems a lot, but as I don't think I'll use that heading level on the posts it doesn't really matter).
  • params.profileMode values are adjusted, but for now I've left it disabled setting params.profileMode.enabled to false, and I've set the homeInfoParams to show more or less the same content with the latest posts under it (I've added some styles to my custom.css style sheet to center the text and image of the first post to match the look and feel of the profile).
  • On the asciidocExt section I've adjusted the backend to use html5s, I've added the asciidoctor-html5s and asciidoctor-diagram extensions to asciidoctor and adjusted workingFolderCurrent to true to make asciidoctor-diagram work right (haven't tested it yet).

Theme customisations

To write in asciidoctor using the html5s processor I've added some files to the assets/css/extended directory:
  1. As said before, I've added the file assets/css/extended/custom.css to make the homeInfoParams look like the profile page and I've also changed some theme styles a little to make things look better with the html5s output:
    custom.css
    /* Fix first entry alignment to make it look like the profile */
    .first-entry { text-align: center; }
    .first-entry img { display: inline; }
    /**
     * Remove margin for .post-content code and reduce padding to make it look
     * better with the asciidoctor html5s output.
     **/
    .post-content code { margin: auto 0; padding: 4px; }
  2. I've also added the file assets/css/extended/adoc.css with some styles taken from the asciidoctor-default.css, see this blog post about the original file; mine is the same after formatting it with css-beautify and editing it to use variables for the colors to support light and dark themes:
    adoc.css
    /* AsciiDoctor */
    table {
        border-collapse: collapse;
        border-spacing: 0
    }
    .admonitionblock>table {
        border-collapse: separate;
        border: 0;
        background: none;
        width: 100%
    }
    .admonitionblock>table td.icon {
        text-align: center;
        width: 80px
    }
    .admonitionblock>table td.icon img {
        max-width: none
    }
    .admonitionblock>table td.icon .title {
        font-weight: bold;
        font-family: "Open Sans", "DejaVu Sans", sans-serif;
        text-transform: uppercase
    }
    .admonitionblock>table td.content {
        padding-left: 1.125em;
        padding-right: 1.25em;
        border-left: 1px solid #ddddd8;
        color: var(--primary)
    }
    .admonitionblock>table td.content>:last-child>:last-child {
        margin-bottom: 0
    }
    .admonitionblock td.icon [class^="fa icon-"] {
        font-size: 2.5em;
        text-shadow: 1px 1px 2px var(--secondary);
        cursor: default
    }
    .admonitionblock td.icon .icon-note::before {
        content: "\f05a";
        color: var(--icon-note-color)
    }
    .admonitionblock td.icon .icon-tip::before {
        content: "\f0eb";
        color: var(--icon-tip-color)
    }
    .admonitionblock td.icon .icon-warning::before {
        content: "\f071";
        color: var(--icon-warning-color)
    }
    .admonitionblock td.icon .icon-caution::before {
        content: "\f06d";
        color: var(--icon-caution-color)
    }
    .admonitionblock td.icon .icon-important::before {
        content: "\f06a";
        color: var(--icon-important-color)
    }
    .conum[data-value] {
        display: inline-block;
        color: #fff !important;
        background-color: rgba(100, 100, 0, .8);
        -webkit-border-radius: 100px;
        border-radius: 100px;
        text-align: center;
        font-size: .75em;
        width: 1.67em;
        height: 1.67em;
        line-height: 1.67em;
        font-family: "Open Sans", "DejaVu Sans", sans-serif;
        font-style: normal;
        font-weight: bold
    }
    .conum[data-value] * {
        color: #fff !important
    }
    .conum[data-value]+b {
        display: none
    }
    .conum[data-value]::after {
        content: attr(data-value)
    }
    pre .conum[data-value] {
        position: relative;
        top: -.125em
    }
    b.conum * {
        color: inherit !important
    }
    .conum:not([data-value]):empty {
        display: none
    }
  3. The previous file uses variables from a partial copy of the theme-vars.css file that changes the highlighted code background color and adds the color definitions used by the admonitions:
    theme-vars.css
    :root {
        /* Solarized base2 */
        /* --hljs-bg: rgb(238, 232, 213); */
        /* Solarized base3 */
        /* --hljs-bg: rgb(253, 246, 227); */
        /* Solarized base02 */
        --hljs-bg: rgb(7, 54, 66);
        /* Solarized base03 */
        /* --hljs-bg: rgb(0, 43, 54); */
        /* Default asciidoctor theme colors */
        --icon-note-color: #19407c;
        --icon-tip-color: var(--primary);
        --icon-warning-color: #bf6900;
        --icon-caution-color: #bf3400;
        --icon-important-color: #bf0000
    }
    .dark {
        --hljs-bg: rgb(7, 54, 66);
        /* Asciidoctor theme colors with tint for dark background */
        --icon-note-color: #3e7bd7;
        --icon-tip-color: var(--primary);
        --icon-warning-color: #ff8d03;
        --icon-caution-color: #ff7847;
        --icon-important-color: #ff3030
    }
  4. The previous styles use font-awesome, so I've downloaded its resources for version 4.7.0 (the one used by asciidoctor), storing the font-awesome.css in the assets/css/extended dir (that way it is merged with the rest of the .css files) and copying the fonts to the static/assets/fonts/ dir (they will be served directly):
    FA_BASE_URL="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0"
    curl "$FA_BASE_URL/css/font-awesome.css" \
      > assets/css/extended/font-awesome.css
    for f in FontAwesome.otf fontawesome-webfont.eot \
      fontawesome-webfont.svg fontawesome-webfont.ttf \
      fontawesome-webfont.woff fontawesome-webfont.woff2; do
        curl "$FA_BASE_URL/fonts/$f" > "static/assets/fonts/$f"
    done
  5. As already said, the default highlighter is disabled (its styles collide with the ones used by rouge), so we need a css file to do the highlight styling; as rouge provides a way to export its themes, I've created the assets/css/extended/rouge.css file with the thankful_eyes theme:
    rougify style thankful_eyes > assets/css/extended/rouge.css
  6. To support the use of the html5s backend with admonitions I've added a variation of the example found on this blog post to assets/js/adoc-admonitions.js:
    adoc-admonitions.js
    // replace the default admonitions block with a table that uses a format
    // similar to the standard asciidoctor ... as we are using fa-icons here there
    // is no need to add the icons: font entry on the document.
    window.addEventListener('load', function () {
      const admonitions = document.getElementsByClassName('admonition-block')
      for (let i = admonitions.length - 1; i >= 0; i--) {
        const elm = admonitions[i]
        const type = elm.classList[1]
        const title = elm.getElementsByClassName('block-title')[0];
        const label = title.getElementsByClassName('title-label')[0]
          .innerHTML.slice(0, -1);
        elm.removeChild(elm.getElementsByClassName('block-title')[0]);
        const text = elm.innerHTML
        const parent = elm.parentNode
        const tempDiv = document.createElement('div')
        tempDiv.innerHTML = `<div class="admonitionblock ${type}">
        <table>
          <tbody>
            <tr>
              <td class="icon">
                <i class="fa icon-${type}" title="${label}"></i>
              </td>
              <td class="content">
                ${text}
              </td>
            </tr>
          </tbody>
        </table>
      </div>`
        const input = tempDiv.childNodes[0]
        parent.replaceChild(input, elm)
      }
    })
    and enabled its minified use on the layouts/partials/extend_footer.html file, adding the following lines to it:
{{- $admonitions := slice (resources.Get "js/adoc-admonitions.js")
  | resources.Concat "assets/js/adoc-admonitions.js" | minify | fingerprint }}
<script defer crossorigin="anonymous" src="{{ $admonitions.RelPermalink }}"
  integrity="{{ $admonitions.Data.Integrity }}"></script>

Remark42 configuration

To integrate Remark42 with the PaperMod theme I've created the file layouts/partials/comments.html with the following content based on the remark42 documentation, including extra code to sync the dark/light setting with the one set on the site:
comments.html
<div id="remark42"></div>
<script>
  var remark_config = {
    host: {{ .Site.Params.remark42Url }},
    site_id: {{ .Site.Params.remark42SiteID }},
    url: {{ .Permalink }},
    locale: {{ .Site.Language.Lang }}
  };
  (function(c) {
    /* Adjust the theme using the local-storage pref-theme if set */
    if (localStorage.getItem("pref-theme") === "dark") {
      remark_config.theme = "dark";
    } else if (localStorage.getItem("pref-theme") === "light") {
      remark_config.theme = "light";
    }
    /* Add remark42 widget */
    for(var i = 0; i < c.length; i++) {
      var d = document, s = d.createElement('script');
      s.src = remark_config.host + '/web/' + c[i] +'.js';
      s.defer = true;
      (d.head || d.body).appendChild(s);
    }
  })(remark_config.components || ['embed']);
</script>
In development I use it with anonymous comments enabled, but to avoid SPAM the production site uses social logins (for now I've only enabled GitHub & Google; if someone requests additional services I'll check them, but those were the easy ones for me initially). To support theme switching with remark42 I've also added the following inside the layouts/partials/extend_footer.html file:
{{- if (not site.Params.disableThemeToggle) }}
<script>
/* Function to change theme when the toggle button is pressed */
document.getElementById("theme-toggle").addEventListener("click", () => {
  if (typeof window.REMARK42 != "undefined") {
    if (document.body.className.includes('dark')) {
      window.REMARK42.changeTheme('light');
    } else {
      window.REMARK42.changeTheme('dark');
    }
  }
});
</script>
{{- end }}
With this code, if the theme-toggle button is pressed we change the remark42 theme before the PaperMod one (that's only needed here; on page loads the remark42 theme is synced with the main one using the code from the layouts/partials/comments.html shown earlier).

Development setup

To preview the site on my laptop I'm using docker-compose with the following configuration:
docker-compose.yaml
version: "2"
services:
  hugo:
    build:
      context: ./docker/hugo-adoc
      dockerfile: ./Dockerfile
    image: sto/hugo-adoc
    container_name: hugo-adoc-blogops
    restart: always
    volumes:
      - .:/documents
    command: server --bind 0.0.0.0 -D -F
    user: ${APP_UID}:${APP_GID}
  nginx:
    image: nginx:latest
    container_name: nginx-blogops
    restart: always
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "1313:1313"
  remark42:
    build:
      context: ./docker/remark42
      dockerfile: ./Dockerfile
    image: sto/remark42
    container_name: remark42-blogops
    restart: always
    env_file:
      - ./.env
      - ./remark42/env.dev
    volumes:
      - ./remark42/var.dev:/srv/var
To run it properly we have to create the .env file with the current user ID and GID in the variables APP_UID and APP_GID (if we don't do it the files can end up being owned by a user that is not the same as the one running the services):
$ printf "APP_UID=%s\nAPP_GID=%s\n" "$(id -u)" "$(id -g)" > .env
The Dockerfile used to generate the sto/hugo-adoc image is:
Dockerfile
FROM asciidoctor/docker-asciidoctor:latest
RUN gem install --no-document asciidoctor-html5s &&\
 apk update && apk add --no-cache curl libc6-compat &&\
 repo_path="gohugoio/hugo" &&\
 api_url="https://api.github.com/repos/$repo_path/releases/latest" &&\
 download_url="$(\
  curl -sL "$api_url" |\
  sed -n "s/^.*download_url\": \"\\(.*.extended.*Linux-64bit.tar.gz\)\"/\1/p"\
 )" &&\
 curl -sL "$download_url" -o /tmp/hugo.tgz &&\
 tar xf /tmp/hugo.tgz hugo &&\
 install hugo /usr/bin/ &&\
 rm -f hugo /tmp/hugo.tgz &&\
 /usr/bin/hugo version &&\
 apk del curl && rm -rf /var/cache/apk/*
# Expose port for live server
EXPOSE 1313
ENTRYPOINT ["/usr/bin/hugo"]
CMD [""]
If you review it you will see that I'm using the docker-asciidoctor image as the base; the idea is that this image has all I need to work with asciidoctor, and to use hugo I only need to download the binary from their latest release at github (as we are using an image based on alpine we also need to install the libc6-compat package, but once that is done things are working fine for me so far). The image does not launch the server by default because I don't want it to; in fact I use the same docker-compose.yml file to publish the site in production, simply calling the container without the arguments passed on the docker-compose.yml file (see later). When running the containers with docker-compose up (or docker compose up if you have the docker-compose-plugin package installed) we also launch a nginx container and the remark42 service so we can test everything together. The Dockerfile for the remark42 image is the original one with an updated version of the init.sh script:
Dockerfile
FROM umputun/remark42:latest
COPY init.sh /init.sh
The updated init.sh is similar to the original, but allows us to use an APP_GID variable and updates the /etc/group file of the container so the files get the right user and group (with the original script the group is always 1001):
init.sh
#!/sbin/dinit /bin/sh
uid="$(id -u)"
if [ "${uid}" -eq "0" ]; then
  echo "init container"
  # set container's time zone
  cp "/usr/share/zoneinfo/${TIME_ZONE}" /etc/localtime
  echo "${TIME_ZONE}" >/etc/timezone
  echo "set timezone ${TIME_ZONE} ($(date))"
  # set UID & GID for the app
  if [ "${APP_UID}" ] || [ "${APP_GID}" ]; then
    [ "${APP_UID}" ] || APP_UID="1001"
    [ "${APP_GID}" ] || APP_GID="${APP_UID}"
    echo "set custom APP_UID=${APP_UID} & APP_GID=${APP_GID}"
    sed -i "s/^app:x:1001:1001:/app:x:${APP_UID}:${APP_GID}:/" /etc/passwd
    sed -i "s/^app:x:1001:/app:x:${APP_GID}:/" /etc/group
  else
    echo "custom APP_UID and/or APP_GID not defined, using 1001:1001"
  fi
  chown -R app:app /srv /home/app
fi
echo "prepare environment"
# replace "{% REMARK_URL %}" by the content of the REMARK_URL variable
find /srv -regex '.*\.\(html\|js\|mjs\)$' -print \
  -exec sed -i "s|{% REMARK_URL %}|${REMARK_URL}|g" {} \;
if [ -n "${SITE_ID}" ]; then
  # replace "site_id: 'remark'" by SITE_ID
  sed -i "s|'remark'|'${SITE_ID}'|g" /srv/web/*.html
fi
echo "execute \"$*\""
if [ "${uid}" -eq "0" ]; then
  exec su-exec app "$@"
else
  exec "$@"
fi
The environment file used with remark42 for development is quite minimal:
env.dev
TIME_ZONE=Europe/Madrid
REMARK_URL=http://localhost:1313/remark42
SITE=blogops
SECRET=123456
ADMIN_SHARED_ID=sto
AUTH_ANON=true
EMOJI=true
And the nginx/default.conf file used to publish the service locally is simple too:
default.conf
server {
 listen 1313;
 server_name localhost;
 location / {
    proxy_pass http://hugo:1313;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
 }
 location /remark42/ {
    rewrite /remark42/(.*) /$1 break;
    proxy_pass http://remark42:8080/;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
 }
}

Production setup

The VM where I'm publishing the blog runs Debian GNU/Linux and uses binaries from local packages and applications packaged inside containers. To run the containers I'm using docker-ce (I could have used podman instead, but I already had it installed on the machine, so I stayed with it). The binaries used on this project are included in the following packages from the main Debian repository:
  • git to clone & pull the repository,
  • jq to parse json files from shell scripts,
  • json2file-go to save the webhook messages to files,
  • inotify-tools to detect when new files are stored by json2file-go and launch scripts to process them,
  • nginx to publish the site using HTTPS and work as proxy for json2file-go and remark42 (I run it using a container),
  • task-spooler to queue the scripts that update the deployment.
And I'm using docker and docker compose from the Debian packages on the docker repository (a combined installation sketch follows the list):
  • docker-ce to run the containers,
  • docker-compose-plugin to run docker compose (it is a plugin, so no - in the name).
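Putting the package names together, preparing the host amounts to something like this sketch (it assumes the docker apt repository has already been configured, since the last two packages come from it):

$ sudo apt install -y git jq json2file-go inotify-tools nginx task-spooler
$ sudo apt install -y docker-ce docker-compose-plugin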

Repository checkout

To manage the git repository I've created a deploy key, added it to gitea and cloned the project on the /srv/blogops PATH (that route is owned by a regular user that has permissions to run docker, as I said before).
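For reference, the checkout boils down to something like the following sketch; the key path is an example and the ssh clone URL is an assumption (only the https URL of the repository appears later in this post):

$ ssh-keygen -t ed25519 -f ~/.ssh/blogops-deploy -N ''
$ # add ~/.ssh/blogops-deploy.pub as a deploy key to the gitea project, then:
$ git clone gitea@gitea.mixinet.net:mixinet/blogops.git /srv/blogops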

Compiling the site with hugo

To compile the site we are using the docker-compose.yml file seen before. To be able to run it, first we build the container images, and once we have them we launch hugo using docker compose run:
$ cd /srv/blogops
$ git pull
$ docker compose build
$ if [ -d "./public" ]; then rm -rf ./public; fi
$ docker compose run hugo --
The compilation leaves the static HTML on /srv/blogops/public (we remove the directory first because hugo does not clean the destination folder as jekyll does). The deploy script re-generates the site as described and moves the public directory to its final place for publishing.

Running remark42 with docker

On the /srv/blogops/remark42 folder I have the following docker-compose.yml:
docker-compose.yml
version: "2"
services:
  remark42:
    build:
      context: ../docker/remark42
      dockerfile: ./Dockerfile
    image: sto/remark42
    env_file:
      - ../.env
      - ./env.prod
    container_name: remark42
    restart: always
    volumes:
      - ./var.prod:/srv/var
    ports:
      - 127.0.0.1:8042:8080
The ../.env file is loaded to get the APP_UID and APP_GID variables that are used by my version of the init.sh script to adjust file permissions, and the env.prod file contains the rest of the settings for remark42, including the social network tokens (see the remark42 documentation for the available parameters; I don't include my configuration here because some of them are secrets).
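For illustration only, a minimal env.prod could look like the following sketch; the variable names come from the remark42 documentation, and every value shown here is a placeholder rather than my real configuration:

TIME_ZONE=Europe/Madrid
REMARK_URL=https://blogops.mixinet.net/remark42
SITE=blogops
SECRET=a-long-random-string
ADMIN_SHARED_ID=sto
AUTH_GITHUB_CID=placeholder-client-id
AUTH_GITHUB_CSEC=placeholder-client-secret
AUTH_GOOGLE_CID=placeholder-client-id
AUTH_GOOGLE_CSEC=placeholder-client-secret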

Nginx configuration

The nginx configuration for the blogops.mixinet.net site is as simple as:
server {
  listen 443 ssl http2;
  server_name blogops.mixinet.net;
  ssl_certificate /etc/letsencrypt/live/blogops.mixinet.net/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/blogops.mixinet.net/privkey.pem;
  include /etc/letsencrypt/options-ssl-nginx.conf;
  ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
  access_log /var/log/nginx/blogops.mixinet.net-443.access.log;
  error_log  /var/log/nginx/blogops.mixinet.net-443.error.log;
  root /srv/blogops/nginx/public_html;
  location / {
    try_files $uri $uri/ =404;
  }
  include /srv/blogops/nginx/remark42.conf;
}
server {
  listen 80 ;
  listen [::]:80 ;
  server_name blogops.mixinet.net;
  access_log /var/log/nginx/blogops.mixinet.net-80.access.log;
  error_log  /var/log/nginx/blogops.mixinet.net-80.error.log;
  if ($host = blogops.mixinet.net) {
    return 301 https://$host$request_uri;
  }
  return 404;
}
On this configuration the certificates are managed by certbot and the server root directory is /srv/blogops/nginx/public_html and not /srv/blogops/public; the reason for that is that I want to be able to compile without affecting the running site: the deployment script generates the site on /srv/blogops/public and, if all works well, we rename folders to do the switch, making the change feel almost atomic.
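Ignoring logging and error handling, the swap that the deployment script (shown in full later) performs at the end of a successful build is essentially:

TS="$(date +%Y%m%d-%H%M%S)"
mv "/srv/blogops/nginx/public_html" "/srv/blogops/nginx/public_html-$TS"
mv "/srv/blogops/public" "/srv/blogops/nginx/public_html"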

json2file-go configuration

As I have a working WireGuard VPN between the machine running gitea at my home and the VM where the blog is served, I'm going to configure json2file-go to listen for connections on a high port using a self-signed certificate and listening on IP addresses only reachable through the VPN. To do it we create a systemd socket to run json2file-go and adjust its configuration to listen on a private IP (we use the FreeBind option on its definition to be able to launch the service even when the IP is not available, that is, when the VPN is down). The following script can be used to set up the json2file-go configuration:
setup-json2file.sh
#!/bin/sh
set -e
# ---------
# VARIABLES
# ---------
BASE_DIR="/srv/blogops/webhook"
J2F_DIR="$BASE_DIR/json2file"
TLS_DIR="$BASE_DIR/tls"
J2F_SERVICE_NAME="json2file-go"
J2F_SERVICE_DIR="/etc/systemd/system/json2file-go.service.d"
J2F_SERVICE_OVERRIDE="$J2F_SERVICE_DIR/override.conf"
J2F_SOCKET_DIR="/etc/systemd/system/json2file-go.socket.d"
J2F_SOCKET_OVERRIDE="$J2F_SOCKET_DIR/override.conf"
J2F_BASEDIR_FILE="/etc/json2file-go/basedir"
J2F_DIRLIST_FILE="/etc/json2file-go/dirlist"
J2F_CRT_FILE="/etc/json2file-go/certfile"
J2F_KEY_FILE="/etc/json2file-go/keyfile"
J2F_CRT_PATH="$TLS_DIR/crt.pem"
J2F_KEY_PATH="$TLS_DIR/key.pem"
# ----
# MAIN
# ----
# Install packages used with json2file for the blogops site
sudo apt update
sudo apt install -y json2file-go uuid
if [ -z "$(type mkcert)" ]; then
  sudo apt install -y mkcert
fi
sudo apt clean
# Configuration file values
J2F_USER="$(id -u)"
J2F_GROUP="$(id -g)"
J2F_DIRLIST="blogops:$(uuid)"
J2F_LISTEN_STREAM="172.31.31.1:4443"
# Configure json2file
[ -d "$J2F_DIR" ]   mkdir "$J2F_DIR"
sudo sh -c "echo '$J2F_DIR' >'$J2F_BASEDIR_FILE'"
[ -d "$TLS_DIR" ]   mkdir "$TLS_DIR"
if [ ! -f "$J2F_CRT_PATH" ]   [ ! -f "$J2F_KEY_PATH" ]; then
  mkcert -cert-file "$J2F_CRT_PATH" -key-file "$J2F_KEY_PATH" "$(hostname -f)"
fi
sudo sh -c "echo '$J2F_CRT_PATH' >'$J2F_CRT_FILE'"
sudo sh -c "echo '$J2F_KEY_PATH' >'$J2F_KEY_FILE'"
sudo sh -c "cat >'$J2F_DIRLIST_FILE'" <<EOF
$(echo "$J2F_DIRLIST"   tr ';' '\n')
EOF
# Service override
[ -d "$J2F_SERVICE_DIR" ]   sudo mkdir "$J2F_SERVICE_DIR"
sudo sh -c "cat >'$J2F_SERVICE_OVERRIDE'" <<EOF
[Service]
User=$J2F_USER
Group=$J2F_GROUP
EOF
# Socket override
[ -d "$J2F_SOCKET_DIR" ]   sudo mkdir "$J2F_SOCKET_DIR"
sudo sh -c "cat >'$J2F_SOCKET_OVERRIDE'" <<EOF
[Socket]
# Set FreeBind to listen on missing addresses (the VPN can be down sometimes)
FreeBind=true
# Set ListenStream to nothing to clear its value and add the new value later
ListenStream=
ListenStream=$J2F_LISTEN_STREAM
EOF
# Restart and enable service
sudo systemctl daemon-reload
sudo systemctl stop "$J2F_SERVICE_NAME"
sudo systemctl start "$J2F_SERVICE_NAME"
sudo systemctl enable "$J2F_SERVICE_NAME"
# ----
# vim: ts=2:sw=2:et:ai:sts=2
Warning: The script uses mkcert to create the temporary certificates; to install the package on bullseye, the backports repository must be available.
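Enabling backports means something like the following sketch, assuming the standard bullseye-backports archive; once the repository is known to apt, the apt install mkcert call in the script can find the package:

$ echo 'deb http://deb.debian.org/debian bullseye-backports main' \
    | sudo tee /etc/apt/sources.list.d/backports.list
$ sudo apt update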

Gitea configuration

To make gitea use our json2file-go server we go to the project and enter into the hooks/gitea/new page; once there we create a new webhook of type gitea, set the target URL to https://172.31.31.1:4443/blogops and on the secret field we put the token generated with uuid by the setup script:
sed -n -e 's/blogops://p' /etc/json2file-go/dirlist
The rest of the settings can be left as they are:
  • Trigger on: Push events
  • Branch filter: *
Warning: We are using an internal IP and a self-signed certificate; that means that we have to review that the webhook section of the app.ini of our gitea server allows us to call the IP and skip the TLS verification (you can see the available options on the gitea documentation). The [webhook] section of my server looks like this:
[webhook]
ALLOWED_HOST_LIST=private
SKIP_TLS_VERIFY=true
Once we have the webhook configured we can try it, and if it works our json2file server will store the file in the /srv/blogops/webhook/json2file/blogops/ folder.

The json2file spooler script

With the previous configuration our system is ready to receive webhook calls from gitea and store the messages on files, but we have to do something to process those files once they are saved in our machine. An option could be to use a cronjob to look for new files, but we can do better on Linux using inotify: we will use the inotifywait command from inotify-tools to watch the json2file output directory and execute a script each time a new file is moved inside it or closed after writing (IN_CLOSE_WRITE and IN_MOVED_TO events). To avoid concurrency problems we are going to use task-spooler to launch the scripts that process the webhooks using a queue of length 1, so they are executed one by one in a FIFO queue. The spooler script is this:
blogops-spooler.sh
#!/bin/sh
set -e
# ---------
# VARIABLES
# ---------
BASE_DIR="/srv/blogops/webhook"
BIN_DIR="$BASE_DIR/bin"
TSP_DIR="$BASE_DIR/tsp"
WEBHOOK_COMMAND="$BIN_DIR/blogops-webhook.sh"
# ---------
# FUNCTIONS
# ---------
queue_job() {
  echo "Queuing job to process file '$1'"
  TMPDIR="$TSP_DIR" TS_SLOTS="1" TS_MAXFINISHED="10" \
    tsp -n "$WEBHOOK_COMMAND" "$1"
}
# ----
# MAIN
# ----
INPUT_DIR="$1"
if [ ! -d "$INPUT_DIR" ]; then
  echo "Input directory '$INPUT_DIR' does not exist, aborting!"
  exit 1
fi
[ -d "$TSP_DIR" ]   mkdir "$TSP_DIR"
echo "Processing existing files under '$INPUT_DIR'"
find "$INPUT_DIR" -type f   sort   while read -r _filename; do
  queue_job "$_filename"
done
# Use inotifywatch to process new files
echo "Watching for new files under '$INPUT_DIR'"
inotifywait -q -m -e close_write,moved_to --format "%w%f" -r "$INPUT_DIR" |
  while read -r _filename; do
    queue_job "$_filename"
  done
# ----
# vim: ts=2:sw=2:et:ai:sts=2
To run it as a daemon we install it as a systemd service using the following script:
setup-spooler.sh
#!/bin/sh
set -e
# ---------
# VARIABLES
# ---------
BASE_DIR="/srv/blogops/webhook"
BIN_DIR="$BASE_DIR/bin"
J2F_DIR="$BASE_DIR/json2file"
SPOOLER_COMMAND="$BIN_DIR/blogops-spooler.sh '$J2F_DIR'"
SPOOLER_SERVICE_NAME="blogops-j2f-spooler"
SPOOLER_SERVICE_FILE="/etc/systemd/system/$SPOOLER_SERVICE_NAME.service"
# Configuration file values
J2F_USER="$(id -u)"
J2F_GROUP="$(id -g)"
# ----
# MAIN
# ----
# Install packages used with the webhook processor
sudo apt update
sudo apt install -y inotify-tools jq task-spooler
sudo apt clean
# Configure process service
sudo sh -c "cat > $SPOOLER_SERVICE_FILE" <<EOF
[Install]
WantedBy=multi-user.target
[Unit]
Description=json2file processor for $J2F_USER
After=docker.service
[Service]
Type=simple
User=$J2F_USER
Group=$J2F_GROUP
ExecStart=$SPOOLER_COMMAND
EOF
# Restart and enable service
sudo systemctl daemon-reload
sudo systemctl stop "$SPOOLER_SERVICE_NAME"   true
sudo systemctl start "$SPOOLER_SERVICE_NAME"
sudo systemctl enable "$SPOOLER_SERVICE_NAME"
# ----
# vim: ts=2:sw=2:et:ai:sts=2

The gitea webhook processor

Finally, the script that processes the JSON files does the following:
  1. First, it checks if the repository and branch are right,
  2. Then, it fetches and checks out the commit referenced on the JSON file,
  3. Once the files are updated, compiles the site using hugo with docker compose,
  4. If the compilation succeeds the script renames directories to swap the old version of the site by the new one.
If there is a failure the script aborts, but before doing it, or if the swap succeeded, the system sends an email to the configured address and/or the user that pushed updates to the repository, with a log of what happened. The current script is this one:
blogops-webhook.sh
#!/bin/sh
set -e
# ---------
# VARIABLES
# ---------
# Values
REPO_REF="refs/heads/main"
REPO_CLONE_URL="https://gitea.mixinet.net/mixinet/blogops.git"
MAIL_PREFIX="[BLOGOPS-WEBHOOK] "
# Address that gets all messages, leave it empty if not wanted
MAIL_TO_ADDR="blogops@mixinet.net"
# If the following variable is set to 'true' the pusher gets mail on failures
MAIL_ERRFILE="false"
# If the following variable is set to 'true' the pusher gets mail on success
MAIL_LOGFILE="false"
# gitea's conf/app.ini value of NO_REPLY_ADDRESS, it is used for email domains
# when the KeepEmailPrivate option is enabled for a user
NO_REPLY_ADDRESS="noreply.example.org"
# Directories
BASE_DIR="/srv/blogops"
PUBLIC_DIR="$BASE_DIR/public"
NGINX_BASE_DIR="$BASE_DIR/nginx"
PUBLIC_HTML_DIR="$NGINX_BASE_DIR/public_html"
WEBHOOK_BASE_DIR="$BASE_DIR/webhook"
WEBHOOK_SPOOL_DIR="$WEBHOOK_BASE_DIR/spool"
WEBHOOK_ACCEPTED="$WEBHOOK_SPOOL_DIR/accepted"
WEBHOOK_DEPLOYED="$WEBHOOK_SPOOL_DIR/deployed"
WEBHOOK_REJECTED="$WEBHOOK_SPOOL_DIR/rejected"
WEBHOOK_TROUBLED="$WEBHOOK_SPOOL_DIR/troubled"
WEBHOOK_LOG_DIR="$WEBHOOK_SPOOL_DIR/log"
# Files
TODAY="$(date +%Y%m%d)"
OUTPUT_BASENAME="$(date +%Y%m%d-%H%M%S.%N)"
WEBHOOK_LOGFILE_PATH="$WEBHOOK_LOG_DIR/$OUTPUT_BASENAME.log"
WEBHOOK_ACCEPTED_JSON="$WEBHOOK_ACCEPTED/$OUTPUT_BASENAME.json"
WEBHOOK_ACCEPTED_LOGF="$WEBHOOK_ACCEPTED/$OUTPUT_BASENAME.log"
WEBHOOK_REJECTED_TODAY="$WEBHOOK_REJECTED/$TODAY"
WEBHOOK_REJECTED_JSON="$WEBHOOK_REJECTED_TODAY/$OUTPUT_BASENAME.json"
WEBHOOK_REJECTED_LOGF="$WEBHOOK_REJECTED_TODAY/$OUTPUT_BASENAME.log"
WEBHOOK_DEPLOYED_TODAY="$WEBHOOK_DEPLOYED/$TODAY"
WEBHOOK_DEPLOYED_JSON="$WEBHOOK_DEPLOYED_TODAY/$OUTPUT_BASENAME.json"
WEBHOOK_DEPLOYED_LOGF="$WEBHOOK_DEPLOYED_TODAY/$OUTPUT_BASENAME.log"
WEBHOOK_TROUBLED_TODAY="$WEBHOOK_TROUBLED/$TODAY"
WEBHOOK_TROUBLED_JSON="$WEBHOOK_TROUBLED_TODAY/$OUTPUT_BASENAME.json"
WEBHOOK_TROUBLED_LOGF="$WEBHOOK_TROUBLED_TODAY/$OUTPUT_BASENAME.log"
# Query to get variables from a gitea webhook json
ENV_VARS_QUERY="$(
  printf "%s" \
    '(.           | @sh "gt_ref=\(.ref);"),' \
    '(.           | @sh "gt_after=\(.after);"),' \
    '(.repository | @sh "gt_repo_clone_url=\(.clone_url);"),' \
    '(.repository | @sh "gt_repo_name=\(.name);"),' \
    '(.pusher     | @sh "gt_pusher_full_name=\(.full_name);"),' \
    '(.pusher     | @sh "gt_pusher_email=\(.email);")'
)"
# ---------
# Functions
# ---------
webhook_log() {
  echo "$(date -R) $*" >>"$WEBHOOK_LOGFILE_PATH"
}
webhook_check_directories() {
  for _d in "$WEBHOOK_SPOOL_DIR" "$WEBHOOK_ACCEPTED" "$WEBHOOK_DEPLOYED" \
    "$WEBHOOK_REJECTED" "$WEBHOOK_TROUBLED" "$WEBHOOK_LOG_DIR"; do
    [ -d "$_d" ] || mkdir "$_d"
  done
}
webhook_clean_directories() {
  # Try to remove empty dirs
  for _d in "$WEBHOOK_ACCEPTED" "$WEBHOOK_DEPLOYED" "$WEBHOOK_REJECTED" \
    "$WEBHOOK_TROUBLED" "$WEBHOOK_LOG_DIR" "$WEBHOOK_SPOOL_DIR"; do
    if [ -d "$_d" ]; then
      rmdir "$_d" 2>/dev/null || true
    fi
  done
}
webhook_accept() {
  webhook_log "Accepted: $*"
  mv "$WEBHOOK_JSON_INPUT_FILE" "$WEBHOOK_ACCEPTED_JSON"
  mv "$WEBHOOK_LOGFILE_PATH" "$WEBHOOK_ACCEPTED_LOGF"
  WEBHOOK_LOGFILE_PATH="$WEBHOOK_ACCEPTED_LOGF"
}
webhook_reject() {
  [ -d "$WEBHOOK_REJECTED_TODAY" ] || mkdir "$WEBHOOK_REJECTED_TODAY"
  webhook_log "Rejected: $*"
  if [ -f "$WEBHOOK_JSON_INPUT_FILE" ]; then
    mv "$WEBHOOK_JSON_INPUT_FILE" "$WEBHOOK_REJECTED_JSON"
  fi
  mv "$WEBHOOK_LOGFILE_PATH" "$WEBHOOK_REJECTED_LOGF"
  exit 0
}
webhook_deployed() {
  [ -d "$WEBHOOK_DEPLOYED_TODAY" ] || mkdir "$WEBHOOK_DEPLOYED_TODAY"
  webhook_log "Deployed: $*"
  mv "$WEBHOOK_ACCEPTED_JSON" "$WEBHOOK_DEPLOYED_JSON"
  mv "$WEBHOOK_ACCEPTED_LOGF" "$WEBHOOK_DEPLOYED_LOGF"
  WEBHOOK_LOGFILE_PATH="$WEBHOOK_DEPLOYED_LOGF"
}
webhook_troubled() {
  [ -d "$WEBHOOK_TROUBLED_TODAY" ] || mkdir "$WEBHOOK_TROUBLED_TODAY"
  webhook_log "Troubled: $*"
  mv "$WEBHOOK_ACCEPTED_JSON" "$WEBHOOK_TROUBLED_JSON"
  mv "$WEBHOOK_ACCEPTED_LOGF" "$WEBHOOK_TROUBLED_LOGF"
  WEBHOOK_LOGFILE_PATH="$WEBHOOK_TROUBLED_LOGF"
}
print_mailto() {
  _addr="$1"
  _user_email=""
  # Add the pusher email address unless it is from the domain NO_REPLY_ADDRESS,
  # which should match the value of that variable on the gitea 'app.ini' (it
  # is the domain used for emails when the user hides it).
  # shellcheck disable=SC2154
  if [ -n "${gt_pusher_email##*@"${NO_REPLY_ADDRESS}"}" ] &&
    [ -z "${gt_pusher_email##*@*}" ]; then
    _user_email="\"$gt_pusher_full_name <$gt_pusher_email>\""
  fi
  if [ "$_addr" ] && [ "$_user_email" ]; then
    echo "$_addr,$_user_email"
  elif [ "$_user_email" ]; then
    echo "$_user_email"
  elif [ "$_addr" ]; then
    echo "$_addr"
  fi
}
mail_success() {
  to_addr="$MAIL_TO_ADDR"
  if [ "$MAIL_LOGFILE" = "true" ]; then
    to_addr="$(print_mailto "$to_addr")"
  fi
  if [ "$to_addr" ]; then
    # shellcheck disable=SC2154
    subject="OK - $gt_repo_name updated to commit '$gt_after'"
    mail -s "${MAIL_PREFIX}${subject}" "$to_addr" \
      <"$WEBHOOK_LOGFILE_PATH"
  fi
}
mail_failure() {
  to_addr="$MAIL_TO_ADDR"
  if [ "$MAIL_ERRFILE" = true ]; then
    to_addr="$(print_mailto "$to_addr")"
  fi
  if [ "$to_addr" ]; then
    # shellcheck disable=SC2154
    subject="KO - $gt_repo_name update FAILED for commit '$gt_after'"
    mail -s "${MAIL_PREFIX}${subject}" "$to_addr" \
      <"$WEBHOOK_LOGFILE_PATH"
  fi
}
# ----
# MAIN
# ----
# Check directories
webhook_check_directories
# Go to the base directory
cd "$BASE_DIR"
# Check if the file exists
WEBHOOK_JSON_INPUT_FILE="$1"
if [ ! -f "$WEBHOOK_JSON_INPUT_FILE" ]; then
  webhook_reject "Input arg '$1' is not a file, aborting"
fi
# Parse the file
webhook_log "Processing file '$WEBHOOK_JSON_INPUT_FILE'"
eval "$(jq -r "$ENV_VARS_QUERY" "$WEBHOOK_JSON_INPUT_FILE")"
# Check that the repository clone url is right
# shellcheck disable=SC2154
if [ "$gt_repo_clone_url" != "$REPO_CLONE_URL" ]; then
  webhook_reject "Wrong repository: '$gt_clone_url'"
fi
# Check that the branch is the right one
# shellcheck disable=SC2154
if [ "$gt_ref" != "$REPO_REF" ]; then
  webhook_reject "Wrong repository ref: '$gt_ref'"
fi
# Accept the file
# shellcheck disable=SC2154
webhook_accept "Processing '$gt_repo_name'"
# Update the checkout
ret="0"
git fetch >>"$WEBHOOK_LOGFILE_PATH" 2>&1 || ret="$?"
if [ "$ret" -ne "0" ]; then
  webhook_troubled "Repository fetch failed"
  mail_failure
fi
# shellcheck disable=SC2154
git checkout "$gt_after" >>"$WEBHOOK_LOGFILE_PATH" 2>&1   ret="$?"
if [ "$ret" -ne "0" ]; then
  webhook_troubled "Repository checkout failed"
  mail_failure
fi
# Remove the build dir if present
if [ -d "$PUBLIC_DIR" ]; then
  rm -rf "$PUBLIC_DIR"
fi
# Build site
docker compose run hugo -- >>"$WEBHOOK_LOGFILE_PATH" 2>&1 || ret="$?"
# go back to the main branch
git switch main && git pull
# Fail if public dir was missing
if [ "$ret" -ne "0" ]   [ ! -d "$PUBLIC_DIR" ]; then
  webhook_troubled "Site build failed"
  mail_failure
fi
# Remove old public_html copies
webhook_log 'Removing old site versions, if present'
find $NGINX_BASE_DIR -mindepth 1 -maxdepth 1 -name 'public_html-*' -type d \
  -exec rm -rf {} \; >>"$WEBHOOK_LOGFILE_PATH" 2>&1 || ret="$?"
if [ "$ret" -ne "0" ]; then
  webhook_troubled "Removal of old site versions failed"
  mail_failure
fi
# Switch site directory
TS="$(date +%Y%m%d-%H%M%S)"
if [ -d "$PUBLIC_HTML_DIR" ]; then
  webhook_log "Moving '$PUBLIC_HTML_DIR' to '$PUBLIC_HTML_DIR-$TS'"
  mv "$PUBLIC_HTML_DIR" "$PUBLIC_HTML_DIR-$TS" >>"$WEBHOOK_LOGFILE_PATH" 2>&1  
    ret="$?"
fi
if [ "$ret" -eq "0" ]; then
  webhook_log "Moving '$PUBLIC_DIR' to '$PUBLIC_HTML_DIR'"
  mv "$PUBLIC_DIR" "$PUBLIC_HTML_DIR" >>"$WEBHOOK_LOGFILE_PATH" 2>&1  
    ret="$?"
fi
if [ "$ret" -ne "0" ]; then
  webhook_troubled "Site switch failed"
  mail_failure
else
  webhook_deployed "Site deployed successfully"
  mail_success
fi
# ----
# vim: ts=2:sw=2:et:ai:sts=2

19 May 2022

Ulrike Uhlig: How do kids conceive the internet? - part 2

I promised a follow-up to my post about interviews about how children conceptualize the internet. Here it is. (Maybe not the last one!)

The internet, it's that thing that acts up all the time, right?

As said in my first post, I abandoned the idea of interviewing children younger than 9 years because it seems they are not necessarily aware that they are using the internet. But it turns out that some have heard about the internet. My friend Anna, who has 9 younger siblings, tried to recruit some of her brothers and sisters for an interview with me. At the dinner table, this turned into a discussion and she sent me an incredibly funny video where two of her brothers and sisters, aged 5 and 6, discuss the internet with her. I won't share the video for privacy reasons; besides, the kids speak in the wondrous dialect of Vorarlberg, a region in western Austria, close to the border with Liechtenstein. Here's a transcription of the dinner table discussion:
  • Anna: What is the internet?
  • both children: (shouting as if it was a game of who gets it first) photo! mobile! device! camera!
  • Anna: But one can have a camera without the internet
  • M.: Internet is the mobile phone charger! Mobile phone full!
  • J.: Internet is internet is...
  • M.: I know! Internet is where you can charge something, the mobile phone and...
  • Anna: You mean electricity?
  • M.: Yeah, that is the internet, electricity!
  • Anna: (laughs) Yes, the internet works a bit similarly, true.
  • J.: It's the electricity of the house!
  • Anna: The electricity of the house...
(everyone is talking at the same time now.)
  • Anna: And what's WiFi?
  • M.: WiFi, it's the TV!
  • Anna (laughs)
  • M.: WiFi is there so it doesn't act up!
  • Anna (laughs harder)
  • J. (repeats what M. just said): WiFi is there so it doesn't act up!
  • Anna: So that what doesn't act up?
  • M.: (moves her finger wildly drawing a small circle in the air) So that it doesn't spin!
  • Anna: Ah?
  • M.: When one wants to watch something on Youtube, well then, that the thing doesn't spin like that!
  • Anna: Ahhh! So when you use Youtube, you need the internet, right?
  • J.: Yes, so that one can watch things.
I really like how the kids associate the internet with a thing that works all the time, except for when it doesn't work. Then they notice: the internet is acting up! Probably, when that happens, parents or older siblings say "the internet is acting up" or "let me check why the internet acts up again", and maybe they get up from the sofa and switch the home router on and off again, which creates this association with electricity. (Just for the sake of clarity for fellow multilingual readers: the kids used the German word "spinnen", which I translated to "acting up". In French that would be "déconner".)

WiFi for everyone!

I interviewed another of Anna's siblings, a 10 year old boy. He told me that he does not really use the internet by himself yet, and does not own any internet-capable device. He watches when older family members look up stuff on Google, or put on a video on Youtube, Netflix, or Amazon; he knew all these brand names though. In the living room, there's Alexa, he told me, and he uses the internet by asking Alexa to play music.
Then I say: "Alexa, play this song!"
Interestingly, he knew that, in order to listen to a CD, the internet was not needed. When asked what a drawing explaining the internet would look like, he drew a scheme of the living room at home, with the TV, Alexa, and some kind of WiFi dongle, maybe a repeater. (Unfortunately I did not manage to get his drawing.) If he could ask a wise and friendly dragon one thing about the internet that he always wanted to know, he would ask: "How much internet can one have, and what are all the things one can do with the internet?" If he could change the internet for the better for everyone, he would build a gigantic building which would provide the entire world with WiFi.

Cut out the stupid stuff from the internet

His slightly older sister does own a laptop and a smartphone. She uses the internet to watch movies or series, to talk with her friends, or to listen to music. When asked how she would explain the internet to an alien, she said that
one can do a lot of things on the internet, but on the internet there can be stupid things, but also good things, one can learn stuff on the internet, for example how to do crochet.
Most importantly, she noticed that
one needs the internet nowadays.
Her drawing shows how she uses the internet: on the left, a smartphone for calls with WhatsApp; in the middle, a TV for watching movies; on the right, a laptop with lots of open windows on the screen. She would ask the dragon one thing she always wanted to know about the internet:
What is the internet? How does it work at all? How does it function?
What she would change has to do with her earlier remark about stupid things:
I would make it so that there are less stupid things. It would be good to use the internet for better things, but not for useless things that one doesn't actually need.
When I asked her what she meant by "stupid things", she replied:
Useless videos where one talks about nonsense. And one can also google stupid things, for example "how long will I be alive?" and stuff like that.

Patterns

From the interviews I have made until now, there seems to be a cut between the age where kids don't own a device and use the internet to watch movies or series or to listen to music, and the age where they start owning a device: then they start talking to their friends and creating accounts on social media. This seems to happen roughly at ages 9-10. I'm still surprised at the amount of ideas that kids have when asked what they would change on the internet if they could. I'm sure there's more if one goes looking for it.

Thanks

Thanks to my friends who made all these interviews possible, either by allowing me to meet their children or their younger siblings: Anna, Christa, Aline, Cindy, and Martina.

29 April 2022

Russ Allbery: Review: Interesting Times

Review: Interesting Times, by Terry Pratchett
Series: Discworld #17
Publisher: Harper
Copyright: 1994
Printing: February 2014
ISBN: 0-06-227629-8
Format: Mass market
Pages: 399
Interesting Times is the seventeenth Discworld novel and certainly not the place to start. At the least, you will probably want to read The Colour of Magic and The Light Fantastic before this book, since it's a sequel to those (although Rincewind has had some intervening adventures).

Lord Vetinari has received a message from the Counterweight Continent, the first in ten years, cryptically demanding the Great Wizzard be sent immediately. The Agatean Empire is one of the most powerful states on the Disc. Thankfully for everyone else, it normally suits its rulers to believe that the lands outside their walls are inhabited only by ghosts. No one is inclined to try to change their minds or otherwise draw their attention. Accordingly, the Great Wizard must be sent, a task that Vetinari efficiently delegates to the Archchancellor. There is only the small matter of determining who the Great Wizzard is, and why it was spelled with two z's. Discworld readers with a better memory than I will recall Rincewind's hat. Why the Counterweight Continent would be demanding a wizard notorious for his near-total inability to perform magic is a puzzle for other people.

Rincewind is promptly located by a magical computer, and nearly as promptly transported across the Disc, swapping him for an unnecessarily exciting object of roughly equivalent mass and hurling him into an unexpected rescue of Cohen the Barbarian. Rincewind predictably reacts by running away, although not fast or far enough to keep him from being entangled in a glorious popular uprising. Or, well, something that has aspirations of being glorious, and popular, and an uprising.

I hate to say this, because Pratchett is an ethically thoughtful writer to whom I am willing to give the benefit of many doubts, but this book was kind of racist. The Agatean Empire is modeled after China, and the Rincewind books tend to be the broadest and most obvious parodies, so that was already a recipe for some trouble. Some of the social parody is not too objectionable, albeit not my thing. I find ethnic stereotypes and making fun of funny-sounding names in other languages (like a city named Hunghung) to be in poor taste, but Pratchett makes fun of everyone's names and cultures rather equally. (Also, I admit that some of the water buffalo jokes, despite the stereotypes, were pretty good.) If it had stopped there, it would have prompted some eye-rolling but not much comment.

Unfortunately, a significant portion of the plot depends on the idea that the population of the Agatean Empire has been so brainwashed into obedience that they have a hard time even imagining resistance, and even their revolutionaries are so polite that the best they can manage for slogans are things like "Timely Demise to All Enemies!" What they need are a bunch of outsiders, such as Rincewind or Cohen and his gang. More details would be spoilers, but there are several deliberate uses of Ankh-Morpork as a revolutionary inspiration and a great deal of narrative hand-wringing over how awful it is to so completely convince people they are slaves that you don't need chains.

There is a depressingly tedious tendency of western writers, even otherwise thoughtful and well-meaning ones like Pratchett, to adopt a simplistic ranking of political systems on a crude measure of freedom. That analysis immediately encounters the problem that lots of people who live within systems that rate poorly on this one-dimensional scale seem inadequately upset about circumstances that are "obviously" horrific oppression.
This should raise questions about the validity of the assumptions, but those assumptions are so unquestionable that the writer instead decides the people who are insufficiently upset about their lack of freedom must be defective. The more racist writers attribute that defectiveness to racial characteristics. The less racist writers, like Pratchett, attribute that defectiveness to brainwashing and systemic evil, which is not quite as bad as overt racism but still rests on a foundation of smug cultural superiority. Krister Stendahl, a bishop of the Church of Sweden, coined three famous rules for understanding other religions:
  1. When you are trying to understand another religion, you should ask the adherents of that religion and not its enemies.
  2. Don't compare your best to their worst.
  3. Leave room for "holy envy."
This is excellent advice that should also be applied to politics. Most systems exist for some reason. The differences from your preferred system are easy to see, particularly those that strike you as horrible. But often there are countervailing advantages that are less obvious, and those are more psychologically difficult to understand and objectively analyze. You might find they have something that you wish your system had, which causes discomfort if you're convinced you have the best political system in the world, or are making yourself feel better about the abuses of your local politics by assuring yourself that at least you're better than those people.

I was particularly irritated to see this sort of simplistic stereotyping in Discworld given that Ankh-Morpork, the setting of most of the Discworld novels, is an authoritarian dictatorship. Vetinari quite capably maintains his hold on power, and yet this is not taken as a sign that the city's inhabitants have been brainwashed into considering themselves slaves. Instead, he's shown as adept at maintaining the stability of a precarious system with a lot of competing forces and a high potential for destructive chaos. Vetinari is an awful person, but he may be better than anyone who would replace him. Hmm. This sort of complexity is permitted in the "local" city, but as soon as we end up in an analog of China, the rulers are evil, the system lacks any justification, and the peasants only don't revolt because they've been trained to believe they can't. Gah.

I was muttering about this all the way through Interesting Times, which is a shame because, outside of the ham-handed political plot, it has some great Pratchett moments. Rincewind's approach to any and all danger is a running (sorry) gag that keeps working, and Cohen and his gang of absurdly competent decrepit barbarians are both funnier here than they have been in any previous book and the rare highly-positive portrayal of old people in fantasy adventures who are not wizards or crones. Pretty Butterfly is a great character who deserved to be in a better plot. And I loved the trouble that Rincewind had with the Agatean tonal language, which is an excuse for Pratchett to write dialog full of frustrated non-sequiturs when Rincewind mispronounces a word.

I do have to grumble about the Luggage, though. From a world-building perspective its subplot makes sense, but the Luggage was always the best character in the Rincewind stories, and the way it lost all of its specialness here was oddly sad and depressing. Pratchett also failed to convince me of the drastic retcon of The Colour of Magic and The Light Fantastic that he does here (and which I can't talk about in detail due to spoilers), in part because it's entangled in the orientalism of the plot.

I'm not sure Pratchett could write a bad book, and I still enjoyed reading Interesting Times, but I don't think he gave the politics his normal care, attention, and thoughtful humanism. I hope later books in this part of the Disc add more nuance, and are less confident and judgmental. I can't really recommend this one, even though it has some merits. Also, just for the record, "may you live in interesting times" is not a Chinese curse. It's an English saying that likely was attributed to China to make it sound exotic, which is the sort of landmine that good-natured parody of other people's cultures needs to be wary of.

Followed in publication order by Maskerade, and in Rincewind's personal timeline by The Last Continent.

Rating: 6 out of 10

23 April 2022

Andrej Shadura: To England by train (part 2)

My attempt to travel to the UK by train last year didn't go quite as well as I expected. As I mentioned in that blog post, the NightJet to Brussels was cancelled, forcing me to fly instead. This disappointed me so much that I actually unpublished the blog post minutes after it was originally put online. The timing was nearly perfect: I type make publish and I get an email from ÖBB saying they don't know if my train is going to run. Of course it didn't, as Deutsche Bahn workers went ahead with their strike. The blog post sat in the drafts for more than half a year until yesterday, when I finally updated and published it.

The reason I have finally published it is that I'm going to the UK by train once again. Now, unless railways decide to hold a strike again, fully by train both ways. Very expensive, especially compared to the price of Ryanair flights to my destination. Unfortunately, even though Eurostar added more daily services, they're still not cheap, especially on short notice. This seems to apply to ÖBB's NightJet even more: I tried many routes between Vienna and London, and the cheapest still seemed to be the connection through Brussels.

While researching the prices of the tickets, it seemed all booking systems had decided to stop co-operating. The Trainline refused to let me look up any trains at all, even with all tracking and advertisement allowed, SNCF kept showing me an overly generic error ("Sorry, an error has occurred."), while the GWR booking system kept crashing with a 500 Internal Server Error for about two hours.
Error messages at the websites of Trainline, GWR and SNCF: all three kept crashing
Eventually, having spent a lot of time and money, I've booked my trains to, within and back from England. This time, Cambridge is among the destinations.
The map of the train route from Bratislava to Cambridge through Brussels and London: the complete route
Date   Station             Arrival   Departure   Train
26.4   Bratislava hl.st.             18:37       REX 2529
       Wien Hbf            19:44     20:13       NJ 40490
27.4   Bruxelles-Midi      9:54      15:56       EST 9145
       London St Pancras   17:01     17:43       ThamesLink
       Cambridge           18:43
I'm not sure about it yet, but I may end up taking an earlier train from Bratislava just to ensure there's enough time for the change in Vienna; similarly, I'm not sure how much time I will be spending at St Pancras, so I may take one of the later trains. P.S. The maps in this and other posts were created using uMap; the map data come from OpenStreetMap. The train route visualisation was generated with help of signal.eu.org.

8 April 2022

Jacob Adams: The Unexpected Importance of the Trailing Slash

For many of us using Unix-derived systems today, it is taken for granted that /some/path and /some/path/ are the same. Most shells will even add a trailing slash for you when you press the Tab key after the name of a directory or a symbolic link to one. However, many programs treat these two paths as subtly different in certain cases, which I outline below, as all three have tripped me up in various ways[1].

POSIX and Coreutils

Perhaps the trickiest use of the trailing slash in a distinguishing way is in POSIX[2], which states:
When the final component of a pathname is a symbolic link, the standard requires that a trailing <slash> causes the link to be followed. This is the behavior of historical implementations[3]. For example, for /a/b and /a/b/, if /a/b is a symbolic link to a directory, then /a/b refers to the symbolic link, and /a/b/ refers to the directory to which the symbolic link points.
This leads to some unexpected behavior. For example, consider the following structure: a directory dir containing a file dirfile, and a symbolic link link pointing to dir (this structure will be used in all shell examples throughout this article):
$ ls -lR
.:
total 4
drwxr-xr-x 2 jacob jacob 4096 Apr  3 00:00 dir
lrwxrwxrwx 1 jacob jacob    3 Apr  3 00:00 link -> dir
./dir:
total 0
-rw-r--r-- 1 jacob jacob 0 Apr  3 00:12 dirfile
On Unixes such as MacOS, FreeBSD or Illumos[4], you can move a directory through a symbolic link by using a trailing slash:
$ mv link/ otherdir
$ ls
link	otherdir
On Linux[5], mv will rename neither the indirectly referenced directory nor the symbolic link when given a symbolic link with a trailing slash as the source to be renamed, despite the coreutils documentation's claims to the contrary[6]; it instead fails with "Not a directory":
$ mv link/ other
mv: cannot move 'link/' to 'other': Not a directory
$ mkdir otherdir
$ mv link/ otherdir
mv: cannot move 'link/' to 'otherdir/link': Not a directory
$ mv link/ otherdir/
mv: cannot move 'link/' to 'otherdir/link': Not a directory
$ mv link otherdirlink
$ ls -l otherdirlink
lrwxrwxrwx 1 jacob jacob 3 Apr  3 00:13 otherdirlink -> dir
This is probably for the best, as it is very confusing behavior. There is still one advantage the trailing slash has when using mv, even on Linux: it does not allow you to move a file to a non-existent directory, or to move a file that you expect to be a directory but isn't.
$ mv dir/dirfile nonedir/
mv: cannot move 'dir/dirfile' to 'nonedir/': Not a directory
$ touch otherfile
$ mv otherfile/ dir
mv: cannot stat 'otherfile/': Not a directory
$ mv otherfile dir
$ ls dir
dirfile  otherfile
However, Linux still exhibits some confusing behavior of its own, like when you attempt to remove link recursively with a trailing slash:
$ rm -rvf link/
Neither link nor dir are removed, but the contents of dir are removed:
removed 'link/dirfile'
Whereas if you remove the trailing slash, you just remove the symbolic link:
$ rm -rvf link
removed 'link'
While on MacOS, FreeBSD or Illumos[4], rm will also remove the source directory:
$ rm -rvf link
link/dirfile
link/
$ ls
link
The find and ls commands, in contrast, behave the same on all three operating systems. The find command only searches the contents of the directory a symbolic link points to if the trailing slash is added:
$ find link -name dirfile
$ find link/ -name dirfile
link/dirfile
The ls command acts similarly, showing information on just a symbolic link by itself unless a trailing slash is added, at which point it shows the contents of the directory that it links to:
$ ls -l link
lrwxrwxrwx 1 jacob jacob 3 Apr  3 00:13 link -> dir
$ ls -l link/
total 0
-rw-r--r-- 1 jacob jacob 0 Apr  3 00:13 dirfile
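
To see the POSIX rule above from C rather than the shell, here is a minimal sketch of my own (assuming the same dir/link layout as the examples above): lstat() normally reports on the final symbolic link itself, but a trailing slash forces the link to be followed.

#include <stdio.h>
#include <sys/stat.h>

static void report(const char *path)
{
    struct stat st;

    /* lstat() does not follow a final symlink, unless a trailing
       slash forces resolution per the POSIX rule quoted earlier */
    if (lstat(path, &st) != 0) {
        perror(path);
        return;
    }
    printf("%-6s is a %s\n", path,
           S_ISLNK(st.st_mode) ? "symbolic link" :
           S_ISDIR(st.st_mode) ? "directory" : "something else");
}

int main(void)
{
    report("link");   /* reports on the symlink itself */
    report("link/");  /* reports on the directory it points to */
    return 0;
}

Run in the example directory, this should print that link is a symbolic link while link/ is a directory.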

rsync

The command rsync handles a trailing slash in an unusual way that trips up many new users. The rsync man page notes:
You can think of a trailing / on a source as meaning "copy the contents of this directory" as opposed to "copy the directory by name", but in both cases the attributes of the containing directory are transferred to the containing directory on the destination.
That is to say, if we had two folders a and b each of which contained some files:
$ ls -R .
.:
a  b
./a:
a1  a2
./b:
b1  b2
Running rsync -av a b copies the entire directory a into directory b:
$ rsync -av a b
sending incremental file list
a/
a/a1
a/a2
sent 181 bytes  received 58 bytes  478.00 bytes/sec
total size is 0  speedup is 0.00
$ ls -R b
b:
a  b1  b2
b/a:
a1  a2
While running rsync -av a/ b copies the contents of directory a into b:
$ rsync -av a/ b
sending incremental file list
./
a1
a2
sent 170 bytes  received 57 bytes  454.00 bytes/sec
total size is 0  speedup is 0.00
$ ls b
a1  a2	b1  b2

Dockerfile COPY

The Dockerfile COPY command also cares about the presence of the trailing slash, using it to determine whether the destination should be considered a file or directory. The Docker documentation explains the rules of the command thusly:
COPY [--chown=<user>:<group>] <src>... <dest>
If <src> is a directory, the entire contents of the directory are copied, including filesystem metadata. Note: The directory itself is not copied, just its contents. If <src> is any other kind of file, it is copied individually along with its metadata. In this case, if <dest> ends with a trailing slash /, it will be considered a directory and the contents of <src> will be written at <dest>/base(<src>). If multiple <src> resources are specified, either directly or due to the use of a wildcard, then <dest> must be a directory, and it must end with a slash /. If <dest> does not end with a trailing slash, it will be considered a regular file and the contents of <src> will be written at <dest>. If <dest> doesn't exist, it is created along with all missing directories in its path.
This means that if you had a COPY command that copied file to a nonexistent containerfile without the slash, it would create containerfile as a file with the contents of file.
COPY file /containerfile
container$ stat -c %F containerfile
regular empty file
Whereas if you add a trailing slash, then file will be added as a file under the new directory containerdir:
COPY file /containerdir/
container$ stat -c %F containerdir
directory
Interestingly, at no point can you copy a directory completely, only its contents. Thus if you wanted to make a directory in the new container, you need to specify its name in both the source and the destination:
COPY dir /dirincontainer
container$ stat -c %F /dirincontainer
directory
Dockerfiles do also make good use of the trailing slash to ensure they're doing what you mean by requiring a trailing slash on the destination of multiple files:
COPY file otherfile /othercontainerdir
results in the following error:
When using COPY with more than one source file, the destination must be a directory and end with a /
  1. I'm sure there are probably more than just these three cases, but these are the three I'm familiar with. If you know of more, please tell me about them!
  2. Some additional relevant sections are the Path Resolution Appendix and the section on Symbolic Links.
  3. The sentence "This is the behavior of historical implementations" implies that this probably originated in some ancient Unix derivative, possibly BSD or even the original Unix. I don't really have a source on that though, so please reach out if you happen to have any more knowledge on what this refers to.
  4. I tested on MacOS 11.6.5, FreeBSD 12.0 and OmniOS 5.11.
  5. Unless the source is a directory, trailing slashes give -ENOTDIR.
  6. In fairness to the coreutils maintainers, it seems to be true on all other Unix platforms, but it probably deserves a mention in the documentation when Linux is the most common platform on which coreutils is used. I should submit a patch.

5 April 2022

Kees Cook: security things in Linux v5.10

Previously: v5.9

Linux v5.10 was released in December, 2020. Here's my summary of various security things that I found interesting:

AMD SEV-ES
While guest VM memory encryption with AMD SEV has been supported for a while, Joerg Roedel, Thomas Lendacky, and others added register state encryption (SEV-ES). This means it's even harder for a VM host to reconstruct a guest VM's state.

x86 static calls
Josh Poimboeuf and Peter Zijlstra implemented static calls for x86, which operate very similarly to the static branch infrastructure in the kernel. With static branches, an if/else choice can be hard-coded, instead of being run-time evaluated every time. Such branches can be updated too (the kernel just rewrites the code to switch around the "branch"). All these principles apply to static calls as well, but they're for replacing indirect function calls (i.e. a call through a function pointer) with a direct call (i.e. a hard-coded call address). This eliminates the need for Spectre mitigations (e.g. RETPOLINE) for these indirect calls, and avoids a memory lookup for the pointer. For hot-path code (like the scheduler), this has a measurable performance impact. It also serves as a kind of Control Flow Integrity implementation: an indirect call got removed, and the potential destinations have been explicitly identified at compile-time.
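As a rough illustration, here is a sketch of the API (following include/linux/static_call.h as of v5.10; the operation names are hypothetical):

#include <linux/static_call.h>

static int slow_op(int x) { return x + 1; }
static int fast_op(int x) { return x * 2; }

/* Define a static call, initially routed to slow_op(). */
DEFINE_STATIC_CALL(my_op, slow_op);

int do_op(int x)
{
	/* Compiles to a direct call instruction rather than an
	   indirect call through a function pointer. */
	return static_call(my_op)(x);
}

void go_fast(void)
{
	/* Patches the call site(s) at runtime to target fast_op(). */
	static_call_update(my_op, fast_op);
}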
network RNG improvements

In an effort to improve the pseudo-random number generator used by the network subsystem (for things like port numbers and packet sequence numbers), Linux's home-grown pRNG has been replaced by the SipHash round function, and perturbed by (hopefully) hard-to-predict internal kernel states. This should make it very hard to brute force the internal state of the pRNG and make predictions about future random numbers just from examining network traffic. Similarly, ICMP's global rate limiter was adjusted to avoid leaking details of network state, as a start to fixing recent DNS Cache Poisoning attacks.
SafeSetID handles GID

Thomas Cedeno improved the SafeSetID LSM to handle group IDs (which required teaching the kernel about which syscalls were actually performing setgid.) Like the earlier setuid policy, this lets the system owner define an explicit list of allowed group ID transitions under CAP_SETGID (instead of to just any group), providing a way to keep the power of granting this capability much more limited. (This isn't complete yet, though, since handling setgroups() is still needed.)
improve kernel's internal checking of file contents

The kernel provides LSMs (like the Integrity subsystem) with details about files as they're loaded. (For example, loading modules, new kernel images for kexec, and firmware.) There wasn't very good coverage for cases where the contents were coming from things that weren't files. To deal with this, new hooks were added that allow the LSMs to introspect the contents directly, and to do partial reads. This will give the LSMs much finer-grained visibility into these kinds of operations.
set_fs removal continues

With the earlier work landed to free the core kernel code from set_fs(), Christoph Hellwig made it possible for set_fs() to be optional for an architecture. Subsequently, he then removed set_fs() entirely for x86, riscv, and powerpc. These architectures will now be free from the entire class of kernel address limit attacks that only needed to corrupt a single value in struct thread_info.
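For context, the classic pattern being removed looked roughly like this (a kernel-only fragment, for illustration):

#include <linux/uaccess.h>

static void legacy_kernel_io(void *kbuf)
{
	mm_segment_t old_fs = get_fs();

	set_fs(KERNEL_DS);	/* user-pointer checks now accept kernel addresses */
	/* ... call an API expecting a __user pointer, passing kbuf ... */
	set_fs(old_fs);		/* restore the original address limit */
}

Corrupting that saved limit gave an attacker the same power permanently, which is why removing the mechanism entirely closes the bug class.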
sysfs_emit() replaces sprintf() in /sys

Joe Perches tackled one of the most common bug classes with sprintf() and snprintf() in /sys handlers by creating a new helper, sysfs_emit(). This will handle the cases where kernel code was not correctly dealing with the length results from sprintf() calls, which might lead to buffer overflows in the PAGE_SIZE buffer that /sys handlers operate on. With the helper in place, it was possible to start the refactoring of the many sprintf() callers.
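A sketch of what such a handler looks like after conversion (the attribute and state names here are hypothetical):

#include <linux/device.h>
#include <linux/sysfs.h>

static int frob_level;	/* hypothetical driver state */

static ssize_t frob_show(struct device *dev,
			 struct device_attribute *attr, char *buf)
{
	/* Before: return sprintf(buf, "%d\n", frob_level); which
	   silently trusts that the output fits. sysfs_emit() knows
	   buf is a PAGE_SIZE sysfs buffer and refuses to overflow it. */
	return sysfs_emit(buf, "%d\n", frob_level);
}
static DEVICE_ATTR_RO(frob);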
nosymfollow mount option

Mattias Nissler and Ross Zwisler implemented the nosymfollow mount option. This entirely disables symlink resolution for the given filesystem, similar to other mount options where noexec disallows execve(), nosuid disallows setid bits, and nodev disallows device files. Quoting the patch, it is useful as a defensive measure for systems that need to deal with untrusted file systems in privileged contexts. (i.e. for when /proc/sys/fs/protected_symlinks isn't a big enough hammer.) Chrome OS uses this option for its stateful filesystem, as symlink traversal has been a common attack-persistence vector.
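From userspace, the flag can be used like this; a minimal sketch, with placeholder device and mount point, defining MS_NOSYMFOLLOW in case the libc headers predate it (the value matches the v5.10 uapi headers):

#include <stdio.h>
#include <sys/mount.h>

#ifndef MS_NOSYMFOLLOW
#define MS_NOSYMFOLLOW 256
#endif

int main(void)
{
    /* mount an untrusted filesystem with symlink resolution disabled */
    if (mount("/dev/sdb1", "/mnt/untrusted", "ext4",
              MS_NOSYMFOLLOW, NULL) != 0) {
        perror("mount");
        return 1;
    }
    return 0;
}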
ARMv8.5 Memory Tagging Extension support

Vincenzo Frascino added support to arm64 for the coming Memory Tagging Extension, which will be available for ARMv8.5 and later chips. It provides 4 bits of tags (covering multiples of 16 byte spans of the address space). This is enough to deterministically eliminate all linear heap buffer overflow flaws (1 tag for "free", and then rotate even values and odd values for neighboring allocations), which is probably one of the most common bugs being currently exploited. It also makes use-after-free and over/under indexing much more difficult for attackers (but still possible if the target's tag bits can be exposed). Maybe some day we can switch to 128 bit virtual memory addresses and have fully versioned allocations. But for now, 16 tag values is better than none, though we do still need to wait for anyone to actually be shipping ARMv8.5 hardware.
fixes for flaws found by UBSAN

The work to make UBSAN generally usable under syzkaller continues to bear fruit, with various fixes all over the kernel for stuff like shift-out-of-bounds, divide-by-zero, and integer overflow. Seeing these kinds of patches land reinforces the rationale of shifting the burden of these kinds of checks to the toolchain: these run-time bugs continue to pop up.
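For anyone who has not played with UBSAN, the bug classes above look like this in miniature; building with -fsanitize=undefined (gcc or clang) makes the runtime report each line (a toy userspace example, not kernel code):

#include <limits.h>

int main(void)
{
    volatile int n = 40;
    volatile int d = 0;
    volatile int big = INT_MAX;

    int a = 1 << n;     /* shift-out-of-bounds for a 32-bit int */
    int b = big + 1;    /* signed integer overflow */
    int c = a / d;      /* divide-by-zero */

    return b + c;
}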
flexible array conversions

The work on flexible array conversions continues. Gustavo A. R. Silva and others continued to grind on the conversions, getting the kernel ever closer to being able to enable the -Warray-bounds compiler flag and clear the path for saner bounds checking of array indexes and memcpy() usage.

That's it for now! Please let me know if you think anything else needs some attention. Next up is Linux v5.11.

© 2022, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 License.
CC BY-SA 4.0

11 March 2022

Santiago García Mantiñán: tcpping-nmap, a substitute for tcpping based on nmap

I was about to set up tcpping-based monitoring on smokeping, but then I discovered this was based on tcptraceroute, which on Debian comes setuid root; the alternative is to use sudo, so, any way you put it... this runs with root privileges. I didn't like what I saw, so I said... couldn't we do this with nmap without needing root? And so I started to write a little script that could mimic what tcpping and tcptraceroute were outputting, but using nmap. The result is tcpping-nmap, which does this. The only little thing is that nmap only outputs milliseconds while tcpping gets to microseconds. Hope you enjoy it :-)

22 February 2022

Russ Allbery: Review: Children of Earth and Sky

Review: Children of Earth and Sky, by Guy Gavriel Kay
Publisher: New American Library
Copyright: 2016
ISBN: 0-698-18327-4
Format: Kindle
Pages: 572
Nine hundred years have passed since the events of Lord of Emperors. Twenty-five years ago, Sarantium, queen of cities, fell to the Osmanlis, who have renamed it Asharias in honor of their Asherite faith. The repercussions are still echoing through the western world, as the Osmanlis attempt each spring to push farther west and the forces of Rodolfo, Holy Emperor in Obravic and defender of the Jaddite faith, hold them back.

Seressa and Dubrava are city-state republics built on the sea trade. Seressa is the larger and more renowned, money-lender to Rodolfo and notorious for its focus on business and profit, including a willingness to trade with the Osmanlis. Dubrava has a more tenuous position: smaller, reliant on trade and other assistance from Seressa, but also holding a more-favored trading position with Asharias. Both are harassed by piracy from Senjan, a fiercely Jaddite raiding city north up the coast from Dubrava, renowned for its bravery against the Asherites. The Senjani are bad for business. Seressa would love to wipe them out, but they have the favor of the Holy Emperor. They settled for attempting to starve the city with a blockade.

As Children of Earth and Sky opens, Seressa is sending out new spies. One is a woman named Leonora Valeri, who will present herself as the wife of a doctor that Seressa is sending to Dubrava. She is neither his wife nor Seressani, but this assignment gets her out of the convent to which her noble father exiled her after an unapproved love affair. The other new spy is the young artist Pero Villani, a minor painter whose only notable work was destroyed by the woman who commissioned it for being too revealing. Pero's destination is farther east: Grand Khalif Gurçu the Destroyer, the man whose forces took Sarantium, wants to be painted in the western style. Pero will do so, and observe all he can, and if the opportunity arises to do more than that, well, so much the better.

Pero and Leonora are traveling on a ship owned by Marin Djivo, the younger son of a wealthy Dubravan merchant family, when their ship is captured by Senjani raiders. Among the raiders is Danica Gradek, the archer who broke the Seressani blockade of Senjan. This sort of piracy, while tense, should be an economic transaction: some theft, some bargaining, some ransom, and everyone goes on their way. That is not what happens. Moments later, two men lie dead, and Danica's life has become entangled with Dubravan merchants and Seressani spies.

Children of Earth and Sky is in some sense a sequel to the Sarantine Mosaic, and knowing the events of that series adds some emotional depth and significant moments to this story, but you can easily read it as a stand-alone novel. (That said, I recommend the Sarantine Mosaic regardless.) As with nearly all of Kay's work, it's historical fiction with the names changed (less this time than in most of his books) and a bit of magic added. The setting is the middle of the 15th century. Seressa is, of course, Venice. The Osmanlis are the Ottoman Turks, and Asharias is Istanbul, the captured Constantinople. Rodolfo is a Habsburg Holy Roman Emperor, holding court in an amalgam of northern cities that (per the afterword) is primarily Prague. Dubrava, which is central to much of this book, is Dubrovnik in Croatia. As usual with Kay's novels, you don't need to know this to enjoy the story, but it may spark some fun secondary reading.

The touch of magic is present in several places, but comes primarily from Danica, whose grandfather resides as a voice in her head.
He is the last of her family that she is in contact with. Her father and older brother were killed by Osmanli raiders, and her younger brother was taken as a slave to be raised as a djanni warrior in the khalif's infantry. (Djannis are akin to Mamluks in our world.) Damaz, as he is now known, is the remaining major viewpoint character I've not mentioned. There are a couple of key events in the book that have magic at the center, generally involving Danica or Damaz, but most of the story is straight historical fiction (albeit with significant divergences from our world).

I'd talked myself out of starting this novel several times before I finally picked it up. Like most of Kay's novels, it's a long book, and I wasn't sure if I was in the mood for epic narration and a huge cast. And indeed, I found it slow at the start. Once the story got underway, though, I was as enthralled as always. There is a bit of sag in the middle of the book, in part because Kay didn't follow up on some relationships that I wish were more central to the plot and in part because he overdoes the narrative weight in one scene, but the ending is exceptional.

Guy Gavriel Kay is the master of a specific type of omniscient tight third person narration, one in which the reader sees what a character is thinking but also gets narrative commentary, foreshadowing, and emotional emphasis apart from the character's thoughts. It can feel heavy-handed; if something is important, Kay tells you, explicitly and sometimes repetitively, and the foreshadowing frequently can be described as portentous. But in return, Kay gets fine control of pacing and emphasis. The narrative commentary functions like a soundtrack in a movie. It tells you when to pay close attention and when you can relax, what moments are important, where to slow down, when to brace yourself, and when you can speed up. That in turn requires trust; if you're not in the mood for the author to dictate your reading pace to the degree Kay is attempting, it can be irritating. If you are in the mood, though, it makes his novels easy to relax into. The narrator will ensure that you don't miss anything important, and it's an effective way to build tension.

Kay also strikes just the right balance between showing multiple perspectives on a single moment and spending too much time retelling the same story. He will often switch viewpoint characters in the middle of a scene, but he avoids the trap of replaying the scene and thus losing the reader's interest. There is instead just a moment of doubled perspective or retrospective commentary, just enough information for the reader to extrapolate the other character's experience backwards, and then the story moves on. Kay has an excellent feel for when I badly wanted to see another character's perspective on something that just happened.

Some of Kay's novels revolve around a specific event or person. Children of Earth and Sky is not one of those. It's a braided novel following five main characters, each with their own story. Some of those stories converge; some of them touch for a while and then diverge again. About three-quarters of the way through, I wasn't sure how Kay would manage a satisfying conclusion for the numerous separate threads that didn't feel rushed, but I need not have worried.
The ending had very little of the shape that I had expected, focused more on the small than the large (although there are some world-changing events here), but it was an absolute delight, with some beautiful moments of happiness that took the rest of the novel to set up. This is not the sort of novel with a clear theme, but insofar as it has one, it's a story about how much of the future shape and events of the world are unknowable. All we can control is our own choices, and we may never know their impact. Each individual must decide who they want to be and attempt to live their life in accordance with that decision, hopefully with some grace towards others in the world.

The novel does, alas, still have some of Kay's standard weaknesses. There is (at last!) an important female friendship, and I had great hopes for a second one, but sadly it lasted only a scant handful of pages. Men interact with each other and with women; women interact almost exclusively with men. Kay does slightly less awarding of women to male characters than in some previous books (although it still happens), but this world is still weirdly obsessed with handing women to men for sex as a hospitality gesture. None of this is too belabored or central to the story, or I would be complaining more, but as soon as one sees how regressive the gender roles typically are in a Kay novel, it's hard to unsee.

And, as always for Kay, the sex in this book is weirdly off-putting to me. I think this goes hand in hand with Kay's ability to write some of the best conversations in fantasy. Kay's characters spar and thrust with every line and read nuance into small details of wording. Frequently, the turn of the story rests on the outcome of a careful conversation. This is great reading; it's the part of Kay's writing I enjoy the most. But I'm not sure he knows how to turn it off between characters who love and trust each other. The characters never fully relax; sex feels like another move in ongoing chess games, which in turn makes it feel weirdly transactional or manipulative instead of open-hearted and intimate. It doesn't help that Kay appears to believe that arousal is a far more irresistible force for men than I do.

Those problems did get in the way of my enjoyment occasionally, but I didn't think they ruined the book. The rest of the story is too good. Danica in particular is a wonderful character: thoughtful, brave, determined, and deeply honest with herself in that way that is typical of the best of Kay's characters. I wanted to read the book where Danica's and Leonora's stories stayed more entwined; alas, that's not the story Kay was writing. But I am in awe at Kay's ability to write characters who feel thoughtful and insightful even when working at cross purposes, in a world that mostly avoids simple villains, with a plot that never hinges on someone doing something stupid. I love reading about these people. Their triumphs, when they finally come, are deeply satisfying.

Children of Earth and Sky is probably not in the top echelon of Kay's works with the Sarantine Mosaic and Under Heaven, but it's close. If you like his other writing, you will like this as well. Highly recommended.

Rating: 9 out of 10

23 January 2022

Matthieu Caneill: Debsources, python3, and funky file names

Rumors are running that python2 is not a thing anymore. Well, I'm certainly late to the party, but I'm happy to report that sources.debian.org is now running python3.

Wait, it wasn't? Back when development started, python3 was very much a real language, but it was hard to adopt because it was not supported by many libraries. So python2 was chosen, meaning print-based debugging was used in lieu of print()-based debugging, and str were bytes, not unicode. And things were working just fine. One day python2 EOL was announced, with a date far in the future. Far enough to procrastinate for a long time. Combine this with a codebase that is stable enough to not see many commits, and the fact that Debsources is a volunteer-based project that happens at best on weekends, and you end up with dormant software and a missed deadline. But, as dormant as the codebase is, the instance hosted at sources.debian.org is very popular and gets 200k to 500k hits per day. Largely enough to be worth proper maintenance and a transition to python3.

Funky file names

While transitioning to python3 and juggling left and right with str, bytes and unicode for internal objects, files, database entries and HTTP content, I stumbled upon a bug that has been there since day 1. Quick recap if you're unfamiliar with this tool: Debsources displays the content of the source packages in the Debian archive. In other words, it's a bit like GitHub, but for the Debian source code. And some pieces of software out there, that ended up in Debian packages, happen to contain files whose names can't be decoded to UTF-8. Interestingly enough, there's no such thing as a standard for file names: with a few exceptions that vary by operating system, any sequence of bytes can be a legit file name. And some sequences of bytes are not valid UTF-8. Of course those files are rare, and using ASCII characters to name a file is a much more common practice than using bytes in a non-UTF-8 character encoding. But when you deal with almost 100 million files on which you have no control (those files come from free software projects, and make their way into Debian without any renaming), it happens. Now back to the bug: when trying to display such a file through the web interface, it would crash because it can't convert the file name to UTF-8, which is needed for the HTML representation of the page.

Bugfix

An often valid approach when trying to represent invalid UTF-8 content is to ignore errors, and replace them with ? or the Unicode replacement character. This is what Debsources actually does to display non-UTF-8 file content. Unfortunately, this best-effort approach is not suitable for file names, as file names are also identifiers in Debsources: among other places, they are part of URLs. If a URL were to use placeholder characters to replace those bytes, there would be no deterministic way to match it with a file on disk anymore. The representation of binary data as text is a known problem. Multiple lossless solutions exist, such as base64 and its variants, but URLs looking like https://sources.debian.org/src/Y293c2F5LzMuMDMtOS4yL2Nvd3NheS8= are not readable at all compared to https://sources.debian.org/src/cowsay/3.03-9.2/cowsay/. Plus, that would not be backwards-compatible with all existing links. The solution I chose is to use double-percent encoding: this allows the representation of any byte in a URL, while keeping allowed characters unchanged - and preventing CGI gateways from trying to decode non-UTF-8 bytes.
This is the best of both worlds: regular file names get to appear normally and are human-readable, and funky file names only have percent signs and hex numbers where needed. Here is an example of such a URL: https://sources.debian.org/src/aspell-is/0.51-0-4/%25EDslenska.alias/. Notice the %25ED to represent the percent symbol itself (%25) followed by an invalid UTF-8 byte (%ED). Transitioning to this was quite a challenge, as those file names don't only appear in URLs, but also in web pages themselves, log files, database tables, etc. And everything was done with str: that made sense in python2, when str were bytes, but not much in python3.

What are those files?

I was wondering too. Let's list them!
import os
with open('non-utf-8-paths.bin', 'wb') as f:
    for root, folders, files in os.walk(b'/srv/sources.debian.org/sources/'):
        for path in folders + files:
            try:
                path.decode('utf-8')
            except UnicodeDecodeError:
                f.write(root + b'/' + path + b'\n')
Running this on the Debsources main instance, which hosts pretty much all Debian packages that were part of a Debian release, I could find 307 files (among a total of almost 100 million files). Without looking deeply into them, they seem to fall into 2 categories:
  • File names that are not valid UTF-8, but are valid in a different charset. Not all software is developed in English or on UTF-8 systems.
  • File names that can't be decoded to UTF-8 on purpose, to be used as input to test suites, and assert resilience of the software to non-UTF-8 data.
That last point hits home, as it was clearly lacking in Debsources. A funky file name is now part of its test suite. ;)
