Search Results: "tina"

2 February 2024

Ian Jackson: UPS, the Useless Parcel Service; VAT and fees

I recently had the most astonishingly bad experience with UPS, the courier company. They severely damaged my parcels, and were very bad about UK import VAT, ultimately ending up harassing me on autopilot. The only thing that got their attention was my draft Particulars of Claim for intended legal action. Surprisingly, I got them to admit in writing that the disbursement fee they charge recipients alongside the actual VAT is just something they made up with no legal basis.

What happened

Autumn last year I ordered some furniture from a company in Germany. This was to be shipped by them to me by courier. The supplier chose UPS. UPS misrouted one of the three parcels to Denmark. When everything arrived, it had been sat on by elephants. The supplier had to replace most of it, with considerable inconvenience and delay to me, and of course a loss to the supplier. But this post isn't mostly about that. This post is about VAT.

You see, import VAT was due, because of fucking Brexit. UPS made a complete hash of collecting that VAT. Their computers can't issue coherent documents, their email helpdesk is completely useless, and their automated debt collection systems run along uninfluenced by any external input. The crazy, including legal threats and escalating late payment fees, continued even after I paid the VAT discrepancy (which I did despite them not yet having provided any coherent calculation for it). This kind of behaviour is a very small and mild version of the kind of things British Gas did to Lisa Ferguson, who eventually won substantial damages for harassment, plus £10K of costs.

Having tried asking nicely, and sending stiff letters, I too threatened litigation. I would have actually started a court claim, but it would have included a claim under the Protection from Harassment Act. Those have to be filed under the "Part 8 procedure", which involves sending all of the written evidence you're going to use along with the claim form. Collating all that would be a good deal of work, especially since UPS and ControlAccount didn't engage with me at all, so I had no idea which things they might actually dispute. So I decided that before issuing proceedings, I'd send them a copy of my draft Particulars of Claim, along with an offer to settle if they would pay me a modest sum and stop being evil robots at me. Rather than me typing the whole tale in again, you can read the full gory details in the PDF of my draft Particulars of Claim. (I've redacted the reference numbers).

Outcome

The draft Particulars finally got their attention. UPS sent me an offer: they agreed to pay me £50, in full and final settlement. That was close enough to my offer that I accepted it. I mostly wanted them to stop, and they do seem to have done so. And I've received the £50.

VAT calculation

They also finally included an actual explanation of the VAT calculation. It's absurd, but it's not UPS's absurd:
The clearance was entered initially with estimated import charges of £400.03, consisting of £387.83 VAT, and £12.20 disbursement fee. This original entry regrettably did not include the freight cost for calculating the VAT, and as such when submitted for final entry the VAT value was adjusted to include this and an amended invoice was issued for an additional £39.84. HMRC calculate the amount against which VAT is raised using the value of goods, insurance and freight, however they also may apply a VAT adjustment figure. The VAT Adjustment is based on many factors (Incidental costs in regards to a shipment), which includes charge for currency conversion if the invoice does not list values in Sterling, but the main is due to the inland freight from airport of destination to the final delivery point, as this charge varies, for example, from EMA to Edinburgh would be £150, from EMA to Derby would be £1, so each year UPS must supply HMRC with all values incurred for entry build up and they give an average which UPS have to use on the entry build up as the VAT Adjustment. The correct calculation for the import charges is therefore as follows:
Goods value divided by exchange rate: 2,489.53 EUR / 1.1683 = 2,130.89 GBP
Duty: Goods value plus freight (%): 2,130.89 GBP + 5% = 2,237.43 GBP. That total times the duty rate. X 0% = 0 GBP
VAT: Goods value plus freight (100%): 2,130.89 GBP + 0 = 2,130.89 GBP. That total plus duty and VAT adjustment: 2,130.89 GBP + 0 GBP + 7.49 GBP = 2,348.08 GBP. That total times 20% VAT = 427.67 GBP
As detailed above we must confirm that the final VAT charges applied to the shipment were correct, and that no refund of this is therefore due.
This looks very like HMRC-originated nonsense. If only they had put it on the original bills! It's completely ridiculous that it took four months and near-litigation to obtain it.

Disbursement fee

One more thing. UPS billed me a £12 "disbursement fee". When you import something, there's often tax to pay. The courier company pays that to the government, and the consignee pays it to the courier. Usually the courier demands it before final delivery, since otherwise they end up having to chase it as a debt. It is common for parcel companies to add a random fee of their own. As I note in my Particulars, there isn't any legal basis for this. In my own offer of settlement I proposed that UPS should:
State under what principle of English law (such as, what enactment or principle of Common Law), you levy the disbursement fee (or refund it).
To my surprise they actually responded to this in their own settlement letter. (They didn't, for example, mention the harassment at all.) They said (emphasis mine):
A disbursement fee is a fee for amounts paid or processed on behalf of a client. It is an established category of charge used by legal firms, amongst other companies, for billing of various ancillary costs which may be incurred in completion of service. Disbursement fees are not covered by a specific law, nor are they legally prohibited. Regarding UPS disbursement fee this is an administrative charge levied for the use of UPS deferment account to prepay import charges for clearance through CDS. This charge would therefore be billed to the party that is responsible for the import charges, normally the consignee or receiver of the shipment in question. The disbursement fee as applied is legitimate, and as you have stated is a commonly used and recognised charge throughout the courier industry, and I can confirm that this was charged correctly in this instance.
On UPS's analysis, they can just make up whatever fee they like. That is clearly not right (and I don't even need to refer to consumer protection law, which would also make it obviously unlawful). And, that everyone does it doesn't make it lawful. There are so many things that are ubiquitous but unlawful, especially nowadays when much of the legal system - especially consumer protection regulators - has been underfunded to beyond the point of collapse. Next time this comes up I might have a go at getting the fee back. (Obviously I'll have to pay it first, to get my parcel.)

ParcelForce and Royal Mail

I think this analysis doesn't apply to ParcelForce and (probably) Royal Mail. I looked into this in 2009, and I found that Parcelforce had been given the ability to write their own private laws: Schemes made under section 89 of the Postal Services Act 2000. This is obviously ridiculous but I think it was the law in 2009. I doubt the intervening governments have fixed it.

Furniture

Oh, yes, the actual furniture. The replacements arrived intact and are great :-).


16 January 2024

Russ Allbery: Review: Making Money

Review: Making Money, by Terry Pratchett
Series: Discworld #36
Publisher: Harper
Copyright: October 2007
Printing: November 2014
ISBN: 0-06-233499-9
Format: Mass market
Pages: 473
Making Money is the 36th Discworld novel, the second Moist von Lipwig book, and a direct sequel to Going Postal. You could start the series with Going Postal, but I would not start here.

The post office is running like a well-oiled machine, Adora Belle is out of town, and Moist von Lipwig is getting bored. It's the sort of boredom that has him picking his own locks, taking up Extreme Sneezing, and climbing buildings at night. He may not realize it, but he needs something more dangerous to do. Vetinari has just the thing.

The Royal Bank of Ankh-Morpork, unlike the post office before Moist got to it, is still working. It is a stolid, boring institution doing stolid, boring things for rich people. It is also the battleground for the Lavish family pastime: suing each other and fighting over money. The Lavishes are old money, the kind of money carefully entangled in trusts and investments designed to ensure the family will always have money regardless of how stupid their children are. Control of the bank is temporarily in the grasp of Joshua Lavish's widow Topsy, who is not a true Lavish, but the vultures are circling.

Meanwhile, Vetinari has grand city infrastructure plans, and to carry them out he needs financing. That means he needs a functional bank, and preferably one that is much less conservative. Moist is dubious about running a bank, and even more reluctant when Topsy Lavish sees him for exactly the con artist he is. His hand is forced when she dies, and Moist discovers he has inherited her dog, Mr. Fusspot. A dog that now owns 51% of the Royal Bank and therefore is the chairman of the bank's board of directors. A dog whose safety is tied to Moist's own by way of an expensive assassination contract.

Pratchett knew he had a good story with Going Postal, so here he runs the same formula again. And yes, I was happy to read it again. Moist knows very little about banking but quite a lot about pretending something will work until it does, which has more to do with banking than it does with running a post office. The bank employs an expert, Mr. Bent, who is fanatically devoted to the gold standard and the correctness of the books and has very little patience for Moist. There are golem-related hijinks. The best part of this book is Vetinari, who is masterfully manipulating everyone in the story and who gets in some great lines about politics.
"We are not going to have another wretched empire while I am Patrician. We've only just got over the last one."
Also, Vetinari processing dead letters in the post office was an absolute delight.

Making Money does have the recurring Pratchett problem of having a fairly thin plot surrounded by random... stuff. Moist's attempts to reform the city currency while staying ahead of the Lavishes are only vaguely related to Mr. Bent's plot arc. The golems are unrelated to the rest of the plot other than providing a convenient deus ex machina. There is an economist making water models in the bank basement with an Igor, which is a great gag but has essentially nothing to do with the rest of the book. One of the golems has been subjected to well-meaning older ladies and 1950s etiquette manuals, which I thought was considerably less funny (and somewhat creepier) than Pratchett did. There are (sigh) clowns, which continue to be my least favorite Ankh-Morpork world-building element. At least the dog was considerably less annoying than I was afraid it was going to be.

This grab-bag randomness is a shame, since I think there was room here for a more substantial plot that engaged fully with the high weirdness of finance. Unfortunately, this was a bit like the post office in Going Postal: Pratchett dives into the subject just enough to make a few wry observations and a few funny quips, and then resolves the deeper issues off-camera. Moist tries to invent fiat currency, because of course he does, and Pratchett almost takes on the gold standard, only to veer away at the last minute into vigorous hand-waving. I suspect part of the problem is that I know a little bit too much about finance, so I kept expecting Pratchett to take the humorous social commentary a couple of levels deeper.

On a similar note, the villains have great potential that Pratchett undermines by adding too much over-the-top weirdness. I wish Cosmo Lavish had been closer to what he appears to be at the start of the book: a very wealthy and vindictive man (and a reference to Cosimo de Medici) who doesn't have Moist's ability to come up with wildly risky gambits but who knows considerably more than he does about how banking works. Instead, Pratchett gives him a weird obsession that slowly makes him less sinister and more pathetic, which robs the book of a competent antagonist for Moist.

The net result is still a fun book, and a solid Discworld entry, but it lacks the core of the best series entries. It felt more like a skit comedy show than a novel, but it's an excellent skit comedy show with the normal assortment of memorable Pratchettisms. Certainly if you've read this far, or even if you've only read Going Postal, you'll want to read Making Money as well. Followed by Unseen Academicals. The next Moist von Lipwig book is Raising Steam.

Rating: 8 out of 10

14 January 2024

Debian Brasil: MiniDebConf BH 2024 - registration and call for activities open

MiniDebConf BH 2024

Registration for participants and the call for activities are now open for MiniDebConf Belo Horizonte 2024 and for FLISOL - the Latin American Free Software Installation Festival. See below for some important information:

Date and venue of the MiniDebConf and FLISOL

The MiniDebConf will take place from 27 to 30 April at the Pampulha Campus of UFMG - Universidade Federal de Minas Gerais. On the 27th (Saturday) we will also hold an edition of FLISOL - the Latin American Free Software Installation Festival, an event that happens on the same day in several cities across Latin America. While the MiniDebConf will have activities focused on Debian, FLISOL will have general activities about Free Software and related topics such as programming languages, CMS, network and systems administration, philosophy, freedom, licenses, etc.

Free registration and grants

You can already register for free for MiniDebConf Belo Horizonte 2024. The MiniDebConf is an event open to everyone, regardless of your level of knowledge about Debian. The most important thing is to bring the community together to celebrate one of the largest Free Software projects in the world, so we want to welcome everyone from inexperienced users who are just starting their contact with Debian to official Developers of the project. In other words, everyone is invited!

This year we are offering accommodation and travel grants to make it possible for people from other cities who contribute to the Debian Project to attend. Non-official contributors, DMs and DDs can request grants using the registration form. We are also offering food grants for all participants, even non-contributors, and for people who live in the BH region. Financial resources are quite limited, but we will try to fulfill as many requests as possible. If you intend to request one of these grants, visit this link and read more information before registering. Registration (without grants) can be done up to the date of the event, but there is a deadline for requesting accommodation and travel grants, so pay attention to the final deadline: 18 February. Since we are using the same form for both events, your registration will be valid for both the MiniDebConf and FLISOL. To register, go to the site and click on "Criar conta" (Create account). Create your account (preferably using Salsa) and go to your profile. There you will see the "Se inscrever" (Register) button. https://bh.mini.debconf.org

Call for activities

The call for activities is also open, for both the MiniDebConf and FLISOL. For more information, see this link. Pay attention to the deadline to submit your activity proposal: 18 February.

Contact

If you have any questions, send an email to contato@debianbrasil.org.br

Organization: Debian Brasil, Debian, Debian MG, DCC

7 January 2024

Valhalla's Things: A Corset or Two

Posted on January 7, 2024
Tags: madeof:atoms, craft:sewing, period:victorian, FreeSoftWear
a black coutil midbust corset, from a 3/4 front view, showing the busk closure, a waist tape and external boning channels made of the same twill tape and placed about 1-2 cm from each other at waist level.

CW for body size change mentions

I needed a corset, badly. Years ago I had a chance to have my measurements taken by a former professional corset maker and then a lesson in how to draft an underbust corset, and that led to me learning how nice wearing a well-fitted corset feels. Later I tried to extend that pattern up for a midbust corset, with success. And then my body changed suddenly, and I was no longer able to wear either of those, and after a while I started missing them.

Since my body was still changing (if no longer drastically so), and I didn't want to use expensive materials for something that had a risk of not fitting after too little time, I decided to start by making myself a summer lightweight corset in aida cloth and plastic boning (for which I had already bought materials). It fitted, but not as well as the first two ones, and I've worn it quite a bit. I still wanted back the feeling of wearing a comfy, heavy contraption of coutil and steel, however.

After a lot of procrastination I redrafted a new pattern, scrapped everything, tried again, had my measurements taken by a dressmaker [#dressmaker], put them in the draft, cut a first mock-up in cheap cotton, fixed the position of a seam, did a second mock-up in denim [#jeans] from an old pair of jeans, and then cut into the cheap herringbone coutil I was planning to use. And that's when I went to see which one of the busks in my stash would work, and realized that I had used a wrong vertical measurement and the front of the corset was way too long for a midbust corset.

a corset busk basted to a mock-up with scraps of fabric between each stud / loop.

Luckily I also had a few longer busks, I basted one to the denim mock-up and tried to wear it for a few hours, to see if it was too long to be comfortable. It was just a bit, on the bottom, which could be easily fixed with the Power Tools [1]. Except, the more I looked at it the more doing this felt wrong: what I needed most was a midbust corset, not an overbust one, which is what this was starting to be. I could have trimmed it down, but I knew that I also wanted this corset to be a wearable mockup for the pattern, to refine it and have it available for more corsets. And I still had more than half of the cheap coutil I was using, so I decided to redo the pattern and cut new panels.

And this is where the "or two" comes in: I'm not going to waste the overbust panels: I had been wanting to learn some techniques to make corsets with a fashion fabric layer, rather than just a single layer of coutil, and this looks like an excellent opportunity for that, together with a piece of purple silk that I know I have in the stash. This will happen later, however; first I'm giving priority to the underbust.

Anyway, a second set of panels was cut, all the seam lines marked with tailor tacks, and I started sewing by inserting the busk. And then realized that the pre-made boning channel tape I had was too narrow for the 10 mm spiral steel I had plenty of. And that the 25 mm twill tape was also too narrow for a double boning channel. On the other hand, the 18 mm twill tape I had used for the waist tape was good for a single channel, so I decided to put a single bone on each seam, and then add another piece of boning in the middle of each panel.
Since I'm making external channels, making them in self fabric would have probably looked better, but I no longer had enough fabric, because of the cutting mishap, and anyway this is going to be a strictly underwear only corset, so it's not a big deal. Once the boning channel situation was taken care of, everything else proceeded quite smoothly and I was able to finish the corset during the Christmas break, enlisting again my SO to take care of the flat steel boning while I cut the spiral steels myself with wire cutters.

The same corset straight from the front: the left side is a few mm longer than the right side

I could have been a bit more precise with the binding, as it doesn't align precisely at the front edge, but then again, it's underwear, nobody other than me and everybody who reads this post is going to see it and I was in a hurry to see it finished. I will be more careful with the next one.

The same corset from the back, showing cross lacing with bunny ears at the waist and a lacing gap of about 8 cm.

I also think that I haven't been careful enough when pressing the seams and applying the tape, and I've lost about a cm of width per part, so I'm using a lacing gap that is a bit wider than I planned for, but that may change as the corset gets worn, and is still within tolerance. Also, on the morning after I had finished the corset I woke up and realized that I had forgotten to add garter tabs at the bottom edge. I don't know whether I will ever use them, but I wanted the option, so maybe I'll try to add them later on, especially if I can do it without undoing the binding.

The next step would have been flossing, which I proceeded to postpone until I've worn the corset for a while: not because there is any reason for it, but because I still don't know how I want to do it :) What was left was finishing and uploading the pattern and instructions, that are now on my sewing pattern website as #FreeSoftWear, and finally I could post this on the blog.

  1. i.e. by asking my SO to cut and sand it, because I'm lazy and I hate doing that part :D

4 January 2024

Aigars Mahinovs: Figuring out finances part 5

At the end of the last part of this, we got a Home Assistant OS installation that contains in itself a Firefly III instance and that contains all the current financial information. They are communicating and calculating predictions for me. The only part that I was not 100% happy with was accounting of cash transactions. You see, payments in cash are mostly made away from a computer and sometimes even in areas without a mobile Internet connection. And all Firefly III apps that I tried failed at the task of creating a new transaction when offline. Even the one recommended Telegram bot from the Firefly III page used a dialog-based approach for creating even the simplest transaction. The issue asking for a one-shot transaction creation option remains unresolved.

Theoretically it would have been best if I could simply contribute that feature to that particular Telegram bot ... but it's written in Javascript, by mapping the API onto tasks somehow. After about 4 hours I still could not figure out where in the code anything actually happens. It all looked like just sugar or spaghetti. Connectors on connectors on mappers. So I did the real open-source thing and just wrote my own tool. firefly3_telegram_oneshot is a maximally simple Telegram bot based on the python-telegram-bot library.

So what does it do? The primary usage for me is to simply send a message to the bot at any time with text like "23.2 coffee and cake" and when the message eventually reaches the bot, it then should create a new transaction from my "cash" account to the "Unknown" account in the amount of 23.20 and with the description "coffee and cake" (a rough sketch of this idea is included at the end of this post). That is the key. Everything else in the bot is comfort. For example the "/undo" command deletes the last transaction in the cash account (presumably added by error) and "/last" shows which transaction "/undo" would delete. And to help with expense categorisation one can also send a message like "6.1 beer, dest=Edeka, cat=alcohol" that would search for a destination account that would fuzzy match to "Edeka" (a supermarket in Germany) and add the transaction to the category fuzzy matched to "alcohol", like "Shopping - alcoholic drinks". And to make that fuzzy matching more reliable I also added "/cat something" and "/dest something" commands that show which category or destination account would be matched with a given string.

All of that in around 250 lines of Python code and executed by a 17 line Dockerfile (via the Portainer on my Home Assistant OS). One remaining function that could be nice is creating a category or destination account on request (for example when the first character of the supplied string is "+"). I am really pleasantly surprised about how much can be done with how little code using the above Python library. And you never need to have any open incoming ports anywhere to run such bots, so the attack surface for such a bot-based service is much tighter.

All in all the system works and works well. The only exception is that for my particular bank there is still no automatic way of extracting data about credit card transactions. For those I still have to manually log into the Internet bank, export a CSV file and then feed that into the Firefly III importer. Annoying. And I am not really motivated to try to hack my bank :D Has this been useful to any of you? Any ideas to expand or improve what I have? Just find me as "aigarius" on any social media and speak up :)
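A rough sketch of the one-shot idea described above (this is not the actual firefly3_telegram_oneshot code; the environment variable names, account names and endpoint usage shown here are assumptions):

# Hypothetical sketch: parse "23.2 coffee and cake" and create a Firefly III withdrawal.
# Assumes FIREFLY_URL / FIREFLY_TOKEN / BOT_TOKEN are set and the source account is "cash".
import os
import requests
from telegram.ext import ApplicationBuilder, MessageHandler, filters

async def oneshot(update, context):
    # First word is the amount, the rest is the description.
    amount, _, description = update.message.text.partition(" ")
    payload = {"transactions": [{
        "type": "withdrawal",
        "amount": amount,
        "description": description or "unknown",
        "source_name": "cash",
        "destination_name": "Unknown",
        "date": update.message.date.isoformat(),
    }]}
    r = requests.post(f"{os.environ['FIREFLY_URL']}/api/v1/transactions",
                      headers={"Authorization": f"Bearer {os.environ['FIREFLY_TOKEN']}",
                               "Accept": "application/json"},
                      json=payload, timeout=30)
    await update.message.reply_text("created" if r.ok else f"error: {r.status_code}")

app = ApplicationBuilder().token(os.environ["BOT_TOKEN"]).build()
app.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, oneshot))
app.run_polling()

The real bot then layers the /undo, /last, /cat and /dest conveniences described above on top of this basic flow.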

25 December 2023

Sergio Talens-Oliag: GitLab CI/CD Tips: Automatic Versioning Using semantic-release

This post describes how I'm using semantic-release on gitlab-ci to manage versioning automatically for different kinds of projects following a simple workflow (a develop branch where changes are added or merged to test new versions, a temporary release/#.#.# branch to generate the release candidate versions and a main branch where the final versions are published).

What is semantic-release

It is a Node.js application designed to manage project versioning information on Git repositories using a Continuous Integration system (in this post we will use gitlab-ci).

How does it work

By default semantic-release uses semver for versioning (release versions use the format MAJOR.MINOR.PATCH) and commit messages are parsed to determine the next version number to publish. If after analyzing the commits the version number has to be changed, the command updates the files we tell it to (i.e. the package.json file for nodejs projects and possibly a CHANGELOG.md file), creates a new commit with the changed files, creates a tag with the new version and pushes the changes to the repository.

When running on a CI/CD system we usually generate the artifacts related to a release (a package, a container image, etc.) from the tag, as it includes the right version number and usually has passed all the required tests (it is a good idea to run the tests again in any case, as someone could create a tag manually or we could run extra jobs when building the final assets; if they fail it is not a big issue anyway, numbers are cheap and infinite, so we can skip releases if needed).

Commit messages and versioning

The commit messages must follow a known format; the default module used to analyze them uses the angular git commit guidelines, but I prefer the conventional commits one, mainly because it's a lot easier to use when you want to update the MAJOR version. The commit message format used must be:
<type>(optional scope): <description>
[optional body]
[optional footer(s)]
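For example, a made-up commit message that would trigger a MAJOR version bump (because of the BREAKING CHANGE footer) could look like this:

feat(api): replace the legacy login endpoint

The old signin endpoint has been removed.

BREAKING CHANGE: clients must switch to the new login endpoint.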
The system supports three types of branches: release, maintenance and pre-release, but for now I m not using maintenance ones. The branches I use and their types are:
  • main as release branch (final versions are published from there)
  • develop as pre release branch (used to publish development and testing versions with the format #.#.#-SNAPSHOT.#)
  • release/#.#.# as pre release branches (they are created from develop to publish release candidate versions with the format #.#.#-rc.# and once they are merged with main they are deleted)
On the release branch (main) the version number is updated as follows:
  1. The MAJOR number is incremented if a commit with a BREAKING CHANGE: footer or an exclamation (!) after the type/scope is found in the list of commits found since the last version change (it looks for tags on the same branch).
  2. The MINOR number is incremented if the MAJOR number is not going to be changed and there is a commit with type feat in the commits found since the last version change.
  3. The PATCH number is incremented if neither the MAJOR nor the MINOR numbers are going to be changed and there is a commit with type fix in the commits found since the last version change.
On the pre release branches (develop and release/#.#.#) the version and pre release numbers are always calculated from the last published version available on the branch (i.e. if we published version 1.3.2 on main we need to have the commit with that tag on the develop or release/#.#.# branch to correctly determine the next version). The version number is updated as follows:
  1. The MAJOR number is incremented if a commit with a BREAKING CHANGE: footer or an exclamation (!) after the type/scope is found in the list of commits found since the last released version. In our example it was 1.3.2 and the version is updated to 2.0.0-SNAPSHOT.1 or 2.0.0-rc.1 depending on the branch.
  2. The MINOR number is incremented if the MAJOR number is not going to be changed and there is a commit with type feat in the commits found since the last released version. In our example the release was 1.3.2 and the version is updated to 1.4.0-SNAPSHOT.1 or 1.4.0-rc.1 depending on the branch.
  3. The PATCH number is incremented if neither the MAJOR nor the MINOR numbers are going to be changed and there is a commit with type fix in the commits found since the last version change. In our example the release was 1.3.2 and the version is updated to 1.3.3-SNAPSHOT.1 or 1.3.3-rc.1 depending on the branch.
  4. The pre release number is incremented if the MAJOR, MINOR and PATCH numbers are not going to be changed but there is a commit that would otherwise update the version (i.e. a fix on 1.3.3-SNAPSHOT.1 will set the version to 1.3.3-SNAPSHOT.2, a fix or feat on 1.4.0-rc.1 will set the version to 1.4.0-rc.2 and so on).
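A small illustrative sketch of the bump rules above (this is not semantic-release's own code, and it ignores the pre release counter handling of point 4):

# Illustrative only: how the rules above pick the next version from parsed commits.
# Each commit is a (type, breaking) pair; prerelease is e.g. "SNAPSHOT" or "rc", or None on main.
def next_version(last, commits, prerelease=None):
    major, minor, patch = last
    if any(breaking for _, breaking in commits):
        nxt = (major + 1, 0, 0)
    elif any(ctype == "feat" for ctype, _ in commits):
        nxt = (major, minor + 1, 0)
    elif any(ctype == "fix" for ctype, _ in commits):
        nxt = (major, minor, patch + 1)
    else:
        return None  # nothing to release
    base = "%d.%d.%d" % nxt
    return base if prerelease is None else base + "-" + prerelease + ".1"

# e.g. next_version((1, 3, 2), [("feat", False)], "rc") -> "1.4.0-rc.1", matching the example above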

How do we manage its configuration

Although the system is designed to work with nodejs projects, it can be used with multiple programming languages and project types. For nodejs projects the usual place to put the configuration is the project's package.json, but I prefer to use the .releaserc file instead. As I use a common set of CI templates, instead of using a .releaserc on each project I generate it on the fly on the jobs that need it, replacing values related to the project type and the current branch on a template using the tmpl command (lately I use a branch of my own fork while I wait for some feedback from upstream, as you will see on the Dockerfile).

Container used to run it

As we run the command on a gitlab-ci job we use the image built from the following Dockerfile:
Dockerfile
# Semantic release image
FROM golang:alpine AS tmpl-builder
#RUN go install github.com/krakozaure/tmpl@v0.4.0
RUN go install github.com/sto/tmpl@v0.4.0-sto.2
FROM node:lts-alpine
COPY --from=tmpl-builder /go/bin/tmpl /usr/local/bin/tmpl
RUN apk update &&\
  apk upgrade &&\
  apk add curl git jq openssh-keygen yq zip &&\
  npm install --location=global\
    conventional-changelog-conventionalcommits@6.1.0\
    @qiwi/multi-semantic-release@7.0.0\
    semantic-release@21.0.7\
    @semantic-release/changelog@6.0.3\
    semantic-release-export-data@1.0.1\
    @semantic-release/git@10.0.1\
    @semantic-release/gitlab@9.5.1\
    @semantic-release/release-notes-generator@11.0.4\
    semantic-release-replace-plugin@1.2.7\
    semver@7.5.4\
  &&\
  rm -rf /var/cache/apk/*
CMD ["/bin/sh"]

How and when is it executed

The job that runs semantic-release is executed when new commits are added to the develop, release/#.#.# or main branches (basically when something is merged or pushed) and after all tests have passed (we don't want to create a new version that does not compile or does not pass at least the unit tests). The job is something like the following:
semantic_release:
  image: $SEMANTIC_RELEASE_IMAGE
  rules:
    - if: '$CI_COMMIT_BRANCH =~ /^(develop|main|release\/\d+.\d+.\d+)$/'
      when: always
  stage: release
  before_script:
    - echo "Loading scripts.sh"
    - . $ASSETS_DIR/scripts.sh
  script:
    - sr_gen_releaserc_json
    - git_push_setup
    - semantic-release
Where the SEMANTIC_RELEASE_IMAGE variable contains the URI of the image built using the Dockerfile above and the sr_gen_releaserc_json and git_push_setup are functions defined on the $ASSETS_DIR/scripts.sh file:
  • The sr_gen_releaserc_json function generates the .releaserc.json file using the tmpl command.
  • The git_push_setup function configures git to allow pushing changes to the repository with the semantic-release command, optionally signing them with a SSH key.

The sr_gen_releaserc_json function

The code for the sr_gen_releaserc_json function is the following:
sr_gen_releaserc_json()
{
  # Use nodejs as default project_type
  project_type="${PROJECT_TYPE:-nodejs}"
  # REGEX to match the rc_branch name
  rc_branch_regex='^release\/[0-9]\+\.[0-9]\+\.[0-9]\+$'
  # PATHS on the local ASSETS_DIR
  assets_dir="${CI_PROJECT_DIR}/${ASSETS_DIR}"
  sr_local_plugin="${assets_dir}/local-plugin.cjs"
  releaserc_tmpl="${assets_dir}/releaserc.json.tmpl"
  pipeline_runtime_values_yaml="/tmp/releaserc_values.yaml"
  pipeline_values_yaml="${assets_dir}/values_${project_type}_project.yaml"
  # Destination PATH
  releaserc_json=".releaserc.json"
  # Create an empty pipeline_values_yaml if missing
  test -f "$pipeline_values_yaml" || : >"$pipeline_values_yaml"
  # Create the pipeline_runtime_values_yaml file
  echo "branch: ${CI_COMMIT_BRANCH}" >"$pipeline_runtime_values_yaml"
  echo "gitlab_url: ${CI_SERVER_URL}" >>"$pipeline_runtime_values_yaml"
  # Add the rc_branch name if we are on an rc_branch
  if [ "$(echo "$CI_COMMIT_BRANCH" | sed -ne "/$rc_branch_regex/p")" ]; then
    echo "rc_branch: ${CI_COMMIT_BRANCH}" >>"$pipeline_runtime_values_yaml"
  elif [ "$(echo "$CI_MERGE_REQUEST_SOURCE_BRANCH_NAME" |
      sed -ne "/$rc_branch_regex/p")" ]; then
    echo "rc_branch: ${CI_MERGE_REQUEST_SOURCE_BRANCH_NAME}" \
      >>"$pipeline_runtime_values_yaml"
  fi
  echo "sr_local_plugin: ${sr_local_plugin}" >>"$pipeline_runtime_values_yaml"
  # Create the releaserc_json file
  tmpl -f "$pipeline_runtime_values_yaml" -f "$pipeline_values_yaml" \
    "$releaserc_tmpl" | jq . >"$releaserc_json"
  # Remove the pipeline_runtime_values_yaml file
  rm -f "$pipeline_runtime_values_yaml"
  # Print the releaserc_json file
  print_file_collapsed "$releaserc_json"
  # --*-- BEG: NOTE --*--
  # Rename the package.json to ignore it when calling semantic release.
  # The idea is that the local-plugin renames it back on the first step of the
  # semantic-release process.
  # --*-- END: NOTE --*--
  if [ -f "package.json" ]; then
    echo "Renaming 'package.json' to 'package.json_disabled'"
    mv "package.json" "package.json_disabled"
  fi
}
Almost all the variables used in the function are defined by gitlab except the ASSETS_DIR and PROJECT_TYPE; in the complete pipelines the ASSETS_DIR is defined on a common file included by all the pipelines and the project type is defined on the .gitlab-ci.yml file of each project. If you review the code you will see that the file processed by the tmpl command is named releaserc.json.tmpl; its contents are shown here:
{
  "plugins": [
    {{- if .sr_local_plugin }}
    "{{ .sr_local_plugin }}",
    {{- end }}
    [
      "@semantic-release/commit-analyzer",
      {
        "preset": "conventionalcommits",
        "releaseRules": [
          { "breaking": true, "release": "major" },
          { "revert": true, "release": "patch" },
          { "type": "feat", "release": "minor" },
          { "type": "fix", "release": "patch" },
          { "type": "perf", "release": "patch" }
        ]
      }
    ],
    {{- if .replacements }}
    [
      "semantic-release-replace-plugin",
      { "replacements": {{ .replacements | toJson }} }
    ],
    {{- end }}
    "@semantic-release/release-notes-generator",
    {{- if eq .branch "main" }}
    [
      "@semantic-release/changelog",
      { "changelogFile": "CHANGELOG.md", "changelogTitle": "# Changelog" }
    ],
    {{- end }}
    [
      "@semantic-release/git",
      {
        "assets": {{ if .assets }}{{ .assets | toJson }}{{ else }}[]{{ end }},
        "message": "ci(release): v${nextRelease.version}\n\n${nextRelease.notes}"
      }
    ],
    [
      "@semantic-release/gitlab",
      { "gitlabUrl": "{{ .gitlab_url }}", "successComment": false }
    ]
  ],
  "branches": [
    { "name": "develop", "prerelease": "SNAPSHOT" },
    {{- if .rc_branch }}
    { "name": "{{ .rc_branch }}", "prerelease": "rc" },
    {{- end }}
    "main"
  ]
}
The values used to process the template are defined on a file built on the fly (releaserc_values.yaml) that includes the following keys and values:
  • branch: the name of the current branch
  • gitlab_url: the URL of the gitlab server (the value is taken from the CI_SERVER_URL variable)
  • rc_branch: the name of the current rc branch; we only set the value if we are processing one, because semantic-release only allows one branch to match the rc prefix and, if we use a wildcard (i.e. release/*) but the users keep more than one release/#.#.# branch open at the same time, the calls to semantic-release will fail for sure.
  • sr_local_plugin: the path to the local plugin we use (shown later)
The template also uses a values_${project_type}_project.yaml file that includes settings specific to the project type; the one for nodejs is as follows:
replacements:
  - files:
      - "package.json"
    from: "\"version\": \".*\""
    to: "\"version\": \"$ nextRelease.version \""
assets:
  - "CHANGELOG.md"
  - "package.json"
The replacements section is used to update the version field on the relevant files of the project (in our case the package.json file) and the assets section includes the files that will be committed to the repository when the release is published (looking at the template you can see that the CHANGELOG.md is only updated for the main branch; we do it this way because if we update the file on other branches it creates a merge nightmare and we are only interested in it for released versions anyway). The local plugin adds code to rename the package.json_disabled file to package.json if present and prints the last and next versions on the logs with a format that can be easily parsed using sed:
local-plugin.cjs
// Minimal plugin to:
// - rename the package.json_disabled file to package.json if present
// - log the semantic-release last & next versions
function verifyConditions(pluginConfig, context) {
  var fs = require('fs');
  if (fs.existsSync('package.json_disabled')) {
    fs.renameSync('package.json_disabled', 'package.json');
    context.logger.log(`verifyConditions: renamed 'package.json_disabled' to 'package.json'`);
  }
}
function analyzeCommits(pluginConfig, context) {
  if (context.lastRelease && context.lastRelease.version) {
    context.logger.log(`analyzeCommits: LAST_VERSION=${context.lastRelease.version}`);
  }
}
function verifyRelease(pluginConfig, context) {
  if (context.nextRelease && context.nextRelease.version) {
    context.logger.log(`verifyRelease: NEXT_VERSION=${context.nextRelease.version}`);
  }
}
module.exports = {
  verifyConditions,
  analyzeCommits,
  verifyRelease
}

The git_push_setup function

The code for the git_push_setup function is the following:
git_push_setup()
{
  # Update global credentials to allow git clone & push for all the group repos
  git config --global credential.helper store
  cat >"$HOME/.git-credentials" <<EOF
https://fake-user:${GITLAB_REPOSITORY_TOKEN}@gitlab.com
EOF
  # Define user name, mail and signing key for semantic-release
  user_name="$SR_USER_NAME"
  user_email="$SR_USER_EMAIL"
  ssh_signing_key="$SSH_SIGNING_KEY"
  # Export git user variables
  export GIT_AUTHOR_NAME="$user_name"
  export GIT_AUTHOR_EMAIL="$user_email"
  export GIT_COMMITTER_NAME="$user_name"
  export GIT_COMMITTER_EMAIL="$user_email"
  # Sign commits with ssh if there is a SSH_SIGNING_KEY variable
  if [ "$ssh_signing_key" ]; then
    echo "Configuring GIT to sign commits with SSH"
    ssh_keyfile="/tmp/.ssh-id"
    : >"$ssh_keyfile"
    chmod 0400 "$ssh_keyfile"
    echo "$ssh_signing_key" | tr -d '\r' >"$ssh_keyfile"
    git config gpg.format ssh
    git config user.signingkey "$ssh_keyfile"
    git config commit.gpgsign true
  fi
}
The function assumes that the GITLAB_REPOSITORY_TOKEN variable (set on the CI/CD variables section of the project or group we want) contains a token with read_repository and write_repository permissions on all the projects we are going to use this function. The SR_USER_NAME and SR_USER_EMAIL variables can be defined on a common file or the CI/CD variables section of the project or group we want to work with and the script assumes that the optional SSH_SIGNING_KEY is exported as a CI/CD default value of type variable (that is why the keyfile is created on the fly) and git is configured to use it if the variable is not empty.
Warning: Keep in mind that the variables GITLAB_REPOSITORY_TOKEN and SSH_SIGNING_KEY contain secrets, so it is probably a good idea to make them protected (if you do that you have to make the develop, main and release/* branches protected too).
Warning: The semantic-release user has to be able to push to all the projects on those protected branches, so it is a good idea to create a dedicated user and add it as a MAINTAINER for the projects we want (the MAINTAINERS need to be able to push to the branches), or, if you are using a GitLab instance with a Premium license, you can use the API to allow the semantic-release user to push to the protected branches without allowing it for any other user.
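As a rough sketch of that last option (not from the original post; the project ID, user ID and token below are placeholders, and the exact parameters may vary with the GitLab version), the protected branches API can be used on a Premium instance to grant push access to a single user:

# Hypothetical sketch: allow user 1234 to push to the protected "main" branch of project 42.
# Placeholders only; if the branch is already protected, it may need to be unprotected first.
import requests

resp = requests.post(
    "https://gitlab.com/api/v4/projects/42/protected_branches",
    headers={"PRIVATE-TOKEN": "glpat-xxxxxxxxxxxxxxxxxxxx"},
    data={
        "name": "main",
        "allowed_to_push[][user_id]": 1234,  # the dedicated semantic-release user
    },
    timeout=30,
)
resp.raise_for_status()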

The semantic-release command

Once we have the .releaserc file and the git configuration ready we run the semantic-release command. If the branch we are working with has one or more commits that will increment the version, the tool does the following (note that the steps described are the ones executed if we use the configuration we have generated):
  1. It detects the commits that will increment the version and calculates the next version number.
  2. Generates the release notes for the version.
  3. Applies the replacements defined on the configuration (in our example updates the version field on the package.json file).
  4. Updates the CHANGELOG.md file adding the release notes if we are going to publish the file (when we are on the main branch).
  5. Creates a commit if all or some of the files listed on the assets key have changed and uses the commit message we have defined, replacing the variables for their current values.
  6. Creates a tag with the new version number and the release notes.
  7. As we are using the gitlab plugin, after tagging it also creates a release on the project with the tag name and the release notes.

Notes about the git workflows and merges between branches

It is very important to remember that semantic-release looks at the commits of a given branch when calculating the next version to publish, which has two important implications:
  1. On pre release branches we need to have the commit that includes the tag with the released version; if we don't have it, the next version is not calculated correctly.
  2. It is a bad idea to squash commits when merging a branch into another one; if we do that we will lose the information semantic-release needs to calculate the next version, and even if we use the right prefix for the squashed commit (fix, feat, ...) we miss all the messages that would otherwise go to the CHANGELOG.md file.
To make sure that we have the right commits on the pre release branches we should merge the main branch changes into the develop one after each release tag is created; in my pipelines the first job that processes a release tag creates a branch from the tag and an MR to merge it to develop. The important thing about that MR is that it must not be squashed; if we do that the tag commit will probably be lost, so we need to be careful. To merge the changes directly we can run the following code:
# Set the SR_TAG variable to the tag you want to process
SR_TAG="v1.3.2"
# Fetch all the changes
git fetch --all --prune
# Switch to the main branch
git switch main
# Pull all the changes
git pull
# Switch to the development branch
git switch develop
# Pull all the changes
git pull
# Create followup branch from tag
git switch -c "followup/$SR_TAG" "$SR_TAG"
# Change files manually & commit the changed files
git commit -a --untracked-files=no -m "ci(followup): $SR_TAG to develop"
# Switch to the development branch
git switch develop
# Merge the followup branch into the development one using the --no-ff option
git merge --no-ff "followup/$SR_TAG"
# Remove the followup branch
git branch -d "followup/$SR_TAG"
# Push the changes
git push
If we can't push directly to develop we can create an MR pushing the followup branch after committing the changes, but we have to make sure that we don't squash the commits when merging or it will not work as we want.

24 December 2023

Russ Allbery: Review: Liberty's Daughter

Review: Liberty's Daughter, by Naomi Kritzer
Publisher: Fairwood Press
Copyright: November 2023
ISBN: 1-958880-16-7
Format: Kindle
Pages: 257
Liberty's Daughter is a stand-alone near-future science fiction fix-up novel. The original stories were published in Fantasy and Science Fiction between 2012 and 2015. Beck Garrison lives on New Minerva (Min), one of a cluster of libertarian seasteads 220 nautical miles off the coast of Los Angeles. Her father brought her to Min when she was four, so it's the only life she knows. As this story opens, she's picked up a job for pocket change: finding very specific items that people want to buy. Since any new goods have to be shipped in and the seasteads have an ambiguous legal status, they don't get Amazon deliveries, but there are enough people (and enough tourists who bring high-value goods for trade) that someone probably has whatever someone else is looking for. Even sparkly high-heeled sandals size eight. Beck's father is high in the informal power structure of the seasteads for reasons that don't become apparent until very late in this book. Beck therefore has a comfortable, albeit cramped, life. The social protections, self-confidence, and feelings of invincibility that come with that wealth serve her well as a finder. After the current owner of the sandals bargains with her to find a person rather than an object, that privilege also lets her learn quite a lot before she starts getting into trouble. The political background of this novel is going to require some suspension of disbelief. The premise is that one of those harebrained libertarian schemes to form a freedom utopia has been successful enough to last for 49 years and attract 80,000 permanent residents. (It's a libertarian seastead so a lot of those residents are indentured slaves, as one does in libertarian philosophy. The number of people with shares, like Beck's father, is considerably smaller.) By the end of the book, Kritzer has offered some explanations for why the US would allow such a place to continue to exist, but the chances of the famously fractious con artists and incompetents involved in these types of endeavors creating something that survived internal power struggles for that long seem low. One has to roll with it for story reasons: Kritzer needs the population to be large enough for a plot, and the history to be long enough for Beck to exist as a character. The strength of this book is Beck, and specifically the fact that Beck is a second-generation teenager who grew up on the seastead. Unlike a lot of her age peers with their Cayman Islands vacations, she's never left and has no experience with life on land. She considers many things to be perfectly normal that are not at all normal to the reader and the various reader surrogates who show up over the course of the book. She also has the instinctive feel for seastead politics of the child of a prominent figure in a small town. And, most importantly, she has formed her own sense of morality and social structure that matches neither that of the reader nor that of her father. Liberty's Daughter is told in first-person by Beck. Judging the authenticity of Gen-Z thought processes is not one of my strengths, but Beck felt right to me. Her narration is dryly matter-of-fact, with only brief descriptions of her emotional reactions, but her personality shines in the occasional sarcasm and obstinacy. Kritzer has the teenage bafflement at the stupidity of adults down pat, as well as the tendency to jump head-first into ideas and make some decisions through sheer stubbornness. 
This is not one of those fix-up novels where the author has reworked the stories sufficiently that the original seams don't show. It is very episodic; compared to a typical novel of this length, there's more plot but less character growth. It's a good book when you want to be pulled into a stream of events that moves right along.

This is not the book for deep philosophical examinations of the basis of a moral society, but what it does have, around the edges, is the way humans build human societies and develop elaborate social conventions and senses of belonging no matter how stupid the original philosophical foundations were. Even societies built on nasty exploitation can engender a sort of loyalty. Beck doesn't support the worst parts of her weird society, but she wants to fix it, not burn it to the ground. I thought there was a profound observation there.

That brings me to my complaint: I hated the ending. Liberty's Daughter is in part Beck's fight for her own autonomy, both moral and financial, and the beginnings of an effort to turn her home into the sort of home she wants. By the end of the book, she's testing the limits of what she can accomplish, solidifying her own moral compass, and deciding how she wants to use the social position she inherited. It felt like the ending undermined all of that and treated her like a child. I know adolescence comes with those sorts of reversals, but I was still so mad.

This is particularly annoying since I otherwise want to recommend this book. It's not ground-breaking, it's not that deep, but it was a thoroughly enjoyable day's worth of entertainment with a likable protagonist. Just don't read the last chapter, I guess? Or have more tolerance than I have for people treating sixteen-year-olds as if they're not old enough to make decisions.

Content warnings: pandemic.

Rating: 7 out of 10

22 December 2023

Gunnar Wolf: Pushing some reviews this way

Over roughly the last year and a half I have been participating as a reviewer in ACM's Computing Reviews, and have even been honored as a Featured Reviewer. Given I have long enjoyed reading friends' reviews of their reading material (particularly, hats off to the very active Russ Allbery, who both beats all of my frequency expectations (I could never sustain the rhythm he reads at!) and holds documented records for his >20 years as a book reader, with far more clarity and readability than I can aim for!), I decided to explicitly share my reviews via this blog, as the audience is somewhat congruent; I will also link here some reviews that were not approved for publication, clearly marking them so. I will probably work on wrangling my Jekyll site to display an (auto-)updated page and RSS feed for the reviews. In the meantime, the reviews I have published are:

19 December 2023

Antoine Beaupré: (Re)introducing screentest

I have accidentally rewritten screentest, an old X11/GTK2 program that I was previously using to, well, test screens.

Screentest is dead

It was removed from Debian in May 2023 but had already missed two releases (Debian 11 "bullseye" and 12 "bookworm") due to release critical bugs. The stated reason for removal was:
The package is orphaned and its upstream is no longer developed. It depends on gtk2, has a low popcon and no reverse dependencies.
So I had little hope to see this program back in Debian. The git repository shows little activity, the last being two years ago. Interestingly, I do not quite remember what it was testing, but I do remember using it to find dead pixels, confirm native resolution, and do various pixel-peeping. Here's a screenshot of one of the screentest screens:

screentest screenshot showing a white-on-black checkered background, with some circles in the corners, shades of gray and colors in the middle

Now, I think it's safe to assume this program is dead and buried, and anyways I'm running wayland now, surely there's something better? Well, no. Of course not. Someone would know about it and tell me before I go on a random coding spree in a fit of procrastination... riiight? At least, the Debconf video team didn't seem to know of any replacement. They actually suggested I just "invoke gstreamer directly" and "embrace the joy of shell scripting".

Screentest reborn

So, I naively did exactly that and wrote a horrible shell script. Then I realized the next step was to write a command line parser and monitor geometry guessing, and thought "NOPE, THIS IS WHERE THE SHELL STOPS", and rewrote the whole thing in Python. Now, screentest lives as a ~400-line Python script, half of which is unit test data and command-line parsing.

Why screentest

Some smarty pants is going to complain and ask why the heck one would need something like that (and, well, someone already did), so maybe I can lay down a list of use cases:
  • testing color output, in broad terms (answering the question of "is it just me or is this projector really yellow?")
  • testing focus and keystone ("this looks blurry, can you find a nice sharp frame in that movie to adjust focus?")
  • test for native resolution and sharpness ("does this projector really support 4k for 30$? that sounds like bullcrap")
  • looking for dead pixels ("i have a new monitor, i hope it's intact")

What does screentest do?

Screentest displays a series of "patterns" on screen. The list of patterns is actually hardcoded in the script, copy-pasted from this list from the videotestsrc gstreamer plugin, but you can pass any pattern supported by your gstreamer installation with --patterns. A list of patterns relevant to your installation is available with the gst-inspect-1.0 videotestsrc command. By default, screentest goes through all patterns. Each pattern runs indefinitely until you close the window, then the next pattern starts. You can restrict to a subset of patterns, for example this would be a good test for dead pixels:
screentest --patterns black,white,red,green,blue
This would be a good sharpness test:
screentest --patterns pinwheel,spokes,checkers-1,checkers-2,checkers-4,checkers-8
A good generic test is the classic SMPTE color bars and is the first in the list, but you can run only that test with:
screentest --patterns smpte
(I will mention, by the way, that as a system administrator with decades of experience, it is nearly impossible to type SMPTE without first typing SMTP and re-typing it again a few times before I get it right. I fully expect this post to have numerous typos.)
Here's an example of the SMPTE pattern from Wikipedia: SMPTE color bars For multi-monitor setups, screentest also supports specifying which output to use as a native resolution, with --output. Failing that, it will try to look at the outputs and use the first one it finds. If it fails to find anything, you can specify a resolution with --resolution WIDTHxHEIGHT. I have tried to make it go full screen by default, but stumbled on a bug in Sway that crashes gst-launch. If your Wayland compositor supports it, you can possibly enable full screen with --sink waylandsink fullscreen=true. Otherwise it will create a new window that you will have to make fullscreen yourself. For completeness, there's also an --audio flag that will emit the classic "drone", a sine wave at 440Hz at 40% volume (the audiotestsrc gstreamer plugin). And there's a --overlay-name option to show the pattern name, in case you get lost and want to start with one of them again.

How this works

Most of the work is done by gstreamer. The script merely generates a pipeline and calls gst-launch to show the output. That both limits what it can do but also makes it much easier to use than figuring out gst-launch. There might be some additional patterns that could be useful, but I think those are better left to gstreamer. I, for example, am somewhat nostalgic of the Philips circle pattern that used to play for TV stations that were off-air in my area. But that, in my opinion, would be better added to the gstreamer plugin than into a separate thing. The script shows which command is being run, so it's a good introduction to gstreamer pipelines. Advanced users (and the video team) will possibly not need screentest and will design their own pipelines with their own tools. I've previously worked with ffmpeg pipelines (in another such procrastinated coding spree, video-proxy-magic), and I found gstreamer more intuitive, even though it might be slightly less powerful.

In retrospect, I should probably have picked a new name, to avoid clashing with the namespace already used by the project, which is now on GitHub. Who knows, it might come back to life after this blog post; it would not be the first time. For now, the project lives alongside the rest of my scripts collection but if there's sufficient interest, I might move it to its own git repository. Comments, feedback, contributions are as usual welcome. And naturally, if you know something better for this kind of stuff, I'm happy to learn more about your favorite tool!

So now I have finally found something to test my projector, which will likely confirm what I've already known all along: that it's kind of a piece of crap and I need to get a proper one.
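To illustrate the pipeline-generation approach described under "How this works" above, here is a rough sketch of the idea (this is not the actual screentest code; the function, defaults and pattern list are made up):

# Rough sketch of the idea behind screentest: build a gst-launch-1.0 pipeline
# for each requested videotestsrc pattern and run it until the window is closed.
import subprocess

def show_pattern(pattern, width=1920, height=1080, sink="autovideosink"):
    pipeline = [
        "gst-launch-1.0",
        "videotestsrc", f"pattern={pattern}",
        "!", f"video/x-raw,width={width},height={height}",
        "!", "videoconvert",
        "!", sink,
    ]
    print("running:", " ".join(pipeline))
    subprocess.run(pipeline, check=False)  # blocks until the window is closed

for pattern in ["smpte", "black", "white", "red", "green", "blue"]:
    show_pattern(pattern)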

9 December 2023

Simon Josefsson: Classic McEliece goes to IETF and OpenSSH

My earlier work on Streamlined NTRU Prime has been progressing along. The IETF document on sntrup761 in SSH has passed several process points. GnuPG's libgcrypt has added support for sntrup761. The libssh support for sntrup761 is working, but the merge request is stuck mostly due to lack of time to debug why the regression test suite sporadically errors out in non-sntrup761 related parts with the patch. The foundation for lattice-based post-quantum algorithms has some uncertainty around it, and I have felt that there is more to the post-quantum story than adding sntrup761 to implementations. Classic McEliece has been mentioned to me a couple of times, and I took some time to learn it and did a cut'n'paste job of the proposed ISO standard and published draft-josefsson-mceliece in the IETF to make the algorithm easily available to the IETF community. A high-quality implementation of Classic McEliece has been published as libmceliece and I've been supporting the work of Jan Mojžíš to package libmceliece for Debian; alas, it has been stuck in the ftp-master NEW queue for manual review for over two months. The pre-dependencies librandombytes and libcpucycles are available in Debian already. All that text writing and packaging work set the scene to write some code. When I added support for sntrup761 in libssh, I became familiar with the OpenSSH code base, so it was natural to return to OpenSSH to experiment with a new SSH KEX for Classic McEliece. DJB suggested picking mceliece6688128 and combining it with the existing X25519+sntrup761 or with plain X25519. While a three-algorithm hybrid between X25519, sntrup761 and mceliece6688128 would be a simple drop-in for those that don't want to lose the benefits offered by sntrup761, I decided to start the journey on a pure combination of X25519 with mceliece6688128. The key combiner in sntrup761x25519 is a simple SHA512 call, and the only good thing I can say about that is that it is simple to describe and implement, and doesn't raise too many questions since it is already deployed. After procrastinating coding for months, once I sat down to work it only took a couple of hours until I had a successful Classic McEliece SSH connection. I suppose my brain had sorted everything in the background before I started. To reproduce it, please try the following in a Debian testing environment (I use podman to get a clean environment).
# podman run -it --rm debian:testing-slim
apt update
apt dist-upgrade -y
apt install -y wget python3 librandombytes-dev libcpucycles-dev gcc make git autoconf libz-dev libssl-dev
cd ~
wget -q -O- https://lib.mceliece.org/libmceliece-20230612.tar.gz | tar xfz -
cd libmceliece-20230612/
./configure
make install
ldconfig
cd ..
git clone https://gitlab.com/jas/openssh-portable
cd openssh-portable
git checkout jas/mceliece
autoreconf
./configure # verify 'libmceliece support: yes'
make # CC="cc -DDEBUG_KEX=1 -DDEBUG_KEXDH=1 -DDEBUG_KEXECDH=1"
You should now have a working SSH client and server that supports Classic McEliece! Verify support by running ./ssh -Q kex and it should mention mceliece6688128x25519-sha512@openssh.com. To have it print plenty of debug output, you may remove the # character on the final line, but don't use such a build in production. You can test it as follows:
./ssh-keygen -A # writes to /usr/local/etc/ssh_host_...
# setup public-key based login by running the following:
./ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ""
cat ~/.ssh/id_rsa.pub > ~/.ssh/authorized_keys
adduser --system sshd
mkdir /var/empty
while true; do $PWD/sshd -p 2222 -f /dev/null; done &
./ssh -v -p 2222 localhost -oKexAlgorithms=mceliece6688128x25519-sha512@openssh.com date
On the client you should see output like this:
OpenSSH_9.5p1, OpenSSL 3.0.11 19 Sep 2023
...
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: algorithm: mceliece6688128x25519-sha512@openssh.com
debug1: kex: host key algorithm: ssh-ed25519
debug1: kex: server->client cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
debug1: kex: client->server cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug1: SSH2_MSG_KEX_ECDH_REPLY received
debug1: Server host key: ssh-ed25519 SHA256:YognhWY7+399J+/V8eAQWmM3UFDLT0dkmoj3pIJ0zXs
...
debug1: Host '[localhost]:2222' is known and matches the ED25519 host key.
debug1: Found key in /root/.ssh/known_hosts:1
debug1: rekey out after 134217728 blocks
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: rekey in after 134217728 blocks
...
debug1: Sending command: date
debug1: pledge: fork
debug1: permanently_set_uid: 0/0
Environment:
  USER=root
  LOGNAME=root
  HOME=/root
  PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin
  MAIL=/var/mail/root
  SHELL=/bin/bash
  SSH_CLIENT=::1 46894 2222
  SSH_CONNECTION=::1 46894 ::1 2222
debug1: client_input_channel_req: channel 0 rtype exit-status reply 0
debug1: client_input_channel_req: channel 0 rtype eow@openssh.com reply 0
Sat Dec  9 22:22:40 UTC 2023
debug1: channel 0: free: client-session, nchannels 1
Transferred: sent 1048044, received 3500 bytes, in 0.0 seconds
Bytes per second: sent 23388935.4, received 78108.6
debug1: Exit status 0
Notice the kex: algorithm: mceliece6688128x25519-sha512@openssh.com output. How about network bandwidth usage? Below is a comparison of a complete SSH client connection such as the one above, which logs in, prints the date, and logs out. Plain X25519 is around 7kb, X25519 with sntrup761 is around 9kb, and mceliece6688128 with X25519 is around 1MB. Yes, Classic McEliece has large keys, but for many environments, 1MB of data for the session establishment will barely be noticeable.
./ssh -v -p 2222 localhost -oKexAlgorithms=curve25519-sha256 date 2>&1 | grep ^Transferred
Transferred: sent 3028, received 3612 bytes, in 0.0 seconds
./ssh -v -p 2222 localhost -oKexAlgorithms=sntrup761x25519-sha512@openssh.com date 2>&1 | grep ^Transferred
Transferred: sent 4212, received 4596 bytes, in 0.0 seconds
./ssh -v -p 2222 localhost -oKexAlgorithms=mceliece6688128x25519-sha512@openssh.com date 2>&1 | grep ^Transferred
Transferred: sent 1048044, received 3764 bytes, in 0.0 seconds
So how about session establishment time?
date; i=0; while test $i -le 100; do ./ssh -v -p 2222 localhost -oKexAlgorithms=curve25519-sha256 date > /dev/null 2>&1; i=$(expr $i + 1); done; date
Sat Dec  9 22:39:19 UTC 2023
Sat Dec  9 22:39:25 UTC 2023
# 6 seconds
date; i=0; while test $i -le 100; do ./ssh -v -p 2222 localhost -oKexAlgorithms=sntrup761x25519-sha512@openssh.com date > /dev/null 2>&1; i=$(expr $i + 1); done; date
Sat Dec  9 22:39:29 UTC 2023
Sat Dec  9 22:39:38 UTC 2023
# 9 seconds
date; i=0; while test $i -le 100; do ./ssh -v -p 2222 localhost -oKexAlgorithms=mceliece6688128x25519-sha512@openssh.com date > /dev/null 2>&1; i=$(expr $i + 1); done; date
Sat Dec  9 22:39:55 UTC 2023
Sat Dec  9 22:40:07 UTC 2023
# 12 seconds
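A slightly tidier way to take the same measurement, assuming the same sshd is still listening on port 2222 and you are running bash, is to let the shell's time builtin do the bookkeeping:
time for i in $(seq 1 100); do ./ssh -p 2222 localhost -oKexAlgorithms=mceliece6688128x25519-sha512@openssh.com date > /dev/null 2>&1; done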
I never noticed adding sntrup761, so I'm pretty sure I wouldn't notice this increase either. This is all running on my laptop that runs Trisquel, so take it with a grain of salt, but at least the magnitude is clear. Future work items include: Happy post-quantum SSH'ing! Update: Changing the mceliece6688128_keypair call to mceliece6688128f_keypair (i.e., using the fully compatible f-variant) results in McEliece being just as fast as sntrup761 on my machine. Update 2023-12-26: An initial IETF document draft-josefsson-ssh-mceliece-00 was published.

26 November 2023

Niels Thykier: Providing online reference documentation for debputy

I do not think seasoned Debian contributors quite appreciate how much knowledge we have picked up and internalized. As an example, when I need to look up documentation for debhelper, I generally know which manpage to look in. I suspect most long-time contributors would be able to do a similar thing (maybe down to 2-3 manpages). But new contributors do not have the luxury of years of experience. This problem is by no means unique to debhelper. One thing that debhelper does very well is that it is hard for users to tell where an addon "starts" and debhelper "ends". It is clear you use addons, but the transition in and out of third party provided tools is generally smooth. This is a sign that things "just work(tm)". Except when it comes to documentation. Here, debhelper's static documentation does not include documentation for third party tooling. If you think from a debhelper maintainer's perspective, this seems obvious. Embedding documentation for all the third-party code would be very hard work, a layer-violation, etc. But from a user perspective, we should not have to care "who" provides "what". As a user, I want to understand how this works and the more hoops I have to jump through to get that understanding, the more frustrated I will be with the toolstack. With this, I came to the conclusion that the best way to help users and solve the problem of finding the documentation was to provide "online documentation". It should be possible to ask debputy, "What attributes can I use in install-man?" or "What does path-metadata do?". Additionally, the lookup should work the same no matter if debputy provided the feature or some third-party plugin did. In the future, perhaps also other types of documentation such as tutorials or how-to guides. Below, I have some tentative results of my work so far. There are some improvements to be done. Notably, the commands for these documentation features are still treated as "plugin" subcommand features and should probably get their own top-level "ask-me-anything" subcommand in the future.
Automatic discard rules Since the introduction of install rules, debputy has included an automatic filter mechanism that prunes out unwanted content. In 0.1.9, these filters have been named "Automatic discard rules" and you can now ask debputy to list them.
$ debputy plugin list automatic-discard-rules
+-----------------------+-------------+
  Name                    Provided By  
+-----------------------+-------------+
  python-cache-files      debputy      
  la-files                debputy      
  backup-files            debputy      
  version-control-paths   debputy      
  gnu-info-dir-file       debputy      
  debian-dir              debputy      
  doxygen-cruft-files     debputy      
+-----------------------+-------------+
For these rules, the provider can supply both a description and an example of their usage.
$ debputy plugin show automatic-discard-rules la-files
Automatic Discard Rule: la-files
================================
Documentation: Discards any .la files beneath /usr/lib
Example
-------
    /usr/lib/libfoo.la        << Discarded (directly by the rule)
    /usr/lib/libfoo.so.1.0.0
The example is a live example. That is, the provider will provide debputy with a scenario and the expected outcome of that scenario. Here is the concrete code in debputy that registers this example:
api.automatic_discard_rule(
    "la-files",
    _debputy_prune_la_files,
    rule_reference_documentation="Discards any .la files beneath /usr/lib",
    examples=automatic_discard_rule_example(
        "usr/lib/libfoo.la",
        ("usr/lib/libfoo.so.1.0.0", False),
    ),
)
When showing the example, debputy will validate that the example matches what the plugin provider intended. Let's say I were to introduce a bug in the code so that the discard rule no longer worked. Then debputy would start to show the following:
# Output if the code or example is broken
$ debputy plugin show automatic-discard-rules la-files
[...]
Automatic Discard Rule: la-files
================================
Documentation: Discards any .la files beneath /usr/lib
Example
-------
    /usr/lib/libfoo.la        !! INCONSISTENT (code: keep, example: discard)
    /usr/lib/libfoo.so.1.0.0
debputy: warning: The example was inconsistent. Please file a bug against the plugin debputy
Obviously, it would be better if this validation could be added directly as a plugin test, so the CI pipeline would catch it. That is on my personal TODO list. :) One final remark about automatic discard rules before moving on. In 0.1.9, debputy will also list any path automatically discarded by one of these rules in the build output to make sure that the automatic discard rule feature is more discoverable.
Plugable manifest rules like the install rule In the manifest, there are several places where rules can be provided by plugins. To make life easier for users, debputy can now (since 0.1.8) list all provided rules:
$ debputy plugin list plugable-manifest-rules
+-------------------------------+------------------------------+-------------+
  Rule Name                       Rule Type                      Provided By  
+-------------------------------+------------------------------+-------------+
  install                         InstallRule                    debputy      
  install-docs                    InstallRule                    debputy      
  install-examples                InstallRule                    debputy      
  install-doc                     InstallRule                    debputy      
  install-example                 InstallRule                    debputy      
  install-man                     InstallRule                    debputy      
  discard                         InstallRule                    debputy      
  move                            TransformationRule             debputy      
  remove                          TransformationRule             debputy      
  [...]                           [...]                          [...]        
  remove                          DpkgMaintscriptHelperCommand   debputy      
  rename                          DpkgMaintscriptHelperCommand   debputy      
  cross-compiling                 ManifestCondition              debputy      
  can-execute-compiled-binaries   ManifestCondition              debputy      
  run-build-time-tests            ManifestCondition              debputy      
  [...]                           [...]                          [...]        
+-------------------------------+------------------------------+-------------+
(Output trimmed a bit for space reasons) And you can then ask debputy to describe any of these rules:
$ debputy plugin show plugable-manifest-rules install
Generic install ( install )
===========================
The generic  install  rule can be used to install arbitrary paths into packages
and is *similar* to how  dh_install  from debhelper works.  It is a two "primary" uses.
  1) The classic "install into directory" similar to the standard  dh_install 
  2) The "install as" similar to  dh-exec 's  foo => bar  feature.
Attributes:
 -  source  (conditional): string
    sources  (conditional): List of string
   A path match ( source ) or a list of path matches ( sources ) defining the
   source path(s) to be installed. [...]
 -  dest-dir  (optional): string
   A path defining the destination *directory*. [...]
 -  into  (optional): string or a list of string
   A path defining the destination *directory*. [...]
 -  as  (optional): string
   A path defining the path to install the source as. [...]
 -  when  (optional): manifest condition (string or mapping of string)
   A condition as defined in [Conditional rules](https://salsa.debian.org/debian/debputy/-/blob/main/MANIFEST-FORMAT.md#Conditional rules).
This rule enforces the following restrictions:
 - The rule must use exactly one of:  source ,  sources 
 - The attribute  as  cannot be used with any of:  dest-dir ,  sources 
[...]
(Output trimmed a bit for space reasons) All the attributes and restrictions are auto-computed by debputy from information provided by the plugin. The associated documentation for each attribute is supplied by the plugin itself. The debputy API validates that all attributes are covered and that the documentation does not describe non-existing fields. This ensures that you as a plugin provider never forget to document new attributes when you add them later. The debputy API for manifest rules is not quite stable yet, so currently only debputy provides rules here. However, it is my intention to lift that restriction in the future. I got the idea of supporting online validated examples when I was building this feature. However, sadly, I have not gotten around to supporting it yet.
Manifest variables like PACKAGE I also added a similar documentation feature for manifest variables such as PACKAGE. When I implemented this, I realized listing all manifest variables by default would probably be counterproductive to new users. As an example, if you list all variables by default it would include DEB_HOST_MULTIARCH (the most common case) side-by-side with the much less used DEB_BUILD_MULTIARCH and the even less used DEB_TARGET_MULTIARCH variable. Having them side-by-side implies they are of equal importance, which they are not. As an example, the ballpark number of unique packages for which DEB_TARGET_MULTIARCH is useful can be counted on two hands (and maybe two feet if you consider gcc-X distinct from gcc-Y). This is one of the cases where experience makes us blind. Many of us probably have the "show me everything and I will find what I need" mentality. But that requires experience to pull off - especially if all alternatives are presented as equals. The cross-building terminology has notoriously proven to match poorly to people's expectations. Therefore, I made a deliberate choice to reduce the list of shown variables by default and to explicitly list in the output what filters were active. In the current version of debputy (0.1.9), the listing of manifest-variables looks something like this:
$ debputy plugin list manifest-variables
+----------------------------------+----------------------------------------+------+-------------+
  Variable (use via: {{ NAME }})     Value                                    Flag   Provided by  
+----------------------------------+----------------------------------------+------+-------------+
  DEB_HOST_ARCH                      amd64                                           debputy      
  [... other DEB_HOST_* vars ...]    [...]                                           debputy      
  DEB_HOST_MULTIARCH                 x86_64-linux-gnu                                debputy      
  DEB_SOURCE                         debputy                                         debputy      
  DEB_VERSION                        0.1.8                                           debputy      
  DEB_VERSION_EPOCH_UPSTREAM         0.1.8                                           debputy      
  DEB_VERSION_UPSTREAM               0.1.8                                           debputy      
  DEB_VERSION_UPSTREAM_REVISION      0.1.8                                           debputy      
  PACKAGE                            <package-name>                                  debputy      
  path:BASH_COMPLETION_DIR           /usr/share/bash-completion/completions          debputy      
+----------------------------------+----------------------------------------+------+-------------+
+-----------------------+--------+-------------------------------------------------------+
  Variable type           Value    Option                                                 
+-----------------------+--------+-------------------------------------------------------+
  Token variables         hidden   --show-token-variables OR --show-all-variables         
  Special use variables   hidden   --show-special-case-variables OR --show-all-variables  
+-----------------------+--------+-------------------------------------------------------+
I will probably tweak the concrete listing in the future. Personally, I am considering providing shorthand variables for some of the DEB_HOST_* variables and then hiding the DEB_HOST_* group from the default view as well. Maybe something like ARCH and MULTIARCH, which would default to their DEB_HOST_* counterparts. Such a variable could then have extended documentation that highlights DEB_HOST_<X> as its source and implies that there are special cases for cross-building where you might need DEB_BUILD_<X> or DEB_TARGET_<X>. Speaking of variable documentation, you can also look up the documentation for a given manifest variable:
$ debputy plugin show manifest-variables path:BASH_COMPLETION_DIR
Variable: path:BASH_COMPLETION_DIR
==================================
Documentation: Directory to install bash completions into
Resolved: /usr/share/bash-completion/completions
Plugin: debputy
This was my update on online reference documentation for debputy. I hope you found it useful. :)
Thanks On a closing note, I would like to thank Jochen Sprickerhof, Andres Salomon, and Paul Gevers for their recent contributions to debputy. Jochen and Paul provided a number of real-world cases where debputy would crash or not work, which have now been fixed. Andres and Paul also provided corrections to the documentation.

21 November 2023

Russ Allbery: Review: Thud!

Review: Thud!, by Terry Pratchett
Series: Discworld #34
Publisher: Harper
Copyright: October 2005
Printing: November 2014
ISBN: 0-06-233498-0
Format: Mass market
Pages: 434
Thud! is the 34th Discworld novel and the seventh Watch novel. It is partly a sequel to The Fifth Elephant, partly a sequel to Night Watch, and references many of the previous Watch novels. This is not a good place to start. Dwarfs and trolls have a long history of conflict, as one might expect between a race of creatures who specialize in mining and a race of creatures whose vital organs are sometimes the targets of that mining. The first battle of Koom Valley was the place where that enmity was made concrete and given a symbol. Now that there are large dwarf and troll populations in Ankh-Morpork, the upcoming anniversary of that battle is the excuse for rising tensions. Worse, Grag Hamcrusher, a revered deep-down dwarf and a dwarf supremacist, is giving incendiary speeches about killing all trolls and appears to be tunneling under the city. Then whispers run through the city's dwarfs that Hamcrusher has been murdered by a troll. Vimes has no patience for racial tensions, or for the inspection of the Watch by one of Vetinari's excessively competent clerks, or the political pressure to add a vampire to the Watch over his prejudiced objections. He was already grumpy before the murder and is in absolutely no mood to be told by deep-down dwarfs who barely believe that humans exist that the murder of a dwarf underground is no affair of his. Meanwhile, The Battle of Koom Valley by Methodia Rascal has been stolen from the Ankh-Morpork Royal Art Museum, an impressive feat given that the painting is ten feet high and fifty feet long. It was painted in impressive detail by a madman who thought he was a chicken, and has been the spark for endless theories about clues to some great treasure or hidden knowledge, culminating in the conspiratorial book Koom Valley Codex. But the museum prides itself on allowing people to inspect and photograph the painting to their heart's content and was working on a new room to display it. It's not clear why someone would want to steal it, but Colon and Nobby are on the case. This was a good time to read this novel. Sadly, the same could be said of pretty much every year since it was written. "Thud" in the title is a reference to Hamcrusher's murder, which was supposedly done by a troll club that was found nearby, but it's also a reference to a board game that we first saw in passing in Going Postal. We find out a lot more about Thud in this book. It's an asymmetric two-player board game that simulates a stylized battle between dwarf and troll forces, with one player playing the trolls and the other playing the dwarfs. The obvious comparison is to chess, but a better comparison would be to the old Steve Jackson Games board game Ogre, which also featured asymmetric combat mechanics. (I'm sure there are many others.) This board game will become quite central to the plot of Thud! in ways that I thought were ingenious. I thought this was one of Pratchett's best-plotted books to date. There are a lot of things happening, involving essentially every member of the Watch that we've met in previous books, and they all matter and I was never confused by how they fit together. This book is full of little callbacks and apparently small things that become important later in a way that I found delightful to read, down to the children's book that Vimes reads to his son and that turns into the best scene of the book. At this point in my Discworld read-through, I can see why the Watch books are considered the best sub-series. 
It feels like Pratchett kicks the quality of writing up a notch when he has Vimes as a protagonist. In several books now, Pratchett has created a villain by taking some human characteristic and turning it into an external force that acts on humans. (See, for instance, the Gonne in Men at Arms, or the hiver in A Hat Full of Sky.) I normally do not like this plot technique, both because I think it lets humans off the hook in a way that cheapens the story and because this type of belief has a long and bad reputation in religions where it is used to dodge personal responsibility and dehumanize one's enemies. When another of those villains turned up in this book, I was dubious. But I think Pratchett pulls off this type of villain as well here as I've seen it done. He lifts up a facet of humanity to let the reader get a better view, but somehow makes it explicit that this is a concretized metaphor. This force is something people create and feed and choose and therefore are responsible for. The one sour note that I do have to complain about is that Pratchett resorts to some cheap and annoying "men are from Mars, women are from Venus" nonsense, mostly around Nobby's subplot but in a few other places (Sybil, some of Angua's internal monologue) as well. It's relatively minor, and I might let it pass without grumbling in other books, but usually Pratchett is better on gender than this. I expected better and it got under my skin. Otherwise, though, this was a quietly excellent book. It doesn't have the emotional gut punch of Night Watch, but the plotting is superb and the pacing is a significant improvement over The Fifth Elephant. The parody is of The Da Vinci Code, which is both more interesting than Pratchett's typical movie parodies and delightfully subtle. We get more of Sybil being a bad-ass, which I am always here for. There's even some lovely world-building in the form of dwarven Devices. I love how Pratchett has built Vimes up into one of the most deceptively heroic figures on Discworld, but also shows all of the support infrastructure that ensures Vimes maintains his principles. On the surface, Thud! has a lot in common with Vimes's insistently moral stance in Jingo, but here it is more obvious how Vimes's morality happens in part because his wife, his friends, and his boss create the conditions for it to thrive. Highly recommended to anyone who has gotten this far. Rating: 9 out of 10

11 November 2023

Gunnar Wolf: There once was a miniDebConf in Uruguay...

Meeting Debian people for having a good time together, for some good hacking, for learning, for teaching is always fun and welcome. It brings energy, life and joy. And this year, due to the six-months-long relocation my family and I decided to have to Argentina, I was unable to attend the real deal, DebConf23 in India. And while I know DebConf is an experience like no other, this year I took part in two miniDebConfs. One I have already shared in this same blog: I was in MiniDebConf Tamil Nadu in India, followed by some days of pre-DebConf preparation and scouting in Kochi proper, where I got to interact with the absolutely great and loving team that prepared DebConf. The other one is still ongoing (but close to finishing). Some months ago, I talked with Santiago Ruano, joking as we were Spanish-speaking DDs announcing to the debian-private mailing list we'd be relocating to around Río de la Plata. And things worked out normally: He has been for several months in Uruguay already, so he decided to rent a house for some days, and invite Debian people to do what we do best. I left Paraná Tuesday night (and missed my online class at UNAM! Well, you cannot have everything, right?). I arrived early on Wednesday, and around noon came to the house of the keysigning (well, the place is properly called "Casa Key"; it's a publicity agency that is also rented as a guesthouse in a very nice area of Montevideo, close to Nuevo Pocitos beach). In case you don't know it, Montevideo is on the Northern (or Eastern) shore of Río de la Plata, the widest river in the world (up to 300Km wide, with current and non-salty water). But most important for some Debian contributors: You can even come here by boat! That first evening, we received Ilu, who was in Uruguay by chance for other issues (and we were very happy about it!) and a young and enthusiastic Uruguayan, Felipe, interested in getting involved in Debian. We spent the evening talking about life, the universe and everything, which was a bit tiring, as I had to interface between Spanish and English, talking with two friends that didn't share a common language. On Thursday morning, I went out for an early walk at the beach. And let's say, if only just for the narrative, that I found a lost penguin emerging from Río de la Plata! For those that don't know (who'd be most of you, as he has not been seen at Debian events for 15 years), that's Lisandro Damián Nicanor Pérez Meyer (or just lisandro), long-time maintainer of the Qt ecosystem, and one of our embedded world extraordinaires. So, after we got him dry and fed him fresh river fishes, he gave us a great impromptu talk about understanding and finding our way around the Device Tree Source files for development boards and similar machines, mostly in the ARM world. From Argentina, we also had Emanuel (eamanu) crossing all the way from La Rioja. I spent most of our first workday getting my laptop in shape to be useful as the driver for my online class on Thursday (which is no small feat; people that know the particularities of my much loved ARM-based laptop will understand), and running a set of tests again on my Raspberry Pi laboratory, which I had not updated in several months. I am happy to say we are finally also building Raspberry images for Trixie (Debian 13, Testing)! Sadly, I managed to burn my USB-to-serial-console (UART) adaptor, and could neither test those, nor the oldstable ones we are still building (and will probably soon be dropped, if not for anything else, to save disk space).
We enjoyed a lot of socialization time. An important highlight of the conference for me was that we reconnected with a long-lost DD, Eduardo Trápani, and got him interested in getting involved in the project again! This second day, another local Uruguayan, Mauricio, joined us together with his girlfriend, Alicia, and Felipe came again to hang out with us. Sadly, we didn't get photographic evidence of them (nor the permission to post it). The nice house Santiago got for us was very well equipped for a miniDebConf. There were a couple of rounds of pool played by those that enjoyed it (I was very happy just to stand around, take some photos and enjoy the atmosphere and the conversation). Today (Saturday) is the last full-house day of miniDebConf; tomorrow we will be leaving the house by noon. It was also a very productive day! We had a long conversation about an important discussion that we are about to present on debian-vote@lists.debian.org. It has been a great couple of days! Sadly, it's coming to an end. But this at least gives me the opportunity (and moral obligation!) to write a long blog post. And to thank Santiago for organizing this, and Debian, for sponsoring our trip, stay, food and healthy enjoyment!

23 October 2023

Russ Allbery: Review: Going Postal

Review: Going Postal, by Terry Pratchett
Series: Discworld #33
Publisher: Harper
Copyright: October 2004
Printing: November 2014
ISBN: 0-06-233497-2
Format: Mass market
Pages: 471
Going Postal is the 33rd Discworld novel. You could probably start here if you wanted to; there are relatively few references to previous books, and the primary connection (to Feet of Clay) is fully re-explained. I suspect that's why Going Postal garnered another round of award nominations. There are arguable spoilers for Feet of Clay, however. Moist von Lipwig is a con artist. Under a wide variety of names, he's swindled and forged his way around the Disc, always confident that he can run away from or talk his way out of any trouble. As Going Postal begins, however, it appears his luck has run out. He's about to be hanged. Much to his surprise, he wakes up after his carefully performed hanging in Lord Vetinari's office, where he's offered a choice. He can either take over the Ankh-Morpork post office, or he can die. Moist, of course, immediately agrees to run the post office, and then leaves town at the earliest opportunity, only to be carried back into Vetinari's office by a relentlessly persistent golem named Mr. Pump. He apparently has a parole officer. The clacks, Discworld's telegraph system first seen in The Fifth Elephant, has taken over most communications. The city is now dotted with towers, and the Grand Trunk can take them at unprecedented speed to even far-distant cities like Genua. The post office, meanwhile, is essentially defunct, as Moist quickly discovers. There are two remaining employees, the highly eccentric Junior Postman Groat who is still Junior because no postmaster has lasted long enough to promote him, and the disturbingly intense Apprentice Postman Stanley, who collects pins. Other than them, the contents of the massive post office headquarters are a disturbing mail sorting machine designed by Bloody Stupid Johnson that is not picky about which dimension or timeline the sorted mail comes from, and undelivered mail. A lot of undelivered mail. Enough undelivered mail that there may be magical consequences. All Moist has to do is get the postal system running again. Somehow. And not die in mysterious accidents like the previous five postmasters. Going Postal is a con artist story, but it's also a startup and capitalism story. Vetinari is, as always, solving a specific problem in his inimitable indirect way. The clacks were created by engineers obsessed with machinery and encodings and maintenance, but it's been acquired by... well, let's say private equity, because that's who they are, although Discworld doesn't have that term. They immediately did what private equity always did: cut out everything that didn't extract profit, without regard for either the service or the employees. Since the clacks are an effective monopoly and the new owners are ruthless about eliminating any possible competition, there isn't much to stop them. Vetinari's chosen tool is Moist. There are some parts of this setup that I love and one part that I'm grumbly about. A lot of the fun of this book is seeing Moist pulled into the mission of resurrecting the post office despite himself. He starts out trying to wriggle out of his assigned task, but, after a few early successes and a supernatural encounter with the mail, he can't help but start to care. Reformed con men often make good protagonists because one can enjoy the charisma without disliking the ethics. Pratchett adds the delightfully sharp-witted and cynical Adora Belle Dearheart as a partial reader stand-in, which makes the process of Moist becoming worthy of his protagonist role even more fun. 
I think that a properly functioning postal service is one of the truly monumental achievements of human society and doesn't get nearly enough celebration (or support, or pay, or good working conditions). Give me a story about reviving a postal service by someone who appreciates the tradition and social role as much as Pratchett clearly does and I'm there. The only frustration is that Going Postal is focused more on an immediate plot, so we don't get to see the larger infrastructure recovery that is clearly needed. (Maybe in later books?) That leads to my grumble, though. Going Postal and specifically the takeover of the clacks is obviously inspired by corporate structures in the later Industrial Revolution, but this book was written in 2004, so it's also a book about private equity and startups. When Vetinari puts a con man in charge of the post office, he runs it like a startup: do lots of splashy things to draw attention, promise big and then promise even bigger, stumble across a revenue source that may or may not be sustainable, hire like mad, and hope it all works out. This makes for a great story in the same way that watching trapeze artists or tightrope walkers is entertaining. You know it's going to work because that's the sort of book you're reading, so you can enjoy the audacity and wonder how Moist will manage to stay ahead of his promises. But it is still a con game applied to a public service, and the part of me that loves the concept of the postal service couldn't stop feeling like this is part of the problem. The dilemma that Vetinari is solving is a bit too realistic, down to the requirement that the post office be self-funding and not depend on city funds and, well, this is repugnant to me. Public services aren't businesses. Societies spend money to build things that they need to maintain society, and postal service is just as much one of those things as roads are. The ability of anyone to send a letter to anyone else, no matter how rural the address is, provides infrastructure on which a lot of important societal structure is built. Pratchett made me care a great deal about Ankh-Morpork's post office (not hard to do), and now I want to see it rebuilt properly, on firm foundations, without splashy promises and without a requirement that it pay for itself. Which I realize is not the point of Discworld at all, but the concept of running a postal service like a startup hits maybe a bit too close to home. Apart from that grumble, this is a great book if you're in the mood for a reformed con man story. I thought the gold suit was a bit over the top, but I otherwise thought Moist's slow conversion to truly caring about his job was deeply satisfying. The descriptions of the clacks are full of askew Discworld parodies of computer networking and encoding that I enjoyed more than I thought I would. This is also the book that introduced the now-famous (among Pratchett fans at least) GNU instruction for the clacks, and I think that scene is the most emotionally moving bit of Pratchett outside of Night Watch. Going Postal is one of the better books in the Discworld series to this point (and I'm sadly getting near the end). If you have less strongly held opinions about management and funding models for public services, or at least are better at putting them aside when reading fantasy novels, you're likely to like it even more than I did. Recommended. Followed by Thud!. The thematic sequel is Making Money. Rating: 8 out of 10

22 October 2023

Aigars Mahinovs: Figuring out finances part 3

So now that I have something that looks very much like a budgeting setup going, I am going to .. delete it! Why? Well, at the end of the last part of this, the Firefly III instance was running on a tiny Debian server in a Docker container right next to another Docker container that is running the main user of this server - a Home Assistant instance that has been managing my home for several years already. So why change that? See, there is one bit of knowledge that is very crucial to your Home Assistant experience, which is not really emphasised enough in the Home Assistant documentation. In fact, back when I was getting into Home Assistant, both the main documentation and basically all the guides around were just coming off the hype of Docker disrupting everything, and that is a big reason why everyone suggested installing and using Home Assistant as a Docker container on top of any kind of stable OS. In fact I used to run it for years on my TerraMaster NAS, just so that I don't have a separate home server running 24/7 at home and just have everything inside the very compact NAS case. So here is the thing you NEED to know - Home Assistant Container is the DEMO version of Home Assistant! If you want to have a full Home Assistant experience and use the knowledge of the huge community around the HA space, you have to use the Home Assistant OS. Ideally on dedicated hardware. Ideally on an HA Green box, but any tiny PC would also work great. Raspberry Pi 4+ is common, but quite weak as the network size grows, and especially the SD card used for storage gets old very fast. Get a real small x86 PC with at least 4Gb RAM and an NVMe SSD (eMMC is fine too). You want to have an Ethernet port and a few free USB ports. I would also suggest immediately getting the HA SkyConnect adapter that can do Zigbee networking and will do Matter soon (tm). I am making do with a SonOff Zigbee gateway, but it is quite hacky to get working and your whole Zigbee communication breaks down if the WiFi goes down - suboptimal. So I took a backup of the Home Assistant instance using its built-in tools. I took an export of my fully configured Firefly III instance and proceeded to wipe the drive of the NUC. That was not a smart idea. :D On the Home Assistant side I was really frustrated by the documentation that was really focused on users that are (likely) using Windows and are using an SD card in something like a Raspberry Pi to get Home Assistant OS running. It recommended downloading Etcher to write the image to the boot medium. That is a really weird piece of software that managed to actually crash consistently when I was trying to run it from Debian Live or Ubuntu Live on my NUC. It took me way too long to give up and try something much simpler - dd. xzcat haos_generic-x86-64-11.0.img.xz | dd of=/dev/mmcblk0 bs=1M That just worked, perfectly and really fast. If you want to use a GUI in a live environment, then just using the gnome-disk-utility ("Disks" in Gnome menu) and using the "Restore Disk Image ..." on a partition would work just as well. It even supports decompressing the XZ images directly while writing. But that image is small, will it not have a ton of unused disk space behind the fixed install partition? Yes, it will ... until first boot. The HA OS takes over the empty space after its install partition on the first boot-up and just grows its main partition to take up all the remaining space. Smart.
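For completeness, here is the same write spelled out with a couple of extra safety steps (a sketch; /dev/mmcblk0 is just an example and must be replaced with whatever your target disk actually is):
lsblk   # double-check which device is the target disk before writing
xzcat haos_generic-x86-64-11.0.img.xz | dd of=/dev/mmcblk0 bs=1M status=progress
sync    # make sure everything is flushed before rebooting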
After first boot is completed, the first boot wizard can be accessed via your web browser, and one of the prominent buttons there is restoring from backup. So you just give it the backup file and wait. Sadly the restore does not actually give any kind of progress, so your only way to figure out when it is done is opening the same web address in another browser tab and refreshing periodically - after restoring from backup it just boots into the same config as it had before - all the settings, all the devices, all the history is preserved. Even authentication tokens are preserved, so if you had the Home Assistant Mobile app installed on your phone (both for remote access and to send location info and phone state, like charging, to HA to trigger automations) then it will just suddenly start working again without further actions needed from your side. That is an almost perfect backup/restore experience. The first thing you get for using the OS version of HA is easy automatic updates that also automatically take a backup before an upgrade, so if anything breaks you can roll back with one click. There is also a command-line tool that allows you to upgrade, but also downgrade, ha-core and other modules. I had to use it today as HA version 23.10.4 actually broke support for the Sonoff bridge that I am using to control Zigbee devices, which are like 90% of all smart devices in my home. Really helpful stuff, but not a must-have. What is a must-have, and what you can (really) only get with the Home Assistant Operating System, are Addons. Some addons are just normal servers you can run alongside HA on the same HA OS server, like MariaDB or Plex or a file server. That is not the most important bit, but even there the software comes pre-configured for use in a home server configuration and has a very simple config UI to pre-configure key settings, like users, passwords and database accesses for MariaDB - you can literally, in a few clicks and a few strings, make several users, each with its own access to its own database. A couple more clicks and the DB is running and will be kept restarted in case of failures. But the real gems in the Home Assistant Addon Store are modules that extend Home Assistant core functionality in ways that would be really hard or near impossible to configure in Home Assistant Container manually, especially because no documentation has ever existed for such manual config - everyone just tells you to install the addon from the HA Addon store or from HACS. Or you can read the addon metadata in various repos and figure out what containers it actually runs with what settings and configs and what hooks it puts into the HA Core to make them cooperate. And then do it all over again when a new version breaks everything 6 months later, when you have already forgotten everything. Among the Addons that show up immediately after installation are addons to install the new Matter server, a MariaDB and MQTT server (that other addons can use for data storage and message exchange), Z-Wave support and ESPHome integration, a very handy File manager that includes editors to edit Home Assistant configs directly in the browser, and an SSH/Terminal addon that both allows SSH connections and provides a web-based terminal that gives access to the OS itself and also to a command line interface, for example, to do package downgrades if needed or see detailed logs. That is also where you can get the features that are the focus this year for HA developers - voice enablers. However, that is only a beginning.
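To give an idea of that command-line tool, here is a hedged sketch of how a rollback might look from the SSH/Terminal addon (treat the subcommands and flags as my best understanding of the ha CLI, and the version string as just an example):
ha backups new --name pre-downgrade     # take a backup first, just in case
ha core update --version 2023.10.3     # roll ha-core back to a known-good release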
Like in Debian, you can add additional repositories to expand your list of available addons. Unlike Debian, most of the amazing software that is available for Home Assistant is outside the main, official addon store. For now I have added the most popular addon repository - HACS (Home Assistant Community Store) - and the repository maintained by Alexbelgium. The first includes things like NodeRED (a workflow-based automation programming UI), Tailscale/Wirescale for VPN servers, motionEye for CCTV control, Plex for home streaming. HACS also includes a lot of HA UI enhancement modules, like themes, custom UI control panels like Mushroom or mini-graph-card, and integrations that provide more advanced functions but also require more knowledge to use, like Local Tuya - that is harder to set up, but allows fully local control of (normally) cloud-based devices. And it has AppDaemon - basically a Python-based automation framework where you put in Python scripts that get run in a special environment where they get fed events from Home Assistant and can trigger back events that can control everything HA can and also do anything Python can do. This I will need to explore later. And the repository by Alex includes the thing that is actually the focus of this blog post (I know :D) - the Firefly III addon and Firefly Importer addon that you can then add to your Home Assistant OS with a few clicks. It also has all kinds of addons for NAS management, photo/video servers, book servers and Portainer, which lets you set up and run any Docker container inside the HA OS structure. HA OS will detect this and warn you about unsupported processes running on your HA OS instance (nice security feature!), but you can just dismiss that. This will be very helpful very soon. This whole environment of OS and containers and apps really made me think - what was missing in Debian that made the talented developers behind all of that spend the immense time and effort to set up a completely new OS and app infrastructure and develop a completely parallel developer community for Home Assistant apps, interfaces and configurations? Is there anything that can still be done to bring the HA community and the general open source and Debian community closer together? HA devs are not doing anything wrong: they are using the best open source can provide, they bring it to people who could not install and use it otherwise, and they are contributing fixes and improvements as well. But there must be some way to do this better, together. So I installed MariaDB and created a user and database for Firefly. I installed Firefly III and configured it to use the MariaDB with the web config UI. When I went into the Firefly III web UI I was confronted with the normal wizard to set up a new instance. And no reference to any backup restore. Hmm, ok. Maybe that goes via the Importer? So I made an access token again, configured the Importer to use that, and configured the Nordlinger bank connection settings. Then I tried to import the export that I downloaded from Firefly III before. The importer did not auto-recognise the format. Turns out it is just a list of transactions ... It can only be barely useful if you first manually create all the asset accounts with the same names as before, and even then you'll again have to deal with resolving the problem of transfers showing up twice. And all of your categories (that have not been used yet) are gone, your automation rules and bills are gone, your budgets and piggy banks are gone. Boooo.
It will be easier for me to recreate my account data from bank exports again than to resolve data in that transaction export. Turns out that the Firefly III documentation explicitly recommends making a mysqldump of your own and not relying on anything in the app itself for backup purposes. Kind of sad this was not mentioned in the export page that sure looked a lot like a backup :D After doing all that work all over again I needed to make something new, not to feel like I wasted days of work for no real gain. So I started solving a problem I had for a while already - how do I add cash transactions to the system when I am out of the house with just my phone in hand? So far my workaround has been just sending myself messages in WhatsApp with the amount and description of any cash expenses. Two solutions are possible: app and bot. There are actually multiple Android-based phone apps that work with the Firefly III API to do full financial management from the phone. However, after trying it out, that is not what I will be using most of the time. First of all, this requires your Firefly III instance to be accessible from the Internet. Either via direct API access using some port forwarding and secured with HTTPS and good access tokens, or via a VPN server redirect that is installed on both HA and your phone. Tailscale was really easy to get working. But the power has its drawbacks - adding a new cash transaction requires opening the app, choosing the new transaction view, entering description, amount, choosing "Cash" as source account and optionally choosing destination expense account, choosing category and budget, and then submitting the form to the server. Sadly none of that really works if you have no Internet or bad Internet at the place where you are using cash. And it's just too many steps. Annoying. An easier alternative is setting up a Telegram bot - it is running in a custom Docker container right next to your Firefly (via Portainer) and you talk to it via a custom Telegram chat channel that you create very easily and quickly. And then you can just tell it "Coffee 5" and it will create a transaction from the (default) cash account in 5 amount with description "Coffee". This part also works if you are offline at the moment - the bot will receive the message once you get back online. You can use the Telegram bot menu system to edit the transaction to add categories or expense accounts, but this part only works if you are online. And the Firefly instance does not have to be online at all. Really nifty. So next week I will need to write up all the regular payments as bills in Firefly (again) and then I can start writing a Python script to predict my (financial) future!
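For the record, the backup that the documentation actually recommends is a plain database dump, roughly along these lines (the host, user and database names here depend on how the MariaDB addon was set up, so treat them as placeholders):
mysqldump -h core-mariadb -u firefly -p firefly > firefly-backup-$(date +%F).sql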

17 October 2023

Russ Allbery: Review: A Hat Full of Sky

Review: A Hat Full of Sky, by Terry Pratchett
Series: Discworld #32
Publisher: HarperTrophy
Copyright: 2004
Printing: 2005
ISBN: 0-06-058662-1
Format: Mass market
Pages: 407
A Hat Full of Sky is the 32nd Discworld novel and the second Tiffany Aching young adult novel. You should not start here, but you could start with The Wee Free Men. As with that book, some parts of the story carry more weight if you are already familiar with Granny Weatherwax. Tiffany is a witch, but she needs to be trained. This is normally done by apprenticeship, and in Tiffany's case it seemed wise to give her exposure to more types of witching. Thus, Tiffany, complete with new boots and a going-away present from the still-somewhat-annoying Roland, is off on an apprenticeship to the sensible Miss Level. (The new boots feel wrong and get swapped out for her concealed old boots at the first opportunity.) Unbeknownst to Tiffany, her precocious experiments with leaving her body as a convenient substitute for a mirror have attracted something very bad, something none of the witches are expecting. The Nac Mac Feegle know a hiver as soon as they feel it, but they have a new kelda now, and she's not sure she wants them racing off after their old kelda. Terry Pratchett is very good at a lot of things, but I don't think villains are one of his strengths. He manages an occasional memorable one (the Auditors, for example, at least before the whole chocolate thing), but I find most of them a bit boring. The hiver is one of the boring ones. It serves mostly as a concretized metaphor about the temptations of magical power, but those temptations felt so unlike the tendencies of Tiffany's personality that I didn't think the metaphor worked in the story. The interesting heart of this book to me is the conflict between Tiffany's impatience with nonsense and Miss Level's arguably excessive willingness to help everyone regardless of how demanding they get. There's something deeper in here about female socialization and how that interacts with Pratchett's conception of witches that got me thinking, although I don't think Pratchett landed the point with full force. Miss Level is clearly a good witch to her village and seems comfortable with how she lives her life, so perhaps they're not taking advantage of her, but she thoroughly slots herself into the helper role. If Tiffany attempted the same role, people would be taking advantage of her, because the role doesn't fit her. And yet, there's a lesson here she needs to learn about seeing other people as people, even if it wouldn't be healthy for her to move all the way to Miss Level's mindset. Tiffany is a precocious kid who is used to being underestimated, and who has reacted by becoming independent and somewhat judgmental. She's also had a taste of real magical power, which creates a risk of her getting too far into her own head. Miss Level is a fount of empathy and understanding for the normal people around her, which Tiffany resists and needed to learn. I think Granny Weatherwax is too much like Tiffany to teach her that. She also has no patience for fools, but she's older and wiser and knows Tiffany needs a push in that direction. Miss Level isn't a destination, but more of a counterbalance. That emotional journey, a conclusion that again focuses on the role of witches in questions of life and death, and Tiffany's fascinatingly spiky mutual respect with Granny Weatherwax were the best parts of this book for me. The middle section with the hiver was rather tedious and forgettable, and the Nac Mac Feegle were entertaining but not more than that. 
It felt like the story went in a few different directions and only some of them worked, in part because the villain intended to tie those pieces together was more of a force of nature than a piece of Tiffany's emotional puzzle. If the hiver had resonated with the darker parts of Tiffany's natural personality, the plot would have worked better. Pratchett was gesturing in that direction, but he never convinced me it was consistent with what we'd already seen of her. Like a lot of the Discworld novels, the good moments in A Hat Full of Sky are astonishing, but the plot is somewhat forgettable. It's still solidly entertaining, though, and if you enjoyed The Wee Free Men, I think this is slightly better. Followed by Going Postal in publication order. The next Tiffany Aching novel is Wintersmith. Rating: 8 out of 10

15 October 2023

Aigars Mahinovs: Figuring out finances part 2

A week ago I started to migrate my financial planning from a closed source system to a new system based on an open source, self-hosted solution. The main candidate is Firefly III - a relatively simple financial planner with a rather rich feature set and a solid user base and developer support. Starting it up with a Docker-Compose file was quite easy, following the official documentation (a stripped-down sketch of such a file is shown below). The same Compose file also managed the MySQL database, the importer app and a cron container for regular imports.
The separate importer app allows both imports from CSV files and also from external bank account connection services. For whatever weird reason both of those services support exporting data from my regular bank accounts, but not from my credit cards at exactly the same bank. So for those cards I would need to periodically download CSV transaction exports and feed them into the importer. The combination of the cron container and the importer app allows for both of these functions. To do this you first do the import via the Web UI and configure all the mappings - file and date formats, how account names in the bank output map to account names that you added in Firefly, what the other columns mean, and the mapping of other values, like expense and income accounts. At the end of the process you can download a JSON file representing all the settings of the import you just did. Putting such a JSON config file in the input folder of the cron container and telling it to do an import (weekly, for example) makes that import happen periodically. Putting a credit card CSV file along with a config file for credit card import would also auto-import that. So far so good.
However, when I tried importing exports from MoneyWiz, or even when I tried re-creating account history directly from the bank data, I hit a very annoying problem: transfers. The thing is, detecting transfers between the real (asset) accounts that you are managing is really essential. For one, those are not real expenses and incomes, so failing to mark them as transfers would show weird income/expense numbers. But if you do detect them as transfers and correctly map the destination account, yet fail to match the transactions with one another, you get a double transaction. This becomes really hard when the transactions happen on different dates and have different descriptions. So you get both a +1000 transaction "Credit card reset" on 14.09.2023 in the credit card account and a -1000 transaction "Repayment of own credit card" on 24.09.2023 in the main account of the bank. Matching them to recognise that it is just one transfer and not two is ... non-trivial. The best solution I could come up with so far is to always map the "Opposing account name" for such transfers to a virtual "Transfers" asset account. That has the benefit of actually being able to correctly represent transfers that take several days to move between accounts, and of showing you how much money is still in transit at any point in time. After I figured this out, the account balances finally started matching up with reality.
Setting up spending categories required another change - the author of Firefly III does not like complexity, so it does not support nesting categories. Categories can only be in a single, flat list. It is suggested that if you do need to track multiple categories and also their combination, then category groups might help you. I will survive for now by simplifying the categories that I actually use. It might actually make the reports more usable.
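Going back to the Compose setup mentioned above, here is a stripped-down sketch of what such a file can look like. This is an illustration based on the official examples rather than my actual setup; passwords and tokens are placeholders, and variable names may differ between Firefly III versions:

    services:
      db:
        image: mariadb:lts
        environment:
          - MYSQL_DATABASE=firefly
          - MYSQL_USER=firefly
          - MYSQL_PASSWORD=change-me                       # placeholder
          - MYSQL_RANDOM_ROOT_PASSWORD=yes
        volumes:
          - firefly_db:/var/lib/mysql
      app:
        image: fireflyiii/core:latest
        depends_on: [db]
        ports: ["8080:8080"]
        environment:
          - APP_KEY=0123456789abcdef0123456789abcdef       # must be exactly 32 characters
          - DB_CONNECTION=mysql
          - DB_HOST=db
          - DB_DATABASE=firefly
          - DB_USERNAME=firefly
          - DB_PASSWORD=change-me
        volumes:
          - firefly_upload:/var/www/html/storage/upload
      importer:
        image: fireflyiii/data-importer:latest
        depends_on: [app]
        ports: ["8081:8080"]
        environment:
          - FIREFLY_III_URL=http://app:8080
          - FIREFLY_III_ACCESS_TOKEN=personal-access-token-goes-here
      # The official documentation also adds a small cron container that
      # periodically triggers imports of the JSON config files dropped into
      # the importer's input folder; see the upstream compose example for
      # the exact invocation.
    volumes:
      firefly_db:
      firefly_upload:

The JSON configuration files exported from the importer's web UI are what that cron container then picks up for the periodic imports.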
A new feature for me will be the ability to use automation rules to assign transactions to categories based on their contents, like expense accounts or keywords in descriptions. Setting up regular bills (with matching rules to assign incoming transactions to those bills) is another feature that is very important to me. The feature itself works just fine in Firefly III, but it has two restrictions that the author of the software does not actually want to be changed. For one, it cannot be used to track predictable incomes (like salary), presumably because it is only there for bills (subscriptions in the v3 UI). And for another, there is nothing in the base software that actually uses the data from bills to look forward. The author does not like to try to predict the future. Which for me is basically one of the two reasons to use this kind of software at all. I want to know - given all manually entered regular and 100% predictable expenses and incomes, what will be the state of my accounts at a particular date? It seems that to get at that I will need to write my own oracle script using the rich Firefly III API (a rough sketch of what that could look like is at the end of this post).
The last question I had for this week was - how will I enter a new cash purchase of potatoes at a farmers market into a system that sits on a private server in my home network? Turns out there are actually multiple Android apps for Firefly III that can easily be used for this, using a robust OAuth or shared token authentication. Except the web UI needs to be exposed online for this to work (and ideally protected by HTTPS). Other users have also created Telegram bots that allow using chat messages to create transaction entries. This is a bit harder to use and narrower in scope, but should be easier to set up. Will have to try both approaches.
But before I really get going on this, I need to fix another thing that I have been postponing for a while: I will need to migrate my Home Assistant installation from a Docker container installation to a Home Assistant OS installation so that I can install Addons, including Firefly III, MySQL and Portainer, to have a bit more organised and less hand-knitted home server setup. Let's see how that goes next week :D
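And here is the very rough sketch of the direction such an oracle script could take, using the Firefly III REST API with a personal access token. The endpoint paths and field names are written from memory of the API documentation and would need checking; the instance URL and token are placeholders, and it naively assumes every bill repeats monthly while ignoring pagination and incomes entirely:

    #!/usr/bin/env python3
    # Rough balance-projection sketch against the Firefly III REST API.
    # Placeholders: instance URL and token. Assumes the /api/v1/accounts and
    # /api/v1/bills endpoints, monthly bills only, no pagination, no incomes.
    from datetime import date
    import requests

    FIREFLY_URL = "http://firefly.local:8080"           # placeholder instance URL
    TOKEN = "personal-access-token-goes-here"           # placeholder token
    HEADERS = {"Authorization": f"Bearer {TOKEN}", "Accept": "application/json"}

    def get(path, **params):
        # Fetch one page of an API listing and return its "data" list.
        r = requests.get(f"{FIREFLY_URL}/api/v1/{path}", headers=HEADERS, params=params)
        r.raise_for_status()
        return r.json()["data"]

    def projected_balance(target: date) -> float:
        # Start from the current total balance of all asset accounts.
        total = sum(float(a["attributes"]["current_balance"])
                    for a in get("accounts", type="asset"))
        # Subtract each bill once per month between now and the target date,
        # using the middle of its expected amount range as the estimate.
        today = date.today()
        months = (target.year - today.year) * 12 + (target.month - today.month)
        for bill in get("bills"):
            attrs = bill["attributes"]
            estimate = (float(attrs["amount_min"]) + float(attrs["amount_max"])) / 2
            total -= estimate * max(months, 0)
        return total

    if __name__ == "__main__":
        # Example: projected balance on the next "6th of the month" checkpoint.
        print(projected_balance(date(2023, 11, 6)))

Predictable incomes like salary would have to be added back in the same way once they can be represented, which is exactly the bills restriction described above.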

14 October 2023

Ravi Dwivedi: Kochi - Wayanad Trip in August-September 2023

A trip full of hitchhiking, beautiful places and welcoming locals.

Day 1: Arrival in Kochi
Kochi is a city in the state of Kerala, India. This year's DebConf was to be held in Kochi from 3rd September to 17th of September, which I was planning to attend. My friend Suresh, who was planning to join, told me that 29th August 2023 would be Onam, a major festival of the state of Kerala. So, we planned a Kerala trip before the DebConf. We booked early morning flights to Kochi from Delhi and reached Kochi on 28th August.
We had booked a hostel named Zostel in Ernakulam. During check-in, they asked me to fill a form which required signing in using a Google account. I told them I don't have a Google account and I don't want to create one either. The people at the front desk seemed receptive, so I went ahead with telling them the problems of such a sign-in being mandatory for check-in. Anyways, they only took a photo of my passport and let me check in without a Google account. We stayed in a ten-room dormitory, which allowed travellers of any gender. The dormitory room was air-conditioned, spacious, clean and the beds were also comfortable. There were two bathrooms in the dormitory and they were clean. Plus, there was a separate dormitory room in the hostel exclusively for females. I noticed that Zostel was not added to OpenStreetMap and so, I added it :) . The hostel had a small canteen for tea and snacks, and a common sitting area outside the dormitories, which had beds too. There was a separate silent room, suitable for people who want to work.
Dormitory room in Zostel Ernakulam, Kochi.
Beds in Zostel Ernakulam, Kochi.
We had lunch at a nearby restaurant and it was hard to find anything vegetarian for me. I bought some freshly made banana chips from the street and they were tasty. As far as I remember, I had a big glass of pineapple juice for lunch. Then I went to the Broadway market and bought some cardamom and cinnamon for home. I also went to a nearby supermarket and bought Matta brown rice for home. Then, I looked for a courier shop to send the things home, but all of them were closed due to the Onam festival.
After returning to the Zostel, I overslept till 9 PM and in the meanwhile, Suresh planned with Saidut and Shwetank (who met us during our stay in Zostel) to go to a place in Fort Kochi for dinner. I suspected I would be disappointed by the lack of vegetarian options as they were planning to have fish. I already had a restaurant in mind - Brindhavan restaurant (suggested by Anupa), which was a pure vegetarian restaurant. To reach there, I got off at Palarivattom metro station and started looking for an auto-rickshaw to get to the restaurant. I didn't get any for more than 5 minutes. Since that restaurant was not added to OpenStreetMap, I didn't even know how far it was and which direction to go in. Then, I saw a Zomato delivery person on a motorcycle and asked him where the restaurant was. It was already 10 PM and the restaurant closes at 10:30. So, I asked him whether he could drop me off. He agreed and dropped me off at that restaurant. It was 4-5 km from that metro station. I tipped him and expressed my gratitude for the help. He refused to take the tip, but I insisted and he accepted.
I entered the restaurant and it was close to closing time, so many items were not available. I ordered some Kadhai Paneer (the only item left) with naan. It tasted fine. Since the next day was Thiruvonam, I asked the restaurant about the Sadya thali menu and prices for the next day. I planned to eat a Sadya thali at that restaurant, but my plans changed later.
Onam sadya menu from Brindhavan restaurant.

Day 2: Onam celebrations
The next day, on 29th of August 2023, we planned to leave for Wayanad. Wayanad is a hill station in Kerala and a famous tourist spot. Praveen suggested visiting Munnar as it is far closer to Kochi than Wayanad (80 km vs 250 km). But I had already visited Munnar in my previous trips, so we chose Wayanad. We had a train late at night from Ernakulam Junction (at 23:30 hours) to Kozhikode, which is the nearest railway station to Wayanad. So, we checked out in the morning as we had plans to roam around in Kochi before taking the train.
Zostel was celebrating Onam on that day. To opt in, we had to pay 400 rupees, which included a Sadya Thali and a mundu. Me and Suresh paid the amount and opted in for the celebrations. The Sadya Thali had Rice, Sambhar, Rasam, Avial, Banana Chips, Pineapple Pachadi, Pappadam, many types of pickles and chutneys, Pal Ada Payasam and Coconut jaggery Payasam. And, there was water too :). Those payasams were really great and I had one more round of them. Later, I had a lot of varieties of payasams during the DebConf.
Sadya lined up for serving
Sadya thali served on banana leaf.
So, we hung out in the common room and put our luggage there. We played UNO and had conversations with other travellers in the hostel. I had a fun time there and I still think it is one of the best hostel experiences I have had. We made good friends with Saiduth (Telangana) and Shwetank (Uttarakhand). They were already aware of software like Debian, and we had some detailed conversations about the Free Software movement. I remember explaining the difference between the terms "Open Source" and "Free Software". I also told them about the Streetcomplete app, a beginner-friendly app to edit OpenStreetMap. We had dinner at a place nearby (named Palaraam), but again, the vegetarian options were very limited! After dinner, we came back to the Zostel and me and Suresh left for Ernakulam Junction to catch our train, Maveli Express (16604).

Day 3: Going to Wayanad
Maveli Express was scheduled to reach Kozhikode at 03:25 (morning). I had set alarms from 03:00 to 03:30, with a gap of 10 minutes. Every time I woke up, I turned off the alarm. Then I woke up and saw the train reaching Kozhikode station and woke up Suresh for deboarding. But then I noticed that the train was actually leaving the station, not arriving! That meant we had missed our stop. Now we looked at the next stops and whether we could deboard there. I was very sleepy and wanted to take a retiring room at some station before continuing our journey to Wayanad. The next stop was Quilandi and we checked online that it didn't have a retiring room. So, we skipped this stop. We got off at the next stop, named Vadakara, and found out that no retiring room was available. So, we asked for information about buses to Wayanad and were told that there was a bus to Wayanad around 07:00 hours from the bus station, which was a few kilometres from the railway station.
We took a bus for Kalpetta (in Wayanad) at around 07:00. The destinations of the buses were written in Malayalam, which we could not read. Once again, the locals helped us to get onto the bus to Kalpetta. Vadakara is not a big city and it can be hard to find people who know good Hindi or English, unlike in Kochi. Despite the language issues, I had no problem there in navigation, thanks to the locals. I mostly spent time sleeping during the bus journey. A few hours later, the bus dropped us at Kalpetta. We had a booking at a hostel in Rippon village, which was 16 km from Kalpetta. On the way, we were treated to beautiful views of nature, which is present everywhere in Wayanad. The place was covered with tea gardens and our eyes were treated to beautiful scenery at every corner.
We were treated with such views during the Wayanad trip.
Rippon village was a very quiet place and I liked the calm atmosphere. This place is blessed by nature and has stunning scenery. I found English was more common than Hindi in Wayanad. Locals were very nice and helped me, even if they didn't know my language.
A road in Rippon.
After catching some sleep at the hostel, I went out in the afternoon. I hitchhiked to reach the main road from the hostel. I bought more spices from a nearby shop and realized that I should have waited for my visit to Wayanad to buy cardamom, which I had already bought in Kochi. Then, I looked for a post office to send the spices home. The people at the spice shop told me that the nearby Rippon post office was closed by that time, but the post office at Meppadi, 5 km from there, was open. I went to Meppadi and saw that the post office closes at 15:00, but I reached it five minutes late. My packing was not very good and they asked me to pack it tighter. There was a shop near the post office and the people there gave me cardboard and tape, and helped me pack my stuff for posting. By the time I went to the post office again, it was 15:30, but they still accepted my parcel.

Day 4: Kanthanpara Falls, Zostel Wayanad and Karapuzha Dam
Kanthanpara waterfalls were 2 km from the hostel. I hitchhiked to the place from the hostel on a scooty. The entry ticket cost Rs 40. There were good views inside and not much to see except the waterfalls.
Entry to Kanthanpara Falls.
Kanthanpara Falls.
We had a booking at Zostel Wayanad for this day and so we shifted there. Again, as with their Ernakulam branch, they asked me to fill a form which required signing in using Google, but when I said I don't have a Google account they checked me in without that. There were tea gardens inside the Zostel boundaries and the property was beautiful.
A view of Zostel Wayanad.
A map of Wayanad showing tourist places.
A view from inside the Zostel Wayanad property.
Later in the evening, I went to Karapuzha Dam. I witnessed a beautiful sunset during the journey. Karapuzha dam had many activities, like ziplining, and was nice to roam around. Chembra Peak is near Zostel Wayanad, so I was planning to trek to the heart-shaped lake. It was suggested by Praveen, and looking online, this trek seemed worth doing. There was an issue, however. The charges for the trek were Rs 1770 for up to five people. So, if I went alone, I would have to spend Rs 1770 for the trek. If I went with another person, we would split Rs 1770 into two, and so on. The optimal way to do it is to go in a group of five (you included :D). I asked the front desk at Zostel if they could connect me with people going to Chembra Peak the next day, and they told me about a group of four people planning to go there the next day. I got lucky! All four of them were from Kerala and worked in Qatar.

Day 5: Chembra peak trek
The date was 1st September 2023. I woke up early (05:30 in the morning) for the Chembra peak trek. I had bought hiking shoes especially for trekking, which turned out to be a very good idea. The ticket counter opens at 07:00. The group of four with whom I planned to trek met me around 06:00 at the Zostel. We went to the ticket counter around 06:30. We had breakfast at shops selling Maggi noodles and bread omelette near the ticket counter. It was a hot day and the trek was difficult for an inexperienced person like me. The scenery was green and beautiful throughout.
Terrain during trekking towards the Chembra peak.
Heart-shaped lake at the Chembra peak.
Me at the heart-shaped lake.
Views from the top of the Chembra peak.
View of another peak from the heart-shaped lake.
While returning from the trek, I found a shop selling bamboo rice. I bought some and will make bamboo rice payasam out of it at home (I have some coconut milk from Kerala too ;)). We returned to the Zostel in the afternoon. I had muscle pain after the trek and it has still not completely disappeared. At night, we took a bus from Kalpetta to Kozhikode in order to return to Kochi.

Day 6: Return to Kochi
At midnight on the 2nd of September, we reached Kozhikode bus stand. Then we roamed around for something to eat. I didn't find anything vegetarian to eat. No surprises there! Then we went to Kozhikode railway station and looked for retiring rooms, but no luck there. We waited at the station, took the next train to Kochi at 03:30, and reached Ernakulam Junction at 07:30 (half an hour before the train's scheduled time!). From there, we went to Zostel Fort Kochi, stayed one night there, and checked out the next morning.

Day 7: Roaming around in Fort Kochi On 3rd of September, we roamed around in Fort Kochi. We visited the usual places - St Francis Church, Dutch Palace, Jew Town, Pardesi Synagogue. I also visited some homestays and the owners were very happy to show their place even when I made it clear that I was not looking for a stay. In the evening, we went to Kakkanad to attend DebConf. The story continues in my DebConf23 blog post.

9 October 2023

Aigars Mahinovs: Figuring out finances part 1

I have been managing my finances and getting an overview of where I am financially and where I am going month-to-month for a few years already. That means that I already have a method of doing my finances and a method of thinking about them. So far this has been supported by a commercial tool called MoneyWiz. I was happy to pay them for the ability to have a solid product and be able to easily access my finances from my phone (to enter cash transactions directly in the field) and sync data across multiple devices with their cloud offering. However, MoneyWiz development stalled. So much so that they stopped updating their Android app and cancelled their web version plans to focus on just the iPhone and macOS clients. And even there the development did not seem very successful over the last year. So I am cancelling their subscription, extracting all my data (as CSV) and looking for alternatives. So let's start with the basics - what do I want from finance software?
  • Open source and self-hosted
  • Accounts to represent real accounts in my bank, credit cards, cash account
  • Transactions to represent money moving in/out/between accounts
  • Tree of categories for transactions to categorise spending
  • Category reports on arbitrary time periods
  • Ability to see the expected balance on each account at any historic point in time from given transactions
  • Ability to do a corrective transaction - hmm, I don't know what happened, but right now I have X in cash
  • Adding past transactions before a corrective transaction should not change balance after correction
  • Ability to automatically import transactions from my bank
  • Ability to plan future, recurring transactions
  • Incoming transactions from bank import should be auto-matched with planned recurring transactions
  • Ability to see missed planned recurring transactions
  • Ability to see value deviations in planned recurring transactions
  • Ability to manually match a transaction with a planned recurring transaction for that month or skip a month
  • Ability to see a projected balance on my accounts at any future date given planned transactions
  • Credit card fully refilling its current (negative) balance to zero (from a given account) should be a plannable transaction
  • Accessible from multiple devices, like Linux, Android, iPhone, MacOS. Most likely that means web-based
  • Archiving accounts without losing their past transactions with active accounts
  • API to access data, so that I could wire that up to a Home Assistant dashboard
I don't care about currencies (thanks Euro!), localization, taxes and double-entry bookkeeping. I don't have much use for tags, budgets or savings targets. It could be useful to be able to track stock investment values (but I do have a pretty good tool from the bank that does that anyway). And tracking loans (incoming and outgoing) could be a worthwhile thing as well, but that can also be simulated with separate accounts for each loan. Tracking refundable expenses (like medical expenses that insurances should refund later on) would also be nice. That could also be simulated by having a separate expense category (Medical - refundable) and putting the refund as a negative expense into that category.
One weird feature that I really like is having transfer transactions that have arbitrary incoming and outgoing dates. For some transactions it is a simple transfer delay - money leaves account A on Friday and only appears in the destination account B on Monday morning. However, this can also happen the other way around - a few credit cards I have seen work in a way that on the 14th of each month the balance of the credit card gets reset to zero, and then a week later the negative balance that was on that credit card gets taken from the base account that the card is bound to. So the money leaves the base account a week later than it arrives in the credit card account. Financial systems really are magic sometimes :D Ideally that should be represented by a single transaction with two dates - one date when money leaves one account and another date when it arrives in a different account, so that the balances on both accounts in the days between would be correct (as in matching what the bank says they were on those dates). And it should auto-match such a transaction from bank data imports. And predict its value from the negative balance of the credit card. And a pony! But in the worst case this could be simulated using a separate "In transfer" account (a tiny sketch of that workaround is at the end of this post).
When I am thinking about my finances, the key metric for me is - how much will I spend this month - both in fixed spending (rent, Internet and phone bills, health insurance, subscriptions, ...) and in variable spending (groceries, eating out, clothing, ...). In theory that should be trivial, but in practice a simple calendar month view will fail because a bunch of fixed expenses occur at the month end/start boundary and can happen either at the end of the previous month or at the start of the next month, depending on how the weekends happen to fall and how the systems of various providers decided to do direct debits this month. So the 1st of the month is not a good snapshotting time. Any date between the 6th (last possible day for monthly bills) and the 20th (earliest possible salary day) would be a better cut-off point. Looking at the balance of my main account at that point in time lets me know the difference between income and expenses for the past month. And that is the number I want the financial app to be able to predict as soon as possible. An additional complication is the way credit cards work here. The expenses booked to a credit card after the 14th of September will only appear in the main account on the 20th of October and thus will actually only count towards the expenses of the month of October.
That makes it very confusing to see a larger overspend in the main account on the 6th of October and then use the categories to try to figure out where the money went, because all the spending that happened on the credit card after the 14th of September should not really be looked at in that context, but spending on the main account or cash account should be counted all the way to the 6th of October. :D I am not optimistic that any finance software would be able to deal with this mess correctly.
So far I am tending to consider Firefly III, which has most of what I need, looks great and is well maintained, but has the tiny drawback of being written in PHP, which basically excludes me from the ability to contribute anything beyond ideas and feedback. :D I already did a quick test of Firefly via local Docker containers, so the next step would be to try to set it up as a production instance (using the same always-on mini PC that is running my Home Assistant instance), get the database up and working, get Firefly III itself up and running, get the account importer working for the bank connection, import historical transactions from there, set up my categories and recurring transactions, import past data dumped out of MoneyWiz, and see if this works well and gives me the insights I need. If you know a better solution, please let me know on any social or email channel!
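As a small illustration of the "In transfer" workaround mentioned above, here is a toy sketch (account names and amounts are made up) of how booking the two legs of such a transfer on different dates against a virtual transit account keeps the per-account balances correct on the days in between:

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Tx:
        day: date
        account: str
        amount: float              # positive = money in, negative = money out

    # One credit card repayment becomes two legs via a virtual "In transfer"
    # account: the card is reset to zero on the 14th, but the base account is
    # only debited on the 20th. In between, the money is "in transit".
    ledger = [
        Tx(date(2023, 9, 1),  "Credit card", -1000.0),   # spending during the period
        Tx(date(2023, 9, 14), "Credit card",  1000.0),   # card reset to zero
        Tx(date(2023, 9, 14), "In transfer", -1000.0),
        Tx(date(2023, 9, 20), "In transfer",  1000.0),
        Tx(date(2023, 9, 20), "Main account", -1000.0),  # base account debited
    ]

    def balance(account: str, on: date) -> float:
        # Sum all ledger entries for the account up to and including the date.
        return sum(t.amount for t in ledger if t.account == account and t.day <= on)

    # On the 16th both real accounts match what the bank would report, and the
    # virtual account shows the 1000 that is still in transit:
    print(balance("Credit card", date(2023, 9, 16)))     # 0.0  (already reset)
    print(balance("Main account", date(2023, 9, 16)))    # 0.0  (not yet debited)
    print(balance("In transfer", date(2023, 9, 16)))     # -1000.0 in transit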

18 September 2023

Bits from Debian: DebConf23 closes in Kochi and DebConf24 announced

DebConf23 group photo
On Sunday 17 September 2023, the annual Debian Developers and Contributors Conference came to a close. Over 474 attendees representing 35 countries from around the world came together for a combined 89 events made up of Talks, Discussions, Birds of a Feather (BoF) gatherings, workshops, and activities in support of furthering our distribution, learning from our mentors and peers, building our community, and having a bit of fun.
The conference was preceded by the annual DebCamp hacking session held September 3rd through September 9th where Debian Developers and Contributors convened to focus on their individual Debian-related projects or work in team sprints geared toward in-person collaboration in developing Debian. In particular this year Sprints took place to advance development in Mobian/Debian, Reproducible Builds, and Python in Debian. This year also featured a BootCamp that was held for newcomers, staged by a team of dedicated mentors who shared hands-on experience in Debian and offered a deeper understanding of how to work in and contribute to the community.
The actual Debian Developers Conference started on Sunday 10 September 2023. In addition to the traditional 'Bits from the DPL' talk, the continuous key-signing party, lightning talks and the announcement of next year's DebConf24, there were several update sessions shared by internal projects and teams. Many of the hosted discussion sessions were presented by our technical teams who highlighted the work and focus of the Long Term Support (LTS), Android tools, Debian Derivatives, Debian Installer, Debian Image, and the Debian Science teams. The Python, Perl, and Ruby programming language teams also shared updates on their work and efforts. Two of the larger local Debian communities, Debian Brasil and Debian India, shared how their respective collaborations in Debian moved the project forward and how they attracted new members and opportunities both in Debian, F/OSS, and the sciences with their HowTos of demonstrated community engagement.
The schedule was updated each day with planned and ad-hoc activities introduced by attendees over the course of the conference. Several activities that were unable to be held in past years due to the Global COVID-19 Pandemic were celebrated as they returned to the conference's schedule: a job fair, the open-mic and poetry night, the traditional Cheese and Wine party, the group photos and the Day Trips.
For those who were not able to attend, most of the talks and sessions were videoed for live room streams with the recorded videos to be made available later through the Debian meetings archive website. Almost all of the sessions facilitated remote participation via IRC messaging apps or online collaborative text documents which allowed remote attendees to 'be in the room' to ask questions or share comments with the speaker or assembled audience.
DebConf23 saw over 4.3 TiB of data streamed, 55 hours of scheduled talks, 23 network access points, 11 network switches, 75 kg of equipment imported, 400 meters of gaffer tape used, 1,463 viewed streaming hours, 461 T-shirts, 35 country Geoip viewers, 5 day trips, and an average of 169 meals planned per day. All of these events, activities, conversations, and streams coupled with our love, interest, and participation in Debian and F/OSS certainly made this conference an overall success both here in Kochi, India and online around the world.
The DebConf23 website will remain active for archival purposes and will continue to offer links to the presentations and videos of talks and events. Next year, DebConf24 will be held in Haifa, Israel. As tradition follows, before the next DebConf the local organizers in Israel will start the conference activities with DebCamp, with a particular focus on individual and team work towards improving the distribution.
DebConf is committed to a safe and welcoming environment for all participants. See the web page about the Code of Conduct on the DebConf23 website for more details on this.
Debian thanks the commitment of numerous sponsors to support DebConf23, particularly our Platinum Sponsors: Infomaniak, Proxmox, and Siemens. We also wish to thank our Video and Infrastructure teams, the DebConf23 and DebConf committees, our host nation of India, and each and every person who helped contribute to this event and to Debian overall. Thank you all for your work in helping Debian continue to be "The Universal Operating System". See you next year!
About Debian
The Debian Project was founded in 1993 by Ian Murdock to be a truly free community project. Since then the project has grown to be one of the largest and most influential open source projects. Thousands of volunteers from all over the world work together to create and maintain Debian software. Available in 70 languages, and supporting a huge range of computer types, Debian calls itself the universal operating system.
About DebConf
DebConf is the Debian Project's developer conference. In addition to a full schedule of technical, social and policy talks, DebConf provides an opportunity for developers, contributors and other interested people to meet in person and work together more closely. It has taken place annually since 2000 in locations as varied as Scotland, Argentina, and Bosnia and Herzegovina. More information about DebConf is available from https://debconf.org/.
About Infomaniak
Infomaniak is a key player in the European cloud market and the leading developer of Web technologies in Switzerland. It aims to be an independent European alternative to the web giants and is committed to an ethical and sustainable Web that respects privacy and creates local jobs. Infomaniak develops cloud solutions (IaaS, PaaS, VPS), productivity tools for online collaboration and video and radio streaming services.
About Proxmox
Proxmox develops powerful, yet easy-to-use open-source server software. The product portfolio from Proxmox, including server virtualization, backup, and email security, helps companies of any size, sector, or industry to simplify their IT infrastructures. The Proxmox solutions are based on the great Debian platform, and we are happy that we can give back to the community by sponsoring DebConf23.
About Siemens
Siemens is a technology company focused on industry, infrastructure and transport. From resource-efficient factories, resilient supply chains, smarter buildings and grids, to cleaner and more comfortable transportation, and advanced healthcare, the company creates technology with purpose adding real value for customers. By combining the real and the digital worlds, Siemens empowers its customers to transform their industries and markets, helping them to enhance the everyday of billions of people.
Contact Information
For further information, please visit the DebConf23 web page at https://debconf23.debconf.org/ or send mail to press@debian.org.
