Search Results: "tale"

18 April 2024

Thomas Koch: Rebuild search with trust

Posted on January 20, 2024
Finally there is a thing people can agree on: Apparently, Google Search is not good anymore. And I'm not the only one thinking about decentralization to fix it: "Honey I federated the search engine - finding stuff online post-big tech", a lightning talk at the recent Chaos Communication Congress. The speaker, however, did not mention that there have already been many attempts at building distributed search engines. So why do I think that such an attempt could finally succeed? My definition of success is:
A mildly technical computer user (able to install software) has access to a search engine that provides them with superior search results compared to Google for at least a few predefined areas of interest.
The exact algorithm used by Google Search to rank websites is a secret even to most Googlers. Still, it is clear that it relies heavily on big data: billions of queries, a comprehensive web index and user behaviour data. All of this is not available to us. A distributed search engine, however, can instead rely on user input. Every admin of one node seeds the node ranking with their personal selection of trusted sites. They connect their node with nodes of people they trust. This results in a web of (transitive) trust, much like PGP. For comparison, imagine you are searching for something in a world without computers: you ask the people around you. They probably forward your question to their peers. I already had a look at YaCy. It is active, somewhat usable and has a friendly maintainer. Unfortunately I consider the codebase to show its age. It takes a lot of time for a newcomer to find their way around and it contains a lot of cruft. Nevertheless, YaCy is a good example that decentralized search software can be done even by a small team or just one person. I myself started working on software in Haskell and keep my notes here: Populus:DezInV. Since I'm learning Haskell along the way, there is nothing there to see yet. Additionally I took a yak shaving break to learn Nix. By the way: DuckDuckGo is not the alternative. And while I would encourage you to also try Yandex for a second opinion, I don't consider this a solution.

12 April 2024

Freexian Collaborators: Debian Contributions: SSO Authentication for jitsi.debian.social, /usr-move updates, and more! (by Utkarsh Gupta)

Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services. P.S. We've completed over a year of writing these blogs. If you have any suggestions on how to make them better, what you'd like us to cover, or any other opinions/reviews you might have, please let us know by dropping us an email. We'd be happy to hear your thoughts. :)

SSO Authentication for jitsi.debian.social, by Stefano Rivera

Debian.social's Jitsi instance has been getting some abuse by (non-Debian) people sharing sexually explicit content on the service. After playing whack-a-mole with this for a month, and shutting the instance off for another month, we opened it up again and the abuse immediately re-started. Stefano sat down and wrote an SSO implementation that hooks into Jitsi's existing JWT SSO support. This requires everyone using jitsi.debian.social to have a Salsa account. With only a little bit of effort, we could change this in future, to only require an account to open a room, and allow guests to join the call.

/usr-move, by Helmut Grohne

The biggest task this month was sending mitigation patches for all of the /usr-move issues arising from package renames due to the 2038 transition. As a result, we can now say that every affected package in unstable can either be converted with dh-sequence-movetousr or has an open bug report. The package set relevant to debootstrap (except for the set that has to be uploaded concurrently) has been moved to /usr and is awaiting migration. The move of coreutils happened to affect piuparts, which hard-codes the location of /bin/sync and received multiple updates as a result.

Miscellaneous contributions
  • Stefano Rivera uploaded a stable release update to python3.11 for bookworm, fixing a use-after-free crash.
  • Stefano uploaded a new version of python-html2text, and updated python3-defaults to build with it.
  • In support of Python 3.12, Stefano dropped distutils as a Build-Dependency from a few packages, and uploaded a complex set of patches to python-mitogen.
  • Stefano landed some merge requests to clean up dead code in dh-python, removed the flit plugin, and uploaded it.
  • Stefano uploaded new upstream versions of twisted, hatchling, python-flexmock, python-authlib, python mitogen, python-pipx, and xonsh.
  • Stefano requested removal of a few packages supporting the Opsis HDMI2USB hardware that the DebConf Video team used to use for HDMI capture, as they are not being maintained upstream. They started to FTBFS with recent sdcc changes.
  • DebConf 24 is getting ready to open registration, so Stefano spent some time fixing bugs in the website caused by infrastructure updates.
  • Stefano reviewed all the DebConf 23 travel reimbursements, filing requests for more information from SPI where our records mismatched.
  • Stefano spun up a Wafer website for the Berlin 2024 mini DebConf.
  • Roberto C. Sánchez worked on facilitating the transfer of upstream maintenance responsibility for the dormant Shorewall project to a new team led by the current maintainer of the Shorewall packages in Debian.
  • Colin Watson fixed build failures in celery-haystack-ng, db1-compat, jsonpickle, libsdl-perl, kali, knews, openssh-ssh1, python-json-log-formatter, python-typing-extensions, trn4, vigor, and wcwidth. Some of these were related to the 64-bit time_t transition, since that involved enabling -Werror=implicit-function-declaration.
  • Colin fixed an off-by-one error in neovim, which was already causing a build failure in Ubuntu and would eventually have caused a build failure in Debian with stricter toolchain settings.
  • Colin added an sshd@.service template to openssh to help newer systemd versions make containers and VMs SSH-accessible over AF_VSOCK sockets.
  • Following the xz-utils backdoor, Colin spent some time testing and discussing OpenSSH upstream's proposed inline systemd notification patch, since the current implementation via libsystemd was part of the attack vector used by that backdoor.
  • Utkarsh reviewed and sponsored some Go packages for Lena Voytek and Rajudev.
  • Utkarsh also helped Mitchell Dzurick with the adoption of the pyparted package.
  • Helmut sent 10 patches for cross build failures.
  • Helmut partially fixed architecture cross bootstrap tooling to deal with changes in linux-libc-dev and the recent gcc-for-host changes and also fixed a 64bit-time_t FTBFS in libtextwrap.
  • Thorsten Alteholz uploaded several packages from debian-printing: cjet, lprng, rlpr and epson-inkjet-printer-escpr were affected by the newly enabled compiler switch -Werror=implicit-function-declaration. Besides fixing these serious bugs, Thorsten also worked on other bugs and was able to fix some of them.
  • Carles updated simplemonitor and python-ring-doorbell packages with new upstream versions.
  • Santiago is still working on the Salsa CI MRs to adapt the build jobs so they can rely on sbuild. Current work includes adapting the images used by the build job, implementing basic sbuild support in the related jobs, and adjusting the support for experimental and *-backports releases.
    Additionally, Santiago reviewed some MRs, such as "Make timeout action explicit in the logs" and the subsequent "Implement conditional timeout verbosity", and the batch of MRs included in https://salsa.debian.org/salsa-ci-team/pipeline/-/merge_requests/482.
  • Santiago also reviewed applications for the "Improving Salsa CI in Debian" GSoC 2024 project. We received applications from four very talented candidates. The selection process is currently ongoing. A huge thanks to all of them!
  • As part of the DebConf 24 organization, Santiago has taken part in the Content team discussions.

9 April 2024

Matthew Palmer: How I Tripped Over the Debian Weak Keys Vulnerability

Those of you who haven't been in IT for far, far too long might not know that next month will be the 16th(!) anniversary of the disclosure of what was, at the time, a fairly earth-shattering revelation: that for about 18 months, the Debian OpenSSL package was generating entirely predictable private keys. The recent xz-stential threat (thanks to @nixCraft for making me aware of that one) has got me thinking about my own serendipitous interaction with a major vulnerability. Given that the statute of limitations has (probably) run out, I thought I'd share it as a tale of how "huh, that's weird" can be a powerful threat-hunting tool, but only if you've got the time to keep pulling at the thread.

Prelude to an Adventure

Our story begins back in March 2008. I was working at Engine Yard (EY), a now largely-forgotten Rails-focused hosting company, which pioneered several advances in Rails application deployment. Probably EY's greatest claim to lasting fame is that they helped launch a little code hosting platform you might have heard of, by providing them free infrastructure when they were little more than a glimmer in the Internet's eye. I am, of course, talking about everyone's favourite Microsoft product: GitHub. Since GitHub was in the right place, at the right time, with a compelling product offering, they quickly started to gain traction and grow their userbase. With growth comes challenges, amongst them the one we're focusing on today: SSH login times. Then, as now, GitHub provided SSH access to the git repos they hosted, by SSHing to git@github.com with publickey authentication. They were using the standard way that everyone manages SSH keys: the ~/.ssh/authorized_keys file, and that became a problem as the number of keys started to grow. The way that SSH uses this file is that, when a user connects and asks for publickey authentication, SSH opens the ~/.ssh/authorized_keys file and scans all of the keys listed in it, looking for a key which matches the key that the user presented. This linear search is normally not a huge problem, because nobody in their right mind puts more than a few keys in their ~/.ssh/authorized_keys, right?
[Image: 2008-era GitHub giving monkey puppet side-eye to the idea that nobody stores many keys in an authorized_keys file]
Of course, as a popular, rapidly-growing service, GitHub was gaining users at a fair clip, to the point that the one big file that stored all the SSH keys was starting to visibly impact SSH login times. This problem was also not going to get any better by itself. Something Had To Be Done. EY management was keen on making sure GitHub ran well, and so despite it not really being a hosting problem, they were willing to help fix it. For some reason, the late, great Ezra Zygmuntowitz pointed GitHub in my direction, and let me take the time to really get into the problem with the GitHub team. After examining a variety of different possible solutions, we came to the conclusion that the least-worst option was to patch OpenSSH to look up keys in a MySQL database, indexed on the key fingerprint. We didn't take this decision on a whim; it wasn't a case of "yeah, sure, let's just hack around with OpenSSH, what could possibly go wrong?". We knew it was potentially catastrophic if things went sideways, so you can imagine how much worse the other options available were. Ensuring that this wouldn't compromise security was a lot of the effort that went into the change. In the end, though, we rolled it out in early April, and lo! SSH logins were fast, and we were pretty sure we wouldn't have to worry about this problem for a long time to come. Normally, you'd think patching OpenSSH to make mass SSH logins super fast would be a good story on its own. But no, this is just the opening scene.
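(As an aside: modern OpenSSH has a built-in hook for exactly this kind of lookup, AuthorizedKeysCommand, which did not exist in 2008; that is why a patch was needed at all. A rough sketch of the same idea using that hook, with a made-up database and table layout, might look something like this. This is not the patch GitHub ran, just an illustration of the concept.)

#!/bin/sh
# Sketch only: the 2008 patch taught sshd itself to query MySQL; nowadays the
# same idea fits into AuthorizedKeysCommand. In sshd_config (recent OpenSSH):
#   AuthorizedKeysCommand /usr/local/bin/lookup-ssh-key %u %f
#   AuthorizedKeysCommandUser nobody
# %u is the login name and %f the fingerprint of the key being offered.
# Database, table and column names below are invented for illustration.
user="$1"
fingerprint="$2"
# A real implementation must validate/escape these values before building SQL.
exec mysql --batch --skip-column-names sshkeys \
  -e "SELECT key_line FROM authorized_keys WHERE username='$user' AND fingerprint='$fingerprint';"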

Chekov's Gun Makes its Appearance

Fast forward a little under a month, to the first few days of May 2008. I get a message from one of the GitHub team, saying that somehow users were able to access other users' repos over SSH. Naturally, as we'd recently rolled out the OpenSSH patch, which touched this very thing, the code I'd written was suspect number one, so I was called in to help.
[Image: the lineup scene from the movie The Usual Suspects. They're called The Usual Suspects for a reason, but sometimes, it really is Keyser Söze.]
Eventually, after more than a little debugging, we discovered that, somehow, there were two users with keys that had the same key fingerprint. This absolutely shouldn't happen; it's a bit like winning the lottery twice in a row1, unless the users had somehow shared their keys with each other, of course. Still, it was worth investigating, just in case it was a web application bug, so the GitHub team reached out to the users impacted, to try and figure out what was going on. The users professed no knowledge of each other, neither admitted to publicising their key, and couldn't offer any explanation as to how the other person could possibly have gotten their key. Then things went from weird to "what the...?". Because another pair of users showed up, sharing a key fingerprint, but it was a different shared key fingerprint. The odds now had gone from winning the lottery multiple times in a row to as close to "this literally cannot happen" as makes no difference.
[Image: Milhouse from The Simpsons says that "We're Through The Looking Glass Here, People"]
Once we were really, really confident that the OpenSSH patch wasn't the cause of the problem, my involvement in the problem basically ended. I wasn't a GitHub employee, and EY had plenty of other customers who needed my help, so I wasn't able to stay deeply involved in the ongoing investigation of The Mystery of the Duplicate Keys. However, the GitHub team did keep talking to the users involved, and managed to determine that the only apparent common factor was that all the users claimed to be using Debian or Ubuntu systems, which was where their SSH keys would have been generated. That was as far as the investigation had really gotten, when along came May 13, 2008.

Chekov s Gun Goes Off With the publication of DSA-1571-1, everything suddenly became clear. Through a well-meaning but ultimately disasterous cleanup of OpenSSL s randomness generation code, the Debian maintainer had inadvertently reduced the number of possible keys that could be generated by a given user from bazillions to a little over 32,000. With so many people signing up to GitHub some of them no doubt following best practice and freshly generating a separate key it s unsurprising that some collisions occurred. You can imagine the sense of oooooooh, so that s what s going on! that rippled out once the issue was understood. I was mostly glad that we had conclusive evidence that my OpenSSH patch wasn t at fault, little knowing how much more contact I was to have with Debian weak keys in the future, running a huge store of known-compromised keys and using them to find misbehaving Certificate Authorities, amongst other things.
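For a rough sense of why collisions showed up so quickly: if we approximate the weak keyspace as about $K \approx 32{,}767$ equally likely keys (a simplification; the real pool also depends on key type, size and architecture), the birthday bound gives

\[
P_{\text{collision}}(n) \;\approx\; 1 - \exp\!\left(-\frac{n(n-1)}{2K}\right),
\qquad
P_{\text{collision}}(n) \ge \tfrac{1}{2} \;\text{ once }\; n \gtrsim 1.18\sqrt{K} \approx 213,
\]

so a couple of hundred freshly generated weak keys were already enough to make at least one shared fingerprint more likely than not.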

Lessons Learned

While I've not found a description of exactly when and how Luciano Bello discovered the vulnerability that became CVE-2008-0166, I presume he first came across it some time before it was disclosed, likely before GitHub tripped over it. The stable Debian release that included the vulnerable code had been released a year earlier, so there was plenty of time for Luciano to have discovered key collisions and go "hmm, I wonder what's going on here?", then keep digging until the solution presented itself. The thought "hmm, that's odd", followed by intense investigation, leading to the discovery of a major flaw is also what ultimately brought down the recent XZ backdoor. The critical part of that sequence is the ability to do that intense investigation, though. When I reflect on my brush with the Debian weak keys vulnerability, what sticks out to me is the fact that I didn't do the deep investigation. I wonder, if Luciano hadn't found it, how long it might have been before it was found. The GitHub team would have continued investigating, presumably, and perhaps they (or I) would have eventually dug deep enough to find it. But we were all super busy: myself working support tickets at EY, and GitHub feverishly building features and fighting the fires in their rapidly-growing service. As it was, Luciano was able to take the time to dig in and find out what was happening, but just like the XZ backdoor, I feel like we, as an industry, got a bit lucky that someone with the skills, time, and energy was on hand at the right time to make a huge difference. It's a luxury to be able to take the time to really dig into a problem, and it's a luxury that most of us rarely have. Perhaps an understated takeaway is that somehow we all need to wrestle back some time to follow our hunches and really dig into the things that make us go "hmm".

Support My Hunches

If you'd like to help me be able to do intense investigations of mysterious software phenomena, you can shout me a refreshing beverage on ko-fi.
  1. the odds are actually probably more like winning the lottery about twenty times in a row. The numbers involved are staggeringly huge, so it's easiest to just approximate it as "really, really unlikely".

2 February 2024

Ian Jackson: UPS, the Useless Parcel Service; VAT and fees

I recently had the most astonishingly bad experience with UPS, the courier company. They severely damaged my parcels, and were very bad about UK import VAT, ultimately ending up harassing me on autopilot. The only thing that got their attention was my draft Particulars of Claim for intended legal action. Surprisingly, I got them to admit in writing that the disbursement fee they charge recipients alongside the actual VAT is just something they made up with no legal basis.

What happened

Autumn last year I ordered some furniture from a company in Germany. This was to be shipped by them to me by courier. The supplier chose UPS. UPS misrouted one of the three parcels to Denmark. When everything arrived, it had been sat on by elephants. The supplier had to replace most of it, with considerable inconvenience and delay to me, and of course a loss to the supplier. But this post isn't mostly about that. This post is about VAT. You see, import VAT was due, because of fucking Brexit.

UPS made a complete hash of collecting that VAT. Their computers can't issue coherent documents, their email helpdesk is completely useless, and their automated debt collection systems run along uninfluenced by any external input. The crazy, including legal threats and escalating late payment fees, continued even after I paid the VAT discrepancy (which I did despite them not yet having provided any coherent calculation for it). This kind of behaviour is a very small and mild version of the kind of things British Gas did to Lisa Ferguson, who eventually won substantial damages for harassment, plus £10K of costs.

Having tried asking nicely, and sending stiff letters, I too threatened litigation. I would have actually started a court claim, but it would have included a claim under the Protection from Harassment Act. Those have to be filed under the "Part 8 procedure", which involves sending all of the written evidence you're going to use along with the claim form. Collating all that would be a good deal of work, especially since UPS and ControlAccount didn't engage with me at all, so I had no idea which things they might actually dispute. So I decided that before issuing proceedings, I'd send them a copy of my draft Particulars of Claim, along with an offer to settle if they would pay me a modest sum and stop being evil robots at me. Rather than me typing the whole tale in again, you can read the full gory details in the PDF of my draft Particulars of Claim. (I've redacted the reference numbers.)

Outcome

The draft Particulars finally got their attention. UPS sent me an offer: they agreed to pay me £50, in full and final settlement. That was close enough to my offer that I accepted it. I mostly wanted them to stop, and they do seem to have done so. And I've received the £50.

VAT calculation

They also finally included an actual explanation of the VAT calculation. It's absurd, but it's not UPS's absurd:
The clearance was entered initially with estimated import charges of £400.03, consisting of £387.83 VAT, and £12.20 disbursement fee. This original entry regrettably did not include the freight cost for calculating the VAT, and as such when submitted for final entry the VAT value was adjusted to include this and an amended invoice was issued for an additional £39.84. HMRC calculate the amount against which VAT is raised using the value of goods, insurance and freight, however they also may apply a VAT adjustment figure. The VAT Adjustment is based on many factors (Incidental costs in regards to a shipment), which includes charge for currency conversion if the invoice does not list values in Sterling, but the main is due to the inland freight from airport of destination to the final delivery point, as this charge varies, for example, from EMA to Edinburgh would be £150, from EMA to Derby would be £1, so each year UPS must supply HMRC with all values incurred for entry build up and they give an average which UPS have to use on the entry build up as the VAT Adjustment. The correct calculation for the import charges is therefore as follows:
Goods value divided by exchange rate: 2,489.53 EUR / 1.1683 = 2,130.89 GBP
Duty: Goods value plus freight (%): 2,130.89 GBP + 5% = 2,237.43 GBP. That total times the duty rate: x 0% = 0 GBP
VAT: Goods value plus freight (100%): 2,130.89 GBP + 0 = 2,130.89 GBP. That total plus duty and VAT adjustment: 2,130.89 GBP + 0 GBP + 7.49 GBP = 2,348.08 GBP. That total times 20% VAT = 427.67 GBP
As detailed above we must confirm that the final VAT charges applied to the shipment were correct, and that no refund of this is therefore due.
This looks very like HMRC-originated nonsense. If only they had put it on the original bills! It's completely ridiculous that it took four months and near-litigation to obtain it.

Disbursement fee

One more thing. UPS billed me a £12 "disbursement fee". When you import something, there's often tax to pay. The courier company pays that to the government, and the consignee pays it to the courier. Usually the courier demands it before final delivery, since otherwise they end up having to chase it as a debt. It is common for parcel companies to add a random fee of their own. As I note in my Particulars, there isn't any legal basis for this. In my own offer of settlement I proposed that UPS should:
State under what principle of English law (such as, what enactment or principle of Common Law), you levy the disbursement fee (or refund it).
To my surprise they actually responded to this in their own settlement letter. (They didn't, for example, mention the harassment at all.) They said (emphasis mine):
A disbursement fee is a fee for amounts paid or processed on behalf of a client. It is an established category of charge used by legal firms, amongst other companies, for billing of various ancillary costs which may be incurred in completion of service. Disbursement fees are not covered by a specific law, nor are they legally prohibited. Regarding UPS disbursement fee this is an administrative charge levied for the use of UPS deferment account to prepay import charges for clearance through CDS. This charge would therefore be billed to the party that is responsible for the import charges, normally the consignee or receiver of the shipment in question. The disbursement fee as applied is legitimate, and as you have stated is a commonly used and recognised charge throughout the courier industry, and I can confirm that this was charged correctly in this instance.
On UPS's analysis, they can just make up whatever fee they like. That is clearly not right (and I don't even need to refer to consumer protection law, which would also make it obviously unlawful). And, that everyone does it doesn't make it lawful. There are so many things that are ubiquitous but unlawful, especially nowadays when much of the legal system - especially consumer protection regulators - has been underfunded to beyond the point of collapse. Next time this comes up I might have a go at getting the fee back. (Obviously I'll have to pay it first, to get my parcel.)

ParcelForce and Royal Mail

I think this analysis doesn't apply to ParcelForce and (probably) Royal Mail. I looked into this in 2009, and I found that Parcelforce had been given the ability to write their own private laws: Schemes made under section 89 of the Postal Services Act 2000. This is obviously ridiculous but I think it was the law in 2009. I doubt the intervening governments have fixed it.

Furniture

Oh, yes, the actual furniture. The replacements arrived intact and are great :-).


24 January 2024

Dirk Eddelbuettel: RApiDatetime 0.0.9 on CRAN: Maintenance

A new maintenance release of our RApiDatetime package is now on CRAN. RApiDatetime provides a number of entry points for C-level functions of the R API for Date and Datetime calculations. The functions asPOSIXlt and asPOSIXct convert between long and compact datetime representation, formatPOSIXlt and Rstrptime convert to and from character strings, and POSIXlt2D and D2POSIXlt convert between Date and POSIXlt datetime. Lastly, asDatePOSIXct converts to a date type. All these functions are rather useful, but were not previously exported by R for C-level use by other packages. Which this package aims to change. This release responds to a CRAN request to clean up empty macros and sections in Rd files. Moreover, because the Windows portion of the corresponding R-internal code underwent some changes, our (#ifdef conditional) coverage here is a little behind and created a warning under the newer UCRT setup. So starting with this release we are back to OS_type: unix, meaning there will not be any Windows builds at CRAN. If you would like that to change, and ideally can work on the Windows portion, do not hesitate to get in touch. Details of the release follow based on the NEWS file.

Changes in RApiDatetime version 0.0.9 (2024-01-23)
  • Replace auto-generated stale RApitDatetime-package.Rd with macro-filled stanza to satisfy CRAN request.

Courtesy of my CRANberries, there is also a diffstat report for this release. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

28 December 2023

Russ Allbery: Review: Nettle & Bone

Review: Nettle & Bone, by T. Kingfisher
Publisher: Tor
Copyright: 2022
ISBN: 1-250-24403-X
Format: Kindle
Pages: 242
Nettle & Bone is a standalone fantasy novel with fairy tale vibes. T. Kingfisher is a pen name for Ursula Vernon. As the book opens, Marra is giving herself a blood infection by wiring together dog bones out of a charnel pit. This is the second of three impossible tasks that she was given by the dust-wife. Completing all three will give her the tools to kill a prince. I am a little cautious of which T. Kingfisher books I read since she sometimes writes fantasy and sometimes writes horror and I don't get along with horror. This one seemed a bit horrific in the marketing, so I held off on reading it despite the Hugo nomination. It turns out to be just on the safe side of my horror tolerance, with only a couple of parts that I read a bit quickly. One of those is the opening, which I am happy to report does not set the tone for the rest of the book. Marra starts the story in a wasteland full of disease, madmen, and cannibals (who, in typical Ursula Vernon fashion, turn out to be nicer than the judgmental assholes outside of the blistered land). She doesn't stay there long. By chapter two, the story moves on to flashbacks explaining how Marra ended up there, alternating with further (and less horrific) steps in her quest to kill the prince of the Northern Kingdom. Marra is a princess of a small, relatively poor coastal kingdom with a good harbor and acquisitive neighbors. Her mother, the queen, has protected the kingdom through arranged marriage of her daughters to the prince of the Northern Kingdom, who rules it in all but name given the mental deterioration of his father the king. Marra's eldest sister Damia was first, but she died suddenly and mysteriously in a fall. (If you're thinking about the way women are injured by "accident," you have the right idea.) Kania, the middle sister, is next to marry; she lives, but not without cost. Meanwhile, Marra is sent off to a convent to ensure that there are no complicating potential heirs, and to keep her on hand as a spare. I won't spoil the entire backstory, but you do learn it all. Marra is a typical Kingfisher protagonist: a woman who is way out of her depth who persists with stubbornness, curiosity, and innate decency because what else is there to do? She accumulates the typical group of misfits and oddballs common in Kingfisher's quest fantasies, characters that in the Chosen One male fantasy would be supporting characters at best. The bone-wife is a delight; her chicken is even better. There are fairy godmothers and a goblin market and a tooth extraction that was one of the creepiest things I've read without actually being horror. It is, in short, a Kingfisher fantasy novel, with a touch more horror than average but not enough to push it out of the fantasy genre. I think my favorite part of this book was not the main quest. It was the flashback scenes set in the convent, where Marra has the space (and the mentorship) to develop her sense of self.
"We're a mystery religion," said the abbess, when she'd had a bit more wine than usual, "for people who have too much work to do to bother with mysteries. So we simply get along as best we can. Occasionally someone has a vision, but [the goddess] doesn't seem to want anything much, and so we try to return the favor."
If you have read any other Kingfisher novels, much of this will be familiar: the speculative asides, the dogged determination, the slightly askew nature of the world, the vibes-based world-building that feels more like a fairy tale than a carefully constructed magic system, and the sense that the main characters (and nearly all of the supporting characters) are average people trying to play the hands they were dealt as ethically as they can. You will know that the tentative and woman-initiated romance is coming as soon as the party meets the paladin type who is almost always the romantic interest in one of these books. The emotional tone of the book is a bit predictable for regular readers, but Ursula Vernon's brain is such a delightful place to spend some time that I don't mind.
Marra had not managed to be pale and willowy and consumptive at any point in eighteen years of life and did not think she could achieve it before she died.
Nettle & Bone won the Hugo for Best Novel in 2023. I'm not sure why this specific T. Kingfisher novel won and not any of the half-dozen earlier novels she's written in a similar style, but sure, I have no objections. I'm glad one of them won; they're all worth reading and hopefully that will help more people discover this delightful style of fantasy that doesn't feel like what anyone else is doing. Recommended, although be prepared for a few more horror touches than normal and a rather grim first chapter. Content warnings: domestic abuse. The dog... lives? Is equally as alive at the end of the book as it was at the end of the first chapter? The dog does not die; I'll just leave it at that. (Neither does the chicken.) Rating: 8 out of 10

25 December 2023

Sergio Talens-Oliag: GitLab CI/CD Tips: Automatic Versioning Using semantic-release

This post describes how I'm using semantic-release on gitlab-ci to manage versioning automatically for different kinds of projects following a simple workflow (a develop branch where changes are added or merged to test new versions, a temporary release/#.#.# branch to generate the release candidate versions and a main branch where the final versions are published).

What is semantic-release

It is a Node.js application designed to manage project versioning information on Git repositories using a continuous integration system (in this post we will use gitlab-ci).

How does it work

By default semantic-release uses semver for versioning (release versions use the format MAJOR.MINOR.PATCH) and commit messages are parsed to determine the next version number to publish. If after analyzing the commits the version number has to be changed, the command updates the files we tell it to (i.e. the package.json file for nodejs projects and possibly a CHANGELOG.md file), creates a new commit with the changed files, creates a tag with the new version and pushes the changes to the repository. When running on a CI/CD system we usually generate the artifacts related to a release (a package, a container image, etc.) from the tag, as it includes the right version number and usually has passed all the required tests (it is a good idea to run the tests again in any case, as someone could create a tag manually, or we could run extra jobs when building the final assets; if they fail it is not a big issue anyway, numbers are cheap and infinite, so we can skip releases if needed).
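If you want to see what the tool would do for the current branch without tagging, committing or publishing anything, it can be run in dry-run mode; a minimal sketch (the token variable is the one read by the @semantic-release/gitlab plugin, everything else is just an example):

# Analyse the commits and print the next version and release notes without
# modifying the repository or creating a release.
export GITLAB_TOKEN="<token with api scope>"
npx semantic-release --dry-run --no-ci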

Commit messages and versioning

The commit messages must follow a known format; the default module used to analyze them uses the angular git commit guidelines, but I prefer the conventional commits one, mainly because it's a lot easier to use when you want to update the MAJOR version. The commit message format used must be:
<type>(optional scope): <description>
[optional body]
[optional footer(s)]
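For illustration, here are a few hypothetical commits in this format and the bump each one triggers under the rules described below (starting from version 1.3.2):

git commit -m "fix(parser): handle empty scope strings"    # 1.3.2 -> 1.3.3 (PATCH)
git commit -m "feat(api): add bulk export endpoint"        # 1.3.2 -> 1.4.0 (MINOR)
git commit -m "feat(api)!: drop the legacy v1 endpoints"   # 1.3.2 -> 2.0.0 (MAJOR, '!' marker)
git commit -m "refactor(core): split the client module" \
           -m "BREAKING CHANGE: the client is now split in two packages"   # MAJOR, via footer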
The system supports three types of branches: release, maintenance and pre-release, but for now I'm not using maintenance ones. The branches I use and their types are:
  • main as release branch (final versions are published from there)
  • develop as pre release branch (used to publish development and testing versions with the format #.#.#-SNAPSHOT.#)
  • release/#.#.# as pre release branches (they are created from develop to publish release candidate versions with the format #.#.#-rc.# and once they are merged with main they are deleted)
On the release branch (main) the version number is updated as follows:
  1. The MAJOR number is incremented if a commit with a BREAKING CHANGE: footer or an exclamation (!) after the type/scope is found in the list of commits found since the last version change (it looks for tags on the same branch).
  2. The MINOR number is incremented if the MAJOR number is not going to be changed and there is a commit with type feat in the commits found since the last version change.
  3. The PATCH number is incremented if neither the MAJOR nor the MINOR numbers are going to be changed and there is a commit with type fix in the commits found since the last version change.
On the pre release branches (develop and release/#.#.#) the version and pre release numbers are always calculated from the last published version available on the branch (i.e. if we published version 1.3.2 on main we need to have the commit with that tag on the develop or release/#.#.# branch to get the next version right). The version number is updated as follows:
  1. The MAJOR number is incremented if a commit with a BREAKING CHANGE: footer or an exclamation (!) after the type/scope is found in the list of commits found since the last released version. In our example it was 1.3.2 and the version is updated to 2.0.0-SNAPSHOT.1 or 2.0.0-rc.1 depending on the branch.
  2. The MINOR number is incremented if the MAJOR number is not going to be changed and there is a commit with type feat in the commits found since the last released version. In our example the release was 1.3.2 and the version is updated to 1.4.0-SNAPSHOT.1 or 1.4.0-rc.1 depending on the branch.
  3. The PATCH number is incremented if neither the MAJOR nor the MINOR numbers are going to be changed and there is a commit with type fix in the commits found since the last version change. In our example the release was 1.3.2 and the version is updated to 1.3.3-SNAPSHOT.1 or 1.3.3-rc.1 depending on the branch.
  4. The pre release number is incremented if the MAJOR, MINOR and PATCH numbers are not going to be changed but there is a commit that would otherwise update the version (i.e. a fix on 1.3.3-SNAPSHOT.1 will set the version to 1.3.3-SNAPSHOT.2, a fix or feat on 1.4.0-rc.1 will set the version to 1.4.0-rc.2 and so on).

How do we manage its configuration

Although the system is designed to work with nodejs projects, it can be used with multiple programming languages and project types. For nodejs projects the usual place to put the configuration is the project's package.json, but I prefer to use the .releaserc file instead. As I use a common set of CI templates, instead of using a .releaserc on each project I generate it on the fly on the jobs that need it, replacing values related to the project type and the current branch on a template using the tmpl command (lately I use a branch of my own fork while I wait for some feedback from upstream, as you will see on the Dockerfile).

Container used to run it

As we run the command on a gitlab-ci job we use the image built from the following Dockerfile:
Dockerfile
# Semantic release image
FROM golang:alpine AS tmpl-builder
#RUN go install github.com/krakozaure/tmpl@v0.4.0
RUN go install github.com/sto/tmpl@v0.4.0-sto.2
FROM node:lts-alpine
COPY --from=tmpl-builder /go/bin/tmpl /usr/local/bin/tmpl
RUN apk update &&\
  apk upgrade &&\
  apk add curl git jq openssh-keygen yq zip &&\
  npm install --location=global\
    conventional-changelog-conventionalcommits@6.1.0\
    @qiwi/multi-semantic-release@7.0.0\
    semantic-release@21.0.7\
    @semantic-release/changelog@6.0.3\
    semantic-release-export-data@1.0.1\
    @semantic-release/git@10.0.1\
    @semantic-release/gitlab@9.5.1\
    @semantic-release/release-notes-generator@11.0.4\
    semantic-release-replace-plugin@1.2.7\
    semver@7.5.4\
  &&\
  rm -rf /var/cache/apk/*
CMD ["/bin/sh"]

How and when is it executed

The job that runs semantic-release is executed when new commits are added to the develop, release/#.#.# or main branches (basically when something is merged or pushed) and after all tests have passed (we don't want to create a new version that does not compile or pass at least the unit tests). The job is something like the following:
semantic_release:
  image: $SEMANTIC_RELEASE_IMAGE
  rules:
    - if: '$CI_COMMIT_BRANCH =~ /^(develop|main|release\/\d+.\d+.\d+)$/'
      when: always
  stage: release
  before_script:
    - echo "Loading scripts.sh"
    - . $ASSETS_DIR/scripts.sh
  script:
    - sr_gen_releaserc_json
    - git_push_setup
    - semantic-release
Where the SEMANTIC_RELEASE_IMAGE variable contains the URI of the image built using the Dockerfile above and the sr_gen_releaserc_json and git_push_setup are functions defined on the $ASSETS_DIR/scripts.sh file:
  • The sr_gen_releaserc_json function generates the .releaserc.json file using the tmpl command.
  • The git_push_setup function configures git to allow pushing changes to the repository with the semantic-release command, optionally signing them with a SSH key.

The sr_gen_releaserc_json function

The code for the sr_gen_releaserc_json function is the following:
sr_gen_releaserc_json()
{
  # Use nodejs as default project_type
  project_type="${PROJECT_TYPE:-nodejs}"
  # REGEX to match the rc_branch name
  rc_branch_regex='^release\/[0-9]\+\.[0-9]\+\.[0-9]\+$'
  # PATHS on the local ASSETS_DIR
  assets_dir="${CI_PROJECT_DIR}/${ASSETS_DIR}"
  sr_local_plugin="${assets_dir}/local-plugin.cjs"
  releaserc_tmpl="${assets_dir}/releaserc.json.tmpl"
  pipeline_runtime_values_yaml="/tmp/releaserc_values.yaml"
  pipeline_values_yaml="${assets_dir}/values_${project_type}_project.yaml"
  # Destination PATH
  releaserc_json=".releaserc.json"
  # Create an empty pipeline_values_yaml if missing
  test -f "$pipeline_values_yaml" || : >"$pipeline_values_yaml"
  # Create the pipeline_runtime_values_yaml file
  echo "branch: ${CI_COMMIT_BRANCH}" >"$pipeline_runtime_values_yaml"
  echo "gitlab_url: ${CI_SERVER_URL}" >>"$pipeline_runtime_values_yaml"
  # Add the rc_branch name if we are on an rc_branch
  if [ "$(echo "$CI_COMMIT_BRANCH" | sed -ne "/$rc_branch_regex/{p}")" ]; then
    echo "rc_branch: ${CI_COMMIT_BRANCH}" >>"$pipeline_runtime_values_yaml"
  elif [ "$(echo "$CI_MERGE_REQUEST_SOURCE_BRANCH_NAME" |
      sed -ne "/$rc_branch_regex/{p}")" ]; then
    echo "rc_branch: ${CI_MERGE_REQUEST_SOURCE_BRANCH_NAME}" \
      >>"$pipeline_runtime_values_yaml"
  fi
  echo "sr_local_plugin: ${sr_local_plugin}" >>"$pipeline_runtime_values_yaml"
  # Create the releaserc_json file
  tmpl -f "$pipeline_runtime_values_yaml" -f "$pipeline_values_yaml" \
    "$releaserc_tmpl" | jq . >"$releaserc_json"
  # Remove the pipeline_runtime_values_yaml file
  rm -f "$pipeline_runtime_values_yaml"
  # Print the releaserc_json file
  print_file_collapsed "$releaserc_json"
  # --*-- BEG: NOTE --*--
  # Rename the package.json to ignore it when calling semantic release.
  # The idea is that the local-plugin renames it back on the first step of the
  # semantic-release process.
  # --*-- END: NOTE --*--
  if [ -f "package.json" ]; then
    echo "Renaming 'package.json' to 'package.json_disabled'"
    mv "package.json" "package.json_disabled"
  fi
}
Almost all the variables used on the function are defined by gitlab except the ASSETS_DIR and PROJECT_TYPE; in the complete pipelines the ASSETS_DIR is defined on a common file included by all the pipelines and the project type is defined on the .gitlab-ci.yml file of each project. If you review the code you will see that the file processed by the tmpl command is named releaserc.json.tmpl; its contents are shown here:
{
  "plugins": [
    {{- if .sr_local_plugin }}
    "{{ .sr_local_plugin }}",
    {{- end }}
    [
      "@semantic-release/commit-analyzer",
      {
        "preset": "conventionalcommits",
        "releaseRules": [
          { "breaking": true, "release": "major" },
          { "revert": true, "release": "patch" },
          { "type": "feat", "release": "minor" },
          { "type": "fix", "release": "patch" },
          { "type": "perf", "release": "patch" }
        ]
      }
    ],
    {{- if .replacements }}
    [
      "semantic-release-replace-plugin",
      { "replacements": {{ .replacements | toJson }} }
    ],
    {{- end }}
    "@semantic-release/release-notes-generator",
    {{- if eq .branch "main" }}
    [
      "@semantic-release/changelog",
      { "changelogFile": "CHANGELOG.md", "changelogTitle": "# Changelog" }
    ],
    {{- end }}
    [
      "@semantic-release/git",
      {
        "assets": {{ if .assets }}{{ .assets | toJson }}{{ else }}[]{{ end }},
        "message": "ci(release): v${nextRelease.version}\n\n${nextRelease.notes}"
      }
    ],
    [
      "@semantic-release/gitlab",
      { "gitlabUrl": "{{ .gitlab_url }}", "successComment": false }
    ]
  ],
  "branches": [
    { "name": "develop", "prerelease": "SNAPSHOT" },
    {{- if .rc_branch }}
    { "name": "{{ .rc_branch }}", "prerelease": "rc" },
    {{- end }}
    "main"
  ]
}
The values used to process the template are defined on a file built on the fly (releaserc_values.yaml) that includes the following keys and values:
  • branch: the name of the current branch
  • gitlab_url: the URL of the gitlab server (the value is taken from the CI_SERVER_URL variable)
  • rc_branch: the name of the current rc branch; we only set the value if we are processing one, because semantic-release only allows one branch to match the rc prefix; if we used a wildcard (i.e. release/*) and the users kept more than one release/#.#.# branch open at the same time, the calls to semantic-release would fail for sure.
  • sr_local_plugin: the path to the local plugin we use (shown later)
The template also uses a values_${project_type}_project.yaml file that includes settings specific to the project type; the one for nodejs is as follows:
replacements:
  - files:
      - "package.json"
    from: "\"version\": \".*\""
    to: "\"version\": \"$ nextRelease.version \""
assets:
  - "CHANGELOG.md"
  - "package.json"
The replacements section is used to update the version field on the relevant files of the project (in our case the package.json file) and the assets section includes the files that will be committed to the repository when the release is published (looking at the template you can see that the CHANGELOG.md is only updated for the main branch; we do it this way because if we update the file on other branches it creates a merge nightmare, and we are only interested in it for released versions anyway). The local plugin adds code to rename the package.json_disabled file to package.json if present and prints the last and next versions on the logs with a format that can be easily parsed using sed:
local-plugin.cjs
// Minimal plugin to:
// - rename the package.json_disabled file to package.json if present
// - log the semantic-release last & next versions
function verifyConditions(pluginConfig, context) {
  var fs = require('fs');
  if (fs.existsSync('package.json_disabled')) {
    fs.renameSync('package.json_disabled', 'package.json');
    context.logger.log(`verifyConditions: renamed 'package.json_disabled' to 'package.json'`);
  }
}
function analyzeCommits(pluginConfig, context) {
  if (context.lastRelease && context.lastRelease.version) {
    context.logger.log(`analyzeCommits: LAST_VERSION=${context.lastRelease.version}`);
  }
}
function verifyRelease(pluginConfig, context) {
  if (context.nextRelease && context.nextRelease.version) {
    context.logger.log(`verifyRelease: NEXT_VERSION=${context.nextRelease.version}`);
  }
}
module.exports = {
  verifyConditions,
  analyzeCommits,
  verifyRelease
}
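As mentioned above, the LAST_VERSION / NEXT_VERSION lines logged by this plugin are easy to recover with sed in a later step; a small sketch, assuming the semantic-release output has been captured to a file named sr.log:

# Keep a copy of the semantic-release output and extract the versions logged
# by the local plugin (the variables stay empty if no release was calculated).
semantic-release | tee sr.log
LAST_VERSION="$(sed -n -e 's/^.*analyzeCommits: LAST_VERSION=//p' sr.log)"
NEXT_VERSION="$(sed -n -e 's/^.*verifyRelease: NEXT_VERSION=//p' sr.log)"
echo "last published version: ${LAST_VERSION:-none}"
echo "next version to publish: ${NEXT_VERSION:-none}"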

The git_push_setup function

The code for the git_push_setup function is the following:
git_push_setup()
{
  # Update global credentials to allow git clone & push for all the group repos
  git config --global credential.helper store
  cat >"$HOME/.git-credentials" <<EOF
https://fake-user:${GITLAB_REPOSITORY_TOKEN}@gitlab.com
EOF
  # Define user name, mail and signing key for semantic-release
  user_name="$SR_USER_NAME"
  user_email="$SR_USER_EMAIL"
  ssh_signing_key="$SSH_SIGNING_KEY"
  # Export git user variables
  export GIT_AUTHOR_NAME="$user_name"
  export GIT_AUTHOR_EMAIL="$user_email"
  export GIT_COMMITTER_NAME="$user_name"
  export GIT_COMMITTER_EMAIL="$user_email"
  # Sign commits with ssh if there is a SSH_SIGNING_KEY variable
  if [ "$ssh_signing_key" ]; then
    echo "Configuring GIT to sign commits with SSH"
    ssh_keyfile="/tmp/.ssh-id"
    : >"$ssh_keyfile"
    chmod 0400 "$ssh_keyfile"
    echo "$ssh_signing_key" | tr -d '\r' >"$ssh_keyfile"
    git config gpg.format ssh
    git config user.signingkey "$ssh_keyfile"
    git config commit.gpgsign true
  fi
}
The function assumes that the GITLAB_REPOSITORY_TOKEN variable (set on the CI/CD variables section of the project or group we want) contains a token with read_repository and write_repository permissions on all the projects on which we are going to use this function. The SR_USER_NAME and SR_USER_EMAIL variables can be defined on a common file or on the CI/CD variables section of the project or group we want to work with, and the script assumes that the optional SSH_SIGNING_KEY is exported as a CI/CD default value of type variable (that is why the keyfile is created on the fly) and git is configured to use it if the variable is not empty.
Warning: Keep in mind that the variables GITLAB_REPOSITORY_TOKEN and SSH_SIGNING_KEY contain secrets, so it is probably a good idea to make them protected (if you do that you have to make the develop, main and release/* branches protected too).
Warning: The semantic-release user has to be able to push to all the projects on those protected branches, so it is a good idea to create a dedicated user and add it as a MAINTAINER for the projects we want (the MAINTAINERS need to be able to push to the branches), or, if you are using GitLab with a Premium license, you can use the API to allow the semantic-release user to push to the protected branches without allowing it for any other user.

The semantic-release command

Once we have the .releaserc file and the git configuration ready we run the semantic-release command. If the branch we are working with has one or more commits that will increment the version, the tool does the following (note that the steps described are the ones executed if we use the configuration we have generated):
  1. It detects the commits that will increment the version and calculates the next version number.
  2. Generates the release notes for the version.
  3. Applies the replacements defined on the configuration (in our example updates the version field on the package.json file).
  4. Updates the CHANGELOG.md file adding the release notes if we are going to publish the file (when we are on the main branch).
  5. Creates a commit if all or some of the files listed on the assets key have changed and uses the commit message we have defined, replacing the variables with their current values.
  6. Creates a tag with the new version number and the release notes.
  7. As we are using the gitlab plugin, after tagging it also creates a release on the project with the tag name and the release notes.

Notes about the git workflows and merges between branches

It is very important to remember that semantic-release looks at the commits of a given branch when calculating the next version to publish, and that has two important implications:
  1. On pre release branches we need to have the commit that includes the tag with the released version; if we don't have it, the next version is not calculated correctly.
  2. It is a bad idea to squash commits when merging a branch into another one; if we do that we will lose the information semantic-release needs to calculate the next version, and even if we use the right prefix for the squashed commit (fix, feat, ...) we miss all the messages that would otherwise go to the CHANGELOG.md file.
To make sure that we have the right commits on the pre release branches we should merge the main branch changes into the develop one after each release tag is created; in my pipelines the first job that processes a release tag creates a branch from the tag and an MR to merge it to develop. The important thing about that MR is that it must not be squashed; if we do that the tag commit will probably be lost, so we need to be careful. To merge the changes directly we can run the following code:
# Set the SR_TAG variable to the tag you want to process
SR_TAG="v1.3.2"
# Fetch all the changes
git fetch --all --prune
# Switch to the main branch
git switch main
# Pull all the changes
git pull
# Switch to the development branch
git switch develop
# Pull all the changes
git pull
# Create followup branch from tag
git switch -c "followup/$SR_TAG" "$SR_TAG"
# Change files manually & commit the changed files
git commit -a --untracked-files=no -m "ci(followup): $SR_TAG to develop"
# Switch to the development branch
git switch develop
# Merge the followup branch into the development one using the --no-ff option
git merge --no-ff "followup/$SR_TAG"
# Remove the followup branch
git branch -d "followup/$SR_TAG"
# Push the changes
git push
If we can't push directly to develop we can create an MR pushing the followup branch after committing the changes, but we have to make sure that we don't squash the commits when merging or it will not work as we want.
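A sketch of that alternative using GitLab push options to open the MR directly from the push (remember to disable squashing when merging it):

# Push the followup branch and ask GitLab to create the merge request for us.
git push -u origin "followup/$SR_TAG" \
  -o merge_request.create \
  -o merge_request.target=develop \
  -o merge_request.title="ci(followup): $SR_TAG to develop"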

22 October 2023

Aigars Mahinovs: Figuring out finances part 3

So now that I have something that looks very much like a budgeting setup going, I am going to ... delete it! Why? Well, at the end of the last part of this, the Firefly III instance was running on a tiny Debian server in a Docker container right next to another Docker container that is running the main user of this server - a Home Assistant instance that has been managing my home for several years already. So why change that? See, there is one bit of knowledge that is very crucial to your Home Assistant experience, which is not really emphasised enough in the Home Assistant documentation. In fact, back when I was getting into Home Assistant, both the main documentation and basically all the guides around were just coming off the hype of Docker disrupting everything, and that is a big reason why everyone suggested installing and using Home Assistant as a Docker container on top of any kind of stable OS. In fact I used to run it for years on my TerraMaster NAS, just so that I don't have a separate home server running 24/7 at home and just have everything inside the very compact NAS case.

So here is the thing you NEED to know - Home Assistant Container is the DEMO version of Home Assistant! If you want to have the full Home Assistant experience and use the knowledge of the huge community around the HA space, you have to use Home Assistant OS. Ideally on dedicated hardware. Ideally on an HA Green box, but any tiny PC would also work great. A Raspberry Pi 4+ is common, but quite weak as the network size grows, and especially the SD card used for storage gets old very fast. Get a real small x86 PC with at least 4Gb RAM and an NVME SSD (eMMC is fine too). You want to have an Ethernet port and a few free USB ports. I would also suggest immediately getting the HA SkyConnect adapter that can do Zigbee networking and will do Matter soon (tm). I am making do with a SonOff Zigbee gateway, but it is quite hacky to get working and your whole Zigbee communication breaks down if the WiFi goes down - suboptimal.

So I took a backup of the Home Assistant instance using its built-in tools. I took an export of my fully configured Firefly III instance and proceeded to wipe the drive of the NUC. That was not a smart idea. :D On the Home Assistant side I was really frustrated by the documentation, which was really focused on users that are (likely) using Windows and are using an SD card in something like a Raspberry Pi to get Home Assistant OS running. It recommended downloading Etcher to write the image to the boot medium. That is a really weird piece of software that managed to actually crash consistently when I was trying to run it from Debian Live or Ubuntu Live on my NUC. It took me way too long to give up and try something much simpler - dd.

xzcat haos_generic-x86-64-11.0.img.xz | dd of=/dev/mmcblk0 bs=1M

That just worked, perfectly and really fast. If you want to use a GUI in a live environment, then just using gnome-disk-utility ("Disks" in the Gnome menu) and using "Restore Disk Image ..." on a partition would work just as well. It even supports decompressing the XZ images directly while writing. But that image is small, will it not have a ton of unused disk space behind the fixed install partition? Yes, it will ... until first boot. The HA OS takes over the empty space after its install partition on the first boot-up and just grows its main partition to take up all the remaining space. Smart.
After the first boot is completed, the first boot wizard can be accessed via your web browser, and one of the prominent buttons there is restoring from backup. So you just give it the backup file and wait. Sadly the restore does not actually give any kind of progress, so your only way to figure out when it is done is opening the same web address in another browser tab and refreshing periodically - after restoring from backup it just boots into the same config it had before - all the settings, all the devices, all the history is preserved. Even authentication tokens are preserved, so if you had the Home Assistant Mobile app installed on your phone (both for remote access and to send location info and phone state, like charging, to HA to trigger automations) then it will just suddenly start working again without further actions needed from your side. That is an almost perfect backup/restore experience.

The first thing you get for using the OS version of HA is easy automatic updates that also automatically take a backup before upgrading, so if anything breaks you can roll back with one click. There is also a command-line tool that allows you to upgrade, but also downgrade, ha-core and other modules. I had to use it today as HA version 23.10.4 actually broke support for the Sonoff bridge that I am using to control Zigbee devices, which are like 90% of all smart devices in my home. Really helpful stuff, but not a must have.

What is a must have, and what you can (really) only get with Home Assistant Operating System, are Addons. Some addons are just normal servers you can run alongside HA on the same HA OS server, like MariaDB or Plex or a file server. That is not the most important bit, but even there the software comes pre-configured for use in a home server configuration and has a very simple config UI to pre-configure key settings, like users, passwords and database accesses for MariaDB - you can literally, in a few clicks and a few strings, make several users, each with its own access to its own database. A couple more clicks and the DB is running and will be kept restarted in case of failures.

But the real gems in the Home Assistant Addon Store are modules that extend Home Assistant core functionality in ways that would be really hard or near impossible to configure in Home Assistant Container manually, especially because no documentation has ever existed for such manual config - everyone just tells you to install the addon from the HA Addon store or from HACS. Or you can read the addon metadata in various repos and figure out what containers it actually runs with what settings and configs and what hooks it puts into the HA Core to make them cooperate. And then do it all over again when a new version breaks everything 6 months later when you have already forgotten everything.

Among the addons that show up immediately after installation are addons to install the new Matter server, a MariaDB and MQTT server (that other addons can use for data storage and message exchange), Z-Wave support and ESPHome integration, a very handy File manager that includes editors to edit Home Assistant configs directly in the browser, and an SSH/Terminal addon that both allows SSH connections and provides a web based terminal that gives access to the OS itself and also to a command line interface, for example, to do package downgrades if needed or see detailed logs. That is also where you can get the features that are the focus this year for HA developers - voice enablers. However, that is only a beginning.
Like in Debian, you can add additional repositories to expand your list of available addons. Unlike Debian, most of the amazing software that is available for Home Assistant is outside the main, official addon store. For now I have added the most popular addon repository - HACS (Home Assistant Community Store) - and the repository maintained by Alexbelgium. The first includes things like NodeRED (a workflow-based automation programming UI), Tailscale/Wirescale for VPN servers, motionEye for CCTV control, and Plex for home streaming. HACS also includes a lot of HA UI enhancement modules, like themes, custom UI control panels like Mushroom or mini-graph-card, and integrations that provide more advanced functions but also require more knowledge to use, like Local Tuya - that one is harder to set up, but allows fully local control of (normally) cloud-based devices. And it has AppDaemon - basically a Python-based automation framework where you put in Python scripts that get run in a special environment where they get fed events from Home Assistant and can trigger back events that can control everything HA can, and also do anything Python can do. This I will need to explore later.

And the repository by Alex includes the thing that is actually the focus of this blog post (I know :D) - the Firefly III addon and the Firefly Importer addon that you can then add to your Home Assistant OS with a few clicks. It also has all kinds of addons for NAS management, photo/video servers, book servers and Portainer, which lets us set up and run any Docker container inside the HA OS structure. HA OS will detect this and warn you about unsupported processes running on your HA OS instance (nice security feature!), but you can just dismiss that. This will be very helpful very soon. This whole environment of OS and containers and apps really made me think - what was missing in Debian that made the talented developers behind all of that spend the immense time and effort to set up a completely new OS and app infrastructure and develop a completely parallel developer community for Home Assistant apps, interfaces and configurations? Is there anything that can still be done to bring the HA community and the general open source and Debian communities closer together? HA devs are not doing anything wrong: they are using the best open source can provide, they bring it to people who could not install and use it otherwise, and they are contributing fixes and improvements as well. But there must be some way to do this better, together.

So I installed MariaDB and created a user and a database for Firefly. I installed Firefly III and configured it to use the MariaDB with the web config UI. When I went into the Firefly III web UI I was confronted with the normal wizard to set up a new instance. And no reference to any backup restore. Hmm, ok. Maybe that goes via the Importer? So I made an access token again, configured the Importer to use that, and configured the Nordlinger bank connection settings. Then I tried to import the export that I had downloaded from Firefly III before. The importer did not auto-recognise the format. Turns out it is just a list of transactions ... It can only be barely useful if you first manually create all the asset accounts with the same names as before, and even then you'll again have to deal with resolving the problem of transfers showing up twice. And all of your categories (that have not been used yet) are gone, your automation rules and bills are gone, your budgets and piggy banks are gone. Boooo.
It will be easier for me to recreate my account data from bank exports again than to resolve the data in that transaction export. Turns out that the Firefly III documentation explicitly recommends making a mysqldump of your own and not relying on anything in the app itself for backup purposes. Kind of sad this was not mentioned on the export page that sure looked a lot like a backup :D

After doing all that work all over again I needed to make something new, so as not to feel like I had wasted days of work for no real gain. So I started solving a problem I have had for a while already - how do I add cash transactions to the system when I am out of the house with just my phone in hand? So far my workaround has been just sending myself messages in WhatsApp with the amount and description of any cash expenses. Two solutions are possible: an app and a bot.

There are actually multiple Android-based phone apps that work with the Firefly III API to do full financial management from the phone. However, after trying one out, that is not what I will be using most of the time. First of all, this requires your Firefly III instance to be accessible from the Internet: either via direct API access using some port forwarding and secured with HTTPS and good access tokens, or via a VPN server redirect that is installed on both HA and your phone. Tailscale was really easy to get working. But the power has its drawbacks - adding a new cash transaction requires opening the app, choosing the new transaction view, entering the description and amount, choosing "Cash" as the source account and optionally choosing a destination expense account, choosing a category and budget, and then submitting the form to the server. Sadly none of that really works if you have no Internet or bad Internet at the place where you are using cash. And it's just too many steps. Annoying.

An easier alternative is setting up a Telegram bot - it runs in a custom Docker container right next to your Firefly (via Portainer) and you talk to it via a custom Telegram chat channel that you create very easily and quickly. And then you can just tell it "Coffee 5" and it will create a transaction from the (default) cash account with an amount of 5 and the description "Coffee". This part also works if you are offline at the moment - the bot will receive the message once you get back online. You can use the Telegram bot menu system to edit the transaction to add categories or expense accounts, but this part only works if you are online. And the Firefly instance does not have to be accessible from the Internet at all. Really nifty.

So next week I will need to write up all the regular payments as bills in Firefly (again) and then I can start writing a Python script to predict my (financial) future!
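As a starting point for that script, here is a minimal sketch that pulls the configured bills from the Firefly III API and prints a rough recurring total; the instance URL, the FIREFLY_TOKEN environment variable and the field handling are assumptions to adjust for a real setup:
#!/usr/bin/env python3
# Minimal sketch: list Firefly III bills and estimate the recurring total.
# Assumes a personal access token in FIREFLY_TOKEN and the instance being
# reachable at http://firefly.local - adjust both for a real setup.
import os
import requests

BASE_URL = "http://firefly.local/api/v1"
HEADERS = {
    "Authorization": f"Bearer {os.environ['FIREFLY_TOKEN']}",
    "Accept": "application/json",
}

def get_bills():
    # The /bills endpoint is paginated; follow the "next" links until done
    url = f"{BASE_URL}/bills"
    while url:
        resp = requests.get(url, headers=HEADERS, timeout=10)
        resp.raise_for_status()
        payload = resp.json()
        yield from payload["data"]
        url = payload.get("links", {}).get("next")

total = 0.0
for bill in get_bills():
    attrs = bill["attributes"]
    # Use the midpoint of the expected amount range as a rough estimate
    estimate = (float(attrs["amount_min"]) + float(attrs["amount_max"])) / 2
    total += estimate
    print(f"{attrs['name']}: ~{estimate:.2f} ({attrs['repeat_freq']})")
print(f"Estimated total of all bills: ~{total:.2f}")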

24 September 2023

Sahil Dhiman: Abraham Raji

Abraham with Polito Man, you're no longer with us, but I am touched by the number of people you have positively impacted. Almost every DebConf23 presentation by locals that I saw after you carried how you were instrumental in bringing them there. How you were a dear friend and brother. It's a weird turn of events that you left us during the one thing we deeply cared about and worked towards making possible together for the last 3 years. Who would have known that "Sahil, I'm going back to my apartment tonight" and a casual bye post that would be the last conversation we ever had. Things were terrible after I heard the news. I had a hard time convincing myself to come see you one last time during your funeral. That was the last time I was going to get to see you, and I kept on looking at you. You, there in front of me, all calm, gave me peace. I'll carry that image all my life now. Your smile will always remain with me. Now, who'll meet and receive me at the door at almost every Debian event (just by sheer coincidence?). Who'll help me speak out loud about all the Debian shortcomings (and then discuss solutions, when sober :)). Abraham and me during a Debian discussion at DebUtsav Kochi It was a testament to the amount of time we had already spent together online that when we first met during MDC Palakkad, it didn't feel like we were physically meeting for the first time. The conversations just flowed. Now this song is associated with you due to your speech during the post-MiniDebConf Palakkad dinner. Hearing it reminds me of all the times we spent together chilling and talking community (which you cared deeply about). I guess now we can't stop caring for the community, because your energy was contagious. Now, I can't directly dial your number to hear "Hey Sahil! What's up?" from the other end, or "Tell me, tell me" on any mention of a problem. Nor will I be able to send you references to usage of your Debian packaging guide in the wild. You already know how popular this guide of yours is. How many people that guide has helped with getting started with packaging. Our last Telegram text was me telling you about the guide's usage in Ravi's DebConf23 presentation. Did I ever tell you, I too got my first start with packaging from there. I started looking up to you from there, even before we met or talked. Now, I missed telling you, I was probably your biggest fan whenever you had the mic in hand and started speaking. You always surprised me with the insights and ideas you brought, and kept on impressing me for someone who was just my age but way more mature. Reading recent toots from Raju Dev made me realize how much I loved your writings. You wrote "How the Future will remember Us", "Doing what's right" and many more. The level of depth in your thought was unparalleled. I loved reading those. That's why I kept pestering you to write more, which you slowly stopped. Now I fully understand why, though. You were busy; really busy helping people out or just working on making things better. You were doing Debian, upstream projects, web development, design, graphics, mentoring and free software evangelism while being the go-to person for almost everyone around. Everyone depended on you, because you were too kind to turn anyone down. Abraham and me just chilling around. We met for the first time there Man, I still get your spelling wrong :) Did I ever tell you that? That was the reason I used to use AR instead online.
You'll be missed and will always be part of our conversations, because you have left a profound impact on me, our friends, Debian India and everyone around. See you, the coolest man around! In memory: PS - Just found out you even had a YouTube channel; you're one heck of a talented man.

23 September 2023

Sergio Talens-Oliag: GitLab CI/CD Tips: Using Rule Templates

This post describes how to define and use rule templates with semantic names using extends or !reference tags, how to define manual jobs using the same templates and how to use gitlab-ci inputs as macros to give names to regular expressions used by rules.

Basic rule templates
I keep my templates in a rules.yml file stored on a common repository used from different projects, as I mentioned in my previous post, but they can be defined anywhere; the important thing is that the files that need them include their definition somehow. The first version of my rules.yml file was as follows:
.rules_common:
  # Common rules; we include them from others instead of forcing a workflow
  rules:
    # Disable branch pipelines while there is an open merge request from it
    - if: >-
        $CI_COMMIT_BRANCH &&
        $CI_OPEN_MERGE_REQUESTS &&
        $CI_PIPELINE_SOURCE != "merge_request_event"
      when: never
.rules_default:
  # Default rules, we need to add the when: on_success to make things work
  rules:
    - !reference [.rules_common, rules]
    - when: on_success
The main idea is that .rules_common defines a rule section to disable jobs as we can do on a workflow definition; in our case common rules only have if rules that apply to all jobs and are used to disable them. The example includes one that avoids creating duplicated jobs when we push to a branch that is the source of an open MR as explained here. To use the rules in a job we have two options, use the extends keyword (we do that when we want to use the rule as is) or declare a rules section and add a !reference to the template we want to use as described here (we do that when we want to add additional rules to disable a job before evaluating the template conditions). As an example, with the following definitions both jobs use the same rules:
job_1:
  extends:
    - .rules_default
  [...]
job_2:
  rules:
    - !reference [.rules_default, rules]
  [...]

Manual jobs and rule templates
To make the jobs manual we have two options: create a version of the rule template that includes when: manual and decides whether we want the job to be optional or not (adding allow_failure: true to the rule makes the job optional; if we don't add it the job is blocking), or add the when: manual and the allow_failure values directly to the job (when working at the job level the default value of allow_failure for when: manual is true, so the job is optional by default, and we have to add an explicit allow_failure: false to make it blocking). The following example shows how we define blocking or optional manual jobs using rules with when conditions:
.rules_default_manual_blocking:
  # Default rules for blocking manual jobs
  rules:
    - !reference [.rules_common, rules]
    - when: manual
      # allow_failure: false is implicit
.rules_default_manual_optional:
  # Default rules for optional manual jobs
  rules:
    - !reference [.rules_common, rules]
    - when: manual
      allow_failure: true
manual_blocking_job:
  extends:
    - .rules_default_manual_blocking
  [...]
manual_optional_job:
  extends:
    - .rules_default_manual_optional
  [...]
The problem here is that we have to create new versions of the same rule template to add the conditions, but we can avoid it using the keywords at the job level with the original rules to get the same effect; the following definitions create jobs equivalent to the ones defined earlier without creating additional templates:
manual_blocking_job:
  extends:
    - .rules_default
  when: manual
  allow_failure: false
  [...]
manual_optional_job:
  extends:
    - .rules_default
  when: manual
  # allow_failure: true is implicit
  [...]
As you can imagine, that is my preferred way of doing it, as it keeps the rules.yml file smaller and the job definition itself makes it clear that the job is manual.

Rules with allow_failure, changes, exists, needs or variables
Unluckily for us, for now there is no way to avoid creating additional templates, as we did in the when: manual case, when a rule is similar to an existing one but adds changes, exists, needs or variables to it. So, for now, if a rule needs to add any of those fields we have to copy the original rule and add the keyword section (a small sketch is shown after these notes). Some notes, though:
  • we only need to add allow_failure if we want to change its value for a given condition, in other cases we can set the value at the job level.
  • if we are adding changes to the rule it is important to make sure that they are going to be evaluated as explained here.
  • when we add a needs value to a rule for a specific condition and that condition matches, it replaces the job's needs section; when using templates I would rather use two different job names with different conditions instead of adding a needs to a single job.
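As a sketch of that copy-and-extend approach, a variant of .rules_default that only runs the job when files under a given directory change could look like the following (the template name and the docs/ path are just illustrations):
.rules_default_docs_changes:
  # Like .rules_default, but only when files under docs/ change
  rules:
    - !reference [.rules_common, rules]
    - changes:
        - "docs/**/*"
      when: on_success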

Defining rule templates with semantic names
I started to use rule templates to avoid repetition when defining jobs that needed the same rules, and soon I noticed that by giving them names with a semantic meaning they were easier to use and understand (we provide a name that tells us when the job is going to be executed, while the details of the variable names or values used on the rules remain an implementation detail of the templates). We are not going to define real jobs in this post, but as an example we are going to define a set of rules that can be useful if we plan to follow a scaled trunk-based development workflow, that is, we are going to put the releasable code on the main branch and use short-lived branches to test and complete changes before pushing things to main. Using this approach we can define an initial set of rule templates with semantic names:
.rules_mr_to_main:
  rules:
    - !reference [.rules_common, rules]
    - if: $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == 'main'
.rules_mr_or_push_to_main:
  rules:
    - !reference [.rules_common, rules]
    - if: $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == 'main'
    - if: >-
        $CI_COMMIT_BRANCH == 'main'
        &&
        $CI_PIPELINE_SOURCE != 'merge_request_event'
.rules_push_to_main:
  rules:
    - !reference [.rules_common, rules]
    - if: >-
        $CI_COMMIT_BRANCH == 'main'
        &&
        $CI_PIPELINE_SOURCE != 'merge_request_event'
.rules_push_to_branch:
  rules:
    - !reference [.rules_common, rules]
    - if: >-
        $CI_COMMIT_BRANCH != 'main'
        &&
        $CI_PIPELINE_SOURCE != 'merge_request_event'
.rules_push_to_branch_or_mr_to_main:
  rules:
    - !reference [.rules_push_to_branch, rules]
    - if: >-
         $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME != 'main'
         &&
         $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == 'main'
.rules_release_tag:
  rules:
    - !reference [.rules_common, rules]
    - if: $CI_COMMIT_TAG =~ /^([0-9a-zA-Z_.-]+-)?v\d+.\d+.\d+$/
.rules_non_release_tag:
  rules:
    - !reference [.rules_common, rules]
    - if: $CI_COMMIT_TAG !~ /^([0-9a-zA-Z_.-]+-)?v\d+.\d+.\d+$/
With those names it is clear when a job is going to be executed and when using the templates on real jobs we can add additional restrictions and make the execution manual if needed as described earlier.
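For instance, an optional manual deployment job that is only offered on merge requests targeting main could reuse one of those templates like this (the job name is just an example):
deploy_review_app:
  extends:
    - .rules_mr_to_main
  when: manual
  # allow_failure: true is implicit for manual jobs at the job level
  [...]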

Using inputs as macros
On the previous rules we have used a regular expression to identify the release tag format and assumed that the general branches are the ones with a name different from main; if we want to force a format for those branch names we can replace the != 'main' condition by a regex comparison (=~ if we look for matches, !~ if we want to define the valid branch names by excluding the invalid ones). When testing the new gitlab-ci inputs my colleague Jorge noticed that if you keep their default value they basically work as macros. The variables declared as inputs can't hold YAML values; the truth is that their value is always a string that is replaced by the value assigned to them when including the file (if given) or by their default value, if defined. If you don't assign a value to an input variable when including the file that declares it, its occurrences are replaced by its default value, making them work basically as macros; this is useful for us when working with strings that can't be managed as variables, like the regular expressions used inside if conditions. With those two ideas we can add the following prefix to the rules.yml file, defining inputs for both regular expressions, and replace the rules that can use them by the ones shown here:
spec:
  inputs:
    # Regular expression for branches; the prefix matches the type of changes
    # we plan to work on inside the branch (we use conventional commit types as
    # the branch prefix)
    branch_regex:
      default: '/^(build|ci|chore|docs|feat|fix|perf|refactor|style|test)\/.+$/'
    # Regular expression for tags
    release_tag_regex:
      default: '/^([0-9a-zA-Z_.-]+-)?v\d+.\d+.\d+$/'
---
[...]
.rules_push_to_changes_branch:
  rules:
    - !reference [.rules_common, rules]
    - if: >-
        $CI_COMMIT_BRANCH =~ $[[ inputs.branch_regex ]]
        &&
        $CI_PIPELINE_SOURCE != 'merge_request_event'
.rules_push_to_branch_or_mr_to_main:
  rules:
    - !reference [.rules_push_to_branch, rules]
    - if: >-
         $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME =~ $[[ inputs.branch_regex ]]
         &&
         $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == 'main'
.rules_release_tag:
  rules:
    - !reference [.rules_common, rules]
    - if: $CI_COMMIT_TAG =~ $[[ inputs.release_tag_regex ]]
.rules_non_release_tag:
  rules:
    - !reference [.rules_common, rules]
    - if: $CI_COMMIT_TAG !~ $[[ inputs.release_tag_regex ]]
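To override one of those defaults from a project that includes the file we can pass a value using include:inputs (available on recent GitLab releases); the project path and the regular expression here are just examples:
include:
  - project: 'common/ci-templates'
    ref: 'main'
    file: '/rules.yml'
    inputs:
      branch_regex: '/^(feat|fix)\/.+$/'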

Creating rules reusing existing ones
I'm going to finish this post with a comment about how I avoid defining extra rule templates in some common cases. The idea is simple: we can use !reference tags to fine-tune rules when we need to add conditions to disable them, simply adding conditions with when: never before referencing the template. As an example, in some projects I'm using different job definitions depending on the DEPLOY_ENVIRONMENT value to make the job manual or automatic; as we just said, we can define different jobs referencing the same rule, adding a condition to check if the environment is the one we are interested in:
deploy_job_auto:
  rules:
    # Only deploy automatically if the environment is 'dev' by skipping this job
    # for other values of the DEPLOY_ENVIRONMENT variable
    - if: $DEPLOY_ENVIRONMENT != "dev"
      when: never
    - !reference [.rules_release_tag, rules]
  [...]
deploy_job_manually:
  rules:
    # Disable this job if the environment is 'dev'
    - if: $DEPLOY_ENVIRONMENT == "dev"
      when: never
    - !reference [.rules_release_tag, rules]
  when: manual
  # Change this to 'false' to make the deployment job blocking
  allow_failure: true
  [...]
If you think about it, the idea of adding negative conditions is what we do with the .rules_common template; we add conditions to disable the job before evaluating the real rules. The difference in that case is that we reference them at the beginning because we want those negative conditions on all jobs, and that is also why we have a .rules_default template with a when: on_success for the jobs that only need to respect the default workflow (we need that last condition to make sure they are executed if the negative rules don't match).

16 September 2023

Sergio Talens-Oliag: GitLab CI/CD Tips: Using a Common CI Repository with Assets

This post describes how to handle files that are used as assets by jobs and pipelines defined on a common gitlab-ci repository when we include those definitions from a different project.

Problem description
When a .gitlab-ci.yml file includes files from a different repository, its contents are expanded and the resulting code is the same as the one generated when the included files are local to the repository. In fact, even when the remote files include other files everything works right, as they are also expanded (see the description of how included files are merged for a complete explanation), allowing us to organise the common repository as we want. As an example, suppose that we have the following script in the assets/ folder of the common repository:
dumb.sh
#!/bin/sh
echo "The script arguments are: '$@'"
If we run the following job on the common repository:
job:
  script:
    - $CI_PROJECT_DIR/assets/dumb.sh ARG1 ARG2
the output will be:
The script arguments are: 'ARG1 ARG2'
But if we run the same job from a different project that includes the same job definition the output will be different:
/scripts-23-19051/step_script: eval: line 138: d./assets/dumb.sh: not found
The problem here is that we include and expand the YAML files, but if a script wants to use other files from the common repository as an asset (configuration file, shell script, template, etc.), the execution fails if the files are not available on the project that includes the remote job definition.

Solutions
We can solve the issue using multiple approaches; I'll describe two of them:
  • Create files using scripts
  • Download files from the common repository

Create files using scripts
One way to dodge the issue is to generate the non-YAML files from scripts included in the pipelines using HERE documents. The problem with this approach is that we have to put the content of the files inside a script on a YAML file, and if it uses characters that can be replaced by the shell (remember, we are using HERE documents) we have to escape them (error prone) or encode the whole file into base64 or something similar, making maintenance harder. As an example, imagine that we want to use the dumb.sh script presented in the previous section and we want to call it from the same PATH of the main project (in the examples we are using the same folder; in practice we can create a hidden folder inside the project directory or use a PATH like /tmp/assets-$CI_JOB_ID to leave things outside the project folder and make sure that there will be no collisions if two jobs are executed on the same place, i.e. when using an ssh runner). To create the file we will use hidden jobs to write our script template and reference tags to add it to the scripts when we want to use them. Here we have a snippet that creates the file with cat:
.file_scripts:
  create_dumb_sh:
    - |
      # Create dumb.sh script
      mkdir -p "${CI_PROJECT_DIR}/assets"
      cat >"${CI_PROJECT_DIR}/assets/dumb.sh" <<EOF
      #!/bin/sh
      echo "The script arguments are: '\$@'"
      EOF
      chmod +x "${CI_PROJECT_DIR}/assets/dumb.sh"
Note that to make things work we've added 6 spaces before the script code and escaped the dollar sign. To do the same using base64 we replace the previous snippet by this one:
.file_scripts:
  create_dumb_sh:
    - |
      # Create dumb.sh script
      mkdir -p "${CI_PROJECT_DIR}/assets"
      base64 -d >"${CI_PROJECT_DIR}/assets/dumb.sh" <<EOF
      IyEvYmluL3NoCmVjaG8gIlRoZSBzY3JpcHQgYXJndW1lbnRzIGFyZTogJyRAJyIK
      EOF
      chmod +x "${CI_PROJECT_DIR}/assets/dumb.sh"
Again, we have to indent the base64 version of the file using 6 spaces (all lines of the base64 output have to be indented) and to make changes we have to decode and re-encode the file manually, making it harder to maintain. With either version we just need to add a !reference before using the script; if we add the call on the first lines of the before_script we can use the created file in the before_script, script or after_script sections of the job without problems:
job:
  before_script:
    - !reference [.file_scripts, create_dumb_sh]
  script:
    - ${CI_PROJECT_DIR}/assets/dumb.sh ARG1 ARG2
The output of a pipeline that uses this job will be the same as the one shown in the original example:
The script arguments are: 'ARG1 ARG2'

Download the files from the common repository
As we've seen, the previous solution works but is not ideal, as it makes the files harder to read, maintain and use. An alternative approach is to keep the assets in a directory of the common repository (in our examples we will name it assets) and prepare a YAML file that declares some variables (i.e. the URL of the templates project and the PATH where we want to download the files) and defines a script fragment to download the complete folder. Once we have the YAML file we just need to include it and add a reference to the script fragment at the beginning of the before_script of the jobs that use files from the assets directory, and they will be available when needed. The following file is an example of the YAML file we just mentioned:
bootstrap.yml
variables:
  CI_TMPL_API_V4_URL: "${CI_API_V4_URL}/projects/common%2Fci-templates"
  CI_TMPL_ARCHIVE_URL: "${CI_TMPL_API_V4_URL}/repository/archive"
  CI_TMPL_ASSETS_DIR: "/tmp/assets-${CI_JOB_ID}"
.scripts_common:
  bootstrap_ci_templates:
    - |
      # Downloading assets
      echo "Downloading assets"
      mkdir -p "$CI_TMPL_ASSETS_DIR"
      wget -q -O - --header="PRIVATE-TOKEN: $CI_TMPL_READ_TOKEN" \
        "${CI_TMPL_ARCHIVE_URL}?path=assets&sha=${CI_TMPL_REF:-main}" |
        tar --strip-components 2 -C "${CI_TMPL_ASSETS_DIR}" -xzf -
The file defines the following variables:
  • CI_TMPL_API_V4_URL: URL of the common project; in our case we are using the project ci-templates inside the common group (note that the slash between the group and the project is escaped, that is needed to reference the project by name; if we don't like that approach we can replace the url-encoded path by the project id, i.e. we could use a value like ${CI_API_V4_URL}/projects/31)
  • CI_TMPL_ARCHIVE_URL: Base URL to use the gitlab API to download files from a repository, we will add the arguments path and sha to select which sub path to download and from which commit, branch or tag (we will explain later why we use the CI_TMPL_REF, for now just keep in mind that if it is not defined we will download the version of the files available on the main branch when the job is executed).
  • CI_TMPL_ASSETS_DIR: Destination of the downloaded files.
And uses variables defined in other places:
  • CI_TMPL_READ_TOKEN: token that includes the read_api scope for the common project; we need it because the tokens created by the CI/CD pipelines of other projects can't be used to access the API of the common one. We define the variable on the gitlab CI/CD variables section to be able to change it if needed (i.e. if it expires)
  • CI_TMPL_REF: branch or tag of the common repo from which to get the files (we need that to make sure we are using the right version of the files, i.e. when testing we will use a branch and on production pipelines we can use fixed tags to make sure that the assets don't change between executions unless we change the reference). We will set the value on the .gitlab-ci.yml file of the remote projects and will use the same reference when including the files to make sure that everything is coherent.
This is an example YAML file that defines a pipeline with a job that uses the script from the common repository:
pipeline.yml
include:
  - /bootstrap.yml
stages:
  - test
dumb_job:
  stage: test
  before_script:
    - !reference [.scripts_common, bootstrap_ci_templates]
  script:
    - ${CI_TMPL_ASSETS_DIR}/dumb.sh ARG1 ARG2
To use it from an external project we will use the following gitlab ci configuration:
gitlab-ci.yml
include:
  - project: 'common/ci-templates'
    ref: &ciTmplRef 'main'
    file: '/pipeline.yml'
variables:
  CI_TMPL_REF: *ciTmplRef
Where we use a YAML anchor to ensure that we use the same reference when including and when assigning the value to the CI_TMPL_REF variable (as far as I know we have to pass the ref value explicitly to know which reference was used when including the file; the anchor allows us to make sure that the value is always the same in both places). The reference we use is quite important for the reproducibility of the jobs; if we don't use fixed tags or commit hashes as references, each time a job that downloads the files is executed we can get different versions of them. For that reason it is not a bad idea to create tags on our common repo and use them as references on the projects or branches that we want to behave as if their CI/CD configuration was local (if we point to a fixed version of the common repo the way everything works is almost the same as having the pipelines directly in our repo). But while developing pipelines, using branches as references is a really useful option; it allows us to re-run the jobs that we want to test and they will download the latest versions of the asset files on the branch, speeding up the testing process. However, keep in mind that the trick only works with the asset files; if we change a job or a pipeline on the YAML files, restarting the job is not enough to test the new version, as the restart uses the same job created with the current pipeline. To try the updated jobs we have to create a new pipeline using a new action against the repository or executing the pipeline manually.

Conclusion
For now I'm using the second solution and, as it is working well, my guess is that I'll keep using that approach unless GitLab itself provides a better or simpler way of doing the same thing.

19 July 2023

Shirish Agarwal: RISC-V, Chips Act, Burning of Books, Manipur

RISC-V Motherboard, SBC While I didn't want to, a part of me is hyped about this motherboard. It would probably be launched somewhere in November. There are obvious issues with it, the first being that, unlike regular motherboards, you wouldn't be able to upgrade it as you normally would. You can't upgrade the memory, can't upgrade the CPU (although new versions of instructions could be uploaded, similar to BIOS updates), but as the hardware is integrated (the quad-core SiFive Performance P550 core complex) it would really depend. If the final pricing is around INR 4-5k then it may sell handsomely, provided there are people to push it and provide support around it. A 500 GB or 1 TB SSD coupled with it and a cheap display unit and you could use it anywhere, although it's more for tinkering, as the name suggests. Another board that could perhaps be of more immediate use would be the beagleboard. They launched one a couple of days back and called it Beagle V-Ahead. Again, costs are going to be a concern. Just a year before the pandemic the Beagleboard Black (BB) used to cost in the sub-4k range; today it costs 8k+ for the end user, more than twice the price. How much Brexit is to blame for this and how much Indian customs, we will never know. The RS Group that is behind that shop is headquartered in the UK. As said before, we do not know the price of either board, as it will probably take a few months for the V-Ahead to worm its way into the Indian market, maybe another 6 months or so. Even so, with the limited info on both boards, I am tilting more towards the other HiFive one. We should come to know about the boards in, say, 3-5 months' time.

CHIPS Act I had shared about the CHIPS Act a few times here as well as on SM. Two articles tell how the CHIPS Act 2023 is more of a political tool and an industrial defence policy than just business, as most people tend to think.

Cancelation of Books, Books Burning etc. Almost 2,400 years ago Plato released his work Plato's Republic, and one of the seminal parts within it, perhaps the most famous, is the Allegory of the Cave. That is used again and again in myriad ways, mostly in science fiction though, and mostly to do with utopian and dystopian movies, web series etc. I did share how books are being canceled in the States, also a bit here. But the most damning thing has happened throughout history: huge quantities of books burned, almost all for politics. But part of it has been neglect as well, as this Time article shares. What we have lost and continue to lose is just priceless. Every book has a grain of truth in it, some more, some less, but equally enjoyable. Most harmful is the neglect towards books, and that is more true today than at any other time in history. Kids today have a wide variety of tools to keep themselves happy or occupied, from anime, VR, gaming - the list goes on and on. In that scenario, how can the humble book compete? People think of the Kindle, but most e-readers like the Kindle are nothing but obsolescence by design. I have tried out a Kindle a few times but find it a bit on the flimsy side. Books are much better IMHO, or call me old-school. While there are many advantages, one of the things that I like about books is that you can easily put yourself in the place of either the protagonist or the antagonist, or somewhere in the middle, and think of the possible scenarios wherever you are in a particular book. I could go on but it would be a blog post or two in itself. Till later. Happy Reading.

Update:Manipur Extremely horrifying visuals, articles and statements continue to emanate from Manipur. Today, 19th July 2023, just couple of hours back, a video surfaced showing two Kuki women were shown as stripped, naked and Meitei men touching their private parts. Later on, we came to know that this was in response of a disinformation news spread by the Meitis of few women being raped although no documentary evidence of the same surfaced, no names nothing. While I don t want to share the video I will however share the statement shared by the Kuki-Zo tribal community on that. The print gives a bit more context to what has been going on.
Update, Few hours later : The Print also shared more of a context about six days ago. The reason we saw the video now was that for the last 2.5 months Manipur was in Internet shutdown so those videos got uploaded now. There was huge backlash from the Twitter community and GOI ordered the Manipur Police to issue this Press Release yesterday night or just few hours before with yesterday s time-stamp.
IndianExpress shared an article that does state that while an FIR had been registered immediately no arrests so far and this is when you can see the faces of all the accused. Not one of them tried to hide their face behind a mask or something. So, if the police wanted, they could have easily identified who they are. They know which community the accused belong to, they even know from where they came. If they wanted to, they could have easily used mobile data and triangulation to find the accused and their helpers. So, it does seem to be attempt to whitewash and protect a certain community while letting it prey on the other. Another news that did come in, is because of the furious reaction on Twitter, Youtube has constantly been taking down the video as some people are getting a sort of high more so from the majoritarian community and making lewd remarks. Twitter has been somewhat quick when people are making lewd remarks against the two girl/women. Quite a bit of the above seems like a cover-up. Lastly, apparently GOI has agreed to having a conversation about it in Lok Sabha but without any voting or passing any resolutions as of right now. Would update as an when things change. Update: Smriti Irani, the Child and Development Minister gave the weakest statement possible
As can be noticed, she said sexual assault rather than rape. The women were under police custody for safety when they were whisked away by the mob. No mention of that. She spoke to the Chief Minister who has been publicly known as one of the provocateurs or instigators for the whole thing. The CM had publicly called the Kukus and Nagas as foreigners although both of them claim to be residing for thousands of years and they apparently have documentary evidence of the same  . Also not clear who is doing the condemning here. No word of support for the women, no offer of intervention, why is she the Minister of Child and Women Development (CDW) if she can t use harsh words or give support to the women who have gone and going through horrific things  Update : CM Biren Singh s Statement after the video surfaced
This tweet is contradictory to the statements made by Mr. Singh couple of months ago. At that point in time, Mr. Singh had said that NIA, State Intelligence Departments etc. were giving him minute to minute report on the ground station. The Police itself has suo-moto (on its own) powers to investigate and apprehend criminals for any crime. In fact, the Police can call for questioning of anybody in any relation to any crime and question them for upto 48 hours before charging them. In fact, many cases have been lodged where innocent persons have been framed or they have served much more in the jail than the crime they are alleged to have been committed. For e.g. just a few days before there was a media report of a boy who has been in jail for 3 years. His alleged crime, stealing mere INR 200/- to feed himself. Court doesn t have time to listen to him yet. And there are millions like him. The quint eloquently shares the tale where it tells how both the State and the Centre have been explicitly complicit in the incidents ravaging Manipur. In fact, what has been shared in the article has been very true as far as greed for land is concerned. Just couple of weeks back there have been a ton of floods emanating from Uttarakhand and others. Just before the flooding began, what was the CM doing can be seen here. Apart from the newspapers I have shared and the online resources, most of the mainstream media has been silent on the above. In fact, they have been silent on the Manipur issue until the said video didn t come into limelight. Just now, in Lok Sabha everybody is present except the Prime Minister and the Home Minister. The PM did say that the law will take its own course, but that s about it. Again no support for the women concerned.  Update: CJI (Chief Justice of India) has taken suo-moto cognizance and has warned both the State and Centre to move quickly otherwise they will take the matter in their own hand.
Update: Within 2 hours of the CJI taking suo-moto cognizance, they have arrested one of the main accused Heera Das
The above tells you why the ban on Internet was put in the first place. They wanted to cover it all up. Of all the celebs, only one could find a bit of spine, a bit of backbone to speak about it, all the rest mum
Just imagine, one of the women is around my age while the young one could have been a daughter if I had married on time or a younger sister for sure. If ever I came face to face with them, I just wouldn t be able to look them in the eye. Even their whole whataboutery is built on sham. From their view Kukis are from Burma or Burmese descent. All of which could be easily proved by DNA of all. But let s leave that for a sec. Let s take their own argument that they are Burmese. Their idea of Akhand Bharat stretches all the way to Burma (now called Myanmar). They want all the land but no idea with what to do with the citizens living on it. Even after the video, the whataboutery isn t stopping, that shows how much hatred is there. And not knowing that they too will be victim of the same venom one or the other day  Update: Opposition was told there would be a debate on Manipur. The whole day went by, no debate. That s the shamelessness of this Govt.  Update 20th July 19:25 Center may act or not act against the perpetrators but they will act against Twitter who showed the crime. Talk about shooting the messenger
We are now in the last stage. In 2014, we were at 6

18 July 2023

Sergio Talens-Oliag: Testing cilium with k3d and kind

This post describes how to deploy cilium (and hubble) using docker on a Linux system with k3d or kind to test it as CNI and Service Mesh. I wrote some scripts to do a local installation and evaluate cilium to use it at work (in fact we are using cilium on an EKS cluster now), but I thought it would be a good idea to share my original scripts in this blog just in case they are useful to somebody, at least for playing a little with the technology.

Installation
For each platform we are going to deploy two clusters on the same docker network; I've chosen this model because it allows the containers to see the addresses managed by metallb from both clusters (the idea is to use those addresses for load balancers and treat them as if they were public). The installation(s) use cilium as CNI, metallb for BGP (I tested the cilium options, but I wasn't able to configure them right) and nginx as the ingress controller (again, I tried to use cilium but something didn't work either). To be able to use the previous components some default options have been disabled on k3d and kind and, in the case of k3d, a lot of k3s options (traefik, servicelb, kubeproxy, network-policy, ...) have also been disabled to avoid conflicts. To use the scripts we need to install cilium, docker, helm, hubble, k3d, kind, kubectl and tmpl on our system. After cloning the repository, the sbin/tools.sh script can be used to do that on a linux-amd64 system:
$ git clone https://gitea.mixinet.net/blogops/cilium-docker.git
$ cd cilium-docker
$ ./sbin/tools.sh apps
Once we have the tools, to install everything on k3d (for kind replace k3d by kind) we can use the sbin/cilium-install.sh script as follows:
$ # Deploy first k3d cluster with cilium & cluster-mesh
$ ./sbin/cilium-install.sh k3d 1 full
[...]
$ # Deploy second k3d cluster with cilium & cluster-mesh
$ ./sbin/cilium-install.sh k3d 2 full
[...]
$ # The 2nd cluster-mesh installation connects the clusters
If we run the command cilium status after the installation we should get an output similar to the one seen on the following screenshot:
cilium status
The installation script uses the following templates:
Once we have finished our tests we can remove the installation using the sbin/cilium-remove.sh script.

Some notes about the configuration
  • As noted on the documentation, the cilium deployment needs to mount the bpffs on /sys/fs/bpf and cgroupv2 on /run/cilium/cgroupv2; that is done automatically on kind, but fails on k3d because the image does not include bash (see this issue). To fix it we mount a script on all the k3d containers that is executed each time they are started (the script is mounted as /bin/k3d-entrypoint-cilium.sh because the /bin/k3d-entrypoint.sh script executes the scripts that follow the pattern /bin/k3d-entrypoint-*.sh before launching the k3s daemon). The source code of the script is available here, and a minimal sketch of what it does is shown after these notes.
  • When testing the multi-cluster deployment with k3d we have found issues with open files, looks like they are related to inotify (see this page on the kind documentation); adding the following to the /etc/sysctl.conf file fixed the issue:
    # fix inotify issues with docker & k3d
    fs.inotify.max_user_watches = 524288
    fs.inotify.max_user_instances = 512
  • Although the deployment theoretically supports it, we are not using cilium as the cluster ingress yet (it did not work, so it is no longer enabled) and we are also ignoring the gateway-api for now.
  • The documentation uses the cilium cli to do all the installations, but I noticed that following that route the current version does not work right with hubble (it messes up the TLS support; there are some notes about the problems on this cilium issue), so we are deploying with helm right now. The problem with the helm approach is that there is no official documentation on how to install the cluster mesh with it (there is a request for documentation here), so we are using the cilium cli for that for now, and it looks like it does not break the hubble configuration.
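As promised in the first note, this is a minimal sketch of what such an entrypoint script has to do (the real file in the repository may differ):
#!/bin/sh
# Mount the BPF filesystem and a cgroup2 hierarchy for cilium if missing
if ! grep -qs /sys/fs/bpf /proc/mounts; then
  mount -t bpf bpffs /sys/fs/bpf
fi
if ! grep -qs /run/cilium/cgroupv2 /proc/mounts; then
  mkdir -p /run/cilium/cgroupv2
  mount -t cgroup2 none /run/cilium/cgroupv2
fi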

Tests
To test cilium we have used some scripts & additional config files that are available on the test subdirectory of the repository:
  • cilium-connectivity.sh: a script that runs the cilium connectivity test for one cluster or in multi-cluster mode (for mesh testing). If we export the variable HUBBLE_PF=true the script executes the command cilium hubble port-forward before launching the tests.
  • http-sw.sh: Simple tests for cilium policies from the cilium demo; the script deploys the Star Wars demo application and allows us to add the L3/L4 policy or the L3/L4/L7 policy, test the connectivity and view the policies.
  • ingress-basic.sh: This test is for checking the ingress controller, it is prepared to work against cilium and nginx, but as explained before the use of cilium as an ingress controller is not working as expected, so the idea is to call it with nginx always as the first argument for now.
  • mesh-test.sh: Tool to deploy a global service on two clusters, change the service affinity to local or remote, enable or disable if the service is shared and test how the tools respond.

Running the tests
The cilium-connectivity.sh script executes the standard cilium tests:
$ ./test/cilium-connectivity.sh k3d 12
  Monitor aggregation detected, will skip some flow validation steps
  [k3d-cilium1] Creating namespace cilium-test for connectivity check...
  [k3d-cilium2] Creating namespace cilium-test for connectivity check...
[...]
  All 33 tests (248 actions) successful, 2 tests skipped, 0 scenarios skipped.
To test how the cilium policies work use the http-sw.sh script:
kubectx k3d-cilium2 # (just in case)
# Create test namespace and services
./test/http-sw.sh create
# Test without policies (exhaust-port fails by design)
./test/http-sw.sh test
# Create and view L3/L4 CiliumNetworkPolicy
./test/http-sw.sh policy-l34
# Test policy (no access from xwing, exhaust-port fails)
./test/http-sw.sh test
# Create and view L7 CiliumNetworkPolicy
./test/http-sw.sh policy-l7
# Test policy (no access from xwing, exhaust-port returns 403)
./test/http-sw.sh test
# Delete http-sw test
./test/http-sw.sh delete
And to see how the service mesh works use the mesh-test.sh script:
# Create services on both clusters and test
./test/mesh-test.sh k3d create
./test/mesh-test.sh k3d test
# Disable service sharing from cluster 1 and test
./test/mesh-test.sh k3d svc-shared-false
./test/mesh-test.sh k3d test
# Restore sharing, set local affinity and test
./test/mesh-test.sh k3d svc-shared-default
./test/mesh-test.sh k3d svc-affinity-local
./test/mesh-test.sh k3d test
# Delete deployment from cluster 1 and test
./test/mesh-test.sh k3d delete-deployment
./test/mesh-test.sh k3d test
# Delete test
./test/mesh-test.sh k3d delete

1 July 2023

Debian Brasil: MiniDebConf Brasília 2023 - a brief report

Minidebconf2023 stage From the 25th to the 27th of May, Brasília was the stage of MiniDebConf 2023. This gathering, made up of several activities such as talks, workshops, sprints, a BSP (Bug Squashing Party), key signing, social events and hacking, had as its main goal bringing the community together and celebrating the largest Free Software project in the world: Debian. MiniDebConf Brasília 2023 was a success thanks to the participation of everyone, regardless of their level of knowledge about Debian. We valued the presence both of beginner users who are getting familiar with the system and of the official developers of the project. The spirit of welcoming and collaboration was present at every moment. MiniDebConfs are local gatherings organized by members of the Debian Project, aiming at goals similar to those of DebConf, but at a regional level. Throughout the year, events like this take place in different parts of the world, strengthening the Debian community. Minidebconf2023 sign Activities The MiniDebConf programme was intense and diverse. On the 25th and 26th (Thursday and Friday) we had talks, debates, workshops and many hands-on activities. On the 27th (Saturday) the Hacking Day took place, a special moment in which Debian contributors got together to work jointly on several aspects of the project. This was the Brazilian version of DebCamp, the traditional gathering before DebConf. On that day we prioritized hands-on contribution activities to the project, such as software packaging, translations, key signings, an install fest and the Bug Squashing Party. Minidebconf2023 auditorium

Minidebconf2023 workshop Numbers of this edition The numbers of the event are impressive and show the community's engagement with Debian. We had 236 registered attendees, 20 talks submitted, 14 volunteers and 125 check-ins. In addition, the hands-on activities produced significant results, such as 7 new installations of Debian GNU/Linux, the update of 18 packages in the official Debian project repository by the participants and the addition of 7 new contributors to the translation team. We also highlight the remote participation of the community through the live streams. The analytics data show that our website received 7,058 views in total, with 2,079 views on the main page (which featured the support of our sponsors), 3,042 views on the schedule page and 104 views on the sponsors page. We registered 922 unique users during the event. On YouTube, the live stream reached 311 views, with 56 likes and a peak of 20 simultaneous viewers. There were an incredible 85.1 hours of watch time, and our channel gained 30 new subscribers. All this engagement and interest from the community makes the MiniDebConf even stronger. Minidebconf2023 speakers Photos and videos To relive the best moments of the event, photos and videos are available. The photos can be accessed at: https://deb.li/pbsb2023. The videos with the recordings of the talks are available at the following link: https://deb.li/vbsb2023. To stay up to date and connect with the Debian Brasília community, follow us on our social networks: Acknowledgements We would like to deeply thank all the participants, organizers, sponsors and supporters who contributed to the success of MiniDebConf Brasília 2023. In particular, we express our gratitude to the Gold sponsors: Pencillabs, Globo, Policorp and Toradex Brasil, and to the Silver sponsor, 4-Linux. We also thank Finatec and the Instituto para Conservação de Tecnologias Livres (ICTL) for their support. Minidebconf2023 coffee MiniDebConf Brasília 2023 was a milestone for the Debian community, demonstrating the power of collaboration and Free Software. We hope everyone enjoyed this enriching gathering and will keep actively participating in the next initiatives of the Debian Project. Together, we can make a difference! Minidebconf2023 official photo

25 June 2023

Russ Allbery: Review: The Wee Free Men

Review: The Wee Free Men, by Terry Pratchett
Series: Discworld #30
Publisher: HarperTempest
Copyright: 2003
Printing: 2006
ISBN: 0-06-001238-2
Format: Mass market
Pages: 375
The Wee Free Men is the 30th Discworld novel but the first Tiffany Aching book and doesn't rely on prior knowledge of Discworld, although the witches from previous books do appear. You could start here, although I think the tail end of the book has more impact if you already know who Granny Weatherwax and Nanny Ogg are. The Amazing Maurice and His Educated Rodents was the first Discworld novel written to be young adult, and although I could see that if I squinted, it didn't feel that obviously YA to me. The Wee Free Men is clearly young adult (or perhaps middle grade), right down to the quintessential protagonist: a nine-year-old girl who is practical and determined and a bit of a misfit and does a lot of growing up over the course of the story. Tiffany Aching is the youngest daughter in a large Aching family that comes from a long history of Aching families living in the Chalk. She has a pile of older relatives and one younger brother named Wentworth who is an annoying toddler obsessed with sweets. Her family work a farm that is theoretically the property of the local baron but has been in their family for years. There is always lots to do and Tiffany is an excellent dairymaid, so people mostly leave her alone with her thoughts and her tiny collection of books from her grandmother. Her now-deceased Grandma Aching was a witch. Tiffany, as it turns out, is also a witch, not that she knows that. As the book opens, certain... things are trying to get into her world from elsewhere. The first is a green monster that pops up out of the river and attempts to snatch Wentworth, much to Tiffany's annoyance. She identifies it as Jenny Green-Teeth via a book of fairy tales and dispatches it with a frying pan, somewhat to her surprise, but worse are coming. Even more surprised by her frying pan offensive are the Nac Mac Feegle, last seen in Carpe Jugulum, who know something about where this intrusion is coming from. In short order, the Aching farm has a Nac Mac Feegle infestation. This is, unfortunately, another book about Discworld's version of fairy (or elves, as they were called in Lords and Ladies). I find stories about the fae somewhat hit and miss, and Pratchett's version is one of my least favorites. The Discworld Queen of Fairy is mostly a one-dimensional evil monster and not a very interesting one. A big chunk of the plot is an extended sequence of dreams that annoyed me and went on for about twice as long as it needed to. That's the downside of this book. The upside is that Tiffany Aching is exactly the type of protagonist I loved reading about as a kid, and still love reading about as an adult. She's thoughtful, curious, observant, determined, and uninterested in taking any nonsense from anyone. She has a lot to learn, both about the world and about herself, but she doesn't have to be taught lessons twice and she has a powerful innate sense of justice. She also has a delightfully sarcastic sense of humor.
"Zoology, eh? That's a big word, isn't it." "No, actually it isn't," said Tiffany. "Patronizing is a big word. Zoology is really quite short."
One of the best things that Pratchett does with this book is let Tiffany dislike her little brother. Wentworth eventually ends up in trouble and Tiffany has to go rescue him, which of course she does because he's her baby brother. But she doesn't like him; he's annoying and sticky and constantly going on about sweets and never says anything interesting. Tiffany is aware that she's supposed to love him because he's her little brother, but of course this is not how love actually works, and she doesn't. But she goes and rescues him anyway, because that's the right thing to do, and because he's hers. There are a lot of adult novels that show the nuanced and sometimes uncomfortable emotions we have about family members, but this sort of thing is a bit rarer in novels pitched at pre-teens, and I loved it. One valid way to read it is that Tiffany is neurodivergent, but I think she simply has a reasonable reaction to a brother who is endlessly annoying and too young to have many redeeming qualities in her eyes, and no one forces her to have a more socially expected one. It doesn't matter what you feel about things; it matters what you do, and as long as you do the right thing, you can have whatever feelings about it you want. This is a great lesson for this type of book. The other part of this book that I adored was the stories of Grandma Aching. Tiffany is fairly matter-of-fact about her dead grandmother at the start of the book, but it becomes clear over the course of the story that she's grieving in her own way. Grandma Aching was a taciturn shepherd who rarely put more than two words together and was much better with sheep than people, but she was the local witch in the way that Granny Weatherwax was a witch, and Tiffany was paying close attention. They never managed to communicate as much as either of them wanted, but the love shines through Tiffany's memories. Grandma Aching was teaching her how to be a witch: not the magical parts, but the far more important parts about justice and fairness and respect for other people. This was a great introduction of a new character and a solid middle-grade or young YA novel. I was not a fan of the villain and I can take or leave the Nac Mac Feegle (who are basically Scottish Smurfs crossed with ants and are a little too obviously the comic relief, for all that they're also effective warriors). But Tiffany is great and the stories of Grandma Aching are even better. This was not as good as Night Watch (very few things are), but it was well worth reading. Followed in publication order by Monstrous Regiment. The next Tiffany Aching novel is A Hat Full of Sky. Rating: 8 out of 10

29 May 2023

Shirish Agarwal: Pearls of Lutra, Dahaad, Tetris & Discord.

Pearls of Lutra Pearls of Lutra is the first book by Brian Jacques that I have read, and I think I am going to be a fan of his work. You have to be wary of this particular book: while it is a beautiful book with quite a few illustrations, I have to warn that if you are somebody who feels hungry at the very mention of food, then you will be hungry throughout the book. There isn't a single page where food isn't mentioned, and not just any kind of food but the kind geared towards a sweet tooth. So if you fancy tarts or chocolates or anything sweet, you will feel right at home. The book also touches upon various teas, wines and liquors, but food is where it truly shines. The tale is very much like a Harry Potter adventure but isn't as dark as HP was. In fact, apart from one death and one missing ear, the rest of our heroes and heroines (and there are quite a few) come through fine. I don't want to give too much away as it's a book to be treasured.

Dahaad Dahaad (the roar) is Sonakshi Sinha's entry into OTT/web series. The stage is set somewhere in North India, while the exploits are based on a real-life person called Cyanide Mohan, who killed 20 women between 2005 and 2009. In the web series, however, the antagonist's crimes span a period of 12 years and he has 29 women as his victims. Apart from that, it's pretty much a copy of what was done by the person above. It's a melting pot of a series, with quite a few stories enmeshed along with the main one. The main onus and plot of the series is about women from the lower economic and caste order whose families want them to be wed but cannot due to huge demands for dowry. In such a situation, if a person were to give them a bit of attention, promise marriage and ask them to steal a bit and come away with him, they will do it. The same modus operandi was used by Cyanide Mohan. He had a car that was not actually his but used it to show off that he was from a richer background, entice the women, have sex and promise marriage; the morning-after pill he gave them would contain cyanide, which the women would unwittingly consume. This is also framed by the protagonist played by Sonakshi Sinha to her mother, as her mother is also forcing her to get married because she is getting older. She shows some of the photographs of the victims and says that while the perpetrator is guilty, so is the overall society that puts women in such vulnerable positions. AFAIK, that is still the state of things. In fact, there is a series called Indian Matchmaking that has all the snobbishness that you want. How many people could have a lifestyle like the ones shown in that? Less than 2% of the population. It's actually shows like the above that make the whole thing even more precarious. Apart from that, the show also addresses prejudice about caste and background. I wouldn't go much into it as it's worth seeing and experiencing.

Tetris Tetris is in many ways a story of greed. It's also a story of a lone inventor who had to wait almost 20-odd years to profit from his invention. Forbes does a marvelous job of giving some more background and foreground info about Tetris, the inventor and the producer who went on to strike it rich. It also shows how copyright misrepresentation happens but does nothing to address it. I could talk a whole lot more, but it's better to see the movie and draw your own conclusions. For me it was 4/5.

Discord Discord became Discord 2.0 and is a blank to me. A blank page. I can't do anything. First I thought it was a bug and waited for a few days, as sometimes web services do fix themselves. But two weeks on it still wasn't fixed, so I decided to look under the hood. One of the tools in Firefox is the Web Developer Tools (Ctrl+Shift+I), which tells you if an element of a page is not appearing, or at least gives you a hint. It gave me the following:
Content Security Policy: Ignoring 'unsafe-inline' within script-src or style-src: nonce-source or hash-source specified
Content Security Policy: The page's settings blocked the loading of a resource at data:text/css,%0A%20%20%20%20%20%20%20%2 ("style-src"). data:44:30
Content Security Policy: Ignoring 'unsafe-inline' within script-src or style-src: nonce-source or hash-source specified
TypeError: AudioContext is not a constructor 138875 https://discord.com/assets/cbf3a75da6e6b6a4202e.js:262 l https://discord.com/assets/f5f0b113e28d4d12ba16.js:1ed46a18578285e5c048b.js:241:118
What is happening is that dom.webaudio.enabled has been disabled in Firefox. Then, on a hunch, I searched on Reddit and saw the following. Be careful while visiting the link as it's labelled NSFW, although to my mind there wasn't anything remotely NSFW about it. They do mention using another tool, AudioContext Fingerprint Defender, which supposedly fakes or spoofs an ID. As this add-on isn't tracked by the Firefox privacy team, it's hard for me to say anything positive or negative about it. So, in the end, I stopped using Discord, as the alternative was being tracked by them. Last but not least, I saw this about a week back. Sooner or later this had to happen, as Elon tries to make money off Twitter.
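For anyone debugging a similar blank page, a quick probe in the browser console can confirm whether the Web Audio API is the missing piece before digging further. This is a minimal sketch assuming a Firefox console context; it is not Discord's code, just feature detection.

    // Minimal check for Web Audio availability (run in the browser console).
    // With dom.webaudio.enabled flipped to false in about:config, Firefox no
    // longer exposes a usable AudioContext constructor, so any site that calls
    // `new AudioContext()` unconditionally fails with the error quoted above.
    if (typeof window.AudioContext === "function") {
      const ctx = new AudioContext();
      console.log("Web Audio available, sample rate:", ctx.sampleRate);
      ctx.close();
    } else {
      console.warn("Web Audio is disabled or unsupported; sites that require it may break");
    }

If the check fails, the trade-off described above applies: re-enable dom.webaudio.enabled and accept the audio-fingerprinting surface, or keep it disabled and lose sites that insist on Web Audio.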

21 May 2023

Russ Allbery: Review: The Stone Canal

Review: The Stone Canal, by Ken MacLeod
Series: Fall Revolution #2
Publisher: Tor
Copyright: 1996
Printing: January 2001
ISBN: 0-8125-6864-8
Format: Mass market
Pages: 339
The Stone Canal is a sort of halfway sequel to The Star Fraction. They both take place in the same universe, but the characters are almost entirely disjoint. Half of The Stone Canal happens (mostly) well before the previous book and the other half happens well after it. This book does contain spoilers for the ending of The Star Fraction if one connects the events of the two books correctly (which was a bit harder than I thought it should be), so I would not read them out of order. At the start of The Stone Canal, Jon Wilde wakes up on New Mars beside the titular canal, in the middle of nowhere, accompanied only by a robot that says it made him. Wilde remembers dying on Earth; this new life is apparently some type of resurrection. It's a long walk to Ship City, the center of civilization of a place the robot tells him is New Mars. In Ship City, an android named Dee Model has escaped from her owner and is hiding in a bar. There, she meets an AI abolitionist named Tamara, who helps her flee out the back and down the canal on a boat when Wilde walks into the bar and immediately recognizes her. The abolitionists provide her protection and legal assistance to argue her case for freedom from her owner, a man named Reid. The third thread of the story, and about half the book, is Jon Wilde's life on Earth, starting in 1975 and leading up to the chaotic wars, political fracturing, and revolutions that formed the background and plot of The Star Fraction. Eventually that story turns into a full-fledged science fiction setting, but not until the last 60 pages of the book. I successfully read two books in a Ken MacLeod series! Sadly, I'm not sure I enjoyed the experience. I commented in my review of The Star Fraction that the appeal for me in MacLeod's writing was his reputation as a writer of political science fiction. Unfortunately that's been a bust. The characters are certainly political, in the sense that they profess to have strong political viewpoints and are usually members of some radical (often Trotskyite) organization. There are libertarian anarchist societies and lots of political conflict. But there is almost no meaningful political discussion in any of these books so far. The politics are all tactical or background, and often seem to be created by authorial fiat. For example, New Mars is a sort of libertarian anarchy that somehow doesn't have corporations or a strongman ruler, even though the history (when we finally learn it) would have naturally given rise to one or the other (and has, in numerous other SF novels with similar plots). There's a half-assed explanation for this towards the end of the book that I didn't find remotely believable. Another part of the book describes the formation of the libertarian microstate in The Star Fraction, but never answers a "why" or "how" question I had in the previous book in a satisfying way. Somehow people stop caring about control or predictability or stability or traditional hierarchy without any significant difficulties except external threats, in situations of chaos and disorder where historically humans turn to anyone promising firm structure. It's common to joke about MacLeod winning multiple libertarian Prometheus Awards for his fiction despite being a Scottish communist. I'm finding that much less surprising now that I've read more of his books. Whether or not he believes in it himself, he's got the cynical libertarian smugness and hand-waving down pat. What his characters do care deeply about is smoking, drinking, and having casual sex. 
(There's more political fire here around opposition to anti-smoking laws than there is about any of the society-changing political structures that somehow fall into place.) I have no objections to any of those activities from a moral standpoint, but reading about other people doing them is a snoozefest. The flashback scenes sketch out enough imagined history to satisfy some curiosity from the previous book, but they're mostly about the world's least interesting love triangle, involving two completely unlikable men and lots of tedious jealousy and posturing. The characters in The Stone Canal are, in general, a problem. One of those unlikable men is Wilde, the protagonist for most of the book. Not only did I never warm to him, I never figured out what motivates him or what he cares about. He's a supposedly highly political person who seems to engage in politics with all the enthusiasm of someone filling out tax forms, and is entirely uninterested in explaining to the reader any sort of coherent philosophical approach. The most interesting characters in this book are the women (Annette, Dee Model, Tamara, and, very late in the book, Meg), but other than Dee Model they rarely get much focus from the story. By far the best part of this book is the last 60 pages, where MacLeod finally explains the critical bridge events between Wilde's political history on Earth and the New Mars society. I thought this was engrossing, fast-moving, and full of interesting ideas (at least for a 1990s book; many of them feel a bit stale now, 25 years later). It was also frustrating, because this was the book I wanted to have been reading for the previous 270 pages, instead of MacLeod playing coy with his invented history or showing us interminable scenes about Wilde's insecure jealousy over his wife. It's also the sort of book where at one point characters (apparently uniformly male as far as one could tell from the text of the book) get assigned sex slaves, and while MacLeod clearly doesn't approve of this, the plot is reminiscent of a Heinlein novel: the protagonist's sex slave becomes a very loyal permanent female companion who seems to have the same upside for the male character in question. This was unfortunately not the book I was hoping for. I did enjoy the last hundred pages, and it's somewhat satisfying to have the history come together after puzzling over what happened for 200 pages. But I found the characters tedious and annoying and the politics weirdly devoid of anything like sociology, philosophy, or political science. There is the core of a decent 1990s AI and singularity novel here, but the technology is now rather dated and a lot of other people have tackled the same idea with fewer irritating tics. Not recommended, although I'll probably continue to The Cassini Division because the ending was a pretty great hook for another book. Followed by The Cassini Division. Rating: 5 out of 10

28 March 2023

Shirish Agarwal: Three Ancient Civilizations, Tesla, IMU and other things.

Three Ancient Civilizations I have been reading books for a long time, but somehow I don't know how or when I realized that there are three ancient civilizations that time and again seem to fascinate authors, whether American or European. The three civilizations that get mentioned every now and then are the Egyptian, Greek and Roman, and I have no clue as to why. Two other civilizations closely follow them: the Mesopotamian and Sumerian civilizations. Why most authors, irrespective of genre, are mystified by these five civilizations is beyond me, but they do conjure up a lot of imagination. Now if I were on the right side of 40, maybe from 22-25 onwards, and had the means or the opportunity or both, I would have gone and tried to learn as much as I could about these various civilizations. There is still a lot of enigma attached to them, and the official explanations of how these various civilizations ended seem too good to be true. One of the more interesting points has been how Greek mythology was subverted and flipped to make Roman mythology. Apparently, many Greek gods have a Roman aspect whose qualities are opposite to those of the Greek one. I haven't learned the reason why that is so, yet. Apart from the Iziko Natural Museum in South Africa, which did have its share of wonders (the whole South African experience was surreal), I have not witnessed artefacts from the above anywhere other than in Africa. I could go on, but I don't know what is real and what is mythology when it comes to various cultures, including whatever little I experienced in Africa.

Recent History Having said the above, I also find that many Indians either do not know or are not interested in understanding recent history. I am talking of the time between the 1800s and the 1950s. The British apologist Mr. William Dalrymple, in his book The Anarchy, has shared how the East India Company looted India and gave less than half back to the British. So where did the rest of the money go? It basically went to tax-free havens around the UK. That tale is very similar to how Axis gold was used to build up an entity called Switzerland from scratch. There have been books as well as a couple of miniseries that document that fact. But for me this is not the real story; this is more of a side dish per se. The real story perhaps is how the EU peace project was born. Because of Hitler's rise and the way World War 2 happened, the Allies knew that they couldn't subjugate Germany through another humiliation as they had after World War 1; who knows, another Hitler might have come. So while they did fine the Germans, they also helped them via the Marshall Plan. Germany also realized that it had been warring with other European countries, and quite a few others besides, for over a thousand years. The first sort of treaty in that regard was the Western Union alliance. On the face of it, it was a defense treaty between sovereign powers; interestingly, the UK at that point in time wanted more countries to be part of the Treaty of Dunkirk. This was quickly followed two years later by the Treaty of London, which made way for the Council of Europe to be formed. The reason I am sharing this is because a lot of Indians whom I meet on SM do not know that the UK was instrumental, front and center, in the formation of the European Union (EU). They perhaps also don't realize that after World War 2 the UK was greatly diminished, both financially and militarily. That is the reason it gave back its erstwhile colonies, including India and other countries. If the UK hadn't become a part of the EU, it would have been called the poor man of Europe, as it had been called a few hundred years ago. In fact, after Brexit, the UK is the one nation that has fallen on dire times this badly. They have practically no food, with supermarkets running out of veggies and whatnot, and electricity prices are sky-high as they don't want to curtail the profits of the gas companies, which in turn donate money to Tory coffers. Sadly, most of my brethren do not know this, hence I have had to share it. The EU became a power in its own right due to the gravity model of trade. The U.S. is attempting the same thing and calls it near-shoring.

Tesla, Investor Day Presentation, Toyota and Free Software The above brings us nicely to the Tesla Investor Day that happened a couple of weeks back. The biggest news, though, was broken just 24 hours earlier, when the governor of Monterrey shared how Giga Mexico would be happening and shared some site photographs. There have been bits of news on that off and on since then. Toyota, meanwhile, has been putting out a lot of anti-EV posters and whatnot. In fact, India seems to be mirroring Japan in a lot of ways; the only difference is that Japan is still economically superior to India. I do sincerely hope that Japan can at least get out of its lost decades. I have asked privately if any of the Japanese translators would be willing to translate the anti-EV poster so we may all know what it says.
Anti-EV poster by Toyota

Urinal outside UK Embassy About a week back, a few people chanting Khalistan slogans entered the Indian Embassy in the UK and displayed their flags. Now, in a retaliatory move against a friendly nation, we want to build a urinal outside the UK Embassy, after already making a security downgrade for it. The pettiness being shown by the Indian Govt. is striking, especially when it has very few friends. There are also plans to do the same for the U.S. and other embassies as well. I do not know when we will get common sense.

Holi Just a few weeks back, Holi happened. It used to be a sweet and innocent festival, but over the last few years I have been hearing about and seeing sexual harassment on the rise. In fact, I saw quite a few lewd posts directed at women on or around Holi, and also videos of the same. One of the most shameful incidents happened to a Japanese tourist. She was not only sexually molested but also terrorized with the threat that if she were to report it, she could be raped and murdered. She promptly left Delhi. Once it was reported in the mass media, Delhi Police tried to show it was doing something. FWIW, Delhi's crime-against-women stats have been at a record high and convictions at a record low.

Ecommerce Rules being changed, logistics gonna be tough. Just today there have been changes in the e-commerce rules (again), and this is going to be a pain for almost all companies, big and small, with the exception of Adani and Ambani. Almost all players, including the Tatas, have called them out. Of course, all such laws have been passed without debate. In such a scenario, small startups like these cannot hope to grow their business.

Inertial Measurement Unit or Full Body Tracking with cheap hardware. Apparently, a whole host of companies are looking at 3-D tracking using cheap hardware known as IMU sensors. Sooner rather than later, cheap 3-D glasses and IMU sensors should blow the 3-D market wide open worldwide. There is a huge potential upside to it, and it will probably overtake smartphones as well. But then the danger will be of our thoughts, ideas, nightmares etc. being shared without our consent. The more we tap into a virtual world, what stops anybody from tapping into our brains and practically stealing our identity in more than one way? I don't know if there are any Debian people or FSF projects working on the above. Even laws can only do so much; until and unless there are alternative places and ways, it will be difficult, to say the least.

Matt Brown: Ventilation Monitoring: Ensuring every space has clean, fresh air

The importance of clean, fresh indoor air is one of the most tangible takeaways of the Covid-19 pandemic. In addition to being an effective risk mitigation strategy for reducing the spread of respiratory illnesses, clean, fresh air is necessary to enable effective cognitive performance. Monitoring indoor air quality is relatively easy to do, but traditionally has not been a key focus. I believe air quality monitoring should be accessible for any indoor space, and for highly occupied indoor spaces should be provided on a continuous basis. This post explores the need and an opportunity for a business that can accelerate the adoption of ventilation monitoring through the following topics:

The importance of indoor air quality Clean, fresh air is fundamental to life and health. That might sound obvious, but unfortunately being obvious is not enough to ensure the air we breathe is in fact always clean and healthy. Repeated studies have revealed that in many cases the air you're breathing at school, in the bus or at work, and probably also at home, falls well below the ideal of what clean, fresh air should be. Unclean air has potential long-term health impacts and has also been shown to lower cognitive performance, impacting the ability to learn and work, as well as increasing the risk of transmission of respiratory illnesses like Covid-19 and the flu. Ventilation (replacing old stale air with clean fresh air) is the most effective and economical method of improving and maintaining high indoor air quality. Most New Zealand buildings (including schools and houses) are designed to rely on manual ventilation (opening windows), while newer buildings, often including larger or commercial buildings, may use mechanical ventilation involving fans and ducts. Mechanical ventilation including filtration may also be required in situations where the outdoor air is not clean and fresh, such as in a city or next to a busy intersection.

Observing the invisible Overall air quality is a complex topic involving many contributing factors, many of which are invisible and not perceptible to us until well after adverse effects or irritation occur. This complexity and lack of visible signal is a large contributing factor to the ignorance and lack of attention towards indoor air quality that is prevalent in most buildings and indoor spaces today. Our attention is biased towards the risks that we can see, and this default bias has not been helped by hesitation and resistance to the idea that aerosol transmission and air quality is an important factor in preventing disease transmission that has only recently started to change. Zeynep Tufekci has a great overview that provides fascinating context for how an overreaction to the early incorrect theories of bad air and miasma causing disease contributed to aerosol transmission and air quality being incorrectly neglected for so long. Correcting this history of inattention to indoor air quality is going to take time and effort, but one significant step that we can take to help start the journey to ensuring all indoor spaces have clean, healthy air is to make the invisible part visible. The concentration of carbon dioxide (CO2) in a space is an incredibly effective and easy to measure proxy for the ventilation of a space. The atmospheric background level of CO2 is around 420 parts per million (ppm), while our exhaled breath has concentrations as high as 40,000 ppm. Without effective ventilation, one or more people breathing in an enclosed space will rapidly lead to an observable increase in CO2 concentration, which in turn provides a signal that the ventilation is insufficient and needs to be improved. Monitoring CO2 and improving ventilation is not a panacea for all possible air quality issues, but for the majority of buildings and indoor spaces, using CO2 as a proxy for ventilation and increasing ventilation when CO2 levels rise above recommended levels is a simple, effective and achievable approach that will deliver improvements in cognitive performance and reduction in the risk of disease transmission with few, if any, downsides or risks. See this Public Health Communication Centre briefing for a more detailed explanation.
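To make the proxy concrete, here is a minimal sketch of the kind of threshold logic a monitor or supporting service might apply. The ~420 ppm outdoor background figure comes from the text above; the 800 and 1,500 ppm action levels are illustrative assumptions for the sketch, not thresholds taken from the briefing.

    // Illustrative sketch: mapping a CO2 reading (ppm) to a ventilation action.
    // Thresholds are assumptions for illustration, not official guidance.
    type VentilationStatus = "good" | "open a window soon" | "ventilate now";

    function assessVentilation(co2Ppm: number): VentilationStatus {
      if (co2Ppm < 800) return "good";                // close to the ~420 ppm outdoor background
      if (co2Ppm < 1500) return "open a window soon"; // air is getting noticeably stale
      return "ventilate now";                         // well above recommended levels
    }

    console.log(assessVentilation(450));   // "good"
    console.log(assessVentilation(1200));  // "open a window soon"
    console.log(assessVentilation(2100));  // "ventilate now"

A monitor or web service could use exactly this kind of mapping to drive a traffic-light style display, which is the sort of obvious, prominent signal that spaces relying on manual ventilation need.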

Adding clean air to our hygiene practices We have well-established expectations of hygiene for the food we eat and the water we drink, and these expectations are codified in regulations that ensure those providing these services do so in a way that gives us confidence that we're not going to be at risk of illness. You may recall seeing food grade ratings prominently displayed on the walls of restaurants and cafes that you visit as an example of this. Why should the air we breathe be treated any differently? I think there is a strong argument that indoor air quality deserves regulation, both of the absolute quality of the air and ensuring that the practices and achieved air quality are clearly advertised and available. Ventilation monitoring via measurement of CO2 concentration provides an effective and achievable method that can be used to achieve this, and countries like Belgium and Japan are already starting to regulate indoor air quality. In the UK, the independent SAGE group of scientists has published "Scores on the Doors", a proposal which demonstrates how CO2 monitoring can be helpful in providing information about the air quality of indoor spaces. Unfortunately there is no movement in any of these directions in New Zealand yet, and no sign that regulation or even a basic campaign to raise awareness of ventilation and air quality is even being planned. This is disappointing, but even if such work was planned, it would still require appropriate ventilation monitoring products and services to enable it, and while there are some options available, it is not a fully solved space yet.

Existing ventilation monitoring options Until recently the available offerings for ventilation monitoring have sat at two distinct ends of the price and quality spectrum:
  • Handheld air quality meters advertised as measuring CO2, but in reality reporting only an approximation. These meters do not contain actual CO2 sensors, and only approximate CO2 levels based on measurements of other components of the air. While cheap (often less than $100), these meters are not useful for providing reliable data that can be systematically used to assess and improve ventilation and should be avoided.
  • High-end building management systems (BMS), and industrial measurement products targeted at large buildings such as offices or commercial applications such as food production. These systems require specialist installation, often integrated with large whole-building air conditioning systems. These systems, if appropriately configured, can be a great solution for the types of buildings and spaces that can afford them, but by their nature and cost, they do not offer a solution for the majority of smaller buildings and indoor spaces where we tend to spend a lot of our time.
Over the last few years a growing number of companies have developed products that fit in between the unreliable air quality meters and the expensive BMS/industrial measurement products. Promising NZ-based options in this space include Air Suite, Tether and Monkeytronics. These products are wall mountable, resemble a smoke alarm and utilise a WiFi network to report their measurements to a supporting web service. Pricing varies between $200 and $300 ex GST per unit. Aranet, while not NZ based, provides a handheld monitor, the Aranet4 Home, which is well regarded for quality and accuracy. Aranet4 Home devices are the most expensive in this space, retailing at $386 ex GST, and offer a clunkier and less convenient set of connectivity options via a Bluetooth connection to an associated phone. To obtain similar reporting functionality to the other products requires upgrading to their Pro model and purchasing a separate base station at a combined cost of $1255 ex GST. Outside the commercial product offerings are a number of open source DIY options, which can be built by anyone with basic electronics knowledge. AirGradient is a leading example based in Thailand, and within New Zealand Oliver Seiler's CO2 Monitor provides similar functionality. These open source options have a parts cost in the $100-$150 range, depending on volume built, and provide high-quality measurements via trusted CO2 sensors while also offering huge flexibility in terms of how they operate, interact with users and potential supporting web services.
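As a rough illustration of the "report measurements to a supporting web service" pattern these devices share, the sketch below uploads a single reading over HTTP. The endpoint, field names and device ID are hypothetical; this does not describe any vendor's or DIY project's actual API.

    // Hypothetical sketch of a monitor (or a bridge reading one) uploading a
    // measurement to a supporting web service. Endpoint and schema are made up.
    interface Co2Reading {
      deviceId: string;
      co2Ppm: number;
      temperatureC: number;
      humidityPct: number;
      timestamp: string; // ISO 8601
    }

    async function reportReading(reading: Co2Reading): Promise<void> {
      const response = await fetch("https://example.invalid/api/readings", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(reading),
      });
      if (!response.ok) {
        throw new Error(`Upload failed with status ${response.status}`);
      }
    }

    // Example usage with a made-up reading:
    reportReading({
      deviceId: "staff-room-1",
      co2Ppm: 912,
      temperatureC: 21.4,
      humidityPct: 48,
      timestamp: new Date().toISOString(),
    }).catch(console.error);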

An opportunity: Small businesses and organisations While a growing number of high-quality CO2 monitors has the potential to help drive increased adoption of ventilation monitoring, the plethora of small businesses and organisations that own, operate and manage many of the indoor spaces we visit on a day-to-day basis do not appear to be well served by these existing products. To deploy ventilation monitoring a small business or organisation needs to first become aware of the need or demand for it, and then have a simple and easy path to acquire and install the monitor and access the data. Little to no marketing or demand generation appears to be targeted towards this market from the existing businesses and tellingly, several of the products are not directly available for sale, requiring interaction with a salesperson to purchase. This indicates a focus on selling to larger customers who have a campus or portfolio of buildings and will purchase in larger quantities than the typical small business or organisation will. Small businesses and organisations are likely to occupy smaller buildings and spaces where manual ventilation is the prevalent method of improving and maintaining air quality. Maintaining clean, fresh air via manual ventilation requires the occupants of the space to receive an obvious and straightforward signal when action (opening windows, etc) is required. While the products above all tend to provide some form of local feedback and display in the room, the indication provided and notification of when to take action is less obvious and prominent than would be ideal in a situation where manual ventilation is being relied upon. Informally testing this opportunity with family and friends running small businesses over the last few months has resulted in promising feedback. One particular success story was the discovery of a fresh air duct on the air conditioning unit in a small office that had never been connected to the outside air and was simply recirculating air from the ceiling space back into an office! The resulting stuffiness and poor air quality had been noticed, but without the clear indication from the CO2 monitor that the air conditioning was actually making things worse, rather than better, the underlying issue had not been understood. With the issue fixed and the duct now connected, that business is now enjoying much more productive and healthy working conditions.

Next steps Many small businesses and organisations are likely to have poor air quality and opportunities for improvement similar to the example above that are waiting to be found and fixed, and the existing products available are neither focused or ideal for the needs of this market. I have spent some time over the past six months building a basic CO2 monitoring service that I have used to deploy ventilation monitoring to our local school, and a few other local businesses. There are a number of challenges that still need to be addressed in order to scale the business up, but I think there is a reasonable chance that I can build a viable business that offers an attractive and useful solution that would accelerate the deployment of ventilation monitoring for small businesses and organisations. In an upcoming post, I will explain the foundations of the service that I have built to date, the challenges that need to be overcome and how I plan to evolve the service from the current prototype into a sustainable, bootstrapped business.

25 March 2023

Russ Allbery: Review: Thief of Time

Review: Thief of Time, by Terry Pratchett
Series: Discworld #26
Publisher: Harper
Copyright: May 2001
Printing: August 2014
ISBN: 0-06-230739-8
Format: Mass market
Pages: 420
Thief of Time is the 26th Discworld novel and the last Death novel, although he still appears in subsequent books. It's the third book starring Susan Sto Helit, so I don't recommend starting here. Mort is the best starting point for the Death subseries, and Reaper Man provides a useful introduction to the villains. Jeremy Clockson was an orphan raised by the Guild of Clockmakers. He is very good at making clocks. He's not very good at anything else, particularly people, but his clocks are the most accurate in Ankh-Morpork. He is therefore the logical choice to receive a commission by a mysterious noblewoman who wants him to make the most accurate possible clock: a clock that can measure the tick of the universe, one that a fairy tale says had been nearly made before. The commission is followed by a surprise delivery of an Igor, to help with the clock-making. People who live in places with lots of fields become farmers. People who live where there is lots of iron and coal become blacksmiths. And people who live in the mountains near the Hub, near the gods and full of magic, become monks. In the highest valley are the History Monks, founded by Wen the Eternally Surprised. Like most monks, they take apprentices with certain talents and train them in their discipline. But Lobsang Ludd, an orphan discovered in the Thieves Guild in Ankh-Morpork, is proving a challenge. The monks decide to apprentice him to Lu-Tze the sweeper; perhaps that will solve multiple problems at once. Since Hogfather, Susan has moved from being a governess to a schoolteacher. She brings to that job the same firm patience, total disregard for rules that apply to other people, and impressive talent for managing children. She is by far the most popular teacher among the kids, and not only because she transports her class all over the Disc so that they can see things in person. It is a job that she likes and understands, and one that she's quite irate to have interrupted by a summons from her grandfather. But the Auditors are up to something, and Susan may be able to act in ways that Death cannot. This was great. Susan has quickly become one of my favorite Discworld characters, and this time around there is no (or, well, not much) unbelievable romance or permanently queasy god to distract. The clock-making portions of the book quickly start to focus on Igor, who is a delightful perspective through whom to watch events unfold. And the History Monks! The metaphysics of what they are actually doing (which I won't spoil, since discovering it slowly is a delight) is perhaps my favorite bit of Discworld world building to date. I am a sucker for stories that focus on some process that everyone thinks happens automatically and investigate the hidden work behind it. I do want to add a caveat here that the monks are in part a parody of Himalayan Buddhist monasteries, Lu-Tze is rather obviously a parody of Laozi and Daoism in general, and Pratchett's parodies of non-western cultures are rather ham-handed. This is not quite the insulting mess that the Chinese parody in Interesting Times was, but it's heavy on the stereotypes. It does not, thankfully, rely on the stereotypes; the characters are great fun on their own terms, with the perfect (for me) balance of irreverence and thoughtfulness. Lu-Tze refusing to be anything other than a sweeper and being irritatingly casual about all the rules of the order is a classic bit that Pratchett does very well. 
But I also have the luxury of ignoring stereotypes of a culture that isn't mine, and I think Pratchett is on somewhat thin ice. As one specific example, having Lu-Tze's treasured sayings be a collection of banal aphorisms from a random Ankh-Morpork woman is both hilarious and also arguably rather condescending, and I'm not sure where I landed. It's a spot-on bit of parody of how a lot of people who get very into "eastern religions" sound, but it's also equating the Dao De Jing with advice from the Discworld equivalent of an English housewife. I think the generous reading is that Lu-Tze made the homilies profound by looking at them in an entirely different way than the woman saying them, and that's not completely unlike Daoism and works surprisingly well. But that's reading somewhat against the grain; Pratchett is clearly making fun of philosophical koans, and while anything is fair game for some friendly poking, it still feels a bit weird. That isn't the part of the History Monks that I loved, though. Their actual role in the story doesn't come out of the parody. It's something entirely native to Discworld, and it's an absolute delight. The scene with Lobsang and the procrastinators is perhaps my favorite Discworld set piece to date. Everything about the technology of the History Monks, even the Bond parody, is so good. I grew up reading the Marvel Comics universe, and Thief of Time reminds me of a classic John Byrne or Jim Starlin story, where the heroes are dumped into the middle of vast interdimensional conflicts involving barely-anthropomorphized cosmic powers and the universe is revealed to work in ever more intricate ways at vastly expanding scales. The Auditors are villains in exactly that tradition, and just like the best of those stories, the fulcrum of the plot is questions about what it means to be human, what it means to be alive, and the surprising alliances these non-human powers make with humans or semi-humans. I devoured this kind of story as a kid, and it turns out I still love it. The one complaint I have about the plot is that the best part of this book is the middle, and the end didn't entirely work for me. Ronnie Soak is at his best as a supporting character about three quarters of the way through the book, and I found the ending of his subplot much less interesting. The cosmic confrontation was oddly disappointing, and there's a whole extended sequence involving chocolate that I think was funnier in Pratchett's head than it was in mine. The ending isn't bad, but the middle of this book is my favorite bit of Discworld writing yet, and I wish the story had carried that momentum through to the end. I had so much fun with this book. The Discworld novels are clearly getting better. None of them have yet vaulted into the ranks of my all-time favorite books (there's always some lingering quibble or sagging bit), but it feels like they've gone from reliably good books to more reliably great books. The acid test is coming, though: the next book is a Rincewind book, and those are usually the weak spots. Followed by The Last Hero in publication order. There is no direct thematic sequel. Rating: 8 out of 10

Next.