Search Results: "dod"

4 March 2024

Colin Watson: Free software activity in January/February 2024

Two months into my new gig and it's going great! Tracking my time has taken a bit of getting used to, but having something that amounts to a queryable database of everything I've done has also allowed some helpful introspection. Freexian sponsors up to 20% of my time on Debian tasks of my choice. In fact I've been spending the bulk of my time on debusine, which is itself intended to accelerate work on Debian, but more details on that later. While I contribute to Freexian's summaries now, I've also decided to start writing monthly posts about my free software activity as many others do, to get into some more detail. January 2024 February 2024

21 November 2023

Mike Hommey: How I (kind of) killed Mercurial at Mozilla

Did you hear the news? Firefox development is moving from Mercurial to Git. While the decision is far from being mine, and I was barely involved in the small incremental changes that ultimately led to this decision, I feel I have to take at least some responsibility. And if you are one of those who would rather use Mercurial than Git, you may direct all your ire at me. But let's take a step back and review the past 25 years leading to this decision. You'll forgive me for skipping some details and any possible inaccuracies. This is already a long post; while I could have been more thorough, even I think that would have been too much. This is also not an official Mozilla position, only my personal perception and recollection as someone who was involved at times, but mostly an observer from a distance.

From CVS to DVCS

From its release in 1998, the Mozilla source code was kept in a CVS repository. If you're too young to know what CVS is, let's just say it's an old-school version control system, with its set of problems. Back then, it was mostly ubiquitous in the Open Source world, as far as I remember.

In the early 2000s, the Subversion version control system gained some traction, solving some of the problems that came with CVS. Incidentally, Subversion was created by Jim Blandy, who now works at Mozilla on completely unrelated matters. In the same period, the Linux kernel development moved from CVS to BitKeeper, which was more suitable to the distributed nature of the Linux community. BitKeeper had its own problem, though: it was the opposite of Open Source, but for most pragmatic people, it wasn't a real concern because free access was provided. Until it became a problem: someone at OSDL developed an alternative client to BitKeeper, and licenses of BitKeeper were rescinded for OSDL members, including Linus Torvalds (they were even prohibited from purchasing one).

Following this fiasco, in April 2005, two weeks from each other, both Git and Mercurial were born. The former was created by Linus Torvalds himself, while the latter was developed by Olivia Mackall, who was a Linux kernel developer back then. And because they both came out of the same community for the same needs, and the same shared experience with BitKeeper, they both were similar distributed version control systems. Interestingly enough, several other DVCSes existed as well. In this landscape, the major difference Git was making at the time was that it was blazing fast. Almost incredibly so, at least on Linux systems. That was less true on other platforms (especially Windows). It was a game-changer for handling large codebases in a smooth manner.

Anyways, two years later, in 2007, Mozilla decided to move its source code not to Bzr, not to Git, not to Subversion (which, yes, was a contender), but to Mercurial. The decision "process" was laid down in two rather colorful blog posts. My memory is a bit fuzzy, but I don't recall that it was a particularly controversial choice. All of those DVCSes were still young, and there was no definite "winner" yet (GitHub hadn't even been founded). It made the most sense for Mozilla back then, mainly because the Git experience on Windows still wasn't there, and that mattered a lot for Mozilla, with its diverse platform support. As a contributor, I didn't think much of it, although to be fair, at the time, I was mostly consuming the source tarballs.

Personal preferences

Digging through my archives, I've unearthed a forgotten chapter: I did end up setting up both a Mercurial and a Git mirror of the Firefox source repository on alioth.debian.org. Alioth.debian.org was a FusionForge-based collaboration system for Debian developers, similar to SourceForge. It was the ancestor of salsa.debian.org. I used those mirrors for the Debian packaging of Firefox (cough cough Iceweasel). The Git mirror was created with hg-fast-export, and the Mercurial mirror was only a necessary step in the process. By that time, I had converted my Subversion repositories to Git, and switched off SVK. Incidentally, I started contributing to Git around that time as well.

I apparently did this not too long after Mozilla switched to Mercurial. As a Linux user, I think I just wanted the speed that Mercurial was not providing. Not that Mercurial was that slow, but the difference between a couple seconds and a couple hundred milliseconds was a significant enough difference in user experience for me to prefer Git (and Firefox was not the only thing I was using version control for).

Other people had also created their own mirrors, with the same or other tools. But none of them were "compatible": their commit hashes were different. Hg-git, used by some of them, was putting extra information in commit messages that would make the conversion differ, and hg-fast-export would just not be consistent with itself! My mirror is long gone, and those have not been updated in more than a decade.

I did end up using Mercurial, when I got commit access to the Firefox source repository in April 2010. I still kept using Git for my Debian activities, but I now was also using Mercurial to push to the Mozilla servers. I joined Mozilla as a contractor a few months after that, and kept using Mercurial for a while, but as a, by then, long-time Git user, it never really clicked for me. It turns out, the sentiment was shared by several at Mozilla.

Git incursion

In the early 2010s, GitHub was becoming ubiquitous, and the Git mindshare was getting large. Multiple projects at Mozilla were already entirely hosted on GitHub. As for the Firefox source code base, Mozilla back then was kind of a Wild West, and engineers being engineers, multiple people had been using Git, with their own inconvenient workflows involving a local Mercurial clone. The most popular set of scripts was moz-git-tools, used to incorporate changes from a local Git repository into the local Mercurial copy, to then send to the Mozilla servers. In terms of the number of people doing that, though, I don't think it was a lot, probably a few handfuls. On my end, I was still keeping up with Mercurial.

I think at that time several engineers had their own unofficial Git mirrors on GitHub, and later on Ehsan Akhgari provided another mirror, with a twist: it also contained the full CVS history, which the canonical Mercurial repository didn't have. This was particularly interesting for engineers who needed to do some code archeology and couldn't get past the 2007 cutoff of the Mercurial repository. I think that mirror ultimately became the official-looking, but really unofficial, mozilla-central repository on GitHub. On a side note, a Mercurial repository containing the CVS history was also later set up, but that didn't lead to something officially supported on the Mercurial side.

Some time around 2011~2012, I started to more seriously consider using Git for work myself, but wasn't satisfied with the workflows others had set up for themselves.
I really didn't like the idea of wasting extra disk space keeping a Mercurial clone around while using a Git mirror. I wrote a Python script that would use Mercurial as a library to access a remote repository and produce a git-fast-import stream. That would allow the creation of a git repository without a local Mercurial clone. It worked quite well, but it was not able to update incrementally. Other, more complete tools existed already, some of which I mentioned above. But as time was passing and the size and depth of the Mercurial repository was growing, these tools were showing their limits and were too slow for my taste, especially for the initial clone.

Boot to Git

In the same time frame, Mozilla ventured into the mobile OS sphere with Boot to Gecko, later known as Firefox OS. What does that have to do with version control? The needs of third-party collaborators in the mobile space led to the creation of what is now the gecko-dev repository on GitHub. As I remember it, it was challenging to create, but once it was there, Git users could just clone it and have a working, up-to-date local copy of the Firefox source code and its history... which they could already have, but this was the first officially supported way of doing so. Coincidentally, Ehsan's unofficial mirror was having trouble (to the point of GitHub closing the repository) and was ultimately shut down in December 2013.

You'll often find comments on the interwebs about how GitHub has become unreliable since the Microsoft acquisition. I can't really comment on that, but if you think GitHub is unreliable now, rest assured that it was worse in its beginning. And its sustainability as a platform also wasn't a given, being a rather new player. So on top of having this official mirror on GitHub, Mozilla also ventured into setting up its own Git server for greater control and reliability. But the canonical repository was still the Mercurial one, and while Git users now had a supported mirror to pull from, they still had to somehow interact with Mercurial repositories, most notably for the Try server.

Git slowly creeping in Firefox build tooling

Still in the same time frame, tooling around building Firefox was improving drastically. For obvious reasons, when version control integration was needed in the tooling, Mercurial support was always a no-brainer. The first explicit acknowledgement of a Git repository for the Firefox source code, other than the addition of the .gitignore file, was bug 774109. It added a script to install the prerequisites to build Firefox on macOS (still called OSX back then), and that would print a message inviting people to obtain a copy of the source code with either Mercurial or Git. That was a precursor to the current bootstrap.py, from September 2012.

Following that, as far as I can tell, the first real incursion of Git in the Firefox source tree tooling happened in bug 965120. A few days earlier, bug 952379 had added a mach clang-format command that would apply clang-format-diff to the output from hg diff. Obviously, running hg diff on a Git working tree didn't work, and bug 965120 was filed, and support for Git was added there. That was in January 2014. A year later, when the initial implementation of mach artifact was added (which ultimately led to artifact builds), Git users were an immediate thought. But while they were considered, it was not to support them, but to avoid actively breaking their workflows. Git support for mach artifact was eventually added 14 months later, in March 2016.
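
As an aside, to make the conversion mechanism mentioned above a bit more concrete: git fast-import consumes a plain-text stream of blob and commit commands and builds a repository from it, so a converter only has to walk the Mercurial history and print such a stream. Here is a minimal, self-contained sketch (the content, author and date are made up for illustration; this is not the actual script's output):

git init import-demo && cd import-demo
git fast-import <<'EOF'
blob
mark :1
data 14
hello, world!
commit refs/heads/hg-import
mark :2
committer Converter <converter@example.com> 1700000000 +0000
data 24
Import one hg changeset
M 100644 :1 hello.txt
EOF

After this, git log hg-import shows a single commit containing hello.txt; a real converter emits one commit command per Mercurial changeset.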

From gecko-dev to git-cinnabar

Let's step back a little here, back to the end of 2014. My user experience with Mercurial had reached a level of dissatisfaction that was enough for me to decide to take that script from a couple years prior and make it work for incremental updates. That meant finding a way to store enough information locally to be able to reconstruct whatever the incremental updates would be relying on (guess why other tools hid a local Mercurial clone under the hood). I got something working rather quickly, and after talking to a few people about this side project at the Mozilla Portland All Hands and seeing their excitement, I published a git-remote-hg initial prototype on the last day of the All Hands. Within weeks, the prototype gained the ability to directly push to Mercurial repositories, and a couple months later, was renamed to git-cinnabar.

At that point, as a Git user, instead of cloning the gecko-dev repository from GitHub and switching to a local Mercurial repository whenever you needed to push to a Mercurial repository (i.e. the aforementioned Try server, or, at the time, for reviews), you could just clone and push directly from/to Mercurial, all within Git. And it was fast too. You could get a full clone of mozilla-central in less than half an hour, when at the time, other similar tools would take more than 10 hours (needless to say, it's even worse now).

Another couple months later (we're now at the end of April 2015), git-cinnabar became able to start off a local clone of the gecko-dev repository, rather than clone from scratch, which could be time-consuming. But because git-cinnabar and the tool that was updating gecko-dev weren't producing the same commits, this setup was cumbersome and not really recommended. For instance, if you pushed something to mozilla-central with git-cinnabar from a gecko-dev clone, it would come back with a different commit hash in gecko-dev, and you'd have to deal with the divergence. Eventually, in April 2020, the scripts updating gecko-dev were switched to git-cinnabar, making the use of gecko-dev alongside git-cinnabar a more viable option. Ironically(?), the switch occurred to ease collaboration with KaiOS (you know, the mobile OS born from the ashes of Firefox OS). Well, okay, in all honesty, when the need for syncing in both directions between Git and Mercurial (we only had ever synced from Mercurial to Git) came up, I nudged Mozilla in the direction of git-cinnabar, which, in my (biased but still honest) opinion, was the more reliable option for two-way synchronization (we did have regular conversion problems with hg-git; nothing of the sort has happened since the switch).

One Firefox repository to rule them all

For reasons I don't know, Mozilla decided to use separate Mercurial repositories as "branches". With the switch to the rapid release process in 2011, that meant one repository for nightly (mozilla-central), one for aurora, one for beta, and one for release. And with the addition of Extended Support Releases in 2012, we now add a new ESR repository every year. Boot to Gecko also had its own branches, and so did Fennec (Firefox for Mobile, before Android). There are a lot of them. And then there are also integration branches, where developers' work lands before being merged in mozilla-central (or backed out if it breaks things), always leaving mozilla-central in a (hopefully) good state. Only one of them remains in use today, though. I can only suppose that the way Mercurial branches work was not deemed practical.
It is worth noting, though, that Mercurial branches are used in some cases, to branch off a dot-release when the next major release process has already started, so it's not a matter of not knowing the feature exists or some such.

In 2016, Gregory Szorc set up a new repository that would contain them all (or at least most of them), which eventually became what is now the mozilla-unified repository. This would e.g. simplify switching between branches when necessary. 7 years later, for some reason, the other "branches" still exist, but most developers are expected to be using mozilla-unified. Mozilla's CI also switched to using mozilla-unified as its base repository. Honestly, I'm not sure why the separate repositories are still the main entry point for pushes, rather than going directly to mozilla-unified, but it probably comes down to switching being work, and not being a top priority. Also, it probably doesn't help that working with multiple heads in Mercurial, even (especially?) with bookmarks, can be a source of confusion. To give an example, if you aren't careful, and do a plain clone of the mozilla-unified repository, you may not end up on the latest mozilla-central changeset, but rather, e.g. one from beta, or some other branch, depending on which one was last updated.

Hosting is simple, right? Put your repository on a server, install hgweb or gitweb, and that's it? Maybe that works for... Mercurial itself, but that repository "only" has slightly over 50k changesets and less than 4k files. Mozilla-central has more than an order of magnitude more changesets (close to 700k) and two orders of magnitude more files (more than 700k if you count the deleted or moved files, 350k if you count the currently existing ones). And remember, there are a lot of "duplicates" of this repository. And I didn't even mention user repositories and project branches.

Sure, it's a self-inflicted pain, and you'd think it could probably(?) be mitigated with shared repositories. But consider the simple case of two repositories: mozilla-central and autoland. You make autoland use mozilla-central as a shared repository. Now, you push something new to autoland, it's stored in the autoland datastore. Eventually, you merge to mozilla-central. Congratulations, it's now in both datastores, and you'd need to clean up autoland if you wanted to avoid the duplication.

Now, you'd think mozilla-unified would solve these issues, and it would... to some extent. Because that wouldn't cover user repositories and project branches briefly mentioned above, which in GitHub parlance would be considered Forks. So you'd want a mega global datastore shared by all repositories, and repositories would need to only expose what they really contain. Does Mercurial support that? I don't think so (okay, I'll give you that: even if it doesn't, it could, but that's extra work). And since we're talking about a transition to Git, does Git support that? You may have read about how you can link to a commit from a fork and make-pretend that it comes from the main repository on GitHub? (At least it shows a warning now.) That's essentially the architectural reason why. So the actual answer is that Git doesn't support it out of the box, but GitHub has some backend magic to handle it somehow (and hopefully, other things like Gitea, Girocco, Gitlab, etc. have something similar).

Now, to come back to the size of the repository. A repository is not a static file. It's a server with which you negotiate what you have against what it has that you want.
Then the server bundles what you asked for based on what you said you have. Or in the opposite direction, you negotiate what you have that it doesn't, you send it, and the server incorporates what you sent it. Fortunately the latter is less frequent and requires authentication. But the former is more frequent and CPU intensive. Especially when pulling a large number of changesets, which, incidentally, cloning is.

"But there is a solution for clones" you might say, which is true. That's clonebundles, which offload the CPU intensive part of cloning to a single job scheduled regularly. Guess who implemented it? Mozilla. But that only covers the cloning part. We actually had laid the groundwork to support offloading large incremental updates and split clones, but that never materialized. Even with all that, that still leaves you with a server that can display file contents, diffs, blames, provide zip archives of a revision, and more, all of which are CPU intensive in their own way. And these endpoints are regularly abused, and cause extra load to your servers, yes plural, because of course a single server won't handle the load for the number of users of your big repositories. And because your endpoints are abused, you have to close some of them. And I'm not mentioning the Try repository with its tens of thousands of heads, which brings its own set of problems (and it would have even more heads if we didn't fake-merge them once in a while).

Of course, all the above applies to Git (and it only gained support for something akin to clonebundles last year). So, when the Firefox OS project was stopped, there wasn't much motivation to continue supporting our own Git server, Mercurial still being the official point of entry, and git.mozilla.org was shut down in 2016.

The growing difficulty of maintaining the status quo

Slowly, but steadily in more recent years, as new tooling was added that needed some input from the source code manager, support for Git was more and more consistently added. But at the same time, as people left for other endeavors and weren't necessarily replaced, or more recently with layoffs, resources allocated to such tooling have been spread thin. Meanwhile, the repository growth didn't take a break, and the Try repository was becoming an increasing pain, with push times quite often exceeding 10 minutes. The ongoing work to move Try pushes to Lando will hide the problem under the rug, but the underlying problem will still exist (although the latest version of Mercurial seems to have improved things).

On the flip side, more and more people have been relying on Git for Firefox development, to my own surprise, as I didn't really push for that to happen. It just happened organically, by way of git-cinnabar existing, providing a compelling experience to those who prefer Git, and, I guess, word of mouth. I was genuinely surprised when I recently heard the use of Git among moz-phab users had surpassed a third. I did, however, occasionally orient people who struggled with Mercurial and said they were more familiar with Git, towards git-cinnabar. I suspect there's a somewhat large number of people who never realized Git was a viable option. But that, on its own, can come with its own challenges: if you use git-cinnabar without being backed by gecko-dev, you'll have a hard time sharing your branches on GitHub, because you can't push to a fork of gecko-dev without pushing your entire local repository, as they have different commit histories.
And switching to gecko-dev when you weren't already using it requires some extra work to rebase all your local branches from the old commit history to the new one. Clone times with git-cinnabar have also started to go a little out of hand in the past few years, but this was mitigated in a similar manner as with the Mercurial cloning problem: with static files that are refreshed regularly. Ironically, that made cloning with git-cinnabar faster than cloning with Mercurial. But generating those static files is increasingly time-consuming. As of writing, generating those for mozilla-unified takes close to 7 hours. I was predicting clone times over 10 hours "in 5 years" in a post from 4 years ago; I wasn't too far off. With exponential growth, it could still happen, although to be fair, CPUs have improved since. I will explore the performance aspect in a subsequent blog post, alongside the upcoming release of git-cinnabar 0.7.0-b1. I don't even want to check how long it now takes with hg-git or git-remote-hg (they were already taking more than a day when git-cinnabar was taking a couple hours).

I suppose it's about time that I clarify that git-cinnabar has always been a side-project. It hasn't been part of my duties at Mozilla, and the extent to which Mozilla supports git-cinnabar is in the form of taskcluster workers on the community instance for both git-cinnabar CI and generating those clone bundles. Consequently, that makes the above git-cinnabar-specific issues a Me problem, rather than a Mozilla problem.

Taking the leap

I can't speak for the people who made the proposal to move to Git, nor for the people who put a green light on it. But I can at least give my perspective. Developers have regularly asked why Mozilla was still using Mercurial, but I think it was the first time that a formal proposal was laid out. And it came from the Engineering Workflow team, responsible for issue tracking, code reviews, source control, build and more.

It's easy to say "Mozilla should have chosen Git in the first place", but back in 2007, GitHub wasn't there, Bitbucket wasn't there, and all the available options were rather new (especially compared to the then 21-year-old CVS). I think Mozilla made the right choice, all things considered. Had they waited a couple years, the story might have been different.

You might say that Mozilla stayed with Mercurial for so long because of the sunk cost fallacy. I don't think that's true either. But after the biggest Mercurial repository hosting service turned off Mercurial support, and the main contributor to Mercurial went their own way, it's hard to ignore that the landscape has evolved. And the problems that we regularly encounter with the Mercurial servers are not going to get any better as the repository continues to grow. As far as I know, all the Mercurial repositories bigger than Mozilla's are... not using Mercurial. Google has its own closed-source server, and Facebook has another of its own, and it's not really public either. With resources spread thin, I don't expect Mozilla to be able to continue supporting a Mercurial server indefinitely (although I guess Octobus could be contracted to give a hand, but is that sustainable?).

Mozilla, being a champion of Open Source, also doesn't live in a silo. At some point, you have to meet your contributors where they are. And the Open Source world is now predominantly using Git.
I'm sure the vast majority of new hires at Mozilla in the past, say, 5 years, know Git and have had to learn Mercurial (although they arguably didn't need to). Even within Mozilla, with thousands(!) of repositories on GitHub, Firefox is now actually the exception rather than the norm. I should even actually say Desktop Firefox, because even Mobile Firefox lives on GitHub (although Fenix is moving back in together with Desktop Firefox, and the timing is such that that will probably happen before Firefox moves to Git). Heck, even Microsoft moved to Git!

With a significant developer base already using Git thanks to git-cinnabar, and all the constraints and problems I mentioned previously, it actually seems natural that a transition (finally) happens. However, had git-cinnabar or something similarly viable not existed, I don't think Mozilla would be in a position to take this decision. On one hand, it probably wouldn't be in the current situation of having to support both Git and Mercurial in the tooling around Firefox, nor the resource constraints related to that. But on the other hand, it would be farther from supporting Git and being able to make the switch in order to address all the other problems.

But... GitHub?

I hope I made a compelling case that hosting is not as simple as it can seem, at the scale of the Firefox repository. It's also not Mozilla's main focus. Mozilla has enough on its plate with the migration of existing infrastructure that does rely on Mercurial to understandably not want to figure out the hosting part, especially with limited resources, and with the mixed experience hosting both Mercurial and Git has been so far. After all, GitHub couldn't even display things like the contributors' graph on gecko-dev until recently, and hosting is literally their job! They still drop the ball on large blames (thankfully we have searchfox for those).

Where does that leave us? Gitlab? For those criticizing GitHub for being proprietary, that's probably not open enough. Cloud Source Repositories? "But GitHub is Microsoft" is a complaint I've read a lot after the announcement. Do you think Google hosting would have appealed to these people? Bitbucket? I'm kind of surprised it wasn't in the list of providers that were considered, but I'm also kind of glad it wasn't (and I'll leave it at that). I think the only relatively big hosting provider that could have made the people criticizing the choice of GitHub happy is Codeberg, but I hadn't even heard of it before it was mentioned in response to Mozilla's announcement. But really, with literal thousands of Mozilla repositories already on GitHub, and literally tens of millions of repositories on the platform overall, the pragmatic in me can't deny that it's an attractive option (and I can't stress enough that I wasn't remotely close to the room where the discussion about what choice to make happened).

"But it's a slippery slope". I can see that being a real concern. LLVM also moved its repository to GitHub (from a (I think) self-hosted Subversion server), and ended up moving off Bugzilla and Phabricator to GitHub issues and PRs four years later. As an occasional contributor to LLVM, I hate this move. I hate the GitHub review UI with a passion. At least, right now, GitHub PRs are not a viable option for Mozilla, for their lack of support for security-related PRs, and the more general shortcomings in the review UI. That doesn't mean things won't change in the future, but let's not get too far ahead of ourselves.
The move to Git has just been announced, and the migration has not even begun yet. Just because Mozilla is moving the Firefox repository to GitHub doesn't mean it's locked in forever or that all the eggs are going to be thrown into one basket. If bridges need to be crossed in the future, we'll see then.

So, what's next?

The official announcement said we're not expecting the migration to really begin until six months from now. I'll swim against the current here, and say this: the earlier you can switch to git, the earlier you'll find out what works and what doesn't work for you, whether you already know Git or not. While there is not one unique workflow, here's what I would recommend to anyone who wants to take the leap off Mercurial right now. As there is no one-size-fits-all workflow, I won't tell you how to organize yourself from there. I'll just say this: if you know the Mercurial sha1s of your previous local work, you can create branches for them with:
$ git branch <branch_name> $(git cinnabar hg2git <hg_sha1>)
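
To sanity-check the result, the mapping can also be queried in the opposite direction; the output should be the Mercurial sha1 you started from:
$ git cinnabar git2hg $(git rev-parse <branch_name>)
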
At this point, you should have everything available on the Git side, and you can remove the .hg directory. Or move it into some empty directory somewhere else, just in case. But don't leave it there; it will only confuse the tooling. Artifact builds WILL be confused, though, and you'll have to run ./mach configure before being able to do anything. You may also hit bug 1865299 if your working tree is older than this post. If you have any problem or question, you can ping me on #git-cinnabar or #git on Matrix. I'll put the instructions above somewhere on wiki.mozilla.org, and we can collaboratively iterate on them.

Now, what the announcement didn't say is that the Git repository WILL NOT be gecko-dev, doesn't exist yet, and WON'T BE COMPATIBLE (trust me, it'll be for the better). Why did I make you do all the above, you ask? Because that won't be a problem. I'll have you covered, I promise. The upcoming release of git-cinnabar 0.7.0-b1 will have a way to smoothly switch between gecko-dev and the future repository (incidentally, that will also allow switching from a pure git-cinnabar clone to a gecko-dev one, for the git-cinnabar users who have kept reading this far).

What about git-cinnabar?

With Mercurial going the way of the dodo at Mozilla, my own need for git-cinnabar will vanish. Legitimately, that raises the question of whether it will still be maintained. I can't answer for sure. I don't have a crystal ball. However, the needs of the transition itself will motivate me to finish some long-standing things (like finalizing the support for pushing merges, which is currently behind an experimental flag) or to implement some missing features (support for creating Mercurial branches).

Git-cinnabar started as a Python script, it grew a sidekick implemented in C, which then incorporated some Rust, which then cannibalized the Python script and took its place. It is now close to 90% Rust and 10% C (if you don't count the code from Git that is statically linked to it), and has sort of become my Rust playground (it's also, I must admit, a mess, because of its history, but it's getting better). So the day-to-day use with Mercurial is not my sole motivation to keep developing it. If it were, it would stay stagnant, because all the features I need are there, and the speed is not all that bad, although I know it could be better. Arguably, though, git-cinnabar has been relatively stagnant feature-wise, because all the features I need are there. So, no, I don't expect git-cinnabar to die along with Mercurial use at Mozilla, but I can't really promise anything either.

Final words

That was a long post. But there was a lot of ground to cover. And I still skipped over a bunch of things. I hope I didn't bore you to death. If I did and you're still reading... what's wrong with you? ;)

So this is the end of Mercurial at Mozilla. So long, and thanks for all the fish. But this is also the beginning of a transition that is not easy, and that will not be without hiccups, I'm sure. So fasten your seatbelts (plural), and welcome the change.

To circle back to the clickbait title, did I really kill Mercurial at Mozilla? Of course not. But it's like I stumbled upon a few sparks and tossed a can of gasoline on them. I didn't start the fire, but I sure made it into a proper bonfire... and now it has turned into a wildfire. And who knows? 15 years from now, someone else might be looking back at how Mozilla picked Git at the wrong time, and that, had we waited a little longer, we would have picked some yet-to-come new horse.
But hey, that's the tech cycle for you.

Russ Allbery: Review: Thud!

Review: Thud!, by Terry Pratchett
Series: Discworld #34
Publisher: Harper
Copyright: October 2005
Printing: November 2014
ISBN: 0-06-233498-0
Format: Mass market
Pages: 434
Thud! is the 34th Discworld novel and the seventh Watch novel. It is partly a sequel to The Fifth Elephant, partly a sequel to Night Watch, and references many of the previous Watch novels. This is not a good place to start. Dwarfs and trolls have a long history of conflict, as one might expect between a race of creatures who specialize in mining and a race of creatures whose vital organs are sometimes the targets of that mining. The first battle of Koom Valley was the place where that enmity was made concrete and given a symbol. Now that there are large dwarf and troll populations in Ankh-Morpork, the upcoming anniversary of that battle is the excuse for rising tensions. Worse, Grag Hamcrusher, a revered deep-down dwarf and a dwarf supremacist, is giving incendiary speeches about killing all trolls and appears to be tunneling under the city. Then whispers run through the city's dwarfs that Hamcrusher has been murdered by a troll. Vimes has no patience for racial tensions, or for the inspection of the Watch by one of Vetinari's excessively competent clerks, or the political pressure to add a vampire to the Watch over his prejudiced objections. He was already grumpy before the murder and is in absolutely no mood to be told by deep-down dwarfs who barely believe that humans exist that the murder of a dwarf underground is no affair of his. Meanwhile, The Battle of Koom Valley by Methodia Rascal has been stolen from the Ankh-Morpork Royal Art Museum, an impressive feat given that the painting is ten feet high and fifty feet long. It was painted in impressive detail by a madman who thought he was a chicken, and has been the spark for endless theories about clues to some great treasure or hidden knowledge, culminating in the conspiratorial book Koom Valley Codex. But the museum prides itself on allowing people to inspect and photograph the painting to their heart's content and was working on a new room to display it. It's not clear why someone would want to steal it, but Colon and Nobby are on the case. This was a good time to read this novel. Sadly, the same could be said of pretty much every year since it was written. "Thud" in the title is a reference to Hamcrusher's murder, which was supposedly done by a troll club that was found nearby, but it's also a reference to a board game that we first saw in passing in Going Postal. We find out a lot more about Thud in this book. It's an asymmetric two-player board game that simulates a stylized battle between dwarf and troll forces, with one player playing the trolls and the other playing the dwarfs. The obvious comparison is to chess, but a better comparison would be to the old Steve Jackson Games board game Ogre, which also featured asymmetric combat mechanics. (I'm sure there are many others.) This board game will become quite central to the plot of Thud! in ways that I thought were ingenious. I thought this was one of Pratchett's best-plotted books to date. There are a lot of things happening, involving essentially every member of the Watch that we've met in previous books, and they all matter and I was never confused by how they fit together. This book is full of little callbacks and apparently small things that become important later in a way that I found delightful to read, down to the children's book that Vimes reads to his son and that turns into the best scene of the book. At this point in my Discworld read-through, I can see why the Watch books are considered the best sub-series. 
It feels like Pratchett kicks the quality of writing up a notch when he has Vimes as a protagonist. In several books now, Pratchett has created a villain by taking some human characteristic and turning it into an external force that acts on humans. (See, for instance, the Gonne in Men at Arms, or the hiver in A Hat Full of Sky.) I normally do not like this plot technique, both because I think it lets humans off the hook in a way that cheapens the story and because this type of belief has a long and bad reputation in religions where it is used to dodge personal responsibility and dehumanize one's enemies. When another of those villains turned up in this book, I was dubious. But I think Pratchett pulls off this type of villain as well here as I've seen it done. He lifts up a facet of humanity to let the reader get a better view, but somehow makes it explicit that this is a concretized metaphor. This force is something people create and feed and choose and therefore are responsible for. The one sour note that I do have to complain about is that Pratchett resorts to some cheap and annoying "men are from Mars, women are from Venus" nonsense, mostly around Nobby's subplot but in a few other places (Sybil, some of Angua's internal monologue) as well. It's relatively minor, and I might let it pass without grumbling in other books, but usually Pratchett is better on gender than this. I expected better and it got under my skin. Otherwise, though, this was a quietly excellent book. It doesn't have the emotional gut punch of Night Watch, but the plotting is superb and the pacing is a significant improvement over The Fifth Elephant. The parody is of The Da Vinci Code, which is both more interesting than Pratchett's typical movie parodies and delightfully subtle. We get more of Sybil being a bad-ass, which I am always here for. There's even some lovely world-building in the form of dwarven Devices. I love how Pratchett has built Vimes up into one of the most deceptively heroic figures on Discworld, but also shows all of the support infrastructure that ensures Vimes maintains his principles. On the surface, Thud! has a lot in common with Vimes's insistently moral stance in Jingo, but here it is more obvious how Vimes's morality happens in part because his wife, his friends, and his boss create the conditions for it to thrive. Highly recommended to anyone who has gotten this far. Rating: 9 out of 10

25 October 2023

Russ Allbery: Review: Going Infinite

Review: Going Infinite, by Michael Lewis
Publisher: W.W. Norton & Company
Copyright: 2023
ISBN: 1-324-07434-5
Format: Kindle
Pages: 255
My first reaction when I heard that Michael Lewis had been embedded with Sam Bankman-Fried working on a book when Bankman-Fried's cryptocurrency exchange FTX collapsed into bankruptcy after losing billions of dollars of customer deposits was "holy shit, why would you talk to Michael Lewis about your dodgy cryptocurrency company?" Followed immediately by "I have to read this book." This is that book. I wasn't sure how Lewis would approach this topic. His normal (although not exclusive) area of interest is financial systems and crises, and there is lots of room for multiple books about cryptocurrency fiascoes using someone like Bankman-Fried as a pivot. But Going Infinite is not like The Big Short or Lewis's other financial industry books. It's a nearly straight biography of Sam Bankman-Fried, with just enough context for the reader to follow his life. To understand what you're getting in Going Infinite, I think it's important to understand what sort of book Lewis likes to write. Lewis is not exactly a reporter, although he does explain complicated things for a mass audience. He's primarily a storyteller who collects people he finds fascinating. This book was therefore never going to be like, say, Carreyrou's Bad Blood or Isaac's Super Pumped. Lewis's interest is not in a forensic account of how FTX or Alameda Research were structured. His interest is in what makes Sam Bankman-Fried tick, what's going on inside his head. That's not a question Lewis directly answers, though. Instead, he shows you Bankman-Fried as Lewis saw him and was able to reconstruct from interviews and sources and lets you draw your own conclusions. Boy did I ever draw a lot of conclusions, most of which were highly unflattering. However, one conclusion I didn't draw, and had been dubious about even before reading this book, was that Sam Bankman-Fried was some sort of criminal mastermind who intentionally plotted to steal customer money. Lewis clearly doesn't believe this is the case, and with the caveat that my study of the evidence outside of this book has been spotty and intermittent, I think Lewis has the better of the argument. I am utterly fascinated by this, and I'm afraid this review is going to turn into a long summary of my take on the argument, so here's the capsule review before you get bored and wander off: This is a highly entertaining book written by an excellent storyteller. I am also inclined to believe most of it is true, but given that I'm not on the jury, I'm not that invested in whether Lewis is too credulous towards the explanations of the people involved. What I do know is that it's a fantastic yarn with characters who are too wild to put in fiction, and I thoroughly enjoyed it. There are a few things that everyone involved appears to agree on, and therefore I think we can take as settled. One is that Bankman-Fried, and most of the rest of FTX and Alameda Research, never clearly distinguished between customer money and all of the other money. It's not obvious that their home-grown accounting software (written entirely by one person! who never spoke to other people! in code that no one else could understand!) was even capable of clearly delineating between their piles of money. Another is that FTX and Alameda Research were thoroughly intermingled. There was no official reporting structure and possibly not even a coherent list of employees. The environment was so chaotic that lots of people, including Bankman-Fried, could have stolen millions of dollars without anyone noticing. 
But it was also so chaotic that they could, and did, literally misplace millions of dollars by accident, or because Bankman-Fried had problems with object permanence. Something that was previously less obvious from news coverage but that comes through very clearly in this book is that Bankman-Fried seriously struggled with normal interpersonal and societal interactions. We know from multiple sources that he was diagnosed with ADHD and depression (Lewis describes it specifically as anhedonia, the inability to feel pleasure). The ADHD in Lewis's account is quite severe and does not sound controlled, despite medication; for example, Bankman-Fried routinely played timed video games while he was having important meetings, forgot things the moment he stopped dealing with them, was constantly on his phone or seeking out some other distraction, and often stimmed (by bouncing his leg) to a degree that other people found it distracting. Perhaps more tellingly, Bankman-Fried repeatedly describes himself in diary entries and correspondence to other people (particularly Caroline Ellison, his employee and on-and-off secret girlfriend) as being devoid of empathy and unable to access his own emotions, which Lewis supports with stories from former co-workers. I'm very hesitant to diagnose someone via a book, but, at least in Lewis's account, Bankman-Fried nearly walks down the symptom list of antisocial personality disorder in his own description of himself to other people. (The one exception is around physical violence; there is nothing in this book or in any of the other reporting that I've seen to indicate that Bankman-Fried was violent or physically abusive.) One of the recurrent themes of this book is that Bankman-Fried never saw the point in following rules that didn't make sense to him or worrying about things he thought weren't important, and therefore simply didn't. By about a third of the way into this book, before FTX is even properly started, very little about its eventual downfall will seem that surprising. There was no way that Sam Bankman-Fried was going to be able to run a successful business over time. He was extremely good at probabilistic trading and spotting exploitable market inefficiencies, and extremely bad at essentially every other aspect of living in a society with other people, other than a hit-or-miss ability to charm that worked much better with large audiences than one-on-one. The real question was why anyone would ever entrust this man with millions of dollars or decide to work for him for longer than two weeks. The answer to those questions changes over the course of this story. Later on, it was timing. Sam Bankman-Fried took the techniques of high frequency trading he learned at Jane Street Capital and applied them to exploiting cryptocurrency markets at precisely the right time in the cryptocurrency bubble. There was far more money than sense, the most ruthless financial players were still too leery to get involved, and a rising tide was lifting all boats, even the ones that were piles of driftwood. When cryptocurrency inevitably collapsed, so did his businesses. In retrospect, that seems inevitable. The early answer, though, was effective altruism. A full discussion of effective altruism is beyond the scope of this review, although Lewis offers a decent introduction in the book. 
The short version is that a sensible and defensible desire to use stronger standards of evidence in evaluating charitable giving turned into a bizarre navel-gazing exercise in making up statistical risks to hypothetical future people and treating those made-up numbers as if they should be the bedrock of one's personal ethics. One of the people most responsible for this turn is an Oxford philosopher named Will MacAskill. Sam Bankman-Fried was already obsessed with utilitarianism, in part due to his parents' philosophical beliefs, and it was a presentation by Will MacAskill that converted him to the effective altruism variant of extreme utilitarianism. In Lewis's presentation, this was like joining a cult. The impression I came away with feels like something out of a science fiction novel: Bankman-Fried knew there was some serious gap in his thought processes where most people had empathy, was deeply troubled by this, and latched on to effective altruism as the ethical framework to plug into that hole. So much of effective altruism sounds like a con game that it's easy to think the participants are lying, but Lewis clearly believes Bankman-Fried is a true believer. He appeared to be sincerely trying to make money in order to use it to solve existential threats to society, he does not appear to be motivated by money apart from that goal, and he was following through (in bizarre and mostly ineffective ways). I find this particularly believable because effective altruism as a belief system seems designed to fit Bankman-Fried's personality and justify the things he wanted to do anyway. Effective altruism says that empathy is meaningless, emotion is meaningless, and ethical decisions should be made solely on the basis of expected value: how much return (usually in safety) does society get for your investment. Effective altruism says that all the things that Sam Bankman-Fried was bad at were useless and unimportant, so he could stop feeling bad about his apparent lack of normal human morality. The only thing that mattered was the thing that he was exceptionally good at: probabilistic reasoning under uncertainty. And, critically to the foundation of his business career, effective altruism gave him access to investors and a recruiting pool of employees, things he was entirely unsuited to acquiring the normal way. There's a ton more of this book that I haven't touched on, but this review is already quite long, so I'll leave you with one more point. I don't know how true Lewis's portrayal is in all the details. He took the approach of getting very close to most of the major players in this drama and largely believing what they said happened, supplemented by startling access to sources like Bankman-Fried's personal diary and Caroline Ellison's personal diary. (He also seems to have gotten extensive information from the personal psychiatrist of most of the people involved; I'm not sure if there's some reasonable explanation for this, but based solely on the material in this book, it seems to be a shocking breach of medical ethics.) But Lewis is a storyteller more than he's a reporter, and his bias is for telling a great story. It's entirely possible that the events related here are not entirely true, or are skewed in favor of making a better story. It's certainly true that they're not the complete story. But, that said, I think a book like this is a useful counterweight to the human tendency to believe in moral villains.
This is, frustratingly, a counterweight extended almost exclusively to higher-class white people like Bankman-Fried. This is infuriating, but that doesn't make it wrong. It means we should extend that analysis to more people. Once FTX collapsed, a lot of people became very invested in the idea that Bankman-Fried was a straightforward embezzler. Either he intended from the start to steal everyone's money or, more likely, he started losing money, panicked, and stole customer money to cover the hole. Lots of people in history have done exactly that, and lots of people involved in cryptocurrency have tenuous attachments to ethics, so this is a believable story. But people are complicated, and there's also truth in the maxim that every villain is the hero of their own story. Lewis is after a less boring story than "the crook stole everyone's money," and that leads to some bias. But sometimes the less boring story is also true. Here's the thing: even if Sam Bankman-Fried never intended to take any money, he clearly did intend to mix customer money with Alameda Research funds. In Lewis's account, he never truly believed in them as separate things. He didn't care about following accounting or reporting rules; he thought they were boring nonsense that got in his way. There is obvious criminal intent here in any reading of the story, so I don't think Lewis's more complex story would let him escape prosecution. He refused to follow the rules, and as a result a lot of people lost a lot of money. I think it's a useful exercise to leave mental space for the possibility that he had far less obvious reasons for those actions than that he was a simple thief, while still enforcing the laws that he quite obviously violated. This book was great. If you like Lewis's style, this was some of the best entertainment I've read in a while. Highly recommended; if you are at all interested in this saga, I think this is a must-read. Rating: 9 out of 10

10 October 2023

Russ Allbery: Review: Chilling Effect

Review: Chilling Effect, by Valerie Valdes
Series: Chilling Effect #1
Publisher: Harper Voyager
Copyright: September 2019
Printing: 2020
ISBN: 0-06-287724-0
Format: Kindle
Pages: 420
Chilling Effect is a space opera, kind of; more on the genre classification in a moment. It is the first volume of a series, although it reaches a reasonable conclusion on its own. It was Valerie Valdes's first novel. Captain Eva Innocente's line of work used to be less than lawful, following in the footsteps of her father. She got out of that life and got her own crew and ship. Now, the La Sirena Negra and its crew do small transport jobs for just enough money to stay afloat. Or, maybe, a bit less than that, when the recipient of a crate full of psychic escape-artist cats goes bankrupt before she can deliver it and get paid. It's a marginal and tenuous life, but at least she isn't doing anything shady. Then the Fridge kidnaps her sister. The Fridge is a shadowy organization of extortionists whose modus operandi is to kidnap a family member of their target, stuff them in cryogenic suspension, and demand obedience lest the family member be sold off as indentured labor after a few decades as a popsicle. Eva will be given missions that she and her crew have to perform. If she performs them well, she will pay off the price of her sister's release. Eventually. Oh, and she's not allowed to tell anyone. I found it hard to place the subgenre of this novel more specifically than comedy-adventure. The technology fits space opera: there are psychic cats, pilots who treat ships as extensions of their own body, brain parasites, a random intergalactic warlord, and very few attempts to explain anything with scientific principles. However, the stakes aren't on the scale that space opera usually goes for. Eva and her crew aren't going to topple governments or form rebellions. They're just trying to survive in a galaxy full of abusive corporations, dodgy clients, and the occasional alien who requires you to carry extensive documentation to prove that you can't be hunted for meat. It is also, as you might guess from that description, occasionally funny. That part of the book didn't mesh for me. Eva is truly afraid for her sister, and some of the events in the book are quite sinister, but the antagonist is an organization called The Fridge that puts people in fridges. Sexual harassment in a bar turns into obsessive stalking by a crazed intergalactic warlord who frequently interrupts the plot by randomly blasting things with his fleet, which felt like something from Hitchhiker's Guide to the Galaxy. The stakes for Eva, and her frustrations at being dragged back into a life she escaped, felt too high for the wacky, comic descriptions of the problems she gets into. My biggest complaint, though, is that the plot is driven by people not telling other people critical information they should know. Eva is keeping major secrets from her crew for nearly the entire book. Other people are also keeping information from Eva. There is a romance subplot driven almost entirely by both parties refusing to talk to each other about the existence of a romance subplot. For some people, this is catnip, but it's one of my least favorite fictional tropes and I found much of the book both frustrating and stressful. Fictional characters keeping important secrets from each other apparently raises my blood pressure. One of the things I did like about this book is that Eva is Hispanic and speaks like it. She resorts to Spanish frequently for curses, untranslatable phrases, aphorisms, derogatory comments, and similar types of emotional communication that don't feel right in a second language. 
Most of the time one can figure out the meaning from context, but Valdes doesn't feel obligated to hold the reader's hand and explain everything. I liked that. I think this approach is more viable in these days of ebook readers that can attempt translations on demand, and I think it does a lot to make Eva feel like a real person. I think the characters are the best part of this book, once one gets past the frustration of their refusal to talk to each other. Eva and the alien ship engineer get the most screen time, but Pink, Eva's honest-to-a-fault friend, was probably my favorite character. I also really enjoyed Min, the ship pilot whose primary goal is to be able to jack into the ship and treat it as her body, and otherwise doesn't particularly care about the rest of the plot as long as she gets paid. A lot of books about ship crews like this one lean hard into found family. This one felt more like a group of coworkers, with varying degrees of friendship and level of interest in their shared endeavors, but without the too-common shorthand of making the less-engaged crew members either some type of villain or someone who needs to be drawn out and turned into a best friend or love interest. It's okay for a job to just be a job, even if it's one where you're around the same people all the time. People who work on actual ships do it all the time. This is a half-serious, half-comic action romp that turned out to not be my thing, but I can see why others will enjoy it. Be prepared for a whole lot of communication failures and an uneven emotional tone, but if you're looking for a space-ships-and-aliens story that doesn't take itself very seriously and has some vague YA vibes, this may work for you. Followed by Prime Deceptions, although I didn't like this well enough to read on. Rating: 6 out of 10

16 September 2023

Sergio Talens-Oliag: GitLab CI/CD Tips: Using a Common CI Repository with Assets

This post describes how to handle files that are used as assets by jobs and pipelines defined on a common gitlab-ci repository when we include those definitions from a different project.

Problem description
When a .gitlab-ci.yml file includes files from a different repository, its contents are expanded and the resulting code is the same as the one generated when the included files are local to the repository. In fact, even when the remote files include other files everything works right, as they are also expanded (see the description of how included files are merged for a complete explanation), allowing us to organise the common repository as we want. As an example, suppose that we have the following script on the assets/ folder of the common repository:
dumb.sh
#!/bin/sh
echo "The script arguments are: '$@'"
If we run the following job on the common repository:
job:
  script:
    - $CI_PROJECT_DIR/assets/dumb.sh ARG1 ARG2
the output will be:
The script arguments are: 'ARG1 ARG2'
But if we run the same job from a different project that includes the same job definition the output will be different:
/scripts-23-19051/step_script: eval: line 138: ./assets/dumb.sh: not found
The problem here is that we include and expand the YAML files, but if a script wants to use other files from the common repository as an asset (configuration file, shell script, template, etc.), the execution fails if the files are not available on the project that includes the remote job definition.

Solutions
We can solve the issue using multiple approaches; I'll describe two of them:
  • Create files using scripts
  • Download files from the common repository

Create files using scripts
One way to dodge the issue is to generate the non-YAML files from scripts included on the pipelines using HERE documents. The problem with this approach is that we have to put the content of the files inside a script on a YAML file, and if it uses characters that can be replaced by the shell (remember, we are using HERE documents) we have to escape them (error prone) or encode the whole file into base64 or something similar, making maintenance harder. As an example, imagine that we want to use the dumb.sh script presented on the previous section and we want to call it from the same PATH of the main project. In the examples we are using the same folder; in practice we can create a hidden folder inside the project directory or use a PATH like /tmp/assets-$CI_JOB_ID to leave things outside the project folder and make sure that there will be no collisions if two jobs are executed on the same place (i.e. when using an ssh runner). To create the file we will use hidden jobs to write our script template and reference tags to add it to the scripts when we want to use them. Here we have a snippet that creates the file with cat:
.file_scripts:
  create_dumb_sh:
    - |
      # Create dumb.sh script
      mkdir -p "${CI_PROJECT_DIR}/assets"
      cat >"${CI_PROJECT_DIR}/assets/dumb.sh" <<EOF
      #!/bin/sh
      echo "The script arguments are: '\$@'"
      EOF
      chmod +x "${CI_PROJECT_DIR}/assets/dumb.sh"
Note that to make things work we've added 6 spaces before the script code and escaped the dollar sign. To do the same using base64 we replace the previous snippet by this:
.file_scripts:
  create_dumb_sh:
    - |
      # Create dumb.sh script
      mkdir -p "${CI_PROJECT_DIR}/assets"
      base64 -d >"${CI_PROJECT_DIR}/assets/dumb.sh" <<EOF
      IyEvYmluL3NoCmVjaG8gIlRoZSBzY3JpcHQgYXJndW1lbnRzIGFyZTogJyRAJyIK
      EOF
      chmod +x "${CI_PROJECT_DIR}/assets/dumb.sh"
Again, we have to indent the base64 version of the file using 6 spaces (all lines of the base64 output have to be indented), and to make changes we have to decode and re-encode the file manually, making it harder to maintain. With either version we just need to add a !reference before using the script; if we add the call on the first lines of the before_script we can use the created file in the before_script, script or after_script sections of the job without problems:
job:
  before_script:
    - !reference [.file_scripts, create_dumb_sh]
  script:
    - ${CI_PROJECT_DIR}/assets/dumb.sh ARG1 ARG2
The output of a pipeline that uses this job will be the same as the one shown in the original example:
The script arguments are: 'ARG1 ARG2'
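Note that whenever dumb.sh changes, the base64 blob has to be regenerated by hand. Assuming a GNU base64 (the -w0 flag disables line wrapping, so the output stays on a single line), something like this prints the new value to paste into the YAML:
# Regenerate the single-line base64 payload for dumb.sh
base64 -w0 assets/dumb.sh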

Download the files from the common repository
As we've seen, the previous solution works but is not ideal, as it makes the files harder to read, maintain and use. An alternative approach is to keep the assets on a directory of the common repository (in our examples we will name it assets) and prepare a YAML file that declares some variables (i.e. the URL of the templates project and the PATH where we want to download the files) and defines a script fragment to download the complete folder. Once we have the YAML file we just need to include it and add a reference to the script fragment at the beginning of the before_script of the jobs that use files from the assets directory, and they will be available when needed. The following file is an example of the YAML file we just mentioned:
bootstrap.yml
variables:
  CI_TMPL_API_V4_URL: "${CI_API_V4_URL}/projects/common%2Fci-templates"
  CI_TMPL_ARCHIVE_URL: "${CI_TMPL_API_V4_URL}/repository/archive"
  CI_TMPL_ASSETS_DIR: "/tmp/assets-${CI_JOB_ID}"
.scripts_common:
  bootstrap_ci_templates:
    - |
      # Downloading assets
      echo "Downloading assets"
      mkdir -p "$CI_TMPL_ASSETS_DIR"
      wget -q -O - --header="PRIVATE-TOKEN: $CI_TMPL_READ_TOKEN" \
        "$CI_TMPL_ARCHIVE_URL?path=assets&sha=${CI_TMPL_REF:-main}" |
        tar --strip-components 2 -C "$CI_TMPL_ASSETS_DIR" -xzf -
The file defines the following variables:
  • CI_TMPL_API_V4_URL: URL of the common project; in our case we are using the project ci-templates inside the common group (note that the slash between the group and the project is escaped, which is needed to reference the project by name; if we don't like that approach we can replace the URL-encoded path by the project id, i.e. we could use a value like ${CI_API_V4_URL}/projects/31)
  • CI_TMPL_ARCHIVE_URL: Base URL to use the GitLab API to download files from a repository; we will add the arguments path and sha to select which sub-path to download and from which commit, branch or tag (we will explain later why we use CI_TMPL_REF; for now just keep in mind that if it is not defined we will download the version of the files available on the main branch when the job is executed).
  • CI_TMPL_ASSETS_DIR: Destination of the downloaded files.
And it uses variables defined in other places:
  • CI_TMPL_READ_TOKEN: a token that includes the read_api scope for the common project; we need it because the tokens created by the CI/CD pipelines of other projects can't be used to access the API of the common one. We define the variable on the GitLab CI/CD variables section to be able to change it if needed (i.e. if it expires).
  • CI_TMPL_REF: branch or tag of the common repo from which to get the files (we need that to make sure we are using the right version of the files; i.e. when testing we will use a branch, and on production pipelines we can use fixed tags to make sure that the assets don't change between executions unless we change the reference). We will set the value on the .gitlab-ci.yml file of the remote projects and will use the same reference when including the files to make sure that everything is coherent.
This is an example YAML file that defines a pipeline with a job that uses the script from the common repository:
pipeline.yml
include:
  - /bootstrap.yml
stages:
  - test
dumb_job:
  stage: test
  before_script:
    - !reference [.scripts_common, bootstrap_ci_templates]
  script:
    - ${CI_TMPL_ASSETS_DIR}/dumb.sh ARG1 ARG2
To use it from an external project we will use the following GitLab CI configuration:
gitlab-ci.yml
include:
  - project: 'common/ci-templates'
    ref: &ciTmplRef 'main'
    file: '/pipeline.yml'
variables:
  CI_TMPL_REF: *ciTmplRef
Where we use a YAML anchor to ensure that we use the same reference when including and when assigning the value to the CI_TMPL_REF variable (as far as I know we have to pass the ref value explicitly to know which reference was used when including the file; the anchor allows us to make sure that the value is always the same in both places).

The reference we use is quite important for the reproducibility of the jobs: if we don't use fixed tags or commit hashes as references, each time a job that downloads the files is executed we can get different versions of them. For that reason it is not a bad idea to create tags on our common repo and use them as references on the projects or branches that we want to behave as if their CI/CD configuration was local (if we point to a fixed version of the common repo, everything works almost the same as if the pipelines were directly in our repo). But while developing pipelines, using branches as references is a really useful option: it allows us to re-run the jobs that we want to test, and they will download the latest versions of the asset files on the branch, speeding up the testing process. However, keep in mind that the trick only works with the asset files; if we change a job or a pipeline on the YAML files, restarting the job is not enough to test the new version, as the restart uses the same job created with the current pipeline. To try the updated jobs we have to create a new pipeline, either by pushing a new change to the repository or by executing the pipeline manually.
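For example, once the common repository has a (hypothetical) v1.0.0 tag, a production project can pin both the include and the asset downloads to it just by changing the anchor:
include:
  - project: 'common/ci-templates'
    ref: &ciTmplRef 'v1.0.0'
    file: '/pipeline.yml'
variables:
  CI_TMPL_REF: *ciTmplRef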

Conclusion
For now I'm using the second solution, and as it is working well my guess is that I'll keep using that approach unless GitLab itself provides a better or simpler way of doing the same thing.

31 May 2023

Russ Allbery: Review: Night Watch

Review: Night Watch, by Terry Pratchett
Series: Discworld #29
Publisher: Harper
Copyright: November 2002
Printing: August 2014
ISBN: 0-06-230740-1
Format: Mass market
Pages: 451
Night Watch is the 29th Discworld novel and the sixth Watch novel. I would really like to tell people they could start here if they wanted to, for reasons that I will get into in a moment, but I think I would be doing you a disservice. The emotional heft added by having read the previous Watch novels and followed Vimes's character evolution is significant.

It's the 25th of May. Vimes is about to become a father. He and several of the other members of the Watch are wearing sprigs of lilac for reasons that Sergeant Colon is quite vehemently uninterested in explaining. A serial killer named Carcer the Watch has been after for weeks has just murdered an off-duty sergeant. It's a tense and awkward sort of day and Vimes is feeling weird and wistful, remembering the days when he was a copper and not a manager who has to dress up in ceremonial armor and meet with committees. That may be part of why, when the message comes over the clacks that the Watch have Carcer cornered on the roof of the New Hall of the Unseen University, Vimes responds in person. He's grappling with Carcer on the roof of the University Library in the middle of a magical storm when lightning strikes. When he wakes up, he's in the past, shortly after he joined the Watch and shortly before the events of the 25th of May that the older Watch members so vividly remember and don't talk about.

I have been saying recently in Discworld reviews that it felt like Pratchett was on the verge of a breakout book that's head and shoulders above Discworld prior to that point. This is it. This is that book. The setup here is masterful: the sprigs of lilac that slowly tell the reader something is going on, the refusal of any of the older Watch members to talk about it, the scene in the graveyard to establish the stakes, the disconcerting fact that Vetinari is wearing a sprig of lilac as well, and the feeling of building tension that matches the growing electrical storm. And Pratchett never gives into the temptation to explain everything and tip his hand prematurely. We know the 25th is coming and something is going to happen, and the reader can put together hints from Vimes's thoughts, but Pratchett lets us guess and sometimes be right and sometimes be wrong. Vimes is trying to change history, which adds another layer of uncertainty and enjoyment as the reader tries to piece together both the true history and the changes. This is a masterful job at a "what if?" story.

And, beneath that, the commentary on policing and government and ethics is astonishingly good. In a review of an earlier Watch novel, I compared Pratchett to Dickens in the way that he focuses on a sort of common-sense morality rather than political theory. That is true here too, but oh that moral analysis is sharp enough to slide into you like a knife. This is not the Vimes that we first met in Guards! Guards!. He has turned his cynical stubbornness into a working theory of policing, and it's subtle and complicated and full of nuance that he only barely knows how to explain. But he knows how to show it to people.
Keep the peace. That was the thing. People often failed to understand what that meant. You'd go to some life-threatening disturbance like a couple of neighbors scrapping in the street over who owned the hedge between their properties, and they'd both be bursting with aggrieved self-righteousness, both yelling, their wives would either be having a private scrap on the side or would have adjourned to a kitchen for a shared pot of tea and a chat, and they all expected you to sort it out. And they could never understand that it wasn't your job. Sorting it out was a job for a good surveyor and a couple of lawyers, maybe. Your job was to quell the impulse to bang their stupid fat heads together, to ignore the affronted speeches of dodgy self-justification, to get them to stop shouting and to get them off the street. Once that had been achieved, your job was over. You weren't some walking god, dispensing finely tuned natural justice. Your job was simply to bring back peace.
When Vimes is thrown back in time, he has to pick up the role of his own mentor, the person who taught him what policing should be like. His younger self is right there, watching everything he does, and he's desperately afraid he'll screw it up and set a worse example. Make history worse when he's trying to make it better. It's a beautifully well-done bit of tension that uses time travel as the hook to show both how difficult mentorship is and also how irritating one's earlier naive self would be.
He wondered if it was at all possible to give this idiot some lessons in basic politics. That was always the dream, wasn't it? "I wish I'd known then what I know now"? But when you got older you found out that you now wasn't you then. You then was a twerp. You then was what you had to be to start out on the rocky road of becoming you now, and one of the rocky patches on that road was being a twerp.
The backdrop of this story, as advertised by the map at the front of the book, is a revolution of sorts. And the revolution does matter, but not in the obvious way. It creates space and circumstance for some other things to happen that are all about the abuse of policing as a tool of politics rather than Vimes's principle of keeping the peace. I mentioned when reviewing Men at Arms that it was an awkward book to read in the United States in 2020. This book tackles the ethics of policing head-on, in exactly the way that book didn't.

It's also a marvelous bit of competence porn. Somehow over the years, Vimes has become extremely good at what he does, and not just in the obvious cop-walking-a-beat sort of ways. He's become a leader. It's not something he thinks about, even when thrown back in time, but it's something Pratchett can show the reader directly, and have the other characters in the book comment on.

There is so much more that I'd like to say, but so much would be spoilers, and I think Night Watch is more effective when you have the suspense of slowly puzzling out what's going to happen. Pratchett's pacing is exquisite. It's also one of the rare Discworld novels where Pratchett fully commits to a point of view and lets Vimes tell the story. There are a few interludes with other people, but the only other significant protagonist is, quite fittingly, Vetinari. I won't say anything more about that except to note that the relationship between Vimes and Vetinari is one of the best bits of fascinating subtlety in all of Discworld.

I think it's also telling that nothing about Night Watch reads as parody. Sure, there is a nod to Back to the Future in the lightning storm, and it's impossible to write a book about police and street revolutions without making the reader think about Les Miserables, but nothing about this plot matches either of those stories. This is Pratchett telling his own story in his own world, unapologetically, and without trying to wedge it into parody shape, and it is so much the better book for it.

The one quibble I have with the book is that the bits with the Time Monks don't really work. Lu-Tze is annoying and flippant given the emotional stakes of this story, the interludes with him are frustrating and out of step with the rest of the book, and the time travel hand-waving doesn't add much. I see structurally why Pratchett put this in: it gives Vimes (and the reader) a time frame and a deadline, it establishes some of the ground rules and stakes, and it provides a couple of important opportunities for exposition so that the reader doesn't get lost. But it's not a good story. The rest of the book is so amazingly good, though, that it doesn't matter (and the framing stories for "what if?" explorations almost never make much sense).

The other thing I have a bit of a quibble with is outside the book. Night Watch, as you may have guessed by now, is the origin of the May 25th Pratchett memes that you will be familiar with if you've spent much time around SFF fandom. But this book is dramatically different from what I was expecting based on the memes. You will, for example, see a lot of people posting "Truth, Justice, Freedom, Reasonably Priced Love, And a Hard-Boiled Egg!", and before reading the book it sounds like a Pratchett-style humorous revolutionary slogan. And I guess it is, sort of, but, well... I have to quote the scene:
"You'd like Freedom, Truth, and Justice, wouldn't you, Comrade Sergeant?" said Reg encouragingly. "I'd like a hard-boiled egg," said Vimes, shaking the match out. There was some nervous laughter, but Reg looked offended. "In the circumstances, Sergeant, I think we should set our sights a little higher " "Well, yes, we could," said Vimes, coming down the steps. He glanced at the sheets of papers in front of Reg. The man cared. He really did. And he was serious. He really was. "But...well, Reg, tomorrow the sun will come up again, and I'm pretty sure that whatever happens we won't have found Freedom, and there won't be a whole lot of Justice, and I'm damn sure we won't have found Truth. But it's just possible that I might get a hard-boiled egg."
I think I'm feeling defensive of the heart of this book because it's such an emotional gut punch and says such complicated and nuanced things about politics and ethics (and such deeply cynical things about revolution). But I think if I were to try to represent this story in a meme, it would be the "angels rise up" song, with all the layers of meaning that it gains in this story. I'm still at the point where the lilac sprigs remind me of Sergeant Colon becoming quietly furious at the overstep of someone who wasn't there.

There's one other thing I want to say about that scene: I'm not naturally on Vimes's side of this argument. I think it's important to note that Vimes's attitude throughout this book is profoundly, deeply conservative. The hard-boiled egg captures that perfectly: it's a bit of physical comfort, something you can buy or make, something that's part of the day-to-day wheels of the city that Vimes talks about elsewhere in Night Watch. It's a rejection of revolution, something that Vimes does elsewhere far more explicitly. Vimes is a cop. He is in some profound sense a defender of the status quo. He doesn't believe things are going to fundamentally change, and it's not clear he would want them to if they did.

And yet. And yet, this is where Pratchett's Dickensian morality comes out. Vimes is a conservative at heart. He's grumpy and cynical and jaded and he doesn't like change. But if you put him in a situation where people are being hurt, he will break every rule and twist every principle to stop it.
He wanted to go home. He wanted it so much that he trembled at the thought. But if the price of that was selling good men to the night, if the price was filling those graves, if the price was not fighting with every trick he knew... then it was too high. It wasn't a decision that he was making, he knew. It was happening far below the areas of the brain that made decisions. It was something built in. There was no universe, anywhere, where a Sam Vimes would give in on this, because if he did then he wouldn't be Sam Vimes any more.
This is truly exceptional stuff. It is the best Discworld novel I have read, by far. I feel like this was the Watch novel that Pratchett was always trying to write, and he had to write five other novels first to figure out how to write it. And maybe to prepare Discworld readers to read it. There are a lot of Discworld novels that are great on their own merits, but also it is 100% worth reading all the Watch novels just so that you can read this book. Followed in publication order by The Wee Free Men and later, thematically, by Thud!. Rating: 10 out of 10

27 April 2023

Jonathan McDowell: Repurposing my C.H.I.P.

Way back at DebConf16 Gunnar managed to arrange for a number of Next Thing Co. C.H.I.P. boards to be distributed to those who were interested. I was lucky enough to be amongst those who received one, but I have to confess that after some initial experimentation it ended up sitting in its box unused. The reasons for that were varied: partly not being quite sure what best to do with it, partly a number of limitations it had, partly NTC sadly going insolvent and there being less momentum around the hardware. I've always meant to go back to it, poking it every now and then but never completing a project. I'm finally almost there, and I figure I should write some of it up. TL;DR: My C.H.I.P. is currently running a mainline Linux 6.3 kernel with only a few DTS patches, an upstream u-boot v2022.01 with a couple of minor patches and an unmodified Debian bullseye armhf userspace.

Storage
The main issue with the C.H.I.P. is that it uses MLC NAND; in particular mine has an 8GB H27QCG8T2E5R. That ended up unsupported in Linux, with the UBIFS folk disallowing operation on MLC devices. There's been subsequent work to enable an SLC emulation mode, which makes the device more reliable at the cost of losing capacity by pairing up writes/reads in cells (AFAICT). Some of this hit for the H27UCG8T2ETR in 5.16 kernels, but I definitely did some experimentation with 5.17 without having much success. I should maybe go back and try again, but I ended up going a different route.

It turned out that BytePorter had documented how to add a microSD slot to the NTC C.H.I.P., using just a microSD to full-size SD card adapter. Every microSD card I buy seems to come with one of these, so I had plenty lying around to test with. I started by ensuring the kernel could see it ok (by modifying the device tree), but once that was all confirmed I went further and built a more modern u-boot that talked to the SD card and defaulted to booting off it. That meant no more relying on the internal NAND at all!

I do see some flakiness with the SD card, which is possibly down to the dodgy way it's hooked up (I should probably do a basic PCB layout with JLCPCB instead). That's mostly been mitigated by forcing it into 1-bit mode instead of 4-bit mode (I tried lowering the frequency too, but that didn't make a difference). The problem manifests as:
sunxi-mmc 1c11000.mmc: data error, sending stop command
and then all storage access freezing (existing logins still work, if the program you're trying to run is in cache). I can't find a conclusive software solution to this; I'm pretty sure it's the hardware, but I don't understand why the recovery doesn't generally work.

Random power offs
After I had storage working I'd see random hangs or power offs. It wasn't quite clear what was going on. So I started trying to work out how to find out the CPU temperature, in case it was overheating. It turns out the temperature sensor on the R8 is part of the touchscreen driver, and I'd taken my usual approach of turning off all the drivers I didn't think I'd need. Enabling it (CONFIG_TOUCHSCREEN_SUN4I) gave temperature readings and seemed to help somewhat with stability, though not completely.

Next I ended up looking at the AXP209 PMIC. There were various scripts still installed (I'd started out with the NTC Debian install and slowly upgraded it to bullseye while stripping away the obvious pieces I didn't need) and a start-up script called enable-no-limit. This turned out not to be running (some sort of expectation of i2c-dev being loaded and another failing check), but looking at the script and the data sheet revealed the issue. The AXP209 can cope with 3 power sources: an external DC source, a Li-battery, and finally a USB port. I was powering my board via the USB port, using a charger rated for 2A. It turns out that the AXP209 defaults to limiting USB current to 900mA, and that with wifi active and the CPU busy the C.H.I.P. can rise above that, at which point the AXP shuts everything down. Armed with that info I was able to understand what the power scripts were doing and which bit I needed: i2cset -f -y 0 0x34 0x30 0x03 to set no limit and disable the auto power-off. Additionally I discovered that the AXP209 has a built-in temperature sensor as well, so I added support for that via iio-hwmon.
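A quick way to keep an eye on those sensors is to walk the hwmon entries in sysfs (a rough sketch; hwmon numbering and sensor names vary with the board and kernel configuration):
# Print every hwmon sensor name with its temperature (in millidegrees C)
for h in /sys/class/hwmon/hwmon*; do
  printf '%s: %s\n' "$(cat "$h/name")" "$(cat "$h"/temp*_input 2>/dev/null)"
done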

WiFi
WiFi on the C.H.I.P. is provided by an RTL8723BS SDIO-attached device. It's terrible (and not just here; I had an x86-based device with one where it also sucked). Thankfully there's a driver in staging in the kernel these days, but I've still found it can fall out with my house setup, ending up connecting to a further-away AP, which then results in lots of retries, dropped frames and CPU consumption. Nailing it to the AP on the other side of the wall from where it is helps. I haven't done any serious testing with the Bluetooth other than checking it's detected and can scan ok.

Patches
I patched u-boot v2022.01 (which shows you how long ago I was trying this out) with the following to enable boot from external SD:
u-boot C.H.I.P. external SD patch
diff --git a/arch/arm/dts/sun5i-r8-chip.dts b/arch/arm/dts/sun5i-r8-chip.dts
index 879a4b0f3b..1cb3a754d6 100644
--- a/arch/arm/dts/sun5i-r8-chip.dts
+++ b/arch/arm/dts/sun5i-r8-chip.dts
@@ -84,6 +84,13 @@
 		reset-gpios = <&pio 2 19 GPIO_ACTIVE_LOW>; /* PC19 */
 	};
 
+	mmc2_pins_e: mmc2@0 {
+		pins = "PE4", "PE5", "PE6", "PE7", "PE8", "PE9";
+		function = "mmc2";
+		drive-strength = <30>;
+		bias-pull-up;
+	};
+
 	onewire {
 		compatible = "w1-gpio";
 		gpios = <&pio 3 2 GPIO_ACTIVE_HIGH>; /* PD2 */
@@ -175,6 +182,16 @@
 	status = "okay";
 };
 
+&mmc2 {
+	pinctrl-names = "default";
+	pinctrl-0 = <&mmc2_pins_e>;
+	vmmc-supply = <&reg_vcc3v3>;
+	vqmmc-supply = <&reg_vcc3v3>;
+	bus-width = <4>;
+	broken-cd;
+	status = "okay";
+};
+
 &ohci0 {
 	status = "okay";
 };
diff --git a/arch/arm/include/asm/arch-sunxi/gpio.h b/arch/arm/include/asm/arch-sunxi/gpio.h
index f3ab1aea0e..c0dfd85a6c 100644
--- a/arch/arm/include/asm/arch-sunxi/gpio.h
+++ b/arch/arm/include/asm/arch-sunxi/gpio.h
@@ -167,6 +167,7 @@ enum sunxi_gpio_number {
 
 #define SUN8I_GPE_TWI2		3
 #define SUN50I_GPE_TWI2		3
+#define SUNXI_GPE_SDC2		4
 
 #define SUNXI_GPF_SDC0		2
 #define SUNXI_GPF_UART0		4
diff --git a/board/sunxi/board.c b/board/sunxi/board.c
index fdbcd40269..f538cb7e20 100644
--- a/board/sunxi/board.c
+++ b/board/sunxi/board.c
@@ -433,9 +433,9 @@ static void mmc_pinmux_setup(int sdc)
 			sunxi_gpio_set_drv(pin, 2);
 		}
 #elif defined(CONFIG_MACH_SUN5I)
-		/* SDC2: PC6-PC15 */
-		for (pin = SUNXI_GPC(6); pin <= SUNXI_GPC(15); pin++) {
-			sunxi_gpio_set_cfgpin(pin, SUNXI_GPC_SDC2);
+		/* SDC2: PE4-PE9 */
+		for (pin = SUNXI_GPE(4); pin <= SUNXI_GPE(9); pin++) {
+			sunxi_gpio_set_cfgpin(pin, SUNXI_GPE_SDC2);
 			sunxi_gpio_set_pull(pin, SUNXI_GPIO_PULL_UP);
 			sunxi_gpio_set_drv(pin, 2);
 		}

I've sent some patches for the kernel device tree upstream. There's an outstanding issue with the Bluetooth wake GPIO causing the serial port not to probe(!) that I need to resolve before sending a v2, but what's there works for me. The only remaining piece is a patch to enable the external SD for Linux; I don't think it's appropriate to send upstream, but it's fairly basic. This limits the bus to 1 bit rather than the 4 bits it's capable of, as mentioned above.
Linux C.H.I.P. external SD DTS patch
diff --git a/arch/arm/boot/dts/sun5i-r8-chip.dts b/arch/arm/boot/dts/sun5i-r8-chip.dts
index fd37bd1f3920..2b5aa4952620 100644
--- a/arch/arm/boot/dts/sun5i-r8-chip.dts
+++ b/arch/arm/boot/dts/sun5i-r8-chip.dts
@@ -163,6 +163,17 @@ &mmc0 {
 	status = "okay";
 };
 
+&mmc2 {
+	pinctrl-names = "default";
+	pinctrl-0 = <&mmc2_4bit_pe_pins>;
+	vmmc-supply = <&reg_vcc3v3>;
+	vqmmc-supply = <&reg_vcc3v3>;
+	bus-width = <1>;
+	non-removable;
+	disable-wp;
+	status = "okay";
+};
+
 &ohci0 {
 	status = "okay";
 };

As for what I'm doing with it, I think that'll have to be a separate post.

Simon Josefsson: A Security Device Threat Model: The Substitution Attack

I'd like to describe and discuss a threat model for computational devices. This is generic, but we will narrow it down to security-related devices: for example, portable hardware dongles used for OpenPGP/OpenSSH keys, FIDO/U2F, OATH HOTP/TOTP, PIV, payment cards, wallets etc., and more permanently attached devices like a Hardware Security Module (HSM), a TPM chip, or the hybrid variant of a mostly permanently-inserted but removable hardware security dongle. Our context is cryptographic hardware engineering, and the purpose of the threat model is to serve as a thought experiment for how to build and design security devices that offer better protection. The threat model is related to the Evil Maid attack. Our focus is to improve security for the end-user, rather than the traditional focus of improving security for the organization that provides the token to the end-user, or for the site that the end-user is authenticating to. This is a critical but often under-appreciated distinction, and it leads to surprising recommendations related to onboard key generation, randomness etc. below.

The Substitution Attack
An attacker is able to substitute any component of the device (hardware or software) at any time for any period of time.
Your takeaway should be that devices should be designed to mitigate harmful consequences if any component of the device (hardware or software) is substituted for a malicious component for some period of time, at any time, during the lifespan of that component. Some designs protect better against this attack than other designs, and the threat model can be used to understand which designs are really bad, and which are less so.

Terminology
The threat model involves at least one device that is well-behaving and one that is not, and we call these Good Device and Bad Device respectively. The bad device may be the same physical device as the good one, with some minor software modification or a minor component replaced, but it could also be a completely separate physical device. We don't care about that distinction; we just care whether a particular device has a malicious component in it or not. I'll use terms like "security device", "device", "hardware key", "security co-processor" etc. interchangeably. From an engineering point of view, "malicious" here includes unintentional behavior such as software or hardware bugs. It is not possible to differentiate an intentionally malicious device from a well-designed device with a critical bug. Don't attribute to malice what can be adequately explained by stupidity, but don't naïvely attribute to stupidity what may be deniably malicious.

What is "some period of time"?
Some period of time can be any length of time: seconds, minutes, days, weeks, etc. It may also occur at any time: during manufacturing, during transportation to the user, after first usage by the user, or after a couple of months of usage by the user. Note that we intentionally consider time-of-manufacturing as a vulnerable phase. Even further, the substitution may occur multiple times: the Good Key may be replaced with a Bad Key by the attacker for one day and then returned, and this may repeat a month later.

What are "harmful consequences"?
Since a security key has a fairly well-confined scope and purpose, we can get a fairly good exhaustive list of things that could go wrong. Harmful consequences include:
  • Attacker learns any secret keys stored on a Good Key.
  • Attacker causes user to trust a public key generated by a Bad Key.
  • Attacker is able to sign something using a Good Key.
  • Attacker learns the PIN code used to unlock a Good Key.
  • Attacker learns data that is decrypted by a Good Key.

Thin vs Deep solutions
One approach to mitigate many issues arising from device substitution is to have the host (or remote site) require that the device prove that it is the intended unique device before it continues to talk to it. This requires an authentication/authorization protocol, which usually involves a unique device identity and out-of-band trust anchors. Such trust anchors are often problematic, since a common use-case for a security device is to connect it to a host that has never seen the device before. A weaker approach is to have the device prove that it merely belongs to a class of genuine devices from a trusted manufacturer, usually by providing a signature generated by a device-specific private key signed by the device manufacturer. This is weaker, since then the user cannot differentiate two different good devices. In both cases, the host (or remote site) would stop talking to the device if it cannot prove that it is the intended key, or at least belongs to a class of known trusted genuine devices. Upon scrutiny, this solution is still vulnerable to a substitution attack, just earlier in the manufacturing chain: how can the process that injects the per-device or per-class identities/secrets know that it is putting them into a good key rather than a malicious device? Consider also the consequences if the cryptographic keys that guarantee that a device is genuine leak.

The model of the thin solution is similar to the old approach to network firewalls: have a filtering firewall that only lets through intended traffic, and then run completely insecure protocols internally, such as telnet. The networking world has evolved, and now we have defense in depth: even within strongly firewalled networks, it is prudent to run, for example, SSH with publickey-based user authentication even on locally physically trusted networks. This approach requires more thought and adds complexity, since each level has to provide some security checking. I'm arguing we need similar defense-in-depth for security devices. Security key designs cannot simply dodge this problem by assuming they are working in a friendly environment where component substitution never occurs.

Example: Device authentication using PIN codes
To see how this threat model can be applied to reason about security key designs, let's consider a common design. Many security keys use PIN codes to unlock private key operations, for example OpenPGP cards that lack built-in PIN-entry functionality. The software on the computer just sends a PIN code to the device, and the device allows private-key operations if the PIN code was correct. Let's apply the substitution threat model to this design: the attacker replaces the intended good key with a malicious device that saves a copy of the PIN code presented to it, and then gives out error messages. Once the user has entered the PIN code and gotten an error message, presumably temporarily giving up and doing other things, the attacker replaces the device back again. The attacker has learnt the PIN code, and can later use this to perform private-key operations on the good device. This means a good design involves not sending PIN codes in the clear, but using a stronger authentication protocol that allows the card to know that the PIN was correct without learning the PIN. This is implemented optionally for many OpenPGP cards today as the key-derivation-function extension. That should be mandatory, and users should not use setups that send device authentication in the clear; ultimately, security devices should not even include support for that. Compare how I build Gnuk on my PGP card with the kdf_do=required option.
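With GnuPG, the extension can be enabled from the interactive card menu; a minimal sketch, assuming a reasonably recent GnuPG and a card whose firmware implements the KDF data object:
gpg --card-edit
gpg/card> admin
gpg/card> kdf-setup
With the KDF in place, the host sends a value derived from the PIN rather than the PIN itself, so a substituted device observes only the derived value.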

Example: Onboard non-predictable key-generation
Many devices offer both onboard key-generation, for example OpenPGP cards that generate an Ed25519 key internally on the device, and external import, where the device imports an externally generated cryptographic key. Let's apply the substitution threat model to this design: the user wishes to generate a key and trust the public key that came out of that process. The attacker substitutes the device for a malicious device during key-generation, imports the private key into a good device and gives that back to the user. Most of the time, except during key generation, the user uses a good device, but still the attacker succeeded in having the user trust a public key which the attacker knows the private key for. The substitution may be a software modification, and the method to leak the private key to the attacker may be out-of-band signalling. This means a good design never generates keys on-board, but imports them from a user-controllable environment. That approach should be mandatory, and users should not use setups that generate private keys on-board; ultimately, security devices should not even include support for that.
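A sketch of the import flow with GnuPG and an OpenPGP card (the user id is hypothetical; note that keytocard replaces the on-disk secret key with a stub when the change is saved, so a backup has to be made first):
# Generate the key on a host under the user's control, not on the card
gpg --quick-generate-key 'user@example.org' ed25519 sign 1y
# Keep a backup copy before moving anything
gpg --export-secret-keys --armor user@example.org >sk-backup.asc
# Move the private key onto the card
gpg --edit-key user@example.org
gpg> keytocard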

Example: Non-predictable randomness-generation
Many devices claim to generate random data, often with elaborate design documents explaining how good the randomness is. Let's apply the substitution threat model to this design: the attacker replaces the intended good key with a malicious design that generates (for the attacker) predictable randomness. The user will never be able to detect the difference, since the random output is, well, random, and typically not distinguishable from weak randomness. The user cannot know if any cryptographic keys generated by a generator were faulty or not. This means a good design never generates non-predictable randomness on the device. That approach should be mandatory, and users should not use setups that generate non-predictable randomness on the device; ideally, devices should not have this functionality at all.
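The indistinguishability is easy to demonstrate: a keystream that is fully predictable for whoever holds the key looks perfectly random to statistical tests. A sketch (ent is a small entropy-analysis tool, packaged in most distributions):
# AES-128-CTR keystream from an all-zero key and IV: fully predictable,
# yet ent reports close to 8 bits of entropy per byte
head -c 1M /dev/zero \
  | openssl enc -aes-128-ctr \
      -K 00000000000000000000000000000000 \
      -iv 00000000000000000000000000000000 \
  | ent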

Case-Study: Tillitis
I have warmed up a bit for this. Tillitis is a new security device with interesting properties, and core to its operation is the Compound Device Identifier (CDI); essentially, your Ed25519 private key (used for SSH etc.) is derived from the CDI, which is computed like this:
cdi = blake2s(UDS, blake2s(device_app), USS)
Let's apply the substitution threat model to this design: consider someone replacing the Tillitis key with a malicious key during postal delivery of the key to the user, where the replacement device is identical to the real Tillitis key but implements the following key-derivation function:
cdi = weakprng(UDS, weakprng(device_app), USS)
Where weakprng is a compromised algorithm that is predictable for the attacker, but still appears random. Everything will work correctly, but the attacker will be able to learn the secrets used by the user, and the user will typically not be able to tell the difference since the CDI is secret and the Ed25519 public key is not self-verifiable.

Conclusion
Remember that it is impossible to fully protect against this attack; that's why it is merely a thought experiment, intended to be used during design of these devices. Consider an attacker that never gives you access to a good key: as a user you only ever use a malicious device, and there is no way to have good security in that situation. This is not hypothetical; many well-funded organizations do what they can to deprive people of access to trustworthy security devices. Philosophically it does not seem possible to tell whether these organizations have already succeeded 100% and there are only bad security devices around, where further resistance is futile, but to end on an optimistic note let's assume that there is a non-negligible chance that they haven't succeeded. In these situations, this threat model becomes useful to improve the situation by identifying less good designs, and that's why the design mantra of "mitigate harmful consequences" is crucial as a takeaway from this. Let's improve the design of security devices that further the security of their users!

10 December 2022

Russ Allbery: Review: Trust

Review: Trust, by Mary Sisson
Series: Trang #2
Publisher: Mary Sisson
Copyright: 2012
Printing: December 2013
ASIN: B0087KQDQ0
Format: Kindle
Pages: 375
Trust is a direct sequel to Trang and should not be read out of order. Both the primary and secondary plot deal with the consequences of the ending of Trang and thus spoil it heavily.

After a dodgy and unethical debriefing under mental probe (which took far too long to become relevant to the story), Philippe Trang is back at work as the human diplomat to a space station full of aliens in intergalactic space. At first, interspecies relations have not changed much, but soon Philippe will be drawn farther into the consequences of Trang's dramatic conclusion than he would have expected. And there are other complications: a new group of Special Forces soldiers, an extremely grumpy alien who disapproves of his species' current culture, and the continuing incomprehensible behavior of the Magic Man.

I was hoping this series would be a hidden gem of diplomatic SF, a subgenre that I want to exist. Unfortunately, this entry takes a strong turn towards more conventional territory: a Star Trek plot. One of the merits of the first novel is that Trang, and the humans in general, didn't have much power. Humans were meeting aliens as equals or inferiors, and both sides were attempting to understand. Although the humans became central to the plot (probably unavoidable when they're the protagonists), it was somewhat by accident. I was hoping Sisson would lean into that setup, stress the equal footing, and focus on the complex diplomacy. Instead, Trust puts Trang in a position of considerable authority over one of the alien races. Like most Star Trek plots, this book is primarily about mapping strange alien behavior onto human morality to solve the problem of the week. As an entry in that genre, it's... okay. I still like Trang as a character, Sisson does an adequate job showing how weird aliens can be (although mostly by jumbling together extremes of human behavior instead of finding something truly alien), and the opinions of the aliens continue to matter. But I feel like this series is sliding towards making the humans central to everything, and I was not enthused by a plot that requires Trang sort out complex political problems for an entire alien race he knows next to nothing about.

This book also adds more viewpoints, giving us several chapters from the perspective of one of Trang's escort soldiers and one from one of the aliens. I liked the alien perspective; it was probably the best chapter of the book. Seeing the human characters from an external perspective was amusing, and I liked how both the humans and the aliens surprised and confused each other. That said, it also felt like a bit of a cheat to tell the reader about the cultural forces at play that Trang was missing. I already thought Trang was being exceptionally dim about the alien gifts he was given, pocketing them without attempting to understand them and then never studying them later. Being shown more things he should have tried to figure out didn't help. You're a diplomat in an extended first contact situation: please have some basic curiosity about what's going on and talk it over with the people around you, rather than treating aliens like children.

The soldier perspective adds a secondary plot that was okay but a bit silly, and I'm not sure why Sisson needed to introduce a second perspective to tell it. I think the same secondary plot would have been more interesting if she'd stuck with Trang's perspective and forced him to reconstruct what had been happening when he wasn't paying attention.
As is, the plot relied too much on people being stupid and not talking to each other. There was nothing seriously wrong with this book, but it's also not fulfilling the potential of the first book of the series. It felt like okay self-published science fiction; that's not nothing, but there are a lot of books like that, and I wanted diplomatic science fiction that goes somewhere different. I may give the third book a chance, but I'm feeling less enthused now. Followed by Tribulations. Rating: 6 out of 10

1 March 2022

Utkarsh Gupta: FOSS Activites in February 2022

Here's my (twenty-ninth) monthly but brief update about the activities I've done in the F/L/OSS world.

Debian
This was my 38th month of actively contributing to Debian. I became a DM in late March 2019 and a DD on Christmas '19! \o/ I had been sick this month, so I spent most of the time away from the system, recovering, and also went through the huge backlog that I had, which is starting to get smaller. :D Anyway, I did the following stuff in Debian:

Uploads and bug fixes:
  • at (3.4.4-1) - Adding a DEP8 test for the package, fixing bug #985421.

Other $things:
  • Mentoring for newcomers.
  • Moderation of -project mailing list.

Ubuntu
This was my 13th month of actively contributing to Ubuntu. Now that I've joined Canonical to work on Ubuntu full-time, there's a bunch of things I do! \o/ I mostly worked on different things, I guess. I was too lazy to maintain a list of things I worked on, so there's no concrete list atm. Maybe I'll get back to this section later or will start to list stuff from the fall, as I was doing before. :D

Debian (E)LTS
Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success. And Debian Extended LTS (ELTS) is its sister project, extending support to the Jessie release (+2 years after LTS support). This was my twenty-ninth month as a Debian LTS and eighteenth month as a Debian ELTS paid contributor.
Whilst I was assigned 42.75 hours for LTS and 45.25 hours for ELTS, I could only work a little due to being sick and so I spent 15.75 hours on LTS and 9.25 hours on ELTS and worked on the following things:

LTS CVE Fixes and Announcements:

ELTS CVE Fixes and Announcements:

Other (E)LTS Work:

Debian LTS Survey
I've spent 10 hours on the LTS survey on the following bits:
(and 5 hours of the last month that I'm going to invoice this month)
  • Put most of the content in the instance according to the question type.
  • Been going back and forth updating the status of the survey on the issue.
  • Trying to find a way to send to DDs - discussing with DPL, Raphael, and other people on the issue itself.
  • Completing the last bits to start the survey for the paid contributors, at least. Talking to Jeremiah about this.

Until next time.
:wq for today.

17 May 2021

Dominique Dumont: Important bug fix for OpenSsh cme config editor

The new release of Config::Model::OpenSsh fixes a bug that impacted experienced users: the order of Host or Match sections is now preserved when writing back the ~/.ssh/config file. Why does this matter? Well, the beginning of the ssh_config man page mentions that "For each parameter, the first obtained value will be used." and "Since the first obtained value for each parameter is used, more host-specific declarations should be given near the beginning of the file, and general defaults at the end." Looks like I missed these statements when I designed the model for OpenSsh configuration: the Host sections were written back in a neat, but wrong, alphabetical order. This does not matter except when there is an overlap between the specifications of the Host (or Match) sections, like in the example below:
Host foo.company.com
Port 22
Host *.company.com
Port 10022
With this example, ssh connection to foo.company.com is done using port 22 and connection to bar.company.com with port 10022. If the Host sections are written back in reverse order:
Host *.company.com
Port 10022
Host foo.company.com
Port 22
Then ssh would happily use the first matching section for foo.company.com, i.e. *.company.com, and would use the wrong port (10022). This is now fixed with Config::Model::OpenSsh 2.8.4.3, which is available on CPAN and in Debian/experimental. While I was at it, I've also updated the "Managing OpenSsh configuration with cme" wiki page. All the best

25 April 2021

Dominique Dumont: An improved GUI for cme and Config::Model

I've finally found the time to improve the GUI of my pet project: cme (aka Config::Model). Several years ago, I stumbled on a usability problem in the GUI. Some configurations (like OpenSsh or Systemd) feature a lot of configuration parameters, which means that the GUI displays all these parameters, so finding a specific parameter might be challenging:
To work around this problem, I added a Filter widget in 2018, which did more or less the job, but it suffered from several bugs which made its behavior confusing. This is now fixed: the Filter widget works in a more consistent way. In the example below, I've typed IdentityFile (1) in the Filter widget to show the IdentityFile used for various hosts (2):
This is quite good, but some hosts use the default identity file, so no value shows up in the GUI. You can then click on the "hide empty value" checkbox to show only the hosts that use a specific identity file:
I hope that this new behavior of the Filter box will make this project more useful. The improved GUI was released with Config::Model::TkUI 1.374. This new version is available on CPAN and in Debian/experimental. It will be released to Debian/unstable once the next Debian version is out. All the best

11 April 2021

Vishal Gupta: Sikkim 101 for Backpackers

Host to Kanchenjunga, the world's third-highest mountain peak, and the endangered Red Panda, Sikkim is a state in northeastern India. Nestled between Nepal, Tibet (China), Bhutan and West Bengal (India), the state offers a smorgasbord of cultures and cuisines. It's hardly surprising, then, that the old spice route meanders through western Sikkim, connecting Lhasa with the ports of Bengal. The route's importance could also be attributed to cardamom (kali elaichi), a perennial herb native to Sikkim, of which the state is the second-largest producer globally. Lastly, having been to and lived in India all my life, I can confidently say Sikkim is one of the cleanest & safest regions in India, making it ideal for first-time backpackers.

Brief History
  • 17th century: The Kingdom of Sikkim is founded by the Namgyal dynasty and ruled by Buddhist priest-kings known as the Chogyal.
  • 1890: Sikkim becomes a princely state of British India.
  • 1947: Sikkim continues its protectorate status with the Union of India, post-Indian-independence.
  • 1973: Anti-royalist riots take place in front of the Chogyal's palace, by Nepalis seeking greater representation.
  • 1975: Referendum leads to the deposition of the monarchy and Sikkim joins India as its 22nd state.
Languages
  • Official: English, Nepali, Sikkimese/Bhotia and Lepcha
  • Though Hindi and Nepali share the same script (Devanagari), they are not mutually intelligible. Yet, most people in Sikkim can understand and speak Hindi.
Ethnicity
  • Nepalis: Migrated in large numbers (from Nepal) and soon became the dominant community
  • Bhutias: People of Tibetan origin. Major inhabitants in Northern Sikkim.
  • Lepchas: Original inhabitants of Sikkim

Food
  • Tibetan/Nepali dishes (mostly consumed during winter)
    • Thukpa: Noodle soup, rich in spices and vegetables. Usually contains some form of meat. Common variations: Thenthuk and Gyathuk
    • Momos: Steamed or fried dumplings, usually with a meat filling.
    • Saadheko: Spicy marinated chicken salad.
    • Gundruk Soup: A soup made from Gundruk, a fermented leafy green vegetable.
    • Sinki: A fermented radish tap-root product, traditionally consumed as a base for soup and as a pickle. Eerily similar to Kimchi.
  • While pork and beef are pretty common, finding vegetarian dishes is equally easy.
  • Staple: Dal-Bhat with Subzi. Rice is a lot more common than wheat, possibly due to greater carb content and proximity to West Bengal, India's largest producer of rice.
  • Good places to eat in Gangtok
    • Hamro Bhansa Ghar, Nimtho (Nepali)
    • Taste of Tibet
    • Dragon Wok (Chinese & Japanese)

Buddhism in Sikkim
  • Bayul Demojong (Sikkim) is the most sacred land in the Himalayas, as per the belief of the Northern Buddhists and various religious texts.
  • Sikkim was blessed by Guru Padmasambhava, the great Buddhist saint who visited Sikkim in the 8th century and consecrated the land.
  • However, Buddhism is said to have reached Sikkim only in the 17th century with the arrival of three Tibetan monks viz. Rigdzin Goedki Demthruchen, Mon Kathok Sonam Gyaltshen & Rigdzin Legden Je at Yuksom. Together, they established a Buddhist monastery.
  • In 1642 they crowned Phuntsog Namgyal as the first monarch of Sikkim and gave him the title of Chogyal, or Dharma Raja.
  • The faith became popular through its royal patronage and soon many villages had their own monastery.
  • Today Sikkim has over 200 monasteries.

Major monasteries
  • Rumtek Monastery, 20Km from Gangtok
  • Lingdum/Ranka Monastery, 17Km from Gangtok
  • Phodong Monastery, 28Km from Gangtok
  • Ralang Monastery, 10Km from Ravangla
  • Tsuklakhang Monastery, Royal Palace, Gangtok
  • Enchey Monastery, Gangtok
  • Tashiding Monastery, 35Km from Ravangla


Reaching Sikkim
  • Gangtok, being the capital, is easiest to reach amongst other regions, by public transport and shared cabs.
  • By Air:
    • Pakyong (PYG) :
      • Nearest airport from Gangtok (about 1 hour away)
      • Tabletop airport
      • Reserved cabs cost around INR 1200.
      • As of Apr 2021, the only flights to PYG are from IGI (Delhi) and CCU (Kolkata).
    • Bagdogra (IXB) :
      • About 20 minutes from Siliguri and 4 hours from Gangtok.
      • Larger airport with flights to most major Indian cities.
      • Reserved cabs cost about INR 3000. Shared cabs cost about INR 350.
  • By Train:
    • New Jalpaiguri (NJP) :
      • About 20 minutes from Siliguri and 4 hours from Gangtok.
      • Reserved cabs cost about INR 3000. Shared cabs from INR 350.
  • By Road:
    • NH10 connects Siliguri to Gangtok
    • If you can t find buses plying to Gangtok directly, reach Siliguri and then take a cab to Gangtok.
  • Sikkim Nationalised Transport Div. also runs hourly buses between Siliguri and Gangtok and daily buses on other common routes. They re cheaper than shared cabs.
  • Wizzride also operates shared cabs between Siliguri/Bagdogra/NJP, Gangtok and Darjeeling. They cost about the same as shared cabs but pack in half as many people in luxury cars (Innova, Xylo, etc.) and are hence more comfortable.

Gangtok
  • Time needed: 1D/1N
  • Places to visit:
    • Hanuman Tok
    • Ganesh Tok
    • Tashi View Point [6,800ft]
    • MG Marg
    • Sikkim Zoo
    • Gangtok Ropeway
    • Enchey Monastery
    • Tsuklakhang Palace & Monastery
  • Hostels: Tagalong Backpackers (would strongly recommend), Zostel Gangtok
  • Places to chill: Travel Cafe, Café Live & Loud and Gangtok Groove
  • Places to shop: Lal Market and MG Marg

Getting Around
  • Taxis operate on a reserved or shared basis. In case of the latter, you pool with other commuters whom your taxi will pick up and drop en route.
  • Naturally, shared taxis only operate on popular routes. The easiest way to get around Gangtok is to catch a shared cab from MG Marg.
  • Reserved taxis for Gangtok sightseeing cost around INR 1000-1500, depending upon the spots you'd like to see
  • Key taxi/bus stands :
    • Deorali stand: For Darjeeling, Siliguri, Kalimpong
    • Vajra stand: For North & East Sikkim (Tsomgo Lake & Nathula)
    • Rumtek taxi: For Ravangla, Pelling, Namchi, Geyzing, Jorethang and Singtam.
Exploring Gangtok on an MTB

North Sikkim
  • The easiest & most economical way to explore North Sikkim is the 3D/2N package offered by shared-cab drivers.
  • This includes food, permits, cab rides and accommodation (1N in Lachen and 1N in Lachung)
  • The accommodation on both nights is at homestays with bare necessities, so keep your expectations low.
  • In the spirit of sustainable tourism, you'll be asked to discard single-use plastic bottles, so please carry a bottle that you can refill along the way.
  • Zero Point and Gurdongmer Lake are snow-capped throughout the year
3D/2N Shared-cab Package Itinerary
  • Day 1
    • Gangtok (10am) - Chungthang - Lachung (stay)
  • Day 2
    • Pre-lunch : Lachung (6am) - Yumthang Valley [12,139ft] - Zero Point [15,300ft] - Lachung
    • Post-lunch : Lachung - Chungthang - Lachen (stay)
  • Day 3
    • Pre-lunch : Lachen (5am) - Kala Patthar - Gurdongmer Lake [16,910ft] - Lachen
    • Post-lunch : Lachen - Chungthang - Gangtok (7pm)
  • This itinerary is an ideal case and depends on the level of snowfall.
  • Some drivers might switch up Day 2 and 3 itineraries by visiting Lachen and then Lachung, depending upon the weather.
  • Areas beyond Lachen & Lachung are heavily militarized since the Indo-China border is only a few miles away.

East Sikkim

Zuluk and Silk Route
  • Time needed: 2D/1N
  • Zuluk [9,400ft] is a small hamlet with an excellent view of the eastern Himalayan range including the Kanchenjunga.
  • Was once a transit point on the historic Silk Route from Tibet (Lhasa) to India (West Bengal).
  • The drive from Gangtok to Zuluk takes at least four hours. Hence, it makes sense to spend the night at a homestay and space out your trip to Zuluk

Tsomgo Lake and Nathula
  • Time Needed : 1D
  • A Protected Area Permit is required to visit these places, due to their proximity to the Chinese border
  • Tsomgo/Chhangu Lake [12,313ft]
    • Glacial lake, 40 km from Gangtok.
    • Remains frozen during the winter season.
    • You can also ride on the back of a Yak for INR 300
  • Baba Mandir
    • An old temple dedicated to Baba Harbhajan Singh, a Sepoy in the 23rd Regiment, who died in 1962 near Nathu La during the Indo-China war.
  • Nathula Pass [14,450ft]
    • Located on the Indo-Tibetan border crossing of the Old Silk Route, it is one of the three open trading posts between India and China.
    • Plays a key role in Sino-Indian trade and also serves as an official Border Personnel Meeting (BPM) Point.
    • May get cordoned off by the Indian Army in the event of heavy snowfall or for other security reasons.


West Sikkim
  • Time needed: 3D/2N
  • Hostels at Pelling : Mochilerro Ostillo

Itinerary

Day 1: Gangtok - Ravangla - Pelling
  • Leave Gangtok early for Ravangla, via the Temi Tea Estate route.
  • Spend some time at the tea garden and then visit Buddha Park at Ravangla
  • Head to Pelling from Ravangla

Day 2: Pelling sightseeing
  • Hire a cab and visit Skywalk, Pemayangtse Monastery, Rabdentse Ruins, Kecheopalri Lake, Kanchenjunga Falls.

Day 3: Pelling - Gangtok/Siliguri
  • Wake up early to catch a glimpse of Kanchenjunga at the Pelling Helipad around sunrise
  • Head back to Gangtok on a shared-cab
  • You could take a bus/taxi back to Siliguri if Pelling is your last stop.

Darjeeling
  • In my opinion, Darjeeling is lovely for a two-day detour on your way back to Bagdogra/Siliguri and not any longer (unless you're a Bengali couple on a honeymoon)
  • Once a part of Sikkim, Darjeeling was ceded to the East India Company after a series of wars, with Sikkim briefly receiving a grant from the EIC in return for gifting Darjeeling.
  • Post-independence, Darjeeling was merged with the state of West Bengal.

Itinerary

Day 1 :
  • Take a cab from Gangtok to Darjeeling (shared-cabs cost INR 300 per seat)
  • Reach Darjeeling by noon and check in to your Hostel. I stayed at Hideout.
  • Spend the evening visiting either a monastery (or the Batasia Loop), Nehru Road and Mall Road.
  • Grab dinner at Glenary whilst listening to live music.

Day 2:
  • Wake up early to catch the sunrise and a glimpse of Kanchenjunga at Tiger Hill. Since Tiger Hill is 10km from Darjeeling and requires a permit, book your taxi in advance.
  • Alternatively, if you don't want to get up at 4am or shell out INR 1500 on the cab to Tiger Hill, walk to the Kanchenjunga View Point down Mall Road
  • Next, queue up outside Keventers for breakfast with a view in a century-old cafe
  • Get a cab at Gandhi Road and visit a tea garden (Happy Valley is the closest) and the Ropeway. I was lucky to meet 6 other backpackers at my hostel and we ended up pooling the cab at INR 200 per person, with INR 1400 being on the expensive side, but you could bargain.
  • Get lunch, buy some tea at Golden Tips, pack your bags and hop on a shared-cab back to Siliguri. It took us about 4hrs to reach Siliguri, with an hour to spare before my train.
  • If you've still got time on your hands, then check out the Peace Pagoda and the Darjeeling Himalayan Railway (Toy Train). At INR 1500, I found the latter to be too expensive and skipped it.


Tips and hacks
  • Download offline maps, especially when you're exploring Northern Sikkim.
  • Food and booze are the cheapest in Gangtok. Stock up before heading to other regions.
  • Keep your Aadhar/Passport handy since you need permits to travel to North & East Sikkim.
  • In rural areas and some cafes, you may get to try Rhododendron Wine, made from Rhododendron arboreum a.k.a Gurans. Its production is a little hush-hush since the flower is considered holy and is also the National Flower of Nepal.
  • If you don't want to invest in a new jacket, boots or a pair of gloves, you can always rent them at nominal rates from your hotel or little stores around tourist sites.
  • Check the weather of a region before heading there. Low visibility and precipitation can quite literally dampen your experience.
  • Keep your itinerary flexible to accommodate for rest and impromptu plans.
  • Shops and restaurants close by 8pm in Sikkim and Darjeeling. Plan for the same.

Carry
  • a couple of extra pairs of socks (woollen, if possible)
  • a pair of slippers to wear indoors
  • a reusable water bottle
  • an umbrella
  • a power bank
  • a couple of tablets of Diamox, to help deal with altitude sickness
  • extra clothes and wet bags since you may not get a chance to wash/dry your clothes
  • a few passport size photographs

Shared-cab hacks
  • Intercity rides can be exhausting. If you can afford it, pay for an additional seat.
  • Call shotgun on the drives beyond Lachen and Lachung. The views are breathtaking.
  • Return cabs tend to be cheaper (West Bengal cabs heading back from Sikkim and vice-versa)

Cost
  • My median daily expenditure (back when I went to Sikkim in early March 2021) was INR 1350.
  • This includes stay (bunk bed), food, wine and transit (shared cabs)
  • In my defence, I splurged on food, wine and extra seats in shared cabs, but if you're on a budget, you could easily get by on INR 1-1.2k per day.
  • For a 9-day trip, I ended up shelling out nearly INR 15k, including 2AC trains to & from Kolkata
  • Note: Summer (March to May) and Autumn (October to December) are peak seasons, and travelling around is accordingly more expensive.

Souvenirs and things you should buy

Buddhist souvenirs :
  • Colourful Prayer Flags (great for tying on bikes or behind car windshields)
  • Miniature Prayer/Mani Wheels
  • Lucky Charms, Pendants and Key Chains
  • Cham Dance masks and robes
  • Singing Bowls
  • Common symbols: Om mani padme hum, Ashtamangala, Zodiac signs

Handicrafts & Handlooms
  • Tibetan Yak Wool shawls, scarves and carpets
  • Sikkimese Ceramic cups
  • Thangka Paintings

Edibles
  • Darjeeling Tea (usually brewed and not boiled)
  • Wine (Arucha Peach & Rhododendron)
  • Dalle Khursani (Chilli) Paste and Pickle

Header Icon made by Freepik from www.flaticon.com is licensed by CC 3.0 BY

2 March 2021

Russ Allbery: Review: The Lion, the Witch and the Wardrobe

Review: The Lion, the Witch and the Wardrobe, by C.S. Lewis
Illustrator: Pauline Baynes
Series: Chronicles of Narnia #1
Publisher: Collier Books
Copyright: 1950
Printing: 1978
ISBN: 0-02-044220-3
Format: Mass market
Pages: 186
Although it's been more than 20 years since I last read it, I believe I have read The Lion, the Witch and the Wardrobe more times than any other book. The count is certainly in double digits. As you might guess, I also have strong opinions about it, some of which are unorthodox, and I've been threatening to write this review for years. It seemed a fitting choice for my 1000th review. There is quite a lot that can and has been said about this book and this series, and this review is already going to be much too long, so I'm only going to say a fraction of it. I'm going to focus on my personal reactions as someone raised a white evangelical Christian but no longer part of that faith, and the role this book played in my religion. I'm not going to talk much about some of its flaws, particularly Lewis's treatment of race and gender. This is not because I don't agree they're there, but only that I don't have much to say that isn't covered far better in other places. Unlike my other reviews, this one will contain major spoilers. If you have managed to remain unspoiled for a 70-year-old novel that spawned multiple movies and became part of the shared culture of evangelical Christianity, and want to stay that way, I'll warn you in ALL CAPS when it's time to go. But first, a few non-spoiler notes. First, reading order. Most modern publications of The Chronicles of Narnia will list The Magician's Nephew as the first book. This follows internal chronological order and is at C.S. Lewis's request. However, I think Lewis was wrong. You should read this series in original publication order, starting with The Lion, the Witch and the Wardrobe (which I'm going to abbreviate as TLtWatW like everyone else who writes about it). I will caveat this by saying that I have a bias towards reading books in the order an author wrote them because I like seeing the development of the author's view of their work, and I love books that jump back in time and fill in background, so your experience may vary. But the problem I see with the revised publication order is that The Magician's Nephew explains the origins of Narnia and, thus, many of the odd mysteries of TLtWatW that Lewis intended to be mysterious. Reading it first damages both books, like watching a slow-motion how-to video for a magic trick before ever seeing it performed. The reader is not primed to care about the things The Magician's Nephew is explaining, which makes it less interesting. And the bits of unexpected magic and mystery in TLtWatW that give it so much charm (and which it needs, given the thinness of the plot) are already explained away and lose appeal because of it. I have read this series repeatedly in both internal chronological order and in original publication order. I have even read it in strict chronological order, wherein one pauses halfway through the last chapter of TLtWatW to read The Horse and His Boy before returning. I think original publication order is the best. (The Horse and His Boy is a side story and it doesn't matter that much where you read it as long as you read it after TLtWatW. For this re-read, I will follow original publication order and read it fifth.) Second, allegory. The common understanding of TLtWatW is that it's a Christian allegory for children, often provoking irritated reactions from readers who enjoyed the story on its own terms and later discovered all of the religion beneath it. I think this view partly misunderstands how Lewis thought about the world and there is a more interesting way of looking at the book. 
I'm not as dogmatic about this as I used to be; if you want to read it as an allegory, there are plenty of carefully crafted parallels to the gospels to support that reading. But here's my pitch for a different reading. To C.S. Lewis, the redemption of the world through the death of Jesus Christ is as foundational a part of reality as gravity. He spent much of his life writing about religion and Christianity in both fiction and non-fiction, and this was the sort of thing he constantly thought about. If somewhere there is another group of sentient creatures, Lewis's theology says that they must fit into that narrative in some way. Either they would have to be unfallen and thus not need redemption (roughly the position taken by The Space Trilogy), or they would need their own version of redemption. So yes, there are close parallels in Narnia to events of the Christian Bible, but I think they can be read as speculating how Christian salvation would play out in a separate creation with talking animals, rather than an attempt to disguise Christianity in an allegory for children. It's a subtle difference, but I think Narnia is more an answer to "how would Christ appear in this fantasy world?" than to "how do I get children interested in the themes of Christianity?", although certainly both are in play. Put more bluntly, where other people see allegory, I see the further adventures of Jesus Christ as an anthropomorphic lion, which in my opinion is an altogether more delightful way to read the books. So much for the preamble; on to the book. The Pevensie kids, Peter, Susan, Edmund, and Lucy, have been evacuated to a huge old house in the country due to the air raids (setting this book during World War II, something that is passed over with barely a mention and not a hint of trauma in a way that a modern book would never do). While exploring this house, which despite the scant description is still stuck in my mind as the canonical huge country home, Lucy steps into a wardrobe because she wants to feel the fur of the coats. Much to her surprise, the wardrobe appears not to have a back, and she finds herself eventually stepping into a snow-covered pine forest where she meets a Faun named Mr. Tumnus by an unlikely lamp-post. MAJOR SPOILERS BELOW, so if you don't want to see those, here's your cue to stop reading. Two things surprised me when re-reading TLtWatW. The first, which I remember surprising me every time I read it, is how far into this (very short) book one has to go before the plot kicks into gear. It's not until "What Happened After Dinner" more than a third in that we learn much of substance about Narnia, and not until "The Spell Begins to Break" halfway through the book that things start to happen. The early chapters are concerned primarily with the unreliability of the wardrobe portal, with a couple of early and brief excursions by Edmund and Lucy, and with Edmund being absolutely awful to Lucy. The second thing that surprised me is how little of what happens is driven by the kids. The second half of TLtWatW is about the fight between Aslan and the White Witch, but this fight was not set off by the children and their decisions don't shape it in any significant way. They're primarily bystanders; the few times they take action, it's either off-camera or they're told explicitly what to do. The arguable exception is Edmund, who provides the justification for the final conflict, but he functions more as plot device than as a character with much agency. 
When that is combined with how much of the story is also on rails via its need to recapitulate part of the gospels (more on that in a moment), it makes the plot feel astonishingly thin and simple. Edmund is the one protagonist who gets to make some decisions, all of them bad. As a kid, I hated reading these parts because Edmund is an ass, the White Witch is obviously evil, and everyone knows not to eat the food. Re-reading now, I have more appreciation for how Lewis shows Edmund's slide into treachery. He starts teasing Lucy because he thinks it's funny (even though it's not), has a moment when he realizes he was wrong and almost apologizes, but then decides to blame his discomfort on the victim. From that point, he is caught, with some help from the White Witch's magic, in a spiral of doubling down on his previous cruelty and then feeling unfairly attacked. Breaking the cycle is beyond him because it would require admitting just how badly he behaved and, worse, that he was wrong and his little sister was right. He instead tries to justify himself by spreading poisonous bits of doubt, and looks for reasons to believe the friends of the other children are untrustworthy. It's simplistic, to be sure, but it's such a good model of how people slide into believing conspiracy theories and joining hate groups. The Republican Party is currently drowning in Edmunds. That said, Lewis does one disturbing thing with Edmund that leaped out on re-reading. Everyone in this book has a reaction when Aslan's name is mentioned. For the other three kids, that reaction is awe or delight. For Edmund, it's mysterious horror. I know where Lewis is getting this from, but this is a nasty theological trap. One of the problems that religion should confront directly is criticism that questions the moral foundations of that religion. If one postulates that those who have thrown in with some version of the Devil have an instinctual revulsion for God, it's a free intellectual dodge. Valid moral criticism can be hand-waved away as Edmund's horrified reaction to Aslan: a sign of Edmund's guilt, rather than a possible flaw to consider seriously. It's also, needless to say, not the effect you would expect from a god who wants universal salvation! But this is only an odd side note, and once Edmund is rescued it's never mentioned again. This brings us to Aslan himself, the Great Lion, and to the heart of why I think this book and series are so popular. In reinterpreting Christianity for the world of Narnia, Lewis created a far more satisfying and relatable god than Jesus Christ, particularly for kids. I'm not sure I can describe, for someone who didn't grow up in that faith, how central the idea of a personal relationship with Jesus is to evangelical Christianity. It's more than a theological principle; it's the standard by which one's faith is judged. And it is very difficult for a kid to mentally bootstrap themselves into a feeling of a personal relationship with a radical preacher from 2000 years ago who spoke in gnomic parables about subtle points of adult theology. It's hard enough for adults with theological training to understand what that phrase is intended to mean. For kids, you may as well tell them they have a personal relationship with Aristotle. But a giant, awe-inspiring lion with understanding eyes, a roar like thunder, and a warm mane that you can bury your fingers into? A lion who sacrifices himself for your brother, who can be comforted and who comforts you in turn, and who makes a glorious surprise return? 
That's the kind of god with which one can imagine having a personal relationship. Aslan felt physical and embodied and present in the imagination in a way that Jesus never did. I am certain I was not the only Christian kid for whom Aslan was much more viscerally real than Jesus, and who had a tendency to mentally substitute Aslan for Jesus in most thoughts about religion. I am getting ahead of myself a bit because this is a review of TLtWatW and not of the whole series, and Aslan in this book is still a partly unformed idea. He's much more mundanely present here than he is later, more of a field general than a god, and there are some bits that are just wrong (like him clapping his paws together). But the scenes with Susan and Lucy, the night at the Stone Table and the rescue of the statues afterwards, remain my absolute favorite parts of this book and some of the best bits of the whole series. They strike just the right balance of sadness, awe, despair, and delight. The image of a lion also lets Lewis show joy in a relatable way. Aslan plays, he runs, he wrestles with the kids, he thrills in the victory over evil just as much as Susan and Lucy do, and he is clearly having the time of his life turning people back to flesh from stone. The combination of translation, different conventions, and historical distance means the Bible has none of this for the modern reader, and while people have tried to layer it on with Bible stories for kids, none of them (and I read a lot of them) capture anything close to the sheer joy of this story. The trade-off Lewis makes for that immediacy is that Aslan is a wonderful god, but TLtWatW has very little religion. Lewis can have his characters interact with Aslan directly, which reduces the need for abstract theology and difficult questions of how to know God's will. But even when theology is unavoidable, this book doesn't ask for the type of belief that Christianity demands. For example, there is a crucifixion parallel, because in Lewis's world view there would have to be. That means Lewis has to deal with substitutionary atonement (the belief that Christ died for the sins of the world), which is one of the hardest parts of Christianity to justify. How he does this is fascinating. The Narnian equivalent is the Deep Magic, which says that the lives of all traitors belong to the White Witch. If she is ever denied a life, Narnia will be destroyed by fire and water. The Witch demands Edmund's life, which sets up Aslan to volunteer to be sacrificed in Edmund's place. This triggers the Deeper Magic that she did not know about, freeing Narnia from her power. You may have noticed the card that Lewis is palming, and to give him credit, so do the kids, leading to this exchange when the White Witch is still demanding Edmund:
"Oh, Aslan!" whispered Susan in the Lion's ear, "can't we I mean, you won't, will you? Can't we do something about the Deep Magic? Isn't there something you can work against it?" "Work against the Emperor's magic?" said Aslan, turning to her with something like a frown on his face. And nobody ever made that suggestion to him again.
The problem with substitutionary atonement is why would a supposedly benevolent god create such a morally abhorrent rule in the first place? And Lewis totally punts. Susan is simply not allowed to ask the question. Lewis does try to tackle this problem elsewhere in his apologetics for adults (without, in my opinion, much success). But here it's just a part of the laws of this universe, which all of the characters, including Aslan, have to work within. That leads to another interesting point of theology, which is that if you didn't already know about the Christian doctrine of the trinity, you would never guess it from this book. The Emperor-Beyond-the-Sea and Aslan are clearly separate characters, with Aslan below the Emperor in the pantheon. This makes rules like the above work out more smoothly than they do in Christianity because Aslan is bound by the Emperor's rules and the Emperor is inscrutable and not present in the story. (The Holy Spirit is Deity Not-appearing-in-this-book, but to be fair to Lewis, that's largely true of the Bible as well.) What all this means is that Aslan's death is presented straightforwardly as a magic spell. It works because Aslan has the deepest understanding of the fixed laws of the Emperor's magic, and it looks nothing like what we normally think of as religion. Faith is not that important in this book because Aslan is physically present, so it doesn't require any faith for the children to believe he exists. (The Beavers, who believed in him from prophecy without having seen him, are another matter, but this book never talks about that.) The structure of religion is therefore remarkably absent despite the story's Christian parallels. All that's expected of the kids is the normal moral virtues of loyalty and courage and opposition to cruelty. I have read this book so many times that I've scrutinized every word, so I have to resist the temptation to dig into every nook and cranny: the beautiful description of spring, the weird insertion of Lilith as Adam's first wife, how the controversial appearance of Santa Claus in this book reveals Lewis's love of Platonic ideals... the list is endless, and the review is already much longer than normal. But I never get to talk about book endings in reviews, so one more indulgence. The best thing that can be said about the ending of TLtWatW is that it is partly redeemed by the start of Prince Caspian. Other than that, the last chapter of this book has always been one of my least favorite parts of The Chronicles of Narnia. For those who haven't read it (and who by this point clearly don't mind spoilers), the four kids are immediately and improbably crowned Kings and Queens of Narnia. Apparently, to answer the Professor from earlier in the book, ruling magical kingdoms is what they were teaching in those schools? They then spend years in Narnia, never apparently giving a second thought to their parents (you know, the ones who are caught up in World War II, which prompted the evacuation of the kids to the country in the first place). This, for some reason, leaves them talking like medieval literature, which may be moderately funny if you read their dialogue in silly voices to a five-year-old and is otherwise kind of tedious. Finally, in a hunt for the white stag, they stumble across the wardrobe and tumble back into their own world, where they are children again and not a moment has passed. 
I will give Lewis credit for not doing a full reset and having the kids not remember anything, which is possibly my least favorite trope in fiction. But this is almost as bad. If the kids returned immediately, that would make sense. If they stayed in Narnia until they died, that arguably would also make sense (their poor parents!). But growing up in Narnia and then returning as if nothing happened doesn't work. Do they remember all of their skills? How do you readjust to going to school after you've lived a life as a medieval Queen? Do they remember any of their friends after fifteen years in Narnia? Argh. It's a very "adventures are over, now time for bed" sort of ending, although the next book does try to patch some of this up. As a single book taken on its own terms, TLtWatW is weirdly slight, disjointed, and hits almost none of the beats that one would expect from a children's novel. What saves it is a sense of delight and joy that suffuses the descriptions of Narnia, even when locked in endless winter, and Aslan. The plot is full of holes, the role of the children in that plot makes no sense, and Santa Claus literally shows up in the middle of the story to hand out plot devices and make an incredibly sexist statement about war. And yet, I memorized every gift the children received as a kid, I can still feel the coziness of the Beaver's home while Mr. Beaver is explaining prophecy, and the night at the Stone Table remains ten times more emotionally effective for me than the description of the analogous event in the Bible. And, of course, there's Aslan.
"Safe?" said Mr. Beaver. "Don't you hear what Mrs. Beaver tells you? Who said anything about safe? 'Course he isn't safe. But he's good. He's the king, I tell you."
Aslan is not a tame lion, to use the phrase that echoes through this series. That, I think, is the key to the god that I find the most memorable in all of fantasy literature, even in this awkward, flawed, and decidedly strange introduction. Followed by Prince Caspian, in which the children return to a much-changed Narnia. Lewis has gotten most of the obligatory cosmological beats out of the way in this book, so subsequent books can tell more conventional stories. Rating: 7 out of 10

26 February 2021

Ritesh Raj Sarraf: Wayland KDE X11

KDE Impressions These days, I often hear a lot about Wayland, and how much effort is being put into it; not just by the Embedded world but also the usual Desktop systems, namely KDE and GNOME. In the recent past, I switched back to KDE and have been (very) happy about the switch. Even though the KDE 4 (and initial KDE 5) debacle had burnt many, coming back to a usable KDE desktop is always a delight. It makes me feel at home with the elegance, and at the same time the flexibility, it provides. It feels so nice to draft this blog article from Kwrite + VI Input Mode. Thanks to the great work of the Debian KDE Team, and Norbert Preining in particular, who has helped bring very up-to-date KDE packages into Debian. Right now, I'm on a Plasma 5.21.1 desktop, which is recent by all standards.

Wayland Almost all the places in the Linux world these days are busy integrating Wayland as the primary display service. I'm not sure what the current status on the GNOME side is, but I definitely keep trying KDE + Wayland with every release. I keep trying with every release because it still is not ready for daily use, and it keeps sending me back to X11, no matter how dated some may call it. Fact is, X11 still shines for me as an end-user. The glitches with Wayland (based on this week's test on Plasma 5.21.1) still are:
  • Horrible performance compared to X11
  • Very crashy, especially when hotplugging a secondary display; Plasma would just crash. X11 is very resilient to such things, part of the reason, I think, being the age of the codebase.
  • Many, many applications still need to be fixed for Wayland, or Wayland needs to accommodate them in some way. XWayland does not really feel like the answer.
And while KDE keeps urging users to switch to Wayland, as that's where all the new enhancements and fixes go in, someone like me still needs to stick to X11 for the time being. So to get my shiny new LG 27" 4K monitor (3840x2160 60.00*+) to work without too many glitches, I had to live with an alias:
$ alias | grep xrandr
alias rrs_xrandr_lg='xrandr --output DP-1 --mode 3840x2160 --scale .75x.75'

Plasma 5.21 On the brighter side, the Plasma 5.21.1 release brings some nice enhancements in other areas.
  • I'm now able to make use of tighter integration with systemd/cgroups, with better organization and management of processes overall.
  • The new Plasma theme, Breeze Twilight, is a good blend of Light + Dark.
I also appreciate the work put in by Michail Vourlakos. The KDE project is lucky to have a developer/designer like him. His vision of, and work on, the KDE desktop goes well beyond anything I could capture in writing.
$ usystemctl status plasma-plasmashell.service 
  plasma-plasmashell.service - KDE Plasma Workspace
     Loaded: loaded (/usr/lib/systemd/user/plasma-plasmashell.service; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2021-02-26 18:34:23 IST; 13s ago
   Main PID: 501806 (plasmashell)
      Tasks: 21 (limit: 18821)
     Memory: 759.8M
        CPU: 13.706s
     CGroup: /user.slice/user-1000.slice/user@1000.service/session.slice/plasma-plasmashell.service
              501806 /usr/bin/plasmashell --no-respawn
             
Feb 26 18:35:00 priyasi plasmashell[501806]: qml: recreating buttons
Feb 26 18:35:21 priyasi plasmashell[501806]: qml: recreating buttons
Feb 26 18:35:49 priyasi plasmashell[501806]: qml: recreating buttons
Feb 26 18:35:57 priyasi plasmashell[501806]: qml: recreating buttons

OBS - Open Build Service I should also thank the OpenSUSE folks for the OBS work. It has enabled the close equivalent (or better, in my experience) of PPAs for Debian. And that is what has enabled developers like Norbert to easily and quickly deliver the entire KDE suite.

OBS - Some detail Christian asked for some more details on the OBS side of things, from my point of view. I'm updating this article with it because the comment system may not always be reliable and I hate losing content. Having used OBS myself, alongside others in the Debian community who are making use of it, I surely think we as a project should consider making use of OBS. Given that OBS is Free Software, it is a perfect fit for Debian. GitLab is another example of what we've made available in Debian. OBS is divided into multiple parts:
  • OBS Server
  • OBS DoD service
  • OBS Publisher
  • OBS Workers
  • OBS Warden
  • OBS Rep Server
For every Debian release I care about, I add an OBS project per release. So I have OBS projects for: Sid, Bullseye, Buster, Jessie. Now, say you have a package, foo. You prep your package and enable all the releases that you want to build the package for. The same package then gets built, in separate clean environments, for every release I mentioned as an example above. You don't have to manually trigger the build for every release/architecture. You add the releases (as projects) in OBS, set their supported architectures, and then enable those release projects for your package; a rough command-line sketch follows the list below. Every build involves:
  • Creating a new chroot for each build
  • Building the package
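As a rough sketch of that workflow with the osc command-line client (the project, package and file names here are made up, and the per-release repositories are assumed to be already enabled in the project metadata):
$ osc checkout home:me/foo && cd home:me/foo/foo
$ osc add foo_1.0-1.dsc foo_1.0.orig.tar.gz foo_1.0-1.debian.tar.xz
$ osc commit -m "foo 1.0-1"   # the server now builds for every enabled release/arch
$ osc results                 # per-repository, per-architecture build status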
Builds can be scattered across multiple hosts, known as workers in OBS terminology. Your workers are independent machine entities, supporting different architectures. The machines can be bare-metal ones, VMs, even containers. So this allows for very nice scale-in and scale-out. There may be auto-scaling too, but that is something worth investigating. Think of things like cross-architecture builds. Let's assume the cloud vendors decide to donate resources to the Debian project. We could enable OBS worker instances on the respective clouds (different architectures) and plug them into the master OBS instance that Debian hosts. Fully distributed. Similarly, big hardware vendors willing to donate compute resources could house them on their premises and Debian could just easily establish a connection to them. All of this just a TCP connection away. So when I look at the features of OBS from the point of view of Debian, I like it more. Extensibility won't be an issue. Supporting a new Debian release would just be a matter of bootstrapping the Debian release as a project in OBS, and then all done. The single effort of setting up the target release project is a one-time job, and then all can leverage it. The PPA was a long-craved feature missing in Debian, in my opinion. OBS allows us to not just fill that gap but also extend it in a very easy way. Andrew Lee put up a nice video presentation about the same @ DebConf 20

17 January 2021

Enrico Zini: OpenStreetMap maps links

Some interesting renderings of OpenStreetMap data: If you want to print out local maps, MyOSMatic is a service to generate maps of cities using OpenStreetMap data. The generated maps are available in PNG, PDF and SVG formats and are ready to be printed. On a terminal? Sure: MapSCII is a Braille & ASCII world map renderer for the console: telnet mapscii.me and explore the map with the mouse, keyboard arrows, and a and z to zoom, c to switch to block character mode, q to quit. Alternatively you can fly over it, and you might have to dodge the rare map editing bug, or have fun landing on it: A typo created a 212-story monolith in Microsoft Flight Simulator

10 January 2021

Iustin Pop: Dealing with evil ads

Background I usually don't mind ads, as long as they are not very intrusive. I get that the current media model is basically ad-funded, and that unless I want to pay $1/month or so to 50 web sites, I have to accept ads, so I don't run an ad-blocker. Sure, sometimes they are annoying (hey YT, mid-roll ads are borderline), but I've also seen many good ads, as in interesting or even funny ones. Well, I don't think I ever bought anything as a direct result of ads, so I don't know how useful ads are for the companies, but hey, what do I care. Except there are a few ad networks that run what I would call basically revolting ads. Things I don't want to ever accidentally see while eating, or things that really make you go WTF? Maybe you know them, maybe you don't, but I guess there are people who don't know how to clean their ears, or people for whom a fast 7-day weight loss routine actually works. Thankfully, most of the time I don't browse sites which use these networks, but randomly they leak even to sites I do browse. If I'm not very stressed already, I can ignore them; otherwise they really, really annoy me. Case in point: I was on Slashdot, and because I was logged in and recently had mod points, the right side column had a check-box "disable ads". That sidebar had some relatively meaningful ads, like a VPN subscription (not that I would use it, but it is a tech thing), or even a book about Kali Linux, etc. So I click "disable ads", and the right column goes away. I scroll down happily, only to be met, at the bottom, by "the best way to clean your ear", "the 50 most useless planes ever built" (which had a drawing of something that was for sure never ever built outside of movies), "you won't believe how this child actor looks today", etc.

Solving the problem The above really, really pissed me off, so I went searching for how to block an ad network. To my surprise, the fix was not that simple, for standard users at least.

Method 1: hosts file The hosts file is reasonable as it is relatively cross-platform (Linux and Windows and Mac, I think), but how the heck do you edit hosts on your phone? And furthermore, it has some significant downsides. First, /etc/hosts lists individual hosts, so for an entire ad network, the example I found had two screens of host names. This is really unmaintainable, since rotating host names, or having a gazillion of them, is trivial. Second, it cannot return negative answers, i.e. you have to give each of those hosts a valid IPv4/IPv6 address, and have something either reply with 404 or another 4xx response, or not listen on ports 80/443. Too annoying. And finally, it's a client-side solution, so one would have to replicate it across all clients in a home, and keep it in sync.
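As a quick illustration (the domain names here are made up), a hosts-file block has to pin every single ad host to some address, which is exactly the maintenance and negative-answer problem described above:
# /etc/hosts excerpt - every blocked name needs a valid address,
# and something must still refuse or 404 connections on ports 80/443
0.0.0.0 ads.evilads.com
0.0.0.0 cdn.evilads.com
0.0.0.0 tracker.moreads.net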

Method 2: ad-blockers I dislike ad-blockers on principle, since they need wide permissions on all pages, but it is a recommended solution. However, to my surprise, one finds threads saying ad-blocker foo has whitelisted ad network bar, at which point you're like: WTF? Why do I use an ad-blocker if they get paid by the lowest of the ad networks to show the ads? And again, it's a client-side solution, and one would have to deploy it across N clients, and keep them in sync, etc.

Method 3: HTTP proxy blocking To my surprise, I didn't find this mentioned in a quick internet search. Well, HTTP proxies have long gone the way of the dodo due to "HTTPS everywhere", and while one can still use them even with HTTPS, it's not that convenient:
  • you need to tunnel all traffic through them, which might result in bottlenecks (especially for media playing/maybe video-conference/etc.).
  • or even worse, there might be protocol issues/incompatibilities due to 100% tunneling.
  • running a proxy opens up some potential security issues on the internal network, so you need to harden the proxy as well, and maintain it.
  • you need to configure all clients to know about the proxy (via DHCP or manually), which might or might not work well, since it s client-dependent.
  • you can only block at CONNECT level (host name), and you have to build up regexes for the host name.
On the good side, the actual blocking configuration is centralised, and the only distributed configuration is pointing the clients through the proxy. While I used to run a proxy back in HTTP times, the gains were significant back then (media element caching, download caching, all with a slow pipe, etc.), but today it is not worth it, so I've stopped and won't bring a proxy back just for this.

Method 4: DNS resolver filtering After thinking through all the options, I thought: hey, a caching/recursive DNS resolver is what most people with a local network run, right? How difficult is it to block at the resolver level? And oh my, it is so trivial, for some resolvers at least. And yes, I didn't know about this a week ago.

Response Policy Zones Now, of course, there is a standard for this, called Response Policy Zone, which is supported across multiple resolvers. There are many tutorials on how to use RPZs to configure things, some of them quite detailed - e.g. this one, or a simpler/more straightforward one here. The upstream BIND documentation also explains things quite well here, so you can go that route as well. It looks a bit hairy to me though, but it works, and since it is a standard, it can be more easily deployed. There are many discussions on the internet about how to configure RPZs, how to not even resolve the names (if you're going to return something explicitly/statically), etc., so there are docs, but again it seems a bit overdone.
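For a flavour of what the BIND side looks like, here is a minimal sketch; the zone name and file path are made up, and in RPZ semantics a CNAME to the root (.) means "return NXDOMAIN for this name":
// named.conf excerpt
options {
    response-policy { zone "rpz.local"; };
};
zone "rpz.local" {
    type master;
    file "/etc/bind/db.rpz.local";
};

; /etc/bind/db.rpz.local
$TTL 300
@              IN SOA localhost. root.localhost. ( 1 6h 1h 1w 5m )
               IN NS  localhost.
evilads.com.   IN CNAME .   ; NXDOMAIN for the name itself
*.evilads.com. IN CNAME .   ; ...and for all its subdomains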

Resolver hooks There's another way too, if your resolver allows scripting. For example, the PowerDNS recursor allows Lua scripting and has a relatively simple API; at least to me, it looks way, way simpler than the RPZ equivalent. After 20 minutes of reading the docs, I ended up with this, IMO trivial, solution (in a file named e.g. rules.lua):
ads = newDS()  -- a DNS suffix-match group
ads:add( 'evilads.com', 'evilads.well-known-cdn.com', 'moreads.net' )

-- called by the recursor before it resolves each incoming query
function preresolve(dq)
  if ads:check(dq.qname) then
    dq.rcode = pdns.NXDOMAIN  -- blocked: answer NXDOMAIN
    return true;              -- true = we supplied the answer
  end
  return false;               -- false = resolve normally
end
and that's it. Well, enable it/load the file in the configuration (a one-line snippet follows below), but nothing else. The syntax is pretty straightforward, matching by suffix here, and if you need more complex stuff, you can of course do it; it's just Lua and a simple API. I don't see any immediate equivalent in BIND, so there's that, but if you can use PowerDNS, then the above solution seems simple for simple cases, and could be extended if needed (not sure in which cases). The only other thing one needs to do is to serve the local/custom resolver to all clients, whether desktop or mobile, and that's it. DNS server is bread-and-butter in DHCP, so better support than a proxy, and once the host name has been (mis)resolved, nothing else is involved in the communication path. True, your name server might see higher CPU usage, but for a home network this should not be a problem. Can this filtering method (either RPZ or hooks) be worked around by ad networks? Sure, like anything. But changing the base domain is not fun. DNSSEC might break it (note BIND RPZ can be configured to ignore DNSSEC), but I'm more worried about DNS-over-HTTPS, which I initially thought was done for the user, but now I'm not so sure anymore. Not being in control even of your own DNS resolver seems "evil", but what do I know.
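For completeness, loading the script amounts to a single setting in the recursor configuration (the path here is illustrative):
# /etc/powerdns/recursor.conf excerpt
lua-dns-script=/etc/powerdns/rules.lua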

Combined authoritative + recursive solution This solution was provided by Guillem Jover, who uses unbound, which is a combined authoritative name server and recursive resolver in one, and dnsmasq (which is even more things, I think):
For my LANs I use unbound, and then block this kind of thing in /etc/unbound/unbound.conf.d/block.conf, with stuff like:
server:
 local-zone: adsite.example.com refuse
But then for things that are mobile, and might get out of the LAN, such as laptops, I also block with dnsmasq in /etc/dnsmasq.d/block.conf, with stuff like:
 address=/adsite.example.com/
I still use ublock-origin to block stuff at the browser level, though, for yet an extra layer of noise suppression. :)
Thanks for the info!

Happy browsing! 10 lines of Lua, and now for sure I'm going to get even fatter without the "this natural method will melt your belly fat in 7 days" information. Or I will just throw away banana peels without knowing what I could do with them. After a few days, I asked myself "but ads are not so bad, why did I..." and then realised that yes, ads are not so bad anymore. And Slashdot actually loads faster. So, happy browsing!

27 December 2020

Russ Allbery: Review: King of the Murgos

Review: King of the Murgos, by David Eddings
Series: The Malloreon #2
Publisher: Del Rey
Copyright: April 1988
Printing: November 1991
ISBN: 0-345-35880-5
Format: Mass market
Pages: 403
This is the second book of the Malloreon, which in turn is a sequel trilogy to The Belgariad. You could start here, since Eddings does a good job at summarizing previous volumes, but I'm not sure why you'd want to. This is the sort of book that gives epic fantasy series a bad name. After a lot of action in Guardians of the West, our band of heroes has been mostly assembled. They set out after the villain through the novel strategy of exploring the parts of the world map we have not yet seen. By the end of 400 pages of traveling they... are on the trail of the villain while exploring parts of the world map we have not yet seen. What I'm saying is that you'd better be really enjoying Silk's nose twitching and Belgarath being grumpy at Polgara, because that's about all you get. The protagonists collect a few important plot coupons and set up some emotional conflicts whose resolutions are already telegraphed, and that's about it. After wasting a bunch of time getting to and wandering around Nyissa, all of which felt pointless except for one moment of Errand doing something nifty (often the best parts of this series), the band of heroes ventures into enemy territory as foreshadowed by the title. The structure of Eddings's series is basically Suikoden (collect all the protagonists), so as one might expect the point of this excursion is to pick up another protagonist. In the process, we get to see a bit more of the Murgos than we have before. I've commented before that Eddings takes racial essentialism to such an absurd degree that these books start feeling like animal fairy tales, which is why I'm willing to tolerate all of the stereotyping. It's Planet of Hats except with fantasy countries. That tolerance is stretched rather thin when it comes to the bad guys, since the congruence with real-life racism and dehumanization of enemies is a bit too strong. Eddings dodges this mostly by not putting many of the bad guys on the page, but in King of the Murgos he has a golden opportunity to undercut his racial structure and surprise the reader. What if one of the Murgos is a protagonist and is vital to the series plot? I won't spoil what Eddings actually does, but if you imagine the stupidest and most obvious way to avoid having to acknowledge his fantasy races are not uniform, you probably just guessed the big reveal in this book. It's sadly not surprising, since Eddings is all in on his racial construction of this fantasy world, but it's still disgusting. The main reason why I decided to re-read this series, which is notorious for being a re-run of the Belgariad, is because I think what Eddings does here with prophecy, the voice in Garion's head, and the meddling of the seers is hilarious. The plot is openly on rails to the point that the characters actively grumble about it. I find that oddly entertaining, a bit like watching a Twitch streamer play an RPG. In this installment, Errand does something random in the middle of a Murgo temple, Garion gets a level-up chime, the prophecy gets very smug about how well things are going, and no one knows why. It's an absolute delight, and I don't know of any other series that isn't pure parody that would be willing to use that as a plot element. Sadly, King of the Murgos has only one or two enjoyable moments like that, a whole lot of frankly boring map exploration, some truly egregious racist claptrap, and no plot development worth speaking of. I probably should have skipped this one when re-reading this series. Followed by Demon Lord of Karanda. 
Rating: 4 out of 10

23 December 2020

Jonathan McDowell: Rooting the Tesco Hudl

Tesco Hudl I have an original Tesco Hudl - a Rockchip RK3188 based Android tablet. It's somewhat long in the tooth and mine is running Android 4.2.2 (Jelly Bean). As a first step in trying to get it updated a bit, I decided to root it and have a poke about. There are plenty of guides for this, but they mostly involve downloading Android apps that look dodgy or don't exist any more. Thankfully the bootloader is unlocked, so I did it the hard (manual) way from a Debian 10 (Buster) box. I doubt this is useful to many folk, but I thought I'd write it up. As you'd expect, follow this at your own risk; there is the potential to brick the Hudl. First, enable developer mode on the Hudl (so we can adb in). Open the Settings app, scroll down to the bottom and click "About Tablet", scroll down to the bottom and tap "Build number" 7 times, at which point it will tell you "You are now a developer!". Go back to the main settings menu and just above "About Tablet" there will now be a "Developer options" entry. Click it, then make sure the box beside "USB debugging" is ticked. Now you need to install the appropriate tools on your Debian box. That should be:
$ sudo apt install adb rkflashtool
We also need to download a suitable su tool. I lazily went for the prebuilt SuperSU Root:
$ mkdir hudl-root
$ cd hudl-root
$ wget https://supersuroot.org/downloads/SuperSU-v2.79-201612051815.zip
$ unzip SuperSU-v2.79-201612051815.zip
2.82 is the latest version but has problems on Jelly Bean; the device will end up not properly booting. Hook the Hudl up to your machine with a suitable USB cable and you'll now be able to get a shell on it:
$ adb shell
* daemon not running; starting now at tcp:5037
* daemon started successfully
shell@android:/ $
Ctrl-D will quit the shell and return you to the local prompt. The next step is to reboot into the Rockchip bootloader, and use that to download the system partition (just over 1G in size):
$ adb reboot bootloader
$ sudo rkflashtool r system > system.img
rkflashtool: info: rkflashtool v5.2
rkflashtool: info: Detected RK3188...
rkflashtool: info: interface claimed
rkflashtool: info: working with partition: system
rkflashtool: info: found offset: 0x00142000
rkflashtool: info: found size: 0x00200000
rkflashtool: info: reading flash memory at offset 0x00341fe0... Done!
We now have a system.img file which represents the system partition of the device. We can mount that and copy over the su binary and SuperSU apk.
$ sudo mount -o loop system.img /mnt
$ sudo cp common/Superuser.apk /mnt/app/
$ sudo cp armv7/su /mnt/xbin/
$ sudo chmod +sx /mnt/xbin/su
$ sudo umount /mnt
Finally we can write this image back to the device, reboot and once the reboot has completed use adb to connect and su to root. SuperSU might pop up a dialog on the tablet asking you to ok the action (and possibly indicate it needs to do a fixup of the installation):
$ sudo rkflashtool w system < system.img
$ sudo rkflashtool b
$ adb shell
shell@android:/ $ su -
root@android:/ #
