
12 February 2023

Russell Coker: Intel vs AMD

In response to a post about my latest laptop I had someone ask why I chose an Intel CPU. I've been a fan of the Thinkpad series of laptops since the 90s. They have always seemed well constructed (given the constraints of being light etc) and had a good feature set. Also I really like the TrackPoint. I've been a fan of the smaller Thinkpads since I got an X-301 from e-waste [1] and the X1-Carbon series is the latest and greatest line of small Thinkpads. AMD makes some nice laptop CPUs which appear to have low power use and good performance, particularly for smaller numbers of threads; it seems that AMD CPUs are generally designed for fewer cores with higher performance per core, which is good for laptops. But Lenovo only makes the Thinkpad Carbon X1 series with Intel CPUs, so choosing that model of laptop means choosing Intel. It could be that for some combination of size, TDP, speed, etc Intel just happens to beat AMD for all the times when Lenovo was designing a new motherboard for the Carbon X1. But it seems more likely that Intel has been lobbying Lenovo for this. It would be nice if there was an anti-trust investigation into Intel; everyone who's involved in the computer industry knows of some of the anti-competitive things that they have done. Also it would be nice if Lenovo started shipping laptops with ARM CPUs across their entire range. But for the moment I guess I have to keep buying laptops with Intel CPUs.

1 February 2023

Julian Andres Klode: Ubuntu 2022v1 secure boot key rotation and friends

This is the story of the currently progressing changes to secure boot on Ubuntu and the history of how we got to where we are.

taking a step back: how does secure boot on Ubuntu work? Booting on Ubuntu involves three components after the firmware:
  1. shim
  2. grub
  3. linux
Each of these is a PE binary signed with a key. The shim is signed by Microsoft's 3rd party key and embeds a self-signed Canonical CA certificate, and optionally a vendor dbx (a list of revoked certificates or binaries). grub and linux (and fwupd) are then signed by a certificate issued by that CA. In Ubuntu's case, the CA certificate is sharded: multiple people each have a part of the key and they need to meet to be able to combine it and sign things, such as new code signing certificates.
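As a small illustration of this chain, you can list the Authenticode signatures embedded in these binaries with sbverify from the sbsigntool package. The ESP paths below are the usual Ubuntu locations and may differ on your system, and checking the chain against the CA requires the CA certificate in PEM form (the certificate file name here is only a placeholder):
$ sbverify --list /boot/efi/EFI/ubuntu/shimx64.efi
$ sbverify --list /boot/efi/EFI/ubuntu/grubx64.efi
$ sbverify --cert canonical-uefi-ca.pem /boot/efi/EFI/ubuntu/grubx64.efi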

BootHole When BootHole happened in 2020, travel was suspended and we hence could not rotate to a new signing certificate. So when it came to updating our shim for the CVEs, we had to revoke all previously signed kernels, grubs, shims, fwupds by their hashes. This generated a very large vendor dbx which caused lots of issues as shim exported them to a UEFI variable, and not everyone had enough space for such large variables. Sigh. We decided we want to rotate our signing key next time. This was also when upstream added SBAT metadata to shim and grub. This gives a simple versioning scheme for security updates and easy revocation using a simple EFI variable that shim writes to and reads from.
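For the curious, the SBAT revocation level that shim maintains can be inspected from a running system. This is a minimal sketch assuming a recent shim that exposes the variable at runtime under its usual GUID; the first four bytes of an efivarfs entry are the variable attributes, hence the tail:
$ tail -c +5 /sys/firmware/efi/efivars/SbatLevelRT-605dab50-e046-4300-abb6-3dd810dd8b23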

Spring 2022 CVEs We still were not ready for travel in 2021, but during BootHole we developed the SBAT mechanism, so one could revoke a grub or shim by setting a single EFI variable. We actually missed rotating the shim this cycle as a new vulnerability was reported immediately after it, and we decided to hold on to it.

2022 key rotation and the fall CVEs This caused some problems when the 2nd CVE round came, as we did not have a shim with the latest SBAT level, and neither did a lot of others, so we ended up deciding upstream to not bump the shim SBAT requirements just yet. Sigh. Anyway, in October we were meeting again for the first time at a Canonical sprint, and the shardholders got together and created three new signing keys: 2022v1, 2022v2, and 2022v3. It took us until January before they were installed into the signing service and PPAs were set up to sign with them. We also submitted a shim 15.7 with the old keys revoked, which came back at around the same time. Now we were in a hurry. The 22.04.2 point release was scheduled for around the middle of February, and we had nothing signed with the new keys yet, but our new shim, which we need for the point release (so the point release media remains bootable after the next round of CVEs), required the new keys. So how do we ensure that users have kernels, grubs, and fwupd signed with the new key before we install the new shim?

upgrade ordering grub and fwupd are simple cases: For grub, we depend on the new version. We decided to backport grub 2.06 to all releases (which moved focal and bionic up from 2.04), and kept the versioning of the -signed packages the same across all releases, so we were able to simply bump the Depends for grub to specify the new minimum version. For fwupd-efi, we added Breaks. (Actually, we also had a backport of the CVEs for 2.04 based grub, and we did publish that for 20.04 signed with the old keys before backporting 2.06 to it.) A rough debian/control sketch of these relationships follows the list below. Kernels are a different story: There are about 60 kernels out there. My initial idea was that we could just add Breaks for all of them. So for our meta package linux-image-generic, which depends on linux-image-$(uname -r)-generic, we'd simply add Breaks: linux-image-generic (<< 5.19.0-31) and then adjust those Breaks for each series. This would have been super annoying, but ultimately I figured this would be the safest option. This however caused concern, because it could be that apt decides to remove the kernel metapackage. I explored checking the kernels at runtime and aborting if we don't have a trusted kernel in preinst. This ensures that if you try to upgrade shim without having a kernel, it would fail to install. But this ultimately has a couple of issues:
  1. It aborts the entire transaction at that point, so users will be unable to run apt upgrade until they have a recent kernel.
  2. We cannot even guarantee that a kernel would be unpacked first. So even if you got a new kernel, apt/dpkg might attempt to unpack it first and then the preinst would fail because no kernel is present yet.
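To make the ordering a bit more concrete, here is the rough debian/control sketch promised above. Which binary package carries which field is simplified, and the version bounds simply reuse package versions mentioned elsewhere in this post, so treat it as an illustration rather than the actual packaging:
  Package: shim-signed
  Depends: grub-efi-amd64-signed (>= 1.187.2~)
  Breaks: fwupd-signed (<< 1.51~)
  # the rejected per-series kernel approach would have added something like:
  # Breaks: linux-image-generic (<< 5.19.0-31)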
Ultimately we believed the danger to be too large given that no kernels had yet been released to users. If we had kernels pushed out for 1-2 months already, this would have been a viable choice. So in the end, I ended up modifying the shim packaging to install both the latest shim and the previous one, and an update-alternatives alternative to select between the two: In its post-installation maintainer script, shim-signed checks whether all kernels with a version greater or equal to the running one are not revoked, and if so, it will set up the latest alternative with priority 100 and the previous one with a priority of 50. If one or more of those kernels was signed with a revoked key, it will swap the priorities around, so that the previous version is preferred. Now this is fairly static, and we do want you to switch to the latest shim eventually, so I also added hooks to the kernel install to trigger the shim-signed postinst script when a new kernel is being installed. It will then update the alternatives based on the current set of kernels, and if it now points to the latest shim, reinstall shim and grub to the ESP. Ultimately this means that once you install your 2nd non-revoked kernel, or you install a non-revoked kernel and then reconfigure shim or the kernel, you will get the latest shim. When you install your first non-revoked kernel, your currently booted kernel is still revoked, so it's not upgraded immediately. This has a benefit in that you will most likely have two kernels you can boot without disabling secure boot.
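A minimal sketch of the alternatives setup described above, assuming the file names shown later in this post (the real shim-signed maintainer scripts wrap this in the revocation checks just described):
$ update-alternatives --install /usr/lib/shim/shimx64.efi.signed shimx64.efi.signed /usr/lib/shim/shimx64.efi.signed.latest 100
$ update-alternatives --install /usr/lib/shim/shimx64.efi.signed shimx64.efi.signed /usr/lib/shim/shimx64.efi.signed.previous 50
If a revoked kernel is still in play, the same two calls are made with the priorities swapped, so that the previous shim wins in auto mode.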

regressions Of course, the first version I uploaded still had some remaining hardcoded shimx64 references in the scripts and so failed to install on arm64 where shimaa64 is used. And if that were not enough, I also forgot to include support for gzip compressed kernels there. Sigh, I need better testing infrastructure to be able to easily run arm64 tests as well (I only tested the actual booting there, not the scripts). shim-signed migrated to the release pocket in lunar fairly quickly, but this caused images to stop working, because the new shim was installed into images but no kernel was available yet, so we had to demote it to proposed and block migration. Despite all the work done for end users, we still need to be careful about how we roll this out for image building.

another grub update for OOM issues. We had two grubs to release: First there was the security update for the recent set of CVEs, then there also was an OOM issue for large initrds which was blocking critical OEM work. We fixed the OOM issue by cherry-picking all 2.12 memory management patches, as well as the Red Hat patches to the loader we take from there. This ended up being a fairly large patch set and I was hesitant to tie the security update to that, so I ended up pushing the security update everywhere first, and then pushed the OOM fixes this week. With the OOM patches, you should be able to boot initrds of between 400M and 1GB; it also depends on the memory layout of your machine and your screen resolution and background images. The OEM team had success testing 400MB in real life, and I tested up to I think 1.2GB in qemu; I ran out of FAT space then and stopped going higher :D
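If you want to reproduce a boot test like that yourself, one simple way is to boot a test image under the OVMF firmware in qemu. This is only a rough sketch: the firmware path assumes the Ubuntu ovmf package, the image name is a placeholder, and plain OVMF.fd does not enforce secure boot:
$ qemu-system-x86_64 -m 4096 -bios /usr/share/OVMF/OVMF.fd -drive file=test-image.img,format=raw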

other features in this round
  • Intel TDX support in grub and shim
  • Kernels are now allocated as CODE rather than DATA, as per the upstream mm changes; this might fix boot on the X13s

am I using this yet? The new signing keys are used in:
  • shim-signed 1.54 on 22.10+, 1.51.3 on 22.04, 1.40.9 on 20.04, 1.37~18.04.13 on 18.04
  • grub2-signed 1.187.2~ or newer (binary packages grub-efi-amd64-signed or grub-efi-arm64-signed), 1.192 on 23.04.
  • fwupd-signed 1.51~ or newer
  • various linux updates. Check apt changelog linux-image-unsigned-$(uname -r) and look for Revoke & rotate to new signing key (LP: #2002812) to see whether it is signed with the new key (an example command follows this list).
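For example, this greps the changelog of your currently running kernel for the entry mentioned above (apt prints the changelog to stdout when piped):
$ apt changelog linux-image-unsigned-$(uname -r) | grep -F 'Revoke & rotate'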
If you were able to install shim-signed, your grub and fwupd-efi will have the correct version as that is ensured by packaging. However your shim may still point to the old one. To check which shim will be used by grub-install, you can check the status of the shimx64.efi.signed or (on arm64) shimaa64.efi.signed alternative. The best link needs to point to the file ending in latest:
$ update-alternatives --display shimx64.efi.signed
shimx64.efi.signed - auto mode
  link best version is /usr/lib/shim/shimx64.efi.signed.latest
  link currently points to /usr/lib/shim/shimx64.efi.signed.latest
  link shimx64.efi.signed is /usr/lib/shim/shimx64.efi.signed
/usr/lib/shim/shimx64.efi.signed.latest - priority 100
/usr/lib/shim/shimx64.efi.signed.previous - priority 50
If it does not, but you have installed a new kernel compatible with the new shim, you can switch immediately to the new shim after rebooting into the kernel by running dpkg-reconfigure shim-signed. You'll see in the output if the shim was updated, or you can check the output of update-alternatives as you did above after the reconfiguration has finished. For the out of memory issues in grub, you need grub2-signed 1.187.3~ (same binaries as above).

how do I test this (while it's in proposed)?
  1. upgrade your kernel to proposed and reboot into that
  2. upgrade your grub-efi-amd64-signed, shim-signed, fwupd-signed to proposed.
If you already upgraded your shim before your kernel, don't worry:
  1. upgrade your kernel and reboot
  2. run dpkg-reconfigure shim-signed
And you'll be all good to go.

deep dive: uploading signed boot assets to Ubuntu For each signed boot asset, we build one version in the latest stable release and the development release. We then binary copy the built binaries from the latest stable release to older stable releases. This process ensures two things: We know the next stable release is able to build the assets and we also minimize the number of signed assets. OK, I lied. For shim, we actually do not build in the development release but copy the binaries upward from the latest stable, as each shim needs to go through external signing. The entire workflow looks something like this:
  1. Upload the unsigned package to one of the build PPAs
  2. Upload the signed package to the same PPA
  3. For stable release uploads:
    • Copy the unsigned package back across all stable releases in the PPA
    • Upload the signed package for stable releases to the same PPA with ~<release>.1 appended to the version
  4. Submit a request to canonical-signing-jobs to sign the uploads. The signing job helper copies the binary -unsigned packages to the primary-2022v1 PPA where they are signed, creating a signing tarball, then it copies the source package for the -signed package to the same PPA which then downloads the signing tarball during build and places the signed assets into the -signed deb. Resulting binaries will be placed into the proposed PPA: https://launchpad.net/~ubuntu-uefi-team/+archive/ubuntu/proposed
  5. Review the binaries themselves
  6. Unembargo and binary copy the binaries from the proposed PPA to the proposed-public PPA: https://launchpad.net/~ubuntu-uefi-team/+archive/ubuntu/proposed-public. This step is not strictly necessary, but it enables tools like sru-review to work, as they cannot access the packages from the normal private proposed PPA.
  7. Binary copy from proposed-public to the proposed queue(s) in the primary archive
Lots of steps!

WIP As of writing, only the grub updates have been released; other updates are still being verified in proposed. An update for fwupd in bionic will be issued at a later point, removing the EFI bits from the fwupd 1.2 packaging and using the separate fwupd-efi project instead, as later release series do.

15 January 2023

Matthew Garrett: Blogging and microblogging

Long-term Linux users may remember that Alan Cox used to write an online diary. This was before the concept of a "Weblog" had really become a thing, and there certainly weren't any expectations around what one was used for - while now blogging tends to imply a reasonably long-form piece on a specific topic, Alan was just sitting there noting small life concerns or particular technical details in interesting problems he'd solved that day. For me, that was fascinating. I was trying to figure out how to get into kernel development, and was trying to read as much LKML as I could to figure out how kernel developers did stuff. But when you see discussion on LKML, you're frequently missing the early stages. If an LKML patch is a picture of an owl, I wanted to know how to draw the owl, and most of the conversations about starting in kernel development were very "Draw two circles. Now draw the rest of the owl". Alan's musings gave me insight into the thought processes involved in getting from "Here's the bug" to "Here's the patch" in ways that really wouldn't have worked in a more long-form medium.

For the past decade or so, as I moved away from just doing kernel development and focused more on security work instead, Twitter's filled a similar role for me. I've seen people just dumping their thought process as they work through a problem, helping me come up with effective models for solving similar problems. I've learned that the smartest people in the field will spend hours (if not days) working on an issue before realising that they misread something back at the beginning and that's helped me feel like I'm not unusually bad at any of this. It's helped me learn more about my peers, about my field, and about myself.

Twitter's now under new ownership that appears to think all the worst bits of Twitter were actually the good bits, so I've mostly bailed to the Fediverse instead. There's no intrinsic length limit on posts there - Mastodon defaults to 500 characters per post, but that's configurable per instance. But even at 500 characters, it means there's more room to provide thoughtful context than there is on Twitter, and what I've seen so far is more detailed conversation and higher levels of meaningful engagement. Which is great! Except it also seems to discourage some of the posting style that I found so valuable on Twitter - if your timeline is full of nuanced discourse, it feels kind of rude to just scream "THIS FUCKING PIECE OF SHIT IGNORES THE HIGH ADDRESS BIT ON EVERY OTHER WRITE" even though that's exactly the sort of content I'm there for.

And, yeah, not everything has to be for me. But I worry that as Twitter's relevance fades for the people I'm most interested in, we're replacing it with something that's not equivalent - something that doesn't encourage just dropping 50 characters or so of your current thought process into a space where it can be seen by thousands of people. And I think that's a shame.


3 January 2023

Russell Coker: Samsung Galaxy Note 10.1 2014

In May 2014 I bought a Samsung Galaxy Note 10.1 2014 edition tablet (wikipedia page [1]) with 32G of storage. Its display is 2560x1600 resolution, which still compares well to the latest tablets. The Galaxy Tab S8 [2] is the latest high-end tablet series from Samsung and the 11 inch tablet in that series also has a 2560x1600 display, giving it a slightly lower DPI! The latest series also has 12.4 inch and 14.6 inch tablets with resolutions of 2800x1752 and 2960x1848 respectively. Obviously if you want a 14 inch tablet then the latest offerings are good, but if you want a 10 or 11 inch tablet then Samsung hasn't improved much. The Note 10.1 has 3G of RAM and a choice of 16G, 32G, or 64G of storage. The latest Tab S8 tablets have 8G to 16G of RAM and 128G to 512G of internal storage, which are great if you need such things. For many tasks 3G of RAM is quite adequate and as I chose the 32G model I haven't had a problem with storage. The S Pen is a feature of this tablet which is also on the latest high-end Samsung tablets; it is useful for accessing small elements in web sites designed for desktop use and for graphics editing. One noteworthy feature of this tablet is the fact that when in landscape orientation it has speakers on each side, which is the correct layout as the vast majority of video with stereo sound is in a landscape orientation. After using that tablet for about 4 years I bought myself a newer tablet and gave it to my wife. She has since passed it on to another relative who is using it regularly. That tablet seems to have lasted well, still being quite usable when it's almost 9 years old. The price including delivery was $579, which works out to about $1.30 per week (disregarding interest and inflation). According to the Reserve Bank of Australia inflation calculator [3] $579 in 2014 is equivalent to $652 in 2021; they don't have results for later than 2021 so I'll assume it would be $675 in 2023. Currently the main problems with this tablet are lack of USB-C support (which means it's difficult to connect to an external display among other things) and lack of a recent version of Android, 4.4.2 being the latest OTA update available. The XDA Developers forum has a section for this tablet [4] which includes discussion of updates to Android 5.x for devices which didn't get it automatically and for upgrading to very recent Android versions in LineageOS. I'm idly considering one of those options, but for the current user the Google Play store is a requirement.

Newer Samsung Tablets The current equivalent Samsung tablet is the Galaxy Tab S8, which is currently being sold for $1055, 56% higher than the inflation adjusted price of my tablet. I don't think this is reasonable given that I bought mine 7 months after release and it's now 11 months since the release of the Tab S8. The Tab S8 has more RAM, more storage, and a faster CPU, but those come from improvements across the entire computer industry, and replacing old parts with newer versions of the same things (including changing to USB-C) doesn't justify a price rise. Increasing RAM size by a factor of 3-5 and increasing storage by a factor of 8 over the last 9 years doesn't match the industry trends for PCs; also as an aside my latest laptop only has 8G of RAM and works well for much more demanding tasks.
The Tab S8 series also has significantly better cameras, but I don't think that's a big deal; the 2Mp front camera in my tablet can provide adequate quality for video conferencing and usually saturates the upload bandwidth, and again that's an issue of the entire industry moving to newer hardware. I don't think it's bad to take a form factor and display that works well and put newer versions of the CPU, RAM, storage, cameras, and OS on it. But asking for 56% more money for the updated tablet seems unreasonable. The current S8 Ultra is going for $1760 and the S8+ is $1479. I think those are ridiculous prices for tablets as there is a decent range of new laptops that are cheaper. I believe that the purpose of a tablet is to be easy to carry and quick to start using (no waiting for a laptop to connect to wifi after leaving suspend). The largest of the S8 Tabs is about the same length and width as a Thinkpad X1 Carbon, with the benefits being that it's thinner and lighter, but if you got a tablet case with keyboard then it would be thicker and heavier. The S8 seems like bad value for money, and the S8+ and S8 Ultra don't seem to compare well to laptops and Chromebooks with touch screens unless you have a specific need for Android tablet apps. If Samsung are going to just make new tablets without any significant improvements other than refreshing to the latest CPU, RAM, storage, and camera technology and force users to upgrade via a lack of new OS support, then they shouldn't charge so much. Stick well below $1000 and people will be more inclined to replace items; expensive items are expected to last.

Conclusion Buying this tablet was definitely a good choice. It has performed well for many years and after a couple of years of light use it's back in daily use again. The value for money it offered was significantly greater than newer tablets; when it was new it was really high-end, while the current S8 Tab series of tablets aren't anything special when compared to other tablets.

29 December 2022

Chris Lamb: Favourite books of 2022: Memoir/biography

In my two most recent posts, I listed the fiction and classic fiction I enjoyed the most in 2022. I'll leave my roundup of general non-fiction until tomorrow, but today I'll be going over my favourite memoirs and biographies, in no particular order. Books that just missed the cut here include Roisin Kiberd's The Disconnect: A Personal Journey Through the Internet (2019), Steve Richards' The Prime Ministers (2019) which reflects on UK leadership from Harold Wilson to Boris Johnson, Robert Graves' Great War memoir Goodbye to All That (1929) and David Mikics's portrait of Stanley Kubrick called American Filmmaker.

Afropean: Notes from Black Europe (2019) Johny Pitts Johny Pitts is a photographer and writer who lives in the north of England and who set out to explore "black Europe from the street up": those districts within European cities that, although they were 'white spaces' in the past, are now occupied by Black people. Unhappy with the framing of the Black experience back home in post-industrial Sheffield, Pitts decided to become a nomad and go abroad to seek out the sense of belonging he cannot find in post-Brexit Britain, and Afropean details his journey through Paris, Brussels, Lisbon, Berlin, Stockholm and Moscow. However, Pitts isn't just avoiding the polarisation and structural racism embedded in contemporary British life. Rather, he is seeking a kind of super-national community that transcends the reductive and limiting nationalisms of all European countries, most of which have based their national story on a self-serving mix of nostalgia and postcolonial fairy tales. Indeed, the term 'Afropean' is the key to understanding the goal of this captivating memoir. Pitts writes at the beginning of this book that the word wasn't driven only as a response to the crude nativisms of Nigel Farage and Marine Le Pen, but that it:
encouraged me to think of myself as whole and unhyphenated. [...] Here was a space where blackness was taking part in shaping European identity at large. It suggested the possibility of living in and with more than one idea: Africa and Europe, or, by extension, the Global South and the West, without being mixed-this, half-that or black-other. That being black in Europe didn't necessarily mean being an immigrant.
In search of this whole new theory of home, Pitts travels to the infamous banlieue of Clichy-sous-Bois just to the east of Paris, thence to Matonge in Brussels, as well as taking a quick and abortive trip into Moscow and visiting other parallel communities throughout the continent. In these disparate environs, Pitts strikes up countless conversations with regular folk in order to hear their quotidian stories of living, and ultimately to move away from the idea that Black history is defined exclusively by slavery. Indeed, to Pitts, the idea of race is one that ultimately restricts one's humanity; the concept "is often forced to embody and speak for certain ideas, despite the fact it can't ever hold in both hands the full spectrum of a human life and the cultural nuances it creates." It's difficult to do justice to the effectiveness of the conversations Pitts has throughout his travels, but his shrewd attention to demeanour, language, raiment and expression vividly brings alive the people he talks to. Of related interest to fellow Brits as well are the many astute observations and comparisons with Black and working-class British life. The tone shifts quite often throughout this book. There might be an amusing aside one minute, such as the portrait of an African American tourist in Paris to whom "the whole city was a film set, with even its homeless people appearing to him as something oddly picturesque." But the register abruptly changes when he visits Clichy-sous-Bois on an anniversary important to the area, and an element of genuine danger is introduced when Johny briefly visits Moscow and barely gets out alive. What's especially remarkable about this book is that there is a freshness to Pitts' treatment of many well-worn subjects. This can be seen in his account of Belgium under the reign of Leopold II, the history of Portuguese colonialism (actually mostly unknown to me), as well as in the way Pitts' own attitude to contemporary anti-fascist movements changes throughout an Antifa march. This chapter was an especial delight, and not only because it underlined just how much of Johny's trip was an inner journey of an author willing to have his mind changed. Although Johny travels alone throughout his journey, in the second half of the book Pitts becomes increasingly accompanied by a number of Black intellectuals through the selective citing of Frantz Fanon, James Baldwin and Caryl Phillips. (Nevertheless, Johny has also brought his camera for the journey as well, adding a personal touch to this already highly-intimate book.) I suspect that his increasing exercise of Black intellectual writing in the latter half of the book may be because Pitts' hopes about 'Afropean' existence ever becoming a reality are continually dashed and undercut. The unity among potential Afropeans appears more and more unrealisable as the narrative unfolds, the various reasons for which Johny explores both prosaically and poetically. Indeed, by the end of the book, it's unclear whether Johny has managed to find what he left the shores of England to find. But his mix of history, sociology and observation of other cultures right on my doorstep was something of a revelation to me.

Orwell's Roses (2021) Rebecca Solnit Orwell's Roses is an alternative journey through the life and afterlife of George Orwell, reimagining his life primarily through the lens of his attentiveness to nature. Yet this framing of the book as an 'alternative' history is only revisionist if we compare it to the usual view of Orwell as a bastion of 'free speech' and English 'common sense': the roses of the title of this book were very much planted by Orwell in his Hertfordshire garden in 1936, and his yearning for nature was one of the many constants throughout his life. Indeed, Orwell wrote about wildlife and outdoor life whenever he could get away with it, taking pleasure in a blackbird's song and waxing nostalgic about the English countryside in his 1939 novel Coming Up for Air (reviewed yesterday).
By sheer chance, I actually visited this exact garden immediately prior to the publication of this book
Solnit has a particular ability to evince unexpected connections between Orwell and the things he was writing about: Joseph Stalin's obsession with forcing lemons to grow in ludicrously cold climates; Orwell's slave-owning ancestors in Jamaica; Jamaica Kincaid's critique of colonialism in the flower garden; and the exploitative rose industry in Colombia that supplies the American market. Solnit introduces all of these new correspondences in a voice that feels like a breath of fresh air after decades of stodgy Orwellania, and without lapsing into a kind of verbal soft-focus. Indeed, the book displays a marked indifference towards the usual (male-centric) Orwell fandom. Her book draws to a close with a rereading of the 'dystopian' Nineteen Eighty-Four that completes her touching portrait of a more optimistic and hopeful Orwell, as well as a reflection on beauty and a manifesto for experiencing joy as an act of resistance.

The Disaster Artist (2013) Greg Sestero & Tom Bissell For those not already in the know, The Room was a 2003 film by director-producer-writer-actor Tommy Wiseau, an inscrutable Polish immigré with an impenetrable background, an idiosyncratic choice of wardrobe and a mysterious large source of income. The film, which centres on a melodramatic love triangle, has since been described by several commentators and publications as one of the worst films ever made. Tommy's production completely bombed at the so-called 'box office' (the release was actually funded entirely by Wiseau personally), but the film slowly became a favourite at cult cinema screenings. Given Tommy's prominent and central role in the film, there was always an inherent cruelty involved in indulging in the spectacle of The Room: the audience was laughing because the film was astonishingly bad, of course, but Wiseau infused his film with a sincere earnestness that, in a heartless twist of irony, may be precisely why it is so terrible to begin with. Indeed, it should be stressed that The Room is not simply a 'bad' film, and therefore not worth paying any attention to: it is uncannily bad in a way that makes it eerily compelling to watch. It unintentionally subverts all the rules of filmmaking in a way that captivates the attention. Take this representative example:
This thirty-six-second scene showcases almost every problem in The Room: the acting, the lighting, the sound design, the pacing, the dialogue and the fact that this unnecessary scene (which does not advance the plot) even exists in the first place. One problem that the above clip doesn't capture, however, is Tommy's vulnerable ego. (He would later make the potentially conflicting claims both that The Room was an ironic cult success and that he is okay with people interpreting it sincerely.) Indeed, the filmmaker's central role as Johnny (along with his Willy-Wonka meets Dracula persona) doesn't strike viewers as yet another vanity project; it actually asks more questions than it answers. Why did Tommy even make this film? What is driving him psychologically? And why, and how, is he so spellbinding? On the surface, then, 2013's The Disaster Artist is a book about the making of one of the strangest films ever made, written by The Room's co-star Greg Sestero and journalist Tom Bissell. Naturally, you learn some jaw-dropping facts about the production and inspiration of the film, the seed of which was planted when Greg and Tommy went to see an early screening of The Talented Mr Ripley (1999). It turns out that Greg's character in The Room is based on Tommy's idiosyncratic misinterpretation of its plot, extending even to the character's name Mark who, in textbook Tommy style, was taken directly (or at least Tommy believed) from one of Ripley's movie stars: "Mark Damon" [sic]. Almost as absorbing as The Room itself, The Disaster Artist is partly a memoir about Thomas P. Wiseau and his cinematic masterpiece. But it could also be described as a biography about a dysfunctional male relationship and, almost certainly entirely unconsciously, a text about the limitations of heteronormativity. It is this latter element that struck me the most whilst reading this book: if you take a step back for a moment, there is something uniquely sad about Tommy's inability to connect with others, and then, when Wiseau poured his soul into his film, people just laughed. Despite the stories about his atrocious behaviour both on and off the film set, there's something deeply tragic about the whole affair. Jean-Luc Godard, who passed away earlier this year, once observed that every fictional film is a documentary of its actors. The Disaster Artist shows that this well-worn aphorism doesn't begin to cover it.

28 December 2022

Russell Coker: Links December 2022

Charles Stross wrote an informative summary of the problems with the UK monarchy [1], conveniently before the queen died. The blog post To The Next Mass Shooter, A Modest Proposal is a well written suggestion to potential mass murderers [2]. The New Yorker has an interesting and amusing article about the former CIA employee who released the Vault 7 collection of CIA attack software [3]. This exposes the ridiculously poor hiring practices of the CIA, which involved far fewer background checks than the reporter writing the story did. Wired has an interesting 6 part series about the hunt for Alpha02, the admin of the Alphabay dark web marketplace [4]. The Atlantic has an interesting and informative article about Marjorie Taylor Greene, one of the most horrible politicians in the world [5]. Anarcat wrote a long and detailed blog post about Matrix [6]. It's mostly about comparing Matrix to other services and analysing the overall environment of IM systems. I recommend using Matrix; it is quite good, although having a server with SSD storage is required for the database. Edent wrote an interesting thought experiment on how one might try to regain access to all their digital data if a lightning strike destroyed everything in their home [7]. Cory Doctorow wrote an interesting article about the crapification of literary contracts [8]. A lot of this applies to most contracts between corporations and individuals. We need legislation to restrict corporations from such abuse. Jared A Brock wrote an insightful article about why AirBNB is horrible and how it will fail [9]. Habr has an interesting article on circumventing UEFI secure boot [10]. This doesn't make secure boot worthless but does expose some weaknesses in it. Matthew Garrett wrote an interesting blog post about stewardship of the UEFI boot ecosystem and how Microsoft has made some strange and possibly hypocritical decisions about it [11]. It also has a lot of background information on how UEFI can be used and misused. Cory Doctorow wrote an interesting article Let's Make Amazon Into a Dumb Pipe [12]. The idea is to use the Amazon search and reviews to find a product and then buy it elsewhere, a reverse of the showrooming practice where people look at products in stores and buy them online. There is already a browser plugin to search local libraries for Amazon books. Charles Stross wrote an interesting blog post about the UK Tory plan to destroy higher education [13]. There's a lot of similarities to what conservatives are doing in other countries. Antoine Beaupré wrote an insightful blog post How to nationalize the internet in Canada [14]. They cover the technical issues to be addressed as well as some social justice points that are often missed when discussing such issues. The Internet is not a luxury nowadays, it's an important part of daily life, and governments need to treat it the same way as roads and other national infrastructure.

18 December 2022

Russell Coker: Wall Facers

I'm currently reading the second book of the TriSolar Sci-Fi series by Cixin Liu. I've only just started it, so this post can't have spoilers for it, and I will also only have minimal spoilers for the first book (nothing more than you will get from pop culture references to it). In the second book there are people called Wall Facers who have broad powers to shape the course of the Human response to an alien invasion in 400+ years' time. The idea is that as the aliens have an ability to see everything that can be seen on Earth, any ideas that leave the brain of one person can be snooped on, so if some people act independently without communicating their plans they can take the aliens by surprise. While that is probably going to work out well in the books, history in general seems to show that people who act independently without any useful feedback from others tend to perform poorly; every king and dictator seems to demonstrate this.

Efficient Work I've been thinking about what I would do if I had significant powers to guide the response to an alien threat in some hundreds of years. The first thing to do would be to get all people working as efficiently as possible. Without the imminent threat of alien invasion we can have debates about how much time to spend working vs leisure time. Should we make 24 hours per week the new normal work week? But if the threat of annihilation is looming then the discussion should be about how to get as many people as possible working as much as possible. Currently 1/4 of the world population lack access to safe drinking water [1]; there's a plan to achieve "universal and equitable access to safe and affordable drinking water for all" by 2030. But 2030 isn't soon enough: another 8 years where 1/4 of children born won't reach their potential due to poor water is unacceptable. Currently 13% of the world population don't have access to electricity and 40% don't have access to clean fuels for cooking [2]. Lack of energy access reduces health and opportunities for education. Healthcare is another major obstacle to human development and therefore economic development. Even some allegedly first-world countries like the US lack universal affordable healthcare. I think we could reasonably get safe water to 99% of the world population before 2025 if we tried hard (i.e. applied a small fraction of the resources of a single war to it). Getting electricity to 95% of the world population and clean cooking fuels to 90% of the world population are probably achievable goals for 2025 as well. Healthcare is a slightly harder problem as we need to train more nurses and doctors. A registered nurse apparently needs 3 years of training after completing high school. We may have to improve high schools to get more students up to the standard of nursing degrees. If it takes 3 years to improve schools in year 9+ and then 3 years to get more high school graduates, that would mean that it would take about 9 years to get an increase in nurses. Doing this would require increasing the capacity of universities and making university almost free (as it was for decades). So in about 2031 we could start sending a significant number of nurses from developed countries to help out developing countries. Becoming a doctor apparently requires 8 years of study plus a minimum of 3 years' residency. So if doctors were entirely trained in first world countries then we wouldn't be able to send many doctors to developing countries until 2039.
If the residency was performed in other countries then it could be as early as 2036. According to the WHO, currently only half the world's population have adequate healthcare [3]. To get adequate healthcare to the world we need to more than double the number of doctors, because currently we don't have enough even in countries with decent healthcare systems such as Australia. It would probably take until at least 2060 to get enough doctors trained. The end goal of course would be to have every country able to train enough of its citizens to provide all medical services, but countries that have serious widespread healthcare problems that reduce the number of people who can pursue higher education will have difficulty in that until some of the healthcare problems are alleviated.

Education Obviously education is important to all achievements. Currently education seems very poorly run; it is possible to create a school system that teaches children effectively without the bullying that is common in Australia and without the sort of pressure that South Korea is infamous for. One of the main issues to resolve with the school system is the idea that everyone should learn at the same speed; that goal can only be achieved by making the majority of the students learn slowly. Students should be able to freely skip ahead as their skill permits and finish school at any age. Also high school isn't for everyone; the tech schools that teach trades need to be brought back.

Deceiving Aliens A plot point in the TriSolar series is that the aliens can see each other's thoughts; their local communication (their equivalent to talking) is based on reading each other's thoughts without the possibility of deception. While deceptive written communication is potentially possible for them, they haven't developed skills in that area. As a first step towards exploiting this, humans could focus more on linguistic development that increases language complexity, such as the way the English language adopts words from other languages and gives them slightly different meanings; for example, the difference between driver and chauffeur and the difference between dog and hound is not obvious to many Europeans who otherwise speak English fluently. When involved in conversation it's possible to convey meaning without directly stating things; this is used extensively by people who are interested in security. My observations of this are based on conversations with people who do government work, but I imagine that criminal organisations also do similar things for similar reasons. An increased focus on poetry in schools might be helpful in developing skills for conveying ideas to people who think in human ways where the message is unclear to non-humans who have no experience of deception. I wonder whether the ability to understand human poetry would make aliens less hostile to humans; if they can think like us then they would be less likely to want to exterminate us. Poker is a game that depends on the ability to deceive others; I've never been any good at it. I wonder if making it part of the school curriculum would help improve the overall human ability to deceive aliens. I don't think that such schools would become dens of sociopathy as depicted in Kakegurui, but it might have some negative results. Spreading education to a larger portion of the world's population requires more use of electronic education. Anything learned via text can be more easily assimilated by aliens than things that are learned directly from other people.
For high school and the basics of a university degree this is fine. But for more advanced education it seems that having a large face-to-face component might help keep the value away from the aliens.

More Ideas? What do you think I missed on this list? I wasn't trying to list every possibility, just the more important ones. Also for any goals other than increasing inequality for its own sake we should improve health and education for the world.

15 December 2022

Reproducible Builds: Supporter spotlight: David A. Wheeler on supply chain security

The Reproducible Builds project relies on several projects, supporters and sponsors for financial support, but they are also valued as ambassadors who spread the word about our project and the work that we do. This is the sixth instalment in a series featuring the projects, companies and individuals who support the Reproducible Builds project. We started this series by featuring the Civil Infrastructure Platform project and followed this up with a post about the Ford Foundation, as well as recent ones about ARDC, the Google Open Source Security Team (GOSST), Jan Nieuwenhuizen on Bootstrappable Builds, GNU Mes and GNU Guix, and Hans-Christoph Steiner of the F-Droid project. Today, however, we will be talking with David A. Wheeler, the Director of Open Source Supply Chain Security at the Linux Foundation.

Holger Levsen: Welcome, David, thanks for taking the time to talk with us today. First, could you briefly tell me about yourself? David: Sure! I'm David A. Wheeler and I work for the Linux Foundation as the Director of Open Source Supply Chain Security. That just means that my job is to help open source software projects improve their security, including its development, build, distribution, and incorporation in larger works, all the way out to its eventual use by end-users. In my copious free time I also teach at George Mason University (GMU); in particular, I teach a graduate course on how to design and implement secure software. My background is technical. I have a Bachelor's in Electronics Engineering, a Master's in Computer Science and a PhD in Information Technology. My PhD dissertation is connected to reproducible builds: it was on countering the Trusting Trust attack, an attack that subverts fundamental build system tools such as compilers. The attack was discovered by Karger & Schell in the 1970s, and later demonstrated & popularized by Ken Thompson. In my dissertation on trusting trust I showed that a process called Diverse Double-Compiling (DDC) could detect trusting trust attacks. That process is a specialized kind of reproducible build specifically designed to detect trusting trust style attacks. In addition, countering the trusting trust attack primarily becomes more important only when reproducible builds become more common. Reproducible builds enable detection of build-time subversions. Most attackers wouldn't bother with a trusting trust attack if they could just directly use a build-time subversion of the software they actually want to subvert.
Holger: Thanks for taking the time to introduce yourself to us. What do you think are the biggest challenges today in computing? David: There are many big challenges in computing today. For example:
Holger: Do you think reproducible builds are an important part in secure computing today already? David: Yes, but first let's put things in context. Today, when attackers exploit software vulnerabilities, they're primarily exploiting unintentional vulnerabilities that were created by the software developers. There are a lot of efforts to counter this: We're just starting to get better at this, which is good. However, attackers always try to attack the easiest target. As our deployed software has started to be hardened against attack, attackers have dramatically increased their attacks on the software supply chain (Sonatype found in 2022 that there's been a 742% increase year-over-year). The software supply chain hasn't historically gotten much attention, making it the easy target. There are simple supply chain attacks with simple solutions: Unfortunately, attackers know there are other lines of attack. One of the most dangerous is subverted build systems, as demonstrated by the subversion of the SolarWinds Orion system. In a subverted build system, developers can review the software source code all day and see no problem, because there is no problem there. Instead, the process to convert source code into the code people run, called the "build system", is subverted by an attacker. One solution for countering subverted build systems is to make the build systems harder to attack. That's a good thing to do, but you can never be confident that it was "good enough". How can you be sure it's not subverted, if there's no way to know? A stronger defense against subverted build systems is the idea of verified reproducible builds. A build is reproducible if given the same source code, build environment and build instructions, any party can recreate bit-by-bit identical copies of all specified artifacts. A build is verified if multiple different parties verify that they get the same result for that situation. When you have a verified reproducible build, either all the parties colluded (and you could always double-check it yourself), or the build process isn't subverted. There is one last turtle: What if the build system tools or machines are subverted themselves? This is not a common attack today, but it's important to know if we can address them when the time comes. The good news is that we can address this. For some situations reproducible builds can also counter such attacks. If there's a loop (that is, a compiler is used to generate itself), that's called the trusting trust attack, and that is more challenging. Thankfully, the trusting trust attack has been known about for decades and there are known solutions. The diverse double-compiling (DDC) process that I explained in my PhD dissertation, as well as the bootstrappable builds process, can both counter trusting trust attacks in the software space. So there is no reason to lose hope: there is a "bottom turtle", as it were.
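To make the "verified reproducible" idea a little more concrete, here is a minimal sketch of an independent rebuild-and-compare check on a Debian-style system. The package name hello and the path to the archive's .deb are just placeholders, and real verification also involves recreating the recorded build environment, which tools in the Reproducible Builds ecosystem take care of:
$ apt-get source hello
$ cd hello-*/ && dpkg-buildpackage -us -uc -b
$ sha256sum ../hello_*_amd64.deb /path/to/archive/hello_amd64.deb
$ diffoscope /path/to/archive/hello_amd64.deb ../hello_*_amd64.deb
If the checksums match, the build reproduced bit-for-bit; if not, diffoscope shows exactly where the two artifacts differ.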
Holger: Thankfully, this has all slowly started to change and supply chain issues are now widely discussed, as evidenced by efforts like Securing the Software Supply Chain: Recommended Practices Guide for Developers which you shared on our mailing list. In there, Reproducible Builds are mentioned as a recommended advanced practice, which is pretty cool (we've come a long way!), but to me it also sounds like it will take another decade until it becomes standard procedure. Do you agree on that timeline? David: I don't think there will be any particular timeframe. Different projects and ecosystems will move at different speeds. I wouldn't be surprised if it took a decade or so for them to become relatively common; there are good reasons for that. Today the most common kinds of attacks based on software vulnerabilities still involve unintentional vulnerabilities in operational systems. Attackers are starting to apply supply chain attacks, but the top such attacks today are typosquatting (creating packages with similar names) and dependency confusion (convincing projects to download packages from the wrong repositories). Reproducible builds don't counter those kinds of attacks, they counter subverted builds. It's important to eventually have verified reproducible builds, but understandably other issues are currently getting prioritized first. That said, reproducible builds are important long term. Many people are working on countering unintentional vulnerabilities and the most common kinds of supply chain attacks. As these other threats are countered, attackers will increasingly target build systems. Attackers always go for the weakest link. We will eventually need verified reproducible builds in many situations, and it'll take a while to get build systems able to widely perform reproducible builds, so we need to start that work now. That's true for anything where you know you'll need it but it will take a long time to get ready: you need to start now.
Holger: What are your suggestions to accelerate adoption? David: Reproducible builds need to be: I think there's a snowball effect. Once many projects' packages are reproducible, it will be easier to convince other projects to make their packages reproducible. I also think there should be some prioritization. If a package is in wide use (e.g., part of the minimum set of packages for a widely-used Linux distribution or framework), its reproducibility should be a special focus. If a package is vital for supporting some societally important critical infrastructure (e.g., running dams), it should also be considered important. You can then work on the ones that are less important over time.
Holger: How is the Best Practices Badge going? How many projects are participating and how many are missing? David: It's going very well. You can see some automatically-generated statistics, showing we have over 5,000 projects, adding more than 1/day on average. We have more than 900 projects that have earned at least the passing badge level.
Holger: How many of the projects participating in the Best Practices badge are engaging with reproducible builds? David: As of this writing there are 168 projects that report meeting the reproducible builds criterion. That's a relatively small percentage of projects. However, note that this criterion (labelled build_reproducible) is only required for the gold badge. It's not required for the passing or silver level badge. Currently we've been strategically focused on getting projects to at least earn a passing badge, and less on earning silver or gold badges. We would love for all projects to earn a silver or gold badge, of course, but our theory is that projects that can't even earn a passing badge present the most risk to their users. That said, there are some projects we especially want to see implementing higher badge levels. Those include projects that are very widely used, so that vulnerabilities in them can impact many systems. Examples of such projects include the Linux kernel and curl. In addition, some projects are used within systems where it's important to society that they not have serious security vulnerabilities. Examples include projects used by chemical manufacturers, financial systems and weapons. We definitely encourage any of those kinds of projects to earn higher badge levels.
Holger: Many thanks for this interview, David, and for all of your work at the Linux Foundation and elsewhere!




For more information about the Reproducible Builds project, please see our website at reproducible-builds.org. If you are interested in ensuring the ongoing security of the software that underpins our civilisation and wish to sponsor the Reproducible Builds project, please reach out to the project by emailing contact@reproducible-builds.org.

12 December 2022

Russ Allbery: Review: The Unbroken

Review: The Unbroken, by C.L. Clark
Series: Magic of the Lost #1
Publisher: Orbit
Copyright: March 2021
ISBN: 0-316-54267-9
Format: Kindle
Pages: 490
The Unbroken is the first book of a projected fantasy trilogy. It is C.L. Clark's first novel. Lieutenant Touraine is one of the Sands, the derogatory name for the Balladairan Colonial Brigade. She and the others of her squad are conscript soldiers, kidnapped by the Balladairan Empire from their colonies as children and beaten into "civilized" behavior by Balladairan training. They fought in the Balladairan war against the Taargens. Now, they've been reassigned to El-Wast, capital city of Qazāl, the foremost of the southern colonies. The place where Touraine was born, from which she was taken at the age of five. Balladaire is not France and Qazāl is not Algeria, but the parallels are obvious and strongly implied by the map and the climates. Touraine and her squad are part of the forces accompanying Princess Luca, the crown princess of the Balladairan Empire, who has been sent to take charge of Qazāl and quell a rebellion. Luca's parents died in the Withering, the latest round of a recurrent plague that haunts Balladaire. She is the rightful heir, but her uncle rules as regent and is reluctant to give her the throne. Qazāl is where she is to prove herself. If she can bring the colony in line, she can prove that she's ready to rule: her birthright and her destiny. The Qazāli are uninterested in being part of Luca's grand plan of personal accomplishment. She steps off her ship into an assassination attempt, foiled by Touraine's sharp eyes and quick reactions, which brings the Sand to the princess's attention. Touraine's reward is to be assigned the execution of the captured rebels, one of whom recognizes her and names her mother before he dies. This sets up the core of the plot: Qazāli rebellion against an oppressive colonial empire, Luca's attempt to use the colony as a political stepping stone, and Touraine caught in between. One of the reasons why I am happy to see increased diversity in SFF authors is that the way we tell stories is shaped by our cultural upbringing. I was taught to tell stories about colonialism and rebellion in a specific ideological shape. It's hard to describe briefly, but the core idea is that being under the rule of someone else is unnatural as well as being an injustice. It's a deviation from the way the world should work, something unexpected that is inherently unstable. Once people unite to overthrow their oppressors, eventual success is inevitable; it's not only right or moral, it's the natural path of history. This is what you get when you try to peel the supremacy part away from white supremacy but leave the unshakable self-confidence and bedrock assumption that the universe cares what we think. We were also taught that rebellion is primarily ideological. One may be motivated by personal injustice, but the correct use of that injustice is to subsume it into concepts such as freedom and democracy. Those concepts are more "real" in some foundational sense, more central to the right functioning of the world, than individual circumstance. When the now-dominant group tells stories of long-ago revolution, there is no personal experience of oppression and survival in which to ground the story; instead, it's linked to anticipatory fear in the reader, to the idea that one's privileges could be taken away by a foreign oppressor and that the counter to this threat is ideological unity. Obviously, not every white fantasy author uses this story shape, but the tendency runs deep because we're taught it young.
You can see it everywhere in fantasy, from Lord of the Rings to Tigana. The Unbroken uses a much different story shape, and I don't think it's a coincidence that the author is Black. Touraine is not sympathetic to the Qaz li. These are not her people and this is not her life. She went through hell in Balladairan schools, but she won a place, however tenuous. Her personal role model is General Cantic, the Balladairan Blood General who was also one of her instructors. Cantic is hard as nails, unforgiving, unbending, and probably a war criminal, but also the embodiment of a military ethic. She is tough but fair with the conscript soldiers. She doesn't put a stop to their harassment by the regular Balladairan troops, but neither does she let it go too far. Cantic has power, she knows how to keep it, and there is a place for Touraine in Cantic's world. And, critically, that place is not just hers: it's one she shares with her squad. Touraine's primary loyalty is not to Balladaire or to Qaz l. It's to the Sands. Her soldiers are neither one thing nor the other, and they disagree vehemently among themselves about what Qaz l and their other colonial homes should be to them, but they learned together, fought together, and died together. That theme is woven throughout The Unbroken: personal bonds, third and fourth loyalties, and practical ethics of survival that complicate and contradict simple dichotomies of oppressor and oppressed. Touraine is repeatedly offered ideological motives that the protagonist in the typical story shape would adopt. And she repeatedly rejects them for personal bonds: trying to keep her people safe, in a world that is not looking out for them. The consequence is that this book tears Touraine apart. She tries to walk a precarious path between Luca, the Qaz li, Cantic, and the Sands, and she falls off that path a lot. Each time I thought I knew where this book was going, there's another reversal, often brutal. I tend to be a happily-ever-after reader who wants the protagonist to get everything they need, so this isn't my normal fare. The amount of hell that Touraine goes through made for difficult reading, worse because much of it is due to her own mistakes or betrayals. But Clark makes those decisions believable given the impossible position Touraine is in and the lack of role models she has for making other choices. She's set up to fail, and the price of small victories is to have no one understand the decisions that she makes, or to believe her motives. Luca is the other viewpoint character of the book (and yes, this is also a love affair, which complicates both of their loyalties). She is the heroine of a more typical genre fantasy novel: the outsider princess with a physical disability and a razor-sharp mind, ambitious but fair (at least in her own mind), with a trusted bodyguard advisor who also knew her father and a sincere desire to be kinder and more even-handed in her governance of the colony. All of this is real; Luca is a protagonist, and the reader is not being set up to dislike her. But compared to Touraine's grappling with identity, loyalty, and ethics, Luca is never in any real danger, and her concerns start to feel too calculated and superficial. It's hard to be seriously invested in whether Luca proves herself or gets her throne when people are being slaughtered and abused. This, I think, is the best part of this book. 
Clark tells a traditional ideological fantasy of learning to be a good ruler, but she puts it alongside a much deeper and more complex story of multi-faceted oppression. She has the two protagonists fall in love with each other and challenges them to understand each other, and Luca does not come off well in this comparison. Touraine is frustrated, impulsive, physical, and sometimes has catastrophically poor judgment. Luca is analytical and calculating, and in most ways understands the political dynamics far better than Touraine. We know how this story usually goes: Luca sees Touraine's brilliance and lifts her out of the ranks into a role of importance and influence, which Touraine should reward with loyalty. But Touraine's world is more real, more grounded, and more authentic, and both Touraine and the reader know what Luca could offer is contingent and comes with a higher price than Luca understands.

(Incidentally, the cover of The Unbroken, designed by Lauren Panepinto with art by Tommy Arnold, is astonishingly good at capturing both Touraine's character and the overall feeling of the book. Here's a larger version.)

The writing is good but uneven. Clark loves reversals, and they did keep me reading, but I think there were too many of them. By the end of the book, the escalation of betrayals and setbacks was more exhausting than exciting, and I'd stopped trusting anything good would last. (Admittedly, this is an accurate reflection of how Touraine felt.) Touraine's inner monologue also gets a bit repetitive when she's thrashing in the jaws of an emotional trap. I think some of this is first-novel problems of over-explaining emotional states and character reasoning, but these problems combine to make the book feel a bit over-long.

I'm also not in love with the ending. It's perhaps the one place in the book where I am more cynical about the politics than Clark is, although she does lay the groundwork for it. But this book is also full of places small and large where it goes a different direction than most fantasy and is better for it. I think my favorite small moment is Touraine's quiet refusal to defend herself against certain insinuations. This is such a beautiful bit of characterization; she knows she won't be believed anyway, and refuses to demean herself by trying.

I'm not sure I can recommend this book unconditionally, since I think you have to be in the mood for it, but it's one of the most thoughtful and nuanced looks at colonialism and rebellion I can remember seeing in fantasy. I found it frustrating in places, but I'm also still thinking about it. If you're looking for a political fantasy with teeth, you could do a lot worse, although expect to come out the other side a bit battered and bruised.

Followed by The Faithless, and I have no idea where Clark is going to go with the second book. I suppose I'll have to read and find out.

Content note: In addition to a lot of violence, gore, and death, including significant character death, there's also a major plague. If you're not feeling up to reading about panic caused by contagious illness, proceed with caution.

Rating: 7 out of 10

8 December 2022

Russell Coker: Thinkpad X1 Carbon Gen5

Gen1 Since February 2018 I have been using a Thinkpad X1 Carbon Gen1 [1] as my main laptop. Generally I've been very happy with it: it's small and light, has good performance for web browsing etc, and with my transition to doing all compiles etc on servers it works well. When I wrote my original review I was unhappy with the keyboard, but I got used to that and found it to be reasonably good. The things that I have found as limits on it are the display resolution, as 1600*900 isn't that great by modern standards (most phones are a lot higher resolution), the size (slightly too large for the pocket of my Scott e Vest [2] jacket), and the lack of USB-C. Modern laptops can charge via USB-C/Thunderbolt while also doing USB and DisplayPort video over the same cable. USB-C monitors which support charging a laptop over the same cable as used for video input are becoming common (last time I checked the Dell web site, for many models of monitor there was a USB-C one that cost about $100 more). I work at a company with lots of USB-C monitors and docks, so being able to use my personal laptop with the same displays when on breaks is really handy. A final problem with the Gen1 is that it has a proprietary and unusual connector for the SSD, which means that a replacement SSD costs about what I paid for the entire laptop. Ever since the SSD gave a BTRFS checksum error I've been thinking of replacing it.

Choosing a Replacement The Gen5 is the first Thinkpad X1 Carbon to have USB-C. For work I had used a Gen6 which was quite nice [3]. But it didn't seem to offer much over the Gen5. So I started looking for cheap Thinkpad X1 Carbons of Gen5+.

A Cheap? Gen5 In July I saw an ebay advert for a Gen5 with FullHD display for $370 or nearest offer, with the downside being that the BIOS password had been lost. I offered $330 and the seller accepted; in retrospect that was unusually cheap and should have been a clue that I needed to do further investigation. It turned out that resetting the BIOS password is unusually difficult as it's in the TPM, so the system would only boot Windows. When I learned that, I should have sold the laptop to someone who wanted to run Windows and bought another. Instead I followed some instructions on the Internet about entering a wrong password multiple times to get to a password recovery screen; instead the machine locked up entirely and became unusable for Windows (so don't do that). Then I looked for ways of fixing the motherboard. The cheapest was $75.25 for a replacement BIOS flash chip that had a BIOS that didn't check the validity of passwords. The aim was to solder that on, set a new password (with any random text being accepted as the old password), then solder the old one back on for normal functionality. It turned out that I'm not good at fine soldering; after I had hacked at it, a friend diagnosed the chip and motherboard as probably both damaged (he couldn't get it going). The end solution was that my friend found a replacement motherboard for $170 from China. This gave a total cost of $575.25 for the laptop, which is more than the usual price of a Gen6 and more than I expected to pay. In the past when advocating buying second hand or refurbished laptops, people would say "what happens if you get one that doesn't work properly?". The answer to that question is that I paid a lot less than the new cost of $2700+ for a Thinkpad X1 Carbon and got a computer that does everything I need.
One of the advantages of getting a cheap laptop is that I won't be so unhappy if I happen to drop it.

A Cheap Gen6 After the failed experiment with a replacement BIOS on the Gen5 I was considering selling it for scrap. So I bought a Gen6 from Australian Computer Traders via Amazon for $390 in August. The advert clearly stated that it was for a laptop with USB-C and Thunderbolt (Gen5+ features), but they shipped me a Gen4 that didn't even have USB-C. They eventually refunded me but I will try to avoid buying from them again.

Finally Working The laptop I now have has an i5-6300U CPU that rates 3242 on cpubenchmark.net. My Gen1 Thinkpad has an i7-3667U CPU that rates 2378 on cpubenchmark.net; note that the cpubenchmark.net people have rescaled their benchmark since my review of the Gen1 in 2018. So according to the benchmarks my latest laptop is about 36% faster for CPU operations. Not much of a difference when comparing systems manufactured in 2012 and 2017! According to the benchmarks a medium to high end recent CPU will be more than 10* faster than the one in my Gen5 laptop, but such a CPU would cost more than my laptop cost. The storage is a 256G NVMe device that can do sustained reads at 900MB/s; that's not even twice as fast as the SSD in my Gen1 laptop, although NVMe is designed to perform better for small IO. It has 2*USB-C ports, both of which can be used for charging, which is a significant benefit over the Gen6 I had for work in 2018 which only had one. I don't know why Lenovo made Gen6 machines that were lesser than Gen5 in such an important way. It can power my Desklab portable 4K monitor [4] but won't send a DisplayPort signal over the same USB-C cable. I don't know if this is a USB-C cable issue or some problem with the laptop recognising displays. It works nicely with Dell USB-C monitors and docks that power the laptop over the same cable as used for DisplayPort. Also the HDMI port works with 4K monitors, so at worst I could connect my Desklab monitor via a USB-C cable for power and HDMI for data. The inability to change the battery without disassembly is still a problem, but hopefully USB-C connected batteries capable of charging such a laptop will become affordable in the near future, and I have had some practice at disassembling this laptop. It still has the Ethernet dongle annoyance, and of course the seller didn't include that. But USB ethernet devices are quite good and I have a few of them. In conclusion it's worth the $575.25 I paid for it and would have been even better value for money if I had been a bit smarter when buying. It meets the initial criteria of USB-C power and display and of fitting in my jacket pocket, as well as being slightly better than my old laptop in every other way.

30 November 2022

Russ Allbery: Review: The Fed Unbound

Review: The Fed Unbound, by Lev Menand
Publisher: Columbia Global Reports
Copyright: 2022
ISBN: 1-7359137-1-5
Format: Kindle
Pages: 156
The Fed Unbound is a short non-fiction exploration of US Federal Reserve actions to reduce systemic risk caused by shadow banking. Its particular focus is the role of the Fed from the 2008 financial crisis to the present, including the COVID shock, but it includes a history of what Menand calls the "American Monetary Settlement," the political compromise that gave rise to the Federal Reserve.

In Menand's view, a central cause of instability in the US financial system (and, given the influence of the dollar system, the world financial system as well) is shadow banking: institutions that act as banks without being insured like banks or subject to bank regulations. A bank, in this definition, is an organization that takes deposits. I'm simplifying somewhat, but what distinguishes a deposit from a security or an investment is that deposits can be withdrawn at any time, or at least on very short notice. When you want to spend the money in your checking account, you don't have to wait for a three-month maturity period or pay an early withdrawal penalty. You simply withdraw the money, with immediate effect. This property is what makes deposits "money," rather than something that you can (but possibly cannot) sell for money, such as stocks or bonds.

Most people are familiar with the basic story of how banks work. Essentially no bank simply takes people's money and puts it in a vault until the person wants it again. If that were the case, you would need to pay the bank to store your money. Instead, a bank takes in deposits and then lends some portion of that money out to others. Those loans, for things like cars or houses or credit card spending, come due over time, with interest. The interest rate the bank charges on the loans is much higher than the rate it has to pay on its deposits, and it pockets the difference. The problem with this model, of course, is that the bank doesn't have your money, so if all the depositors go to the bank at the same time and ask for their money, the bank won't be able to repay them and will collapse. (See, for example, the movie It's a Wonderful Life, or Mary Poppins, or any number of other movies or books.)

Retail banks are therefore subject to stringent regulations designed to promote public trust and to ensure that traditional banking is a boring (if still lucrative) business. Banks are also normally insured, which in the US means that if they do experience a run, federal regulators will step in, shut down the bank in an orderly fashion, and ensure every depositor gets their money back (at least up to the insurance limit). Alas, if you thought people would settle for boring work that makes a comfortable profit, you don't know the financial industry. Highly-regulated insured deposits are less lucrative than acting like a bank without all of those restrictions and rules and deposit insurance payments. As Menand relates in his brief history of US banking, financial institutions constantly invent new forms of deposits with similar properties but without all the pesky rules: eurodollars (which have nothing to do with the European currency), commercial paper, repo, and many others. These forms of deposits are primarily used by large institutions like corporations. The details vary, but they tend to be prone to the same fundamental instability as bank deposits: if there's a run on the market, there may not be enough liquidity for everyone to withdraw their money at once.
Unlike bank deposits, though, there is no insurance, no federal regulator to step in and make depositors whole, and much less regulation to ensure that runs are unlikely. Instead, there's the Federal Reserve, which has increasingly become the bulwark against liquidity crises among shadow banks. This happened in 2008 during the financial crisis (which Menand argues can be seen as a shadow bank run sparked by losses on mortgage securities), and again at a larger scale in 2020 during the initial COVID crisis. Menand is clear that these interventions from the Federal Reserve were necessary. The alternative would have been an uncontrolled collapse of large sections of the financial system, with unknown consequences. But the Fed was not intended to perform those types of interventions. It has no regulatory authority to reform the underlying financial systems to make them safer, remove executives who failed to maintain sufficient liquidity for a crisis, or (as is standard for all traditional US banks) prohibit combining banking and more speculative investment on the same balance sheet. What the Federal Reserve can do, and did, is function as a buyer of last resort, bailing out shadow banks by purchasing assets with newly-created money. This works, in the sense that it averts the immediate crisis, but it creates other distortions. Most importantly, constant Fed intervention doesn't create an incentive to avoid situations that require that intervention; if anything, it encourages more dangerous risk-taking. The above, plus an all-too-brief history of the politics of US banking, is the meat of this book. It's a useful summary, as far as it goes, and I learned a few new things. But I think The Fed Unbound is confused about its audience. This type of high-level summary and capsule history seems most useful for people without an economics background and who haven't been following macroeconomics closely. But Menand doesn't write for that audience. He assumes definitions of words like "deposits" and "money" that are going to be confusing or even incomprehensible to the lay reader. For example, Menand describes ordinary retail banks as creating money, even saying that a bank loans money by simply incrementing the numbers in a customer's deposit account. This is correct in the technical economic definition of money (fractional reserve banking effectively creates new money), but it's going to sound to someone not well-versed in the terminology as if retail banks can create new dollars out of the ether. That power is, of course, reserved for the Federal Reserve, and indeed is largely the point of its existence. Much of this book relies on a very specific definition of money and money supply that will only be familiar to those with economics training. Similarly, the history of the Federal Reserve is interesting but slight, and at no point does Menand explain clearly how the record-keeping between it and retail banks works, or what the Fed's "balance sheet" means in practice. I realize this book isn't trying to give a detailed description or history of the Federal Reserve system, but the most obvious audience is likely to flounder at the level of detail Menand provides. Perhaps, therefore, this book is aimed at an audience already familiar with macroeconomics? But, if so, I'm not sure it says anything new. I follow macroeconomic policy moderately closely and found most of Menand's observations obvious and not very novel. 
There were tidbits here and there that I hadn't understood, but my time would probably have been better invested in another book. Menand proposes some reforms, but they mostly consist of "Congress should do its job and not create situations where the Federal Reserve has to act without the right infrastructure and political oversight," and, well, yes. It's hard to disagree with that, and it's also hard to see how it will ever happen. It's far too convenient to outsource such problems to central banking, where they are hidden behind financial mechanics that are incomprehensible to the average voter. This is an important topic, but I don't think this is the book to read about it. If you want a clearer and easier-to-understand role of the Federal Reserve in shadow banking crises, read Crashed instead. If you want to learn more about how retail bank regulation works, and hear a strong case for why the same principles should be applied to shadow banks, see Sheila Bair's Bull by the Horns. I'm still looking for a great history and explainer of the Federal Reserve system as a whole, but that is not this book. Rating: 5 out of 10

6 November 2022

Michael Ablassmeier: virtnbdbackup in unstable/bookworm

Besides several bugfixes, the latest version now supports using higher compression levels and logging to the syslog facility. I also finished packaging, and official packages are now available.

4 November 2022

Alastair McKinstry: €1.3 billion announced for new Forestry Support

1.3 billion announced for new Forestry Support Funds to be delivered through new Forestry Programme Premiums for planting trees to be increased by between 46% and 66% and extended to 20 years for farmers #GreensInGovernment The Taoiseach, Miche l Martin TD, Minister of State with responsibility for Forestry, Senator Pippa Hackett, and Minister for Agriculture, Food and the Marine, Charlie McConalogue T.D today announced a proposed investment by the Government of 1.3 billion in Irish forestry. The funding will be for the next national Forestry Programme and represents the largest ever investment by an Irish Government in tree-planting. The programme will now be the subject of state-aid approval by the European Commission. The Taoiseach said: This commitment by the Government to such a substantial financial package reflects the seriousness with which we view the climate change and biodiversity challenges, which affect all of society. Forestry is at the heart of delivering on our sustainability goals and strong support is needed to encourage engagement from all our stakeholders in reaching our objectives. Minister Hackett said: I m delighted to have secured a package of 1.318 billion for forestry. This will support the biggest and best-funded Forestry Programme ever in Ireland. It comes at an appropriate time, given the urgency of taking climate mitigation measures. Planting trees is one of the most effective methods of tackling climate change as well as contributing to improved biodiversity and water quality. One of my main aims is to re-engage farmers in afforestation. I m delighted therefore to be proposing a new 20-year premium term exclusively for farmers, as well as introducing a new small-scale native woodland scheme which will allow farmers to plant up to 1 hectare of native woodland on farmland and along watercourses outside of the forestry licensing process. Minister McConalogue said: Today we commit to providing unprecedented incentives to encourage the planting of trees that can provide a valuable addition to farm income and help to meet national climate and biodiversity objectives. This funding guarantees continued payments to those forest owners who planted under the current scheme and who are still in receipt of premiums. It also offers new and improved financial supports to those who undertake planting and sustainable forest management under the new Programme. We intend to increase premiums for planting trees by between 46% and 66% and to extend the premium period from 15 to 20 years for farmers. "We are approaching a new and exciting period for forestry in Ireland. The new Forestry Programme will drive a new and brighter future for forestry, for farmers and for our climate. The proposed new Forestry Programme is currently out to public consultation as part of the Strategic Environmental Assessment and Appropriate Assessment process. The Programme is the main implementation mechanism for the new Forest Strategy (2023 -2030) which reflects the ambitions contained in the recently published Shared National Vision for Trees, Woods and Forests in Ireland until 2050. The public consultation closes on 29th November, 2022 and any changes which result from this process will be incorporated into the Programme and the Forest Strategy. Minister Hackett commented: The draft Forestry Programme and Forest Strategy are the product of extensive stakeholder consultation and feedback, and both documents are open to public consultation for the next number of weeks. 
I would strongly encourage all interested parties to engage with the consultation in advance of the Strategy and Programme being finalised. The new Programme is built around the principle of right trees in the right places for the right reasons with the right management. It aims to deliver more diverse forest which will meet multiple societal objectives, economic, social and environmental. Higher grant rates for forest establishment are also proposed with increases of approximately 20% to reflect rising living costs. The new one hectare native tree area scheme will also make it easier for landowners who wish to plant small areas of trees on their farm. The Taoiseach concluded I welcome this milestone and I believe that this funding injection will be an important catalyst in delivering on the ambition outlined in the new Forest Strategy. Our environmental challenges are huge but so is our commitment to overcoming them and this Forestry Programme is key to delivering so many of our priorities. The new Programme will be 100% Exchequer funded and is subject to State Aid approval from the EU Commission. The Department is in contact with the Commission in relation to this approval which is a rigorous process.

Louis-Philippe Véronneau: Book Review: Chokepoint Capitalism, by Rebecca Giblin and Cory Doctorow

Two weeks ago, I had the chance to go see Cory Doctorow at my local independent bookstore, in Montréal. He was there to present his latest essay, co-written with Rebecca Giblin [1]. Titled Chokepoint Capitalism: How Big Tech and Big Content Captured Creative Labor Markets and How We'll Win Them Back, it focuses on the impact of monopolies and monopsonies (more on this later) on creative workers. The book is divided in two main parts.

A picture of the book cover

Although Doctorow is known for his strong political stances, I have to say I'm quite surprised by the quality of the research Giblin and he did for this book. They both show a pretty advanced understanding of the market dynamics they look at, and even though most of the solutions they propose aren't new or groundbreaking, they manage to be convincing and clear. That is to say, you certainly don't need to be an economist to understand or enjoy this book :)

As I have mentioned before, the book heavily criticises monopolies, but also monopsonies, a market structure that has only one buyer (instead of one seller). I find this quite interesting, as whereas people are often familiar with the concept of monopolies, monopsonies are frequently overlooked. The classic example of a monopsony is a labor market with a single employer: there is a multitude of workers trying to sell their labor power, but in the end, working conditions are dictated by the sole employer, who gets to decide who has a job and who hasn't. Mining towns are good real-world examples of monopsonies.

In the book, the authors argue most of the contemporary work produced by creative workers (especially musicians and writers) is sold to monopsonies and oligopsonies, like Amazon [2] or major music labels. This creates a situation where the consumers are less directly affected by the lack of competition in the market (they often get better prices), but where creators have an increasingly hard time making ends meet. Not only this, but natural monopsonies [3] are relatively rare, making the case for breaking the existing ones even stronger. Apart from the evident need to actually start applying (the quite good) antitrust laws in the USA, the authors put forward a number of other solutions.

Overall, I found this book quite enjoyable and well written. Since I am not a creative worker myself and don't experience first-hand the hardships presented in the book, it was the occasion for me to delve more deeply in this topic. Chances are I'll reuse some of the exposés in my classes too.

  1. Professor at the Melbourne Law School and Director of the Intellectual Property Research Institute of Australia, amongst other things. More on her here.
  2. Amazon owns more than 50% of the US physical book retail market and has an even higher market share for ebooks and audiobooks (via Audible). Not only this, but with the decline of the physical book market, audiobooks are an increasingly important source of revenue for authors.
  3. Natural monopolies happen when it does not make economic sense for multiple enterprises to compete in a market. Critical infrastructures, like water supply or electricity, make for good examples of natural monopolies. It simply wouldn't be efficient to have 10 separate electrical cables connecting your house to 10 separate electric grids. In my opinion, such monopolies are acceptable (and even desirable), as long as they are collectively owned, either by the State or by local entities (municipalities, non-profits, etc.).

1 November 2022

Paul Tagliamonte: Decoding LDPC: k-Bit Brute Forcing

Before you go on: I've been warned off implementing this in practice on a few counts; namely, the space tradeoff isn't worth it, and it's unlikely to correct meaningful errors. I'm going to leave this post up, but please do take the content with a very large grain of salt!
My initial efforts to build a PHY and Data Link layer from scratch using my own code have been progressing nicely since the initial BPSK based protocol I've documented under the PACKRAT series. As part of that, I've been diving deep into FEC, and in particular, LDPC. I won't be able to do an overview of LDPC justice in this post; with any luck that'll come in a later post as part of the RATPACK series, so some knowledge is assumed. As such this post is less useful for those looking to learn about LDPC, and a bit more targeted to those who enjoy talking and thinking about FEC.
Hey, heads up! - This post contains extremely unvalidated and back of the napkin quality work without any effort to prove this out generally. Hopefully this work can be of help to others, but please double check anything below if you need it for your own work!
While implementing LDPC, I've gotten an encoder and checker working, enough to use LDPC like a checksum. The next big step is to write a Decoder, which can do error correction. The two popular approaches for the actual correction that I've seen while reading about LDPC are Belief Propagation, and some class of linear programming that I haven't dug into yet. I'm not thrilled at how expensive this all is in software, so while implementing the stack I've been exploring every shady side alley to try and learn more about how encoders and decoders work, both in theory and in practice.

Processing an LDPC Message Checking if a message is correct is fairly straightforward with LDPC (as with encoding, I'll note). As a quick refresher: given the LDPC H (check) matrix of width N, you can check your message vector (msg) of length N by multiplying H and msg, and checking if the output vector is all zero.
 // scheme contains our G (generator) and
 // H (check) matrices.
 scheme := {G: Matrix{...}, H: Matrix{...}}

 // msg contains our LDPC message (data and
 // check bits).
 msg := Vector{...}

 // N is also the length of the encoded
 // msg vector after check bits have been
 // added.
 N := scheme.G.Width

 // Now, let's generate our 'check' vector.
 ch := Multiply(scheme.H, msg)
We can now see if the message is correct or not:
 // if the ch vector is all zeros, we know
 // that the message is valid, and we don't
 // need to do anything.
 if ch.IsZero() {
     // handle the case where the message
     // is fine as-is.
     return ...
 }

 // Expensive decode here
This is great for getting a thumbs up / thumbs down on the message being correct, but correcting errors still requires pulling the LDPC matrix values out of the G (generator) matrix, building a bipartite graph, and iteratively reprocessing the bit values, until constraints are satisfied and the message has been corrected. This got me thinking: what is the output vector when it's not all zeros? Since 1 values in the output vector indicate consistency problems in the message bits as they relate to the check bits, I wondered if this could be used to speed up my LDPC decoder. It appears to work, so this post is half an attempt to document this technique before I put it in my hot path, and half a plea for those who do like to talk about FEC to tell me what name this technique actually is.
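To make that observation concrete, here is a small, self-contained Go example of my own (not code from this post's stack): it uses the tiny Hamming(7,4) code rather than a real LDPC matrix, but shows the same property, namely that for a single flipped bit the check vector equals the matching column of H, so its bit pattern identifies the error position.

 package main

 import "fmt"

 // Toy parity-check matrix for Hamming(7,4), bit order
 // [d1 d2 d3 d4 p1 p2 p3]. Every column is distinct and
 // non-zero, so every 1-bit error has a unique syndrome.
 var H = [3][7]int{
     {1, 1, 0, 1, 1, 0, 0},
     {1, 0, 1, 1, 0, 1, 0},
     {0, 1, 1, 1, 0, 0, 1},
 }

 // syndrome multiplies H by msg over GF(2).
 func syndrome(msg [7]int) [3]int {
     var s [3]int
     for r := 0; r < 3; r++ {
         for c := 0; c < 7; c++ {
             s[r] ^= H[r][c] & msg[c]
         }
     }
     return s
 }

 func main() {
     // A valid codeword: data bits 1011, parity bits 010.
     code := [7]int{1, 0, 1, 1, 0, 1, 0}
     fmt.Println(syndrome(code)) // [0 0 0]: passes the check

     // Flip bit 2: the syndrome is column 2 of H.
     code[2] ^= 1
     fmt.Println(syndrome(code)) // [0 1 1]: points at the flipped bit
 }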

k-Bit Brute Forcing Given that the output Vector's non-zero bit pattern is set due to the position of errors in the message vector, let's use that fact to build up a table of k-Bit errors that we can index into.
 // for clarity's sake, the Vector
 // type is being used as the lookup
 // key here, even though it may
 // need to be a hash or string in
 // some cases.
 idx := map[Vector]int{}

 for i := 0; i < N; i++ {
     // Create a vector of length N
     v := Vector{}
     v.FlipBit(i)

     // Now, let's use the generator matrix to encode
     // the data with checksums, and then use the
     // check matrix on the message to figure out what
     // bit pattern results
     ev := Multiply(scheme.H, Multiply(v, scheme.G))

     idx[ev] = i
 }
This can be extended to multiple bits (hence: k-Bits), but I've only done one here for illustration. Now that we have our idx mapping, we can go back to the hot path and check the incoming message data:
 // if the ch vector is all zeros, we know
 // that the message is valid, and we don't
 // need to do anything.
 if ch.IsZero() {
     // handle the case where the message
     // is fine as-is.
     return ...
 }

 errIdx, ok := idx[ch]
 if ok {
     msg.FlipBit(errIdx)

     // Verify the LDPC message using
     // H again here.
     return ...
 }

 // Expensive decode here
Since map lookups wind up a heck of a lot faster than message-passing bit state, the hope here is that this will short-circuit easy-to-solve errors for k-Bits, for some value of k that the system memory can tolerate.
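To get a feel for how quickly that memory budget gets eaten, here is a back-of-the-envelope Go snippet of my own (these are my numbers, not the post's) counting the distinct k-bit error patterns, and hence the table entries needed, assuming the 2048-bit block length of the 802.3an code mentioned below:

 package main

 import (
     "fmt"
     "math/big"
 )

 func main() {
     // The number of distinct k-bit error patterns in an n-bit
     // block is C(n, k), which is also the number of idx entries
     // a k-bit lookup table needs.
     n := int64(2048)
     for k := int64(1); k <= 3; k++ {
         fmt.Printf("k=%d: %s patterns\n", k, new(big.Int).Binomial(n, k))
     }
     // k=1:          2048 patterns
     // k=2:     2,096,128 patterns
     // k=3: 1,429,559,296 patterns -- memory runs out fast
 }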

Does this work? Frankly I have no idea. I've written a small program and brute forced single-bit errors in all bit positions using random data to start with, and I've not been able to find any collisions in the 1-bit error set, using the LDPC matrix from 802.3an-2006. Even if I were to find a collision for a higher-order k-Bit value, I'm tempted to continue with this approach, and treat each set of bits in the Vector's bin (like a hash-table), checking the LDPC validity after each bit set in the bin. As long as the collision rate is small enough, it should be possible to correct k-Bits of error faster than the more expensive Belief Propagation approach. That being said, I'm not entirely convinced collisions will be very common, but it'll take a bit more time working through the math to say that with any confidence. Have you seen this approach called something official in publications? See an obvious flaw in the system? Send me a tip, please!
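For what it's worth, here is a sketch (mine, not from the post) of how the table build might extend to k = 2, reusing the post's hypothetical Vector and Multiply helpers. Note that it applies H to the bare error pattern rather than to an encoded message; because H times any valid codeword is zero, the syndrome of a corrupted message depends only on the error pattern, so this keys the table the same way:

 // Hypothetical k=2 extension, same placeholder types as above.
 idx2 := map[Vector][2]int{}

 for i := 0; i < N; i++ {
     for j := i + 1; j < N; j++ {
         // Error pattern with exactly two bits flipped.
         e := Vector{}
         e.FlipBit(i)
         e.FlipBit(j)

         // Syndrome of that 2-bit error pattern; H times a valid
         // codeword is zero, so only the error pattern matters.
         ev := Multiply(scheme.H, e)

         // A real implementation would need to detect collisions,
         // both within this table and against the 1-bit table.
         idx2[ev] = [2]int{i, j}
     }
 }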

28 October 2022

Antoine Beaupré: Debating VPN options

In my home lab(s), I have a handful of machines spread around a few points of presence, with mostly residential/commercial cable/DSL uplinks, which means, generally, NAT. This makes monitoring those devices kind of impossible. While I do punch holes for SSH, using jump hosts gets old quick, so I'm considering adding a virtual private network (a "VPN", not a VPN service) so that all machines can be reachable from everywhere. I see three ways this can work:
  1. a home-made Wireguard VPN, deployed with Puppet
  2. a Wireguard VPN overlay, with Tailscale or equivalent
  3. IPv6, native or with tunnels
So which one will it be?

Wireguard Puppet modules As is (unfortunately) typical with Puppet, I found multiple different modules to talk with Wireguard.
| module    | score | downloads | release    | stars | watch | forks | license  | docs | contrib | issue | PR   | notes |
| halyard   | 3.1   | 1,807     | 2022-10-14 | 0     | 0     | 0     | MIT      | no   |         |       |      | requires firewall and Configvault_Write modules? |
| voxpupuli | 5.0   | 4,201     | 2022-10-01 | 2     | 23    | 7     | AGPLv3   | good | 1/9     | 1/4   | 1/61 | optionally configures ferm, uses systemd-networkd, recommends systemd module with manage_systemd to true, purges unknown keys |
| abaranov  | 4.7   | 17,017    | 2021-08-20 | 9     | 3     | 38    | MIT      | okay | 1/17    | 4/7   | 4/28 | requires pre-generated private keys |
| arrnorets | 3.1   | 16,646    | 2020-12-28 | 1     | 2     | 1     | Apache-2 | okay | 1       | 0     | 0    | requires pre-generated private keys? |
The voxpupuli module seems to be the most promising. The abaranov module is more popular and has more contributors, but it has more open issues and PRs. More critically, the voxpupuli module was written after the abaranov author didn't respond to a PR from the voxpupuli author trying to add more automation (namely private key management). It looks like setting up a wireguard network would be as simple as this on node A:
wireguard::interface { 'wg0':
  source_addresses => ['2003:4f8:c17:4cf::1', '149.9.255.4'],
  public_key       => $facts['wireguard_pubkeys']['nodeB'],
  endpoint         => 'nodeB.example.com:53668',
  addresses        => [{'Address' => '192.168.123.6/30',}, {'Address' => 'fe80::beef:1/64'},],
}
This configuration comes from this pull request I sent to the module to document how to use that fact. Note that the addresses used here are examples that shouldn't be reused and do not conform to RFC5737 ("IPv4 Address Blocks Reserved for Documentation", 192.0.2.0/24 (TEST-NET-1), 198.51.100.0/24 (TEST-NET-2), and 203.0.113.0/24 (TEST-NET-3)) or RFC3849 ("IPv6 Address Prefix Reserved for Documentation", 2001:DB8::/32), but that's another story. (To avoid bootstrapping problems, the resubmit-facts configuration could be used so that other nodes' facts are more immediately available.) One problem with the above approach is that you explicitly need to take care of routing, network topology, and addressing. This can get complicated quickly, especially if you have lots of devices, behind NAT, in multiple locations (which is basically my life at home, unfortunately). Concretely, basic Wireguard only supports one peer behind NAT. There are some workarounds for this, but they generally imply a relay server of some sort, or some custom registry; it's kind of a mess. And this is where overlay networks like Tailscale come in.

Tailscale Tailscale is basically designed to deal with this problem. It's not fully opensource, but pretty close, and they have an interesting philosophy behind that. The client is opensource, and there is an opensource version of the server side, called headscale. They have recently (late 2022) hired the main headscale developer while promising to keep supporting it, which is pretty amazing. Tailscale provides an overlay network based on Wireguard, where each peer basically has a peer-to-peer encrypted connection, with automatic key rotation. They also ship a multitude of applications and features on top of that like file sharing, keyless SSH access, and so on. The authentication layer is based on an existing SSO provider: you don't just register with Tailscale with a new account, you login with Google, Microsoft, or GitHub (which, really, is still Microsoft). The Headscale server ships with many features out of the box:
  • Full "base" support of Tailscale's features
  • Configurable DNS
    • Split DNS
    • MagicDNS (each user gets a name)
  • Node registration
    • Single-Sign-On (via Open ID Connect)
    • Pre authenticated key
  • Taildrop (File Sharing)
  • Access control lists
  • Support for multiple IP ranges in the tailnet
  • Dual stack (IPv4 and IPv6)
  • Routing advertising (including exit nodes)
  • Ephemeral nodes
  • Embedded DERP server (AKA NAT-to-NAT traversal)
Neither project (client or server) is in Debian (RFP 972439 for the client, none filed yet for the server), which makes deploying this for my use case rather problematic. Their install instructions are basically a curl | bash, but they also provide packages for various platforms. Their Debian install instructions are surprisingly good, and check most of the third party checklist we're trying to establish. (It's missing a pin.) There's also a Puppet module for tailscale, naturally. What I find a little disturbing with Tailscale is that you not only need to trust Tailscale with authorizing your devices, you also basically delegate that trust to the SSO provider. So, in my case, GitHub (or anyone who compromises my account there) can penetrate the VPN. A little scary. Tailscale is also kind of an "all or nothing" thing. They have MagicDNS, file transfers, all sorts of things, but those things require you to hook up your resolver with Tailscale. In fact, Tailscale kind of assumes you will use their nameservers, and have gone to great lengths to figure out how to do that. And naturally, here, it doesn't seem to work reliably; my resolv.conf somehow gets replaced and the magic resolution of the ts.net domain fails. (I wonder why we can't opt in to just publicly resolve the ts.net domain. I don't care if someone can enumerate the private IP addresses or machines in use in my VPN, at least I don't care as much as fighting with resolv.conf everywhere.) Because I mostly have access to the routers on the networks I'm on, I don't think I'll be using tailscale in the long term. But it's pretty impressive stuff: in the time it took me to even review the Puppet modules to configure Wireguard (which is what I'll probably end up doing), I was up and running with Tailscale (but with a broken DNS, naturally). (And yes, basic Wireguard won't bring me DNS either, but at least I won't have to trust Tailscale's Debian packages, and Tailscale, and Microsoft, and GitHub with this thing.)

IPv6 IPv6 is actually what is supposed to solve this. Not NAT port forwarding crap, just real IPs everywhere. The problem is: even though IPv6 adoption is still growing, it's kind of reaching a plateau at around 40% world-wide, with Canada lagging behind at 34%. It doesn't help that major ISPs in Canada (e.g. Bell Canada, Videotron) don't care at all about IPv6 (e.g. Videotron in beta since 2011). So we can't rely on those companies to do the right thing here. The typical solution here is often to use a tunnel like HE's tunnelbroker.net. It's kind of tricky to configure, but once it's done, it works. You get end-to-end connectivity as long as everyone on the network is on IPv6. And that's really where the problem lies here; the second one of your nodes can't set up such a tunnel, you're kind of stuck and that tool completely breaks down. IPv6 tunnels also don't give you the kind of security a VPN provides, naturally. The other downside of a tunnel is you don't really get peer-to-peer connectivity: you go through the tunnel. So you can expect higher latencies and possibly lower bandwidth as well. Also, HE.net doesn't currently charge for this service (and they've been doing this for a long time), but this could change in the future (just like Tailscale, that said). Concretely, the latency difference is rather minimal. To Google:
--- ipv6.l.google.com ping statistics ---
10 packets transmitted, 10 received, 0,00% packet loss, time 136,8ms
RTT[ms]: min = 13, median = 14, p(90) = 14, max = 15
--- google.com ping statistics ---
10 packets transmitted, 10 received, 0,00% packet loss, time 136,0ms
RTT[ms]: min = 13, median = 13, p(90) = 14, max = 14
In the case of GitHub, latency is actually lower, interestingly:
--- ipv6.github.com ping statistics ---
10 packets transmitted, 10 received, 0,00% packet loss, time 134,6ms
RTT[ms]: min = 13, median = 13, p(90) = 14, max = 14
--- github.com ping statistics ---
10 packets transmitted, 10 received, 0,00% packet loss, time 293,1ms
RTT[ms]: min = 29, median = 29, p(90) = 29, max = 30
That is because HE.net peers directly with my ISP and Fastly (which is behind GitHub.com's IPv6, apparently?), so it's only 6 hops away. While over IPv4, the ping goes through New York before landing in AWS's Ashburn, Virginia datacenters, for a whopping 13 hops... I managed to set up a HE.net tunnel at home, because I also need IPv6 for other reasons (namely debugging at work). My first attempt at setting this up in the office failed, but now that I found the openwrt.org guide, it worked... for a while, and I was able to produce the above, encouraging, mini benchmarks. Unfortunately, a few minutes later, IPv6 just went down again. And the problem with that is that many programs (and especially OpenSSH) do not respect the Happy Eyeballs protocol (RFC 8305), which means various mysterious "hangs" at random times on random applications. It's kind of a terrible user experience, on top of breaking the one thing it's supposed to do, of course, which is to give me transparent access to all the nodes I maintain. Even worse, it would still be a problem for other remote nodes I might set up where I might not have access to the router to set up the tunnel. It's also not absolutely clear what happens if you set up the same tunnel in two places... Presumably, something is smart enough to distribute only a part of the /48 block selectively, but I don't really feel like going that far, considering how flaky the setup is already.
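As an aside (my addition, not something from the original post): applications can implement the Happy Eyeballs fallback themselves. Go's standard net.Dialer does, racing the second address family after FallbackDelay, so a dead IPv6 path costs a fraction of a second instead of a hang. A minimal sketch, with github.com used purely as an example target:

 package main

 import (
     "fmt"
     "net"
     "time"
 )

 func main() {
     // net.Dialer performs the RFC 6555 dual-stack fallback: it
     // dials one address family first and starts the other in
     // parallel after FallbackDelay if there is no answer yet.
     d := net.Dialer{
         Timeout:       10 * time.Second,
         FallbackDelay: 300 * time.Millisecond,
     }
     conn, err := d.Dial("tcp", "github.com:443")
     if err != nil {
         fmt.Println("dial failed:", err)
         return
     }
     defer conn.Close()
     fmt.Println("connected via", conn.RemoteAddr())
 }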

Other options If this post sounds a little biased towards IPv6 and Wireguard, it's because it is. I would like everyone to migrate to IPv6 already, and Wireguard seems like a simple and sound system. I'm aware of many other options to make VPNs. So before anyone jumps in and says "but what about...", do know that I have personally experimented with:
  • tinc: nice, automatic meshing, used for the Montreal mesh, serious design flaws in the crypto that make it generally unsafe to use; supposedly, v1.1 (or 2.0?) will fix this, but that's been promised for over a decade by now
  • ipsec, specifically strongswan: hard to configure (especially configure correctly!), harder even to debug, otherwise really nice because transparent (e.g. no need for special subnets), used at work, but also considering a replacement there because it's a major barrier to entry to train new staff
  • OpenVPN: mostly used as a client for VPN services like Riseup VPN or Mullvad, mostly relevant for client-server configurations, not really peer-to-peer, shared secrets or TLS, kind of a hassle to maintain, see also SoftEther for an alternative implementation
All of those solutions have significant problems and I do not wish to use any of those for this project. Also note that Tailscale is only one of many projects layered over Wireguard to do that kind of thing; see this LWN review for others (basically NetBird, Firezone, and Netmaker).

Future work Those are options that came up after writing this post, and might warrant further examination in the future.
  • Meshbird, a "distributed private networking" with little information about how it actually works other than "encrypted with strong AES-256"
  • Nebula, "A scalable overlay networking tool with a focus on performance, simplicity and security", written by Slack people to replace IPsec, docs, runs as an overlay for Slack's 50k node network, only packaged in Debian experimental, lagging behind upstream (1.4.0, from May 2021 vs upstream's 1.6.1 from September 2022), requires a central CA, Golang, I'm in "wait and see" mode for now
  • n2n: "layer two VPN", seems packaged in Debian but inactive
  • ouroboros: "peer-to-peer packet network prototype", sounds and seems complicated
  • QuickTUN is interesting because it's just a small wrapper around NaCL, and it's in Debian... but maybe too obscure for my own good
  • unetd: Wireguard-based full mesh networking from OpenWRT, not in Debian
  • vpncloud: "high performance peer-to-peer mesh VPN over UDP supporting strong encryption, NAT traversal and a simple configuration", sounds interesting, not in Debian
  • Yggdrasil: actually a pretty good match for my use case, but I didn't think of it when starting the experiments here; packaged in Debian, with the Golang version planned, Puppet module; major caveat: nodes exposed publicly inside the global mesh unless configured otherwise (firewall suggested), requires port forwards, alpha status

Conclusion Right now, I'm going to deploy Wireguard tunnels with Puppet. It seems like kind of a pain in the back, but it's something I will be able to reuse for work, possibly completely replacing strongswan. I have another Puppet module for IPsec which I was planning to publish, but now I'm thinking I should just abort that and replace everything with Wireguard, assuming we still need VPNs at work in the future. (I have a number of reasons to believe we might not need any in the near future anyways...)

19 October 2022

Shirish Agarwal: Pune Rains, Uncosted Budgets, Hearing Loss Covid, Fracking

Pune Rains Lemme start with a slightly funny picture that tells as much about Pune, my city as anything else does.
Pune- Leave your attitude behind, we have our own
This and similar tags, puns and whatnot you will find if you are entering Pune from the road highway. You can also find similar similar symbols and Puns all over the city and they are partly sarcasm and ironic and partly the truth. Puneities work from the attitude that they know everything rather than nothing, including yours truly  . What is the basis of that or why is there such kind of confidence I have no clue or idea, it is what it is. Approximately 24 hrs. ago, apparently we had a cloudburst. What I came to know later is that we got 100 mm of rain. Sharing from local news site. Much more interesting was a thread made on Reddit where many people half-seriously asked where they can buy a boat. One of the reasons being even if it s October, in fact, we passed middle of October and it s still raining. Even today in the evening, it rained for quite a while. As I had shared in a few blog posts before, June where rains should have started, it didn t, it actually started late July or even August, so something has shifted. The current leadership does not believe in Anthropogenic Climate Change or human activity induced climate change even though that is a reality. I could share many links and even using the term above should give links to various studies. Most of the people who are opposed to it are either misinformed or influenced from the fossil fuel industry. Again, could share many links, but will share just one atm. I have talked to quite a few people about it but nobody has ever been able to give a convincing answer as to why GM had to crush the cars. Let s even take the argument that it was the worst manufactured car in history and there have been quite a few, have the others been crushed? If not, then the reason shared or given by most people sounds hollow. And if you look into it, they had an opportunity that they let go, and now most of them are scrambling and yet most of the legacy auto manufacturers will be out of existence if they don t get back into the game in the next 2-3 years. There have been a bunch of announcements but we are yet to see. The Chinese though have moved far ahead, although one has to remark that they have been doing that for the last decade, so they have a 10-year head start, hardly surprising then. But I need to get back to the subject, another gentleman on Reddit remarked that if you start to use boat, and others start to use boat, then the Govt. will tax it. In fact, somebody had shared the below the other day
Different types of taxes collected by GOI
Many of the taxes that I have shared above are by the Modi Govt. who came on the platform, manifesto that once they come to power they will reduce taxes for the common man, they have reduced taxes but only for the Corporates. For the common man, the taxes have only gone up, both direct tax and indirect tax. Any reference to the Tory party who have also done similar things and have also shared that it is labor who had done large expenditures even though they have been 8 years in power, I am sure for most is purely coincidental. Incidentally, that is the same tack that was taken even by the Republican party. They all like to give tax benefits to the 1% while for the rest is austerity claiming some reason, even if it has been proven to be false.
Corporate Tax Rate, Revenue Loss to Govt.
The figures mentioned above are findings of parliamentary panel so nobody can accuse of anybody having a bias. Also, I probably had shared this but still feel the need to re-share it as people still believe that 2G scam happened even though there are plenty of arguments I can share to prove how it was all a fabricated lie.
Vinod Rai Mafinama in Uttarakhand High Court.
Part 2 of the same Mafinama.
How pathetic Mr. Rai s understanding of economics is can be gauged from the fact that he was made Chairman of IDFC and subsequently had to be thrown out. That whole lie was engineered to throw UPA out and it worked. There are and have been many such coincidences happening over the last 8 years, parallel stories happening in India and UK. This was just yesterday, about a year back Air India was given back to the Tatas, There was controversy about the supposed auctions as only Indigo was the only other party allowed to be at auction but not allowed to buy but more as a spectator as they already have 60% of the Indian civil aviation market. And there was lot of cheering from the Govt. side that finally Air India has been bought home to its true owners, the Tatas. The Tatas too started cheering and sharing how they will take down all the workers, worker unions and everything will be happy glory within a year. In fact, just couple of days back they shared new plans. Btw for the takeover of Air India, they had bought loans from the Banks and they are in the category of too big to fail. As I have shared couple of times before, RBI has not shared any inspection reports of nationalized or private banks after 2013/14. While by law, RBI is supposed to do inspection reports every 3 months and share them in the Public domain. And if you ask any of their supporter, for everything they will say UPA did x or y, which only goes to show morally bankrupt the present Govt. is. Coming back to the topic, before I forget, the idea of sharing their plans is so that they can again borrow money from the banks. But that is not the only story. Just one day back, Smita Prakash, one of the biggest cheerleaders of the present Govt. (she is the boss at Asia News International (ANI)) posted how Air India had treated her sister and other 21 passengers. Basically, they had bought business tickets but the whole cabin was dirty, they complained and they were forced to sit in economy class, not just her cousin sister but the other 21 odd passengers too. Of course Ms Smita became calm as her sister was given free air tickets on Vistara and other goodies. Of course, after that she didn t post anything about the other 21 odd passengers after that. And yes, I understand she is supposed to be a reporter but as can be seen from the twitter thread, there is or was no follow up. Incidentally, she is one of many who has calling others about Revdi culture (freebies to the masses.) but guess that only applies to other people not her or her sister. Again, if there are any coincidences of similar nature in the UK or when Trump was P.M. of the U.S. they are just coincidental .

The Uncosted budget India and the UK have many parallels, it's mind-boggling. Before we get into the nitty-gritty, I saw something that would be of some interest to the people here.
For those who might not be able to see above, apparently there is a place in the UK called Tufton Street where there are quite a few shadowy organizations whose finances, and how they are funded, are not known. Ms. Truss and quite a few of the people in the cabinet are from the same shadowy organizations. Mr. Kwasi Kwarteng, the just-departed chancellor, is and was part of the same group. Even for me it was a new term, having to learn and understand what an uncosted budget is. To make it easier, I'll share the example of a common person who goes to the bank for a loan.

Now M/S X wants a loan of say Rs. 1000/- for whatever reason. He/she/they go to the bank and asks give me a loan of say INR 1000/- The banks asks them to produce a statement of accounts to show what their financial position is. They produce a somewhat half-filled statement of accounts In which all liabilities are shown, but incomes are not. The bank says you already have so much liabilities, how are you gonna pay those, accounts have to be matched otherwise you are not solvent. M/s X adamantly refuses to do any changes citing that they don t need to. At this point, M/s X credit rating goes down and nobody in the market will give them a loan. At the same time, the assets they had held, their value also depreciates because it became known that they can t act responsibly. So whose to say whether or not M/s X has those number of assets and priced them accurately. But the drama doesn t end there, M/s X says this is the responsibility of actually Mr. Z ( cue The Bank of England) as they are my accountant/lawyer etc. M/s Z says as any lawyer/accountant should. This is not under my remit. If the clients either gives incomplete information or false information or whatever then it is their responsibility not mine. And in fact, the Chancellor is supposed to be the one who is given the responsibility of making the budget. The Chancellor is very similar to our Finance Minister. Because the UK has constitutional monarchy, I am guessing the terms are slightly different, otherwise the functionality seems to be the same. For two weeks, there was lot of chaos, lot of pension funds lost quite a bit in the market and in the end Mr. Kwasi Kwarteng was ousted out of the job. Incredibly, the same media and newspapers who had praised Mr. Kwarteng just few weeks back as the best Tory budget, they couldn t wait to bury him. And while I have attempted to simplify what happened, the best explanation of what has happened can be found in an article from the guardian. Speculation is rife in the UK as to who s ruling atm as the new Chancellor has reversed almost all the policies that Ms. Truss had bought and she is now more or less a figurehead. Mr. Hunt, the new chancellor doesn t have anybody behind him. Apparently, the gentleman wanted to throw his hat the ring in the Tory leadership contest that was held about a month back and he couldn t get 20 MP s to support him. Another thing that is different between UK and India is that in UK by law the PM has to answer questions put up to him or her by the opposition leaders. That is the way accountability is measured there. This is known as PMQ s or Prime Minister Questions and Answers. One can just go to YouTube or any streaming service and give Liz Truss and PMQ s and if they are interested of a certain date, give a date and they can see how she answered the questions thrown at her. Unfortunately, all she could do in both times were non-answers. In fact, the Tories seem to be using some of Labor s policies after they had bad-mouthed the same policies. Politics of right-wing both in the UK and the US seems so out of touch with the people whom they are supposed to protect and administer. An article about cyclists which is sort of half-truth, half irony shows how screwed up the policies are of the RW (right-wing). Now they are questions about the pensions triple-lock. Sadly, it is the working class who would suffer the most, most of the rich have already moved their money abroad several years ago. The Financial Times, did share a video about how things have been unfolding

It seems Ms. Truss forgot to add the Financial Times to the list of the anti-growth coalition she is so fond of. Also, the Tory party seems to want to create more tax havens in the UK, calling them investment zones. Of course, most of the top tax havens are already situated around the UK itself. I won't go into that further, as it would probably require its own article, although most of that information is in the public domain. Fracking: I don't really want to take much more time, as this blog post has already become long. Many articles have been written on why fracking is bad, which is why even the Tories had put in their manifesto that they would not allow fracking; yet today they are trying to reopen it. Again, just how bad it is and can be is explained in an article in the Guardian.

14 October 2022

Shirish Agarwal: Dowry, Racism, Railways

Dowry: A few days back I had posted about the movie Raksha Bandhan and what I felt about it. Sadly, just a couple of days back, somebody shared this link. Part of me was shocked and part of me was not. A couple of acquaintances of mine had said the same thing about their daughters in the past. In such situations you are generally left speechless, because you don't know what the right thing to do is. If he shared it with an outsider like me, how many times must he have said the same to his wife and daughters? And from what little I have gathered in life, many people have justified it along similar lines. And while there were protests, sadly the book was not removed. Now if nurses are reading such literature, you can imagine how their thought processes might be forming :(. And these are the people we call on when we are sick and tired :(. And I have not even taken into account how the girls/women themselves might be feeling. Similar things happen in other countries, though probably not the same, nor with the same motivations, although the feeling of helplessness would be common to both. But such statements do not stand alone. Another gentleman, in a slightly different context, shared this as well
The above is a statement from a book recommended for the CTET (the Central Teacher's Eligibility Test, which became mandatory when the RTE (Right To Education) Act came in). The statement says: People from cold places are white, beautiful, well-built, healthy and wise. And people from hot places are black, irritable and of violent nature. Now, while I can agree with one part of the statement, that people residing in colder regions are generally fairer than others, there are loads of other factors that determine fairness or skin color/pigmentation. After a bit of searching I came to know that this and similar articulations have been made in an idea/body of work called Environmental Determinism. If you look at that page, you will realize this is what colonialism was all about: the idea that the white man had a god-given right to rule over others. Similarly, if you are fair, you can lord over others. It seems simplistic, yet it has a powerful hold on many people in India. Forget the common man, this thinking was applicable even to some of our better-known freedom fighters. Pune's own Bal Gangadhar Tilak wrote The Arctic Home in the Vedas. It talks about the Aryans and how they invaded India and settled here. I haven't read the book or had access to it, so I have to rely on third-party sources. The reason I'm sharing all this is that the right wing has been doing this myth-making for some time now, and unless and until you shine a light on it, it will continue to perpetuate itself. Those who have read this blog know that India is and has been steeped in casteism forever. They even took the "fair" idea and applied it to all Brahmins: according to them, all Brahmins are fair and hence have a god-given right to lord over others. What is called the Eton old boys' network serves the same purpose in this casteism. The only solution is to put those ideas under the limelight and investigate them. To take the above: how does one prove that all fair people are wise and peaceful while all black and brown people are violent? If that is so, how does one account for Mahatma Gandhi, Martin Luther King Jr., Nelson Mandela, Michael Jackson; the list is probably endless. And not to forget that when Mahatma Gandhi carried out his nonviolent movements, whether in India or in South Africa, both black and brown people took part in their millions. The same goes for Martin Luther King Jr.; I know of and have read about so many non-violent civil movements that took place in the U.S., for example Rosa Parks and the Montgomery Bus Boycott. So just based on these examples, one can conclude that at least the part about the fair having an exclusive claim to being wise and noble is not correct. As far as violence goes, every race and every community has done violence in the past or been a victim of it, so no one is or can be blameless. Still, in light of the above statement, one can ask: who were the Vikings? Both popular imagination and serious history tell stories about the Vikings. They were somewhat nomadic in nature; even though they had permanent settlements, they went on raids, raped women, and captured both men and women and sold them as slaves. They are what pirates came to be, though not the kind Hollywood romanticizes. Europe itself has been a tale of conflict since time immemorial; it is only after the formation of the EU that most of these countries stopped fighting each other, and from a historical perspective that is very recent. So even the claim that the fair are non-violent dies in the face of this evidence. 
I could go on, but this is enough on that topic.

Railways and Industrial Action around the World. While I have written about railways many times on this blog, it continues to fascinate me how people don't understand the first thing about them. For example, railways are a natural monopoly. What that means is that if you look at any type of railway privatization around the world, you will see it remains a monopoly. Unlike the roads or the skies, railways are and always will be limited by infrastructure and the ability to build new infrastructure. Unlike on the roads or in the skies (even those have their limits), you cannot run train services on a whim. At any particular point in time, only a single train can and should occupy a given stretch of the railway network. You could put more trains on one line, but then front or rear-end collisions become a real possibility. You also need good and reliable communications and redundant infrastructure, so that if one thing fails you have something else in place. The reason is that a single train can carry anywhere from 2,000 to 5,000 passengers or more. While this is true of Indian Railways, railways around the world would have broadly similar numbers. It is in this light that I share the videos below.
To be more precise, see the fuller video.
To give context to the recording above, Mike Lynch is the general secretary of the RMT. For those who came in late, both the UK and the U.S. have been facing the threat of railway strikes, and the reasons for the strikes, or threats of strikes, are similar. From the companies' perspective, all they care about is investing less and making the most profit to hand to equity shareholders. At the same time, they have frozen the salaries of railway workers for the last 3 years, while the politicians asking the questions apparently gave themselves a raise twice this year. They are asking the workers to settle at 8% while inflation in the UK has been 12.3% and is projected to go higher. And it is not only about the money. Since the UK privatized the railways, they stopped investing in the infrastructure. That meant that UK railway infrastructure fell behind over time, and is now even behind, say, Indian Railways, which used to provide the most bang for the buck (and Indian Railways is far from ideal). Ironically, most of the operators in the UK are the nationalized railways of France, Germany, etc., but after the hard Brexit they too are mulling cutting their operations short; they have to. There is also the EU Entry/Exit System that comes in next year. Why am I sharing what is happening with UK rail? Because the Indian government wants to do the same thing, while fooling the public by saying we will do it better. What will inevitably happen is that ticket prices go up, people stop using the service, the number of services goes down, and eventually they are cancelled. This has happened with both Indian Railways and the airlines. In fact, the GOI announced a credit scheme just a few days back to help airlines stay afloat. I was chatting with a friend who had come down to Pune from Chennai, and the round trip cost him INR 15k/- on that single trip alone. We reminisced how a few years ago, 8 years to be precise, we could buy an air ticket for 2.5k/- just a few days before the trip, and we did. I remember doing at least a dozen-odd trips by air in the years before 2014. My friend used to come to Pune almost every weekend because he could afford it; now he can't. And these are people in the top 5-10% of the population. And this is not just in the UK, but also in the United States. There is one big difference, though: the U.S. railways are mainly freight carriers, while UK railway operations are mostly passenger-based. What was and is interesting is that Scotland had to nationalize its services, as it realized the operators cannot or will not function when they are most needed. Most of the public, even in the UK, seems to want a nationalized rail service; at least the polls say so. So it will definitely be interesting to see what happens in the UK next year. In the end, I know I promised to write about books, but the above events have just been too fascinating not to share, along with what I think about them. Free markets work well where there is competition, for example what has been happening in China with EVs, but not where you have natural monopolies. In any railway privatization, you have to hand over the area to one operator, and then they have no motivation. If you had multiple operators, there would always be haggling as to who runs which train and at what time. 
In either scenario, it doesn't work: it raises prices while not delivering anything better. I take examples from the UK because a lot of things in India are still a legacy of the British. The whole civil service department that was created in 1953 was a copy of the British civil service of that time, and it remains so to this day. P.S. Just came to know that Kwasi Kwarteng has been sacked as UK Chancellor. I do commend Ms. Truss for facing the press, even though she might be dumped a week later, unlike our PM who hasn't faced a single press conference in the last 8-odd years.

https://www.youtube.com/watch?v=oTP6ogBqU7of The difference between Indian and UK politics seems to be that the English are now asking questions, while here in India most people are still sleeping without a care in the world. Another thing to note: MiniDebConf Palakkad is going to happen on 12-13th November 2022. I am probably not going to go, but would request everyone who wants to do something in free software to attend it. I am not sure whether I would be of any use there, and also, when I got back it would be to an empty house. But for people young and old who want to do anything with free/open source software, it is a chance not to be missed. Registration closes on the 1st of November 2022. All the best, break a leg. Just read this, beautifully done.

2 October 2022

Steve McIntyre: Firmware vote result - the people have spoken!

It's time for another update on Debian's firmware GR. I wrote about the problem back in April and about the vote itself a few days back. Voting closed last night and we have a result! This is unofficial so far - the official result will follow shortly when the Project Secretary sends a signed mail to confirm it. But that's normally just a formality at this point. A Result! The headline result is: Choice 5 / Proposal E won: Change SC for non-free firmware in installer, one installer. I'm happy with this - it's the option that I voted highest, after all. More importantly, however, it's a clear choice of direction, as I was hoping for. Of the 366 Debian Developers who voted, 289 of them voted this option above NOTA and 63 below, so it also meets the 3:1 super-majority requirement for amending a Debian Foundation Document (Constitution section 4.1.3). So, what happens next? We have quite a few things to do now, ideally before the freeze for Debian 12 (bookworm), due January 2023. This list of work items is almost definitely not complete, and Cyril and I are aiming to get together this week and do more detailed planning for the d-i pieces. Off the top of my head I can think of the following: If you think I've missed anything here, please let me and Cyril know - the best place would be the mailing list (debian-boot@lists.debian.org). If you'd like to help implement any of these changes, that would be lovely too!

30 August 2022

John Goerzen: The PC & Internet Revolution in Rural America

Inspired by several others (such as Alex Schroeder's post and Szczeżuja's prompt), as well as a desire to get this down for my kids, I figure it's time to write a bit about living through the PC and Internet revolution where I did: outside a tiny town in rural Kansas. And, as I've been back in that same area for the past 15 years, I reflect some on the challenges that continue to play out. Although the stories from the others were primarily about getting online, I want to start by setting some background. Those of you that didn't grow up in the same era as I did probably never realized that a typical business PC setup might cost $10,000 in today's dollars, for instance. So let me start with the background.

Nothing was easy This story begins in the 1980s. Somewhere around my Kindergarten year of school, around 1985, my parents bought a TRS-80 Color Computer 2 (aka CoCo II). It had 64K of RAM and used a TV for display and sound. This got you the computer. It didn't get you any disk drive or anything, and no joysticks (required by a number of games). So whenever the system powered down, or it hung and you had to power cycle it (a frequent event), you'd lose whatever you were doing and would have to re-enter the program, literally by typing it in. The floppy drive for the CoCo II cost more than the computer, and it was quite common for people to buy the computer first and then the floppy drive later, when they'd saved up the money for it. I particularly want to mention that computers then didn't come with a modem. That would be like buying a laptop or a tablet without wifi today. A modem, which I'll talk about in a bit, was another expensive accessory. To cobble together a system in the 80s that was capable of talking to others, with persistent storage (floppy or hard drive), screen, keyboard, and modem, would be quite expensive. Adjusted for inflation, if you're talking a PC-style device (a clone of the IBM PC that ran DOS), this would easily be more expensive than the Macbook Pros of today. Few people back in the 80s had a computer at home. And the portion of those that had even the capability to get online in a meaningful way was even smaller. Eventually my parents bought a PC clone with 640K RAM and dual floppy drives. This was primarily used for my mom's work, but I did my best to take it over whenever possible. It ran DOS and, despite its monochrome screen, was generally a more capable machine than the CoCo II. For instance, it supported lowercase. (I'm not even kidding; the CoCo II pretty much didn't.) A while later, they purchased a 32MB hard drive for it; what luxury! Just getting a machine to work wasn't easy. Say you'd bought a PC, and then bought a hard drive, and a modem. You didn't just plug in the hard drive and it would work. You would have to fight it every step of the way. The BIOS and DOS partition tables of the day used a cylinder/head/sector method of addressing the drive, and various parts of those addresses had too few bits to work with the big drives of the day, above 20MB. So you would have to lie to the BIOS and fdisk in various ways, and sort of work out how to do it for each drive. For each peripheral (serial port, sound card in later years, etc.), you'd have to set jumpers for DMA and IRQs, hoping not to conflict with anything already in the system. Perhaps you can now start to see why USB and PCI were so welcomed.
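
To make those address limits concrete, here is a rough back-of-the-envelope sketch (the geometry ceilings in the comments are the commonly cited ones, and the example drive geometry is hypothetical, not a drive from this story):

    # Rough sketch of CHS addressing limits; the ceilings below are the
    # commonly cited ones, and the example drive is hypothetical.
    SECTOR_BYTES = 512

    def chs_bytes(cylinders, heads, sectors):
        """Capacity addressable with a given cylinder/head/sector geometry."""
        return cylinders * heads * sectors * SECTOR_BYTES

    # The BIOS INT 13h interface allowed at most 1024 cylinders, 256 heads,
    # and 63 sectors per track:
    print(chs_bytes(1024, 256, 63) / 2**20)   # ~8064 MiB, i.e. the roughly 8 GB ceiling

    # Early DOS kept a partition's sector count in 16 bits, so:
    print(2**16 * SECTOR_BYTES / 2**20)       # 32 MiB per partition

    # A late-80s "big" drive (hypothetical 977x5x17 geometry, about 40 MB)
    # already blew past that cap, hence the lying to the BIOS and fdisk:
    print(chs_bytes(977, 5, 17) / 2**20)      # ~40.5 MiB

Which of those limits bit you depended on your exact BIOS and DOS version, hence the per-drive fiddling described above.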

Sharing and finding resources Despite the two computers in our home, it wasn't as if software written on one machine just ran on another. A lot of software for PC clones assumed a CGA color display. The monochrome HGC in our PC wasn't particularly compatible. You could find a TSR program to emulate the CGA on the HGC, but it wasn't particularly stable, and there's only so much you can do when a program that assumes color is displayed on a monitor that can only show black, dark amber, or light amber. So I'd periodically get to use other computers, most commonly at an office in the evening when it wasn't being used. There were some local computer clubs that my dad took me to periodically. Software was swapped back then: disks copied, shareware exchanged, and so forth. For me, at least, there was no online to download software from, and selling software over the Internet wasn't a thing at all.

Three Different Worlds There were sort of three different worlds of computing experience in the 80s:
  1. Home users. Initially using a wide variety of software from Apple, Commodore, Tandy/RadioShack, etc., but eventually coming to be mostly dominated by IBM PC clones
  2. Small and mid-sized business users. Some of them had larger minicomputers or small mainframes, but most that I had contact with by the early 90s were standardized on DOS-based PCs. More advanced ones had a network running Netware, most commonly. Networking hardware and software was generally too expensive for home users to use in the early days.
  3. Universities and large institutions. These are the places that had the mainframes, the earliest implementations of TCP/IP, the earliest users of UUCP, and so forth.
The difference between the home computing experience and the large institution experience was vast. Not only in terms of dollars (the large institution hardware could easily cost anywhere from tens of thousands to millions of dollars) but also in terms of sheer resources required (large rooms, enormous power circuits, support staff, etc). Nothing was in common between them: not operating systems, not software, not experience. I was never much aware of the third category until the differences started to collapse in the mid-90s, and even then I was only exposed to it once the collapse was well underway. You might say to me, "Well, Google certainly isn't running what I'm running at home!" And, yes of course, it's different. But fundamentally, most large datacenters are running on x86_64 hardware, with Linux as the operating system, and a TCP/IP network. It's a different scale, obviously, but at a fundamental level, the hardware and operating system stack are pretty similar to what you can readily run at home. Back in the 80s and 90s, this wasn't the case. TCP/IP wasn't even available for DOS or Windows until much later, and when it was, it was a clunky beast that was difficult. One of the things Kevin Driscoll highlights in his book Modem World (see my short post about it) is that the history of the Internet we usually receive is focused on case 3: the large institutions. In reality, the Internet was and is literally a network of networks. Gateways to and from the Internet existed from all three kinds of users for years, and while TCP/IP ultimately won the battle of the internetworking protocol, the other two streams of users also shaped the Internet as we now know it. Like many, I had no access to the large institution networks, but as I've been reflecting on my experiences, I've found a new appreciation for the way that those of us that grew up with primarily home PCs also shaped the evolution of today's online world.

An Era of Scarcity I should take a moment to comment about the cost of software back then. A newspaper article from 1985 comments that WordPerfect, then the most powerful word processing program, sold for $495 (or $219 if you could score a mail order discount). That's $1360/$600 in 2022 money. Other popular software, such as Lotus 1-2-3, was up there as well. If you were to buy a new PC clone in the mid to late 80s, it would often cost $2000 in 1980s dollars. Now add a printer: a low-end dot matrix for $300, or a laser for $1500 or even more. A modem: another $300. So the basic system would be $3600, or $9900 in 2022 dollars. If you wanted a nice printer, you're now pushing well over $10,000 in 2022 dollars. You start to see one barrier here, and also why things like shareware and piracy (if it was indeed even recognized as such) were common in those days. So you can see, from a home computer setup (TRS-80, Commodore C64, Apple ][, etc) to a business-class PC setup was an order of magnitude increase in cost. From there to the high-end minis/mainframes was another order of magnitude (at least!) increase. Eventually there was price pressure on the higher end and things all got better, which is probably why the non-DOS PCs lasted until the early 90s.
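
The conversions above imply a 1985-to-2022 multiplier of roughly 2.75; written out, the arithmetic looks like this (the multiplier is inferred from the figures quoted in this paragraph, not an official CPI value):

    # The quoted conversions imply a 1985 -> 2022 multiplier of about 2.75
    # (e.g. 9900 / 3600 ~= 2.75). This is just that arithmetic written out;
    # the multiplier is an inference, not an official CPI figure.
    MULTIPLIER = 2.75

    for label, usd_1985 in [("WordPerfect, list", 495),
                            ("WordPerfect, mail order", 219),
                            ("basic PC clone system", 3600)]:
        print(f"{label}: ${usd_1985} in 1985 -> about ${usd_1985 * MULTIPLIER:,.0f} in 2022")
    # prints roughly $1,361, $602 and $9,900, matching the $1360/$600/$9900 quoted above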

Increasing Capabilities My first exposure to computers in school was in the 4th grade, when I would have been about 9. There was a single Apple ][ machine in that room. I primarily remember playing Oregon Trail on it. The next year, the school added a computer lab. Remember, this is a small rural area, so each graduating class might have about 25 people in it; this lab was shared by everyone in the K-8 building. It was full of some flavor of IBM PS/2 machines running DOS and Netware. There was a dedicated computer teacher too, though I think she was a regular teacher that was given somewhat minimal training on computers. We were going to learn typing that year, but I did so well on the very first typing program that we soon worked out that I could do programming instead. I started going to school early these machines were far more powerful than the XT at home and worked on programming projects there. Eventually my parents bought me a Gateway 486SX/25 with a VGA monitor and hard drive. Wow! This was a whole different world. It may have come with Windows 3.0 or 3.1 on it, but I mainly remember running OS/2 on that machine. More on that below.

Programming That CoCo II came with a BASIC interpreter in ROM. It came with a large manual, which served as a BASIC tutorial as well. The BASIC interpreter was also the shell, so literally you could not use the computer without at least a bit of BASIC. Once I had access to a DOS machine, it also had a BASIC interpreter: GW-BASIC. There was a fair bit of software written in BASIC at the time, but most of the more advanced software wasn't. I wondered how these .EXE and .COM programs were written. I could find vague references to DEBUG.EXE, assemblers, and such. But it wasn't until I got a copy of Turbo Pascal that I was able to do that sort of thing myself. Eventually I got Borland C++ and taught myself C as well. A few years later, I wanted to try writing GUI programs for Windows, and bought Watcom C++ (much cheaper than the competition), which could target Windows, DOS (and I think even OS/2). Notice that, aside from BASIC, none of this was free, and none of it was bundled. You couldn't just download a C compiler, or Python interpreter, or whatnot back then. You had to pay for the ability to write any kind of serious code on the computer you already owned.

The Microsoft Domination Microsoft came to dominate the PC landscape, and then even the computing landscape as a whole. IBM very quickly lost control over the hardware side of PCs as Compaq and others made clones, but Microsoft has managed in varying degrees even to this day to keep a stranglehold on the software, and especially the operating system, side. Yes, there was occasional talk of things like DR-DOS, but by and large the dominant platform came to be the PC, and if you had a PC, you ran DOS (and later Windows) from Microsoft. For awhile, it looked like IBM was going to challenge Microsoft on the operating system front; they had OS/2, and when I switched to it sometime around the version 2.1 era in 1993, it was unquestionably more advanced technically than the consumer-grade Windows from Microsoft at the time. It had Internet support baked in, could run most DOS and Windows programs, and had introduced a replacement for the by-then terrible FAT filesystem: HPFS, in 1988. Microsoft wouldn t introduce a better filesystem for its consumer operating systems until Windows XP in 2001, 13 years later. But more on that story later.

Free Software, Shareware, and Commercial Software I ve covered the high cost of software already. Obviously $500 software wasn t going to sell in the home market. So what did we have? Mainly, these things:
  1. Public domain software. It was free to use, and if implemented in BASIC, probably had source code with it too.
  2. Shareware
  3. Commercial software (some of it from small publishers was a lot cheaper than $500)
Let s talk about shareware. The idea with shareware was that a company would release a useful program, sometimes limited. You were encouraged to register , or pay for, it if you liked it and used it. And, regardless of whether you registered it or not, were told please copy! Sometimes shareware was fully functional, and registering it got you nothing more than printed manuals and an easy conscience (guilt trips for not registering weren t necessarily very subtle). Sometimes unregistered shareware would have a nag screen a delay of a few seconds while they told you to register. Sometimes they d be limited in some way; you d get more features if you registered. With games, it was popular to have a trilogy, and release the first episode inevitably ending with a cliffhanger as shareware, and the subsequent episodes would require registration. In any event, a lot of software people used in the 80s and 90s was shareware. Also pirated commercial software, though in the earlier days of computing, I think some people didn t even know the difference. Notice what s missing: Free Software / FLOSS in the Richard Stallman sense of the word. Stallman lived in the big institution world after all, he worked at MIT and what he was doing with the Free Software Foundation and GNU project beginning in 1983 never really filtered into the DOS/Windows world at the time. I had no awareness of it even existing until into the 90s, when I first started getting some hints of it as a port of gcc became available for OS/2. The Internet was what really brought this home, but I m getting ahead of myself. I want to say again: FLOSS never really entered the DOS and Windows 3.x ecosystems. You d see it make a few inroads here and there in later versions of Windows, and moreso now that Microsoft has been sort of forced to accept it, but still, reflect on its legacy. What is the software market like in Windows compared to Linux, even today? Now it is, finally, time to talk about connectivity!

Getting On-Line What does it even mean to get on line? Certainly not connecting to a wifi access point. The answer is, unsurprisingly, complex. But for everyone except the large institutional users, it begins with a telephone.

The telephone system By the 80s, there was one communication network that already reached into nearly every home in America: the phone system. Virtually every household (note I don t say every person) was uniquely identified by a 10-digit phone number. You could, at least in theory, call up virtually any other phone in the country and be connected in less than a minute. But I ve got to talk about cost. The way things worked in the USA, you paid a monthly fee for a phone line. Included in that monthly fee was unlimited local calling. What is a local call? That was an extremely complex question. Generally it meant, roughly, calling within your city. But of course, as you deal with things like suburbs and cities growing into each other (eg, the Dallas-Ft. Worth metroplex), things got complicated fast. But let s just say for simplicity you could call others in your city. What about calling people not in your city? That was long distance , and you paid often hugely by the minute for it. Long distance rates were difficult to figure out, but were generally most expensive during business hours and cheapest at night or on weekends. Prices eventually started to come down when competition was introduced for long distance carriers, but even then you often were stuck with a single carrier for long distance calls outside your city but within your state. Anyhow, let s just leave it at this: local calls were virtually free, and long distance calls were extremely expensive.

Getting a modem I remember getting a modem that ran at either 1200bps or 2400bps. Either way, quite slow; you could often read even plain text faster than the modem could display it. But what was a modem? A modem hooked up to a computer with a serial cable, and to the phone system. By the time I got one, modems could automatically dial and answer. You would send a command like ATDT5551212 and it would dial 555-1212. Modems had speakers, because often things wouldn t work right, and the telephone system was oriented around speech, so you could hear what was happening. You d hear it wait for dial tone, then dial, then hopefully the remote end would ring, a modem there would answer, you d hear the screeching of a handshake, and eventually your terminal would say CONNECT 2400. Now your computer was bridged to the other; anything going out your serial port was encoded as sound by your modem and decoded at the other end, and vice-versa. But what, exactly, was the other end? It might have been another person at their computer. Turn on local echo, and you can see what they did. Maybe you d send files to each other. But in my case, the answer was different: PC Magazine.
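
In modern terms, that dialing dance looks roughly like the following minimal sketch using the pyserial library; the device path and phone number are placeholders, and only the Hayes commands themselves (ATZ, ATDT) are the ones described above:

    # Minimal sketch of Hayes-style dialing with pyserial (pip install pyserial).
    # The device path and phone number are placeholders, not anything real.
    import serial

    def dial(port="/dev/ttyS0", number="5551212", baud=2400):
        ser = serial.Serial(port, baud, timeout=30)
        ser.write(b"ATZ\r")                    # reset the modem
        ser.readline()                         # modem should answer "OK"
        ser.write(("ATDT" + number + "\r").encode())   # tone-dial the number
        for _ in range(20):                    # wait for the call result
            line = ser.readline().decode(errors="replace").strip()
            if line.startswith("CONNECT"):     # e.g. "CONNECT 2400"
                return ser                     # the serial link now bridges both ends
            if line in ("BUSY", "NO CARRIER", "NO DIALTONE"):
                ser.close()
                raise RuntimeError("dial failed: " + line)
        ser.close()
        raise RuntimeError("timed out waiting for CONNECT")

Once CONNECT comes back, everything written to the serial port is modulated as sound and decoded at the far end, exactly as described above.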

PC Magazine and CompuServe Starting around 1986 (so I would have been about 6 years old), I got to read PC Magazine. My dad would bring home copies that were being discarded at his office for me to read, and I think eventually bought me a subscription directly. This was not just a standard magazine; it ran something like 350-400 pages an issue, and came out every other week. This thing was a monster. It had reviews of hardware and software, descriptions of upcoming technologies, pages and pages of ads (that often had some degree of being informative to them). And they had sections on programming. Many issues would talk about BASIC or Pascal programming, and there'd be a utility in most issues. What do I mean by "a utility in most issues"? Did they include a floppy disk with software? No, of course not. There was a literal program listing printed in the magazine. If you wanted the utility, you had to type it in. And a lot of them were written in assembler, so you had to have an assembler. An assembler, of course, was not free, and I didn't have one. Or maybe they wrote it in Microsoft C, and I had Borland C, and (of course) they weren't compatible. Sometimes they would list the program sort of in binary: line after line of a BASIC program, with lines like 64, 193, 253, 0, 53, 0, 87 that you would type in for hours, hopefully correctly. Running the BASIC program would, if you got it correct, emit a .COM file that you could then run. They did have a rudimentary checksum system built in, but it wasn't even a CRC, so something like swapping two numbers you'd never notice, except when the program would mysteriously hang. Eventually they teamed up with CompuServe to offer a limited slice of CompuServe for the purpose of downloading PC Magazine utilities. This was called PC MagNet. I am foggy on the details, but I believe that for a time you could connect to the limited PC MagNet part of CompuServe for free (after the cost of the long-distance call, that is) rather than paying for CompuServe itself (because, OF COURSE, that also charged you by the minute). So in the early days, I would get special permission from my parents to place a long distance call, and after some nerve-wracking minutes in which we were aware every minute was racking up charges, I could navigate the menus, download what I wanted, and log off immediately. I still, incidentally, mourn what PC Magazine became. As with computing generally, it followed the mass market. It lost its deep technical chops, cut its programming columns, stopped talking about things like how SCSI worked, and so forth. By the time it stopped printing in 2009, it was no longer a square-bound 400-page behemoth, but rather looked more like a copy of Newsweek, but with less depth.
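
Conceptually, those type-in listings worked something like this toy reconstruction (mine, not PC Magazine's actual listing format or checksum scheme):

    # Toy reconstruction of a type-in listing: decimal byte values keyed in by
    # hand, a simple per-line additive checksum, and a raw .COM file written out.
    # Not PC Magazine's real format, just the concept described above.

    typed_in = [
        # (byte values as printed, checksum printed at the end of the line)
        ([180, 76, 205, 33], 494),   # MOV AH,4Ch / INT 21h: a do-nothing DOS exit
    ]

    def assemble(lines):
        blob = bytearray()
        for n, (values, checksum) in enumerate(lines, 1):
            if sum(values) != checksum:        # a plain sum: swap two numbers and
                raise ValueError("typo on line %d" % n)   # it still "verifies"
            blob.extend(values)
        return bytes(blob)

    with open("UTIL.COM", "wb") as f:          # a .COM file is just raw 8086 code
        f.write(assemble(typed_in))

The swap-two-numbers failure mode mentioned above is exactly the weakness of a plain additive sum; a CRC would have caught it.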

Continuing with CompuServe CompuServe was a much larger service than just PC MagNet. Eventually, our family got a subscription. It was still an expensive and scarce resource; I d call it only after hours when the long-distance rates were cheapest. Everyone had a numerical username separated by commas; mine was 71510,1421. CompuServe had forums, and files. Eventually I would use TapCIS to queue up things I wanted to do offline, to minimize phone usage online. CompuServe eventually added a gateway to the Internet. For the sum of somewhere around $1 a message, you could send or receive an email from someone with an Internet email address! I remember the thrill of one time, as a kid of probably 11 years, sending a message to one of the editors of PC Magazine and getting a kind, if brief, reply back! But inevitably I had

The Godzilla Phone Bill Yes, one month I became lax in tracking my time online. I ran up my parents phone bill. I don t remember how high, but I remember it was hundreds of dollars, a hefty sum at the time. As I watched Jason Scott s BBS Documentary, I realized how common an experience this was. I think this was the end of CompuServe for me for awhile.

Toll-Free Numbers I lived near a town with a population of 500. Not even IN town, but near town. The calling area included another town with a population of maybe 1500, so all told, there were maybe 2000 people total I could talk to with a local call (though far fewer numbers, because remember, telephones were allocated by the household). There were, as far as I know, zero modems that were a local call (aside from one that belonged to a friend I met in around 1992). So basically everything was long-distance. But there was a special feature of the telephone network: toll-free numbers. Normally when calling long-distance, you, the caller, paid the bill. But with a toll-free number, beginning with 1-800, the recipient paid the bill. These numbers almost inevitably belonged to corporations that wanted to make it easy for people to call. Sales and ordering lines, for instance. Some of these companies started to set up modems on toll-free numbers. There were few of these, but they existed, so of course I had to try them! One of them was a company called PennyWise that sold office supplies. They had a toll-free line you could call with a modem to order stuff. Yes, online ordering before the web! I loved office supplies. And, because I lived far from a big city, if the local K-Mart didn't have it, I probably couldn't get it. Of course, the interface was entirely text, but you could search for products and place orders with the modem. I had loads of fun exploring the system, and actually ordered things from them and probably actually saved money doing so. With the first order they shipped a monster full-color catalog. That thing must have been 500 pages, like the Sears catalogs of the day. Every item had a part number, which streamlined ordering through the modem.

Inbound FAXes By the 90s, a number of modems became able to send and receive FAXes as well. For those that don't know, a FAX machine was essentially a special modem. It would scan a page and digitally transmit it over the phone system, where it would (at least in the early days) be printed out in real time (because the machines didn't have the memory to store an entire page as an image). Eventually, PC modems integrated FAX capabilities. There still wasn't anything useful I could do locally, but there were ways I could get other companies to FAX something to me. I remember two of them. One was for US Robotics. They had an on-demand FAX system. You'd call up a toll-free number, which was an automated IVR system. You could navigate through it and select various documents of interest to you: spec sheets and the like. You'd key in your FAX number, hang up, and US Robotics would call YOU and FAX you the documents you wanted. Yes! I was talking to a computer (of a sort) at no cost to me! The New York Times also ran a service for a while called TimesFax. Every day, they would FAX out a page or two of summaries of the day's top stories. This was pretty cool in an era in which I had no other way to access anything from the New York Times. I managed to sign up for TimesFax (I have no idea how, anymore) and for a while I would get a daily FAX of their top stories. When my family got its first laser printer, I could then even print these FAXes complete with the gothic New York Times masthead. Wow! (OK, so technically I could print it on a dot-matrix printer also, but graphics on a 9-pin dot matrix is a kind of pain that is a whole other article.)

My own phone line Remember how I discussed that phone lines were allocated per household? This was a problem for a lot of reasons:
  1. Anybody that tried to call my family while I was using my modem would get a busy signal (unable to complete the call)
  2. If anybody in the house picked up the phone while I was using it, that would degrade the quality of the ongoing call and either mess up or disconnect the call in progress. In many cases, that could cancel a file transfer (which wasn t necessarily easy or possible to resume), prompting howls of annoyance from me.
  3. Generally we all had to work around each other
So eventually I found various small jobs and used the money I made to pay for my own phone line and my own long distance costs. Eventually I upgraded to a 28.8Kbps US Robotics Courier modem even! Yes, you heard it right: I got a job and a bank account so I could have a phone line and a faster modem. Uh, isn t that why every teenager gets a job? Now my local friend and I could call each other freely at least on my end (I can t remember if he had his own phone line too). We could exchange files using HS/Link, which had the added benefit of allowing split-screen chat even while a file transfer is in progress. I m sure we spent hours chatting to each other keyboard-to-keyboard while sharing files with each other.

Technology in Schools By this point in the story, we're in the late 80s and early 90s. I'm still using PC-style OSs at home; OS/2 in the later years of this period, DOS or maybe a bit of Windows in the earlier years. I mentioned that they let me work on programming at school starting in 5th grade. It was soon apparent that I knew more about computers than anybody on staff, and I started getting pulled out of class to help teachers or administrators with vexing school problems. This continued until I graduated from high school, incidentally often to my enjoyment, and the annoyance of one particular teacher who, I must say, I was fine with annoying in this way. That's not to say that there was institutional support for what I was doing. It was, after all, a small school. Larger schools might have introduced BASIC or maybe Logo in high school. But I had already taught myself BASIC, Pascal, and C by the time I was somewhere around 12 years old. So I wouldn't have had any use for that anyhow. There were programming contests occasionally held in the area. Schools would send teams. My school didn't really send anybody, but I went as an individual. One of them was run by a local college (but for jr. high or high school students). Years later, I met one of the professors that ran it. He remembered me, and that day, better than I did. The programming contest had problems one could solve in BASIC or Logo. I knew nothing about what to expect going into it, but I had lugged my computer and screen along, and asked him, "Can I write my solutions in C?" He was, apparently, stunned, but said sure, go for it. I took first place that day, leading to some rather confused teams from much larger schools. The Netware network that the school had was, as these generally were, itself isolated. There was no link to the Internet or anything like it. Several schools across three local counties eventually invested in a fiber-optic network linking them together. This built a larger, but still closed, network. Its primary purpose was to allow students to be exposed to a wider variety of classes at high schools. Participating schools had an "ITV room", outfitted with cameras and mics. So students at any school could take classes offered over ITV at other schools. For instance, only my school taught German classes, so people at any of those participating schools could take German. It was an early Zoom room. But alongside the TV signal, there was enough bandwidth to run some Netware frames. By about 1995 or so, this let one of the schools purchase some CD-ROM software that was made available on a file server and could be accessed by any participating school. Nice! But Netware was mainly about file and printer sharing; there wasn't even a facility like email, at least not on our deployment.

BBSs My last hop before the Internet was the BBS. A BBS was a computer program, usually run by a hobbyist like me, on a computer with a modem connected. Callers would call it up, and they'd interact with the BBS. Most BBSs had discussion groups like forums and file areas. Some also had games. I, of course, continued to have that most vexing of problems: they were all long-distance. There were some ways to help with that, chiefly QWK and BlueWave. These, somewhat like TapCIS in the CompuServe days, let me download new message posts for reading offline, and queue up my own messages to send later. QWK and BlueWave didn't help with file downloading, though.

BBSs get networked BBSs were an interesting thing. You d call up one, and inevitably somewhere in the file area would be a BBS list. Download the BBS list and you ve suddenly got a list of phone numbers to try calling. All of them were long distance, of course. You d try calling them at random and have a success rate of maybe 20%. The other 80% would be defunct; you might get the dreaded this number is no longer in service or the even more dreaded angry human answering the phone (and of course a modem can t talk to a human, so they d just get silence for probably the nth time that week). The phone company cared nothing about BBSs and recycled their numbers just as fast as any others. To talk to various people, or participate in certain discussion groups, you d have to call specific BBSs. That s annoying enough in the general case, but even more so for someone paying long distance for it all, because it takes a few minutes to establish a connection to a BBS: handshaking, logging in, menu navigation, etc. But BBSs started talking to each other. The earliest successful such effort was FidoNet, and for the duration of the BBS era, it remained by far the largest. FidoNet was analogous to the UUCP that the institutional users had, but ran on the much cheaper PC hardware. Basically, BBSs that participated in FidoNet would relay email, forum posts, and files between themselves overnight. Eventually, as with UUCP, by hopping through this network, messages could reach around the globe, and forums could have worldwide participation asynchronously, long before they could link to each other directly via the Internet. It was almost entirely volunteer-run.
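
In spirit, that overnight relaying was a store-and-forward queue at every node, something like the highly simplified sketch below; the node names, topology, and message format here are invented for illustration, and real FidoNet packets and routing were far richer:

    # Highly simplified store-and-forward sketch in the spirit of FidoNet's
    # overnight mail exchange. Names and topology are made up for illustration.
    from collections import defaultdict

    uplink = {"leaf_bbs": "hub_bbs", "hub_bbs": "regional_hub"}  # who calls whom
    call_order = ["hub_bbs", "leaf_bbs"]   # order the nodes happen to poll in
    queues = defaultdict(list)             # messages waiting at each node

    def post(node, dest, text):
        queues[node].append({"dest": dest, "text": text})

    def nightly_mail_hour():
        """Each node dials its uplink and hands over everything queued locally."""
        for node in call_order:
            hub = uplink[node]
            while queues[node]:
                msg = queues[node].pop()
                if msg["dest"] == hub:
                    print(hub, "received:", msg["text"])
                else:
                    queues[hub].append(msg)   # not addressed to the hub: forward it on

    post("leaf_bbs", "regional_hub", "hello from a small town in Kansas")
    nightly_mail_hour()   # night 1: the leaf hands the message to its hub
    nightly_mail_hour()   # night 2: the hub relays it onward; delivery is asynchronous

Each hop only ever makes a cheap, scheduled call to its own uplink, which is how worldwide reach was possible on hobbyist budgets.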

Running my own BBS At age 13, I eventually chose to set up my own BBS. It ran on my single phone line, so of course when I was dialing up something else, nobody could dial up me. Not that this was a huge problem; in my town of 500, I probably had a good 1 or 2 regular callers in the beginning. In the PC era, there was a big difference between a server and a client. Server-class software was expensive and rare. Maybe in later years you had an email client, but an email server would be completely unavailable to you as a home user. But with a BBS, I could effectively run a server. I even ran serial lines in our house so that the BBS could be connected from other rooms! Since I was running OS/2, the BBS didn t tie up the computer; I could continue using it for other things. FidoNet had an Internet email gateway. This one, unlike CompuServe s, was free. Once I had a BBS on FidoNet, you could reach me from the Internet using the FidoNet address. This didn t support attachments, but then email of the day didn t really, either. Various others outside Kansas ran FidoNet distribution points. I believe one of them was mgmtsys; my memory is quite vague, but I think they offered a direct gateway and I would call them to pick up Internet mail via FidoNet protocols, but I m not at all certain of this.

Pros and Cons of the Non-Microsoft World As mentioned, Microsoft was and is the dominant operating system vendor for PCs. But I left that world in 1993, and here, nearly 30 years later, have never really returned. I got an operating system with more technical capabilities than the DOS and Windows of the day, but the tradeoff was a much smaller software ecosystem. OS/2 could run DOS programs, but it ran OS/2 programs a lot better. So if I were to run a BBS, I wanted one that had a native OS/2 version limiting me to a small fraction of available BBS server software. On the other hand, as a fully 32-bit operating system, there started to be OS/2 ports of certain software with a Unix heritage; most notably for me at the time, gcc. At some point, I eventually came across the RMS essays and started to be hooked.

Internet: The Hunt Begins I certainly was aware that the Internet was out there and interesting. But the first problem was: how the heck do I get connected to the Internet?

Computer labs There was one place that tended to have Internet access: colleges and universities. In 7th grade, I participated in a program that resulted in me being invited to visit Duke University, and in 8th grade, I participated in National History Day, resulting in a trip to visit the University of Maryland. I probably sought out computer labs at both of those. My most distinct memory was finding my way into a computer lab at one of those universities, and it was full of NeXT workstations. I had never seen or used NeXT before, and had no idea how to operate it. I had brought a box of floppy disks, unaware that the DOS disks probably weren't compatible with NeXT. Closer to home, a small college had a computer lab that I could also visit. I would go there, with my stack of floppies, in summer or whenever the lab wasn't otherwise in use. I remember downloading disk images of FLOSS operating systems: FreeBSD, Slackware, or Debian, at the time. The hash marks from the DOS-based FTP client would creep across the screen as the 1.44MB disk images would slowly download. telnet was also available on those machines, so I could telnet to things like public-access Archie servers and libraries (though not Gopher). Still, FTP and telnet access opened up a lot, and I learned quite a bit in those years.

Continuing the Journey At some point, I got a copy of the Whole Internet User's Guide and Catalog, published in 1994. I still have it. If I hadn't already figured it out by then, I certainly became aware from it that Unix was the dominant operating system on the Internet. The examples in Whole Internet covered FTP, telnet, gopher, all assuming the user somehow got to a Unix prompt. The web was introduced about 300 pages in; clearly viewed as something that wasn't page 1 material. And it covered the command-line www client before introducing the graphical Mosaic. Even then, though, the book highlighted Mosaic's utility as a front-end for Gopher and FTP, and even the ability to launch telnet sessions by clicking on links. But having a copy of the book didn't equate to having any way to run Mosaic. The machines in the computer lab I mentioned above all ran DOS and were incapable of running a graphical browser. I had no SLIP or PPP (both ways to run Internet traffic over a modem) connectivity at home. In short, the Web was something for the large institutional users at the time.

CD-ROMs As CD-ROMs came out, with their huge (for the day) 650MB capacity, various companies started collecting software that could be downloaded on the Internet and selling it on CD-ROM. The two most popular ones were Walnut Creek CD-ROM and Infomagic. One could buy extensive Shareware and gaming collections, and then even entire Linux and BSD distributions. Although not exactly an Internet service per se, it was a way of bringing what may ordinarily only be accessible to institutional users into the home computer realm.

Free Software Jumps In As I mentioned, by the mid 90s, I had come across RMS s writings about free software most probably his 1992 essay Why Software Should Be Free. (Please note, this is not a commentary on the more recently-revealed issues surrounding RMS, but rather his writings and work as I encountered them in the 90s.) The notion of a Free operating system not just in cost but in openness was incredibly appealing. Not only could I tinker with it to a much greater extent due to having source for everything, but it included so much software that I d otherwise have to pay for. Compilers! Interpreters! Editors! Terminal emulators! And, especially, server software of all sorts. There d be no way I could afford or run Netware, but with a Free Unixy operating system, I could do all that. My interest was obviously piqued. Add to that the fact that I could actually participate and contribute I was about to become hooked on something that I ve stayed hooked on for decades. But then the question was: which Free operating system? Eventually I chose FreeBSD to begin with; that would have been sometime in 1995. I don t recall the exact reasons for that. I remember downloading Slackware install floppies, and probably the fact that Debian wasn t yet at 1.0 scared me off for a time. FreeBSD s fantastic Handbook far better than anything I could find for Linux at the time was no doubt also a factor.

The de Raadt Factor Why not NetBSD or OpenBSD? The short answer is Theo de Raadt. Somewhere in this time, when I was somewhere between 14 and 16 years old, I asked some questions comparing NetBSD to the other two free BSDs. This was on a NetBSD mailing list, but for some reason Theo saw it and got a flame war going, which CC d me. Now keep in mind that even if NetBSD had a web presence at the time, it would have been minimal, and I would have not all that unusually for the time had no way to access it. I was certainly not aware of the, shall we say, acrimony between Theo and NetBSD. While I had certainly seen an online flamewar before, this took on a different and more disturbing tone; months later, Theo randomly emailed me under the subject SLIME saying that I was, well, SLIME . I seem to recall periodic emails from him thereafter reminding me that he hates me and that he had blocked me. (Disclaimer: I have poor email archives from this period, so the full details are lost to me, but I believe I am accurately conveying these events from over 25 years ago) This was a surprise, and an unpleasant one. I was trying to learn, and while it is possible I didn t understand some aspect or other of netiquette (or Theo s personal hatred of NetBSD) at the time, still that is not a reason to flame a 16-year-old (though he would have had no way to know my age). This didn t leave any kind of scar, but did leave a lasting impression; to this day, I am particularly concerned with how FLOSS projects handle poisonous people. Debian, for instance, has come a long way in this over the years, and even Linus Torvalds has turned over a new leaf. I don t know if Theo has. In any case, I didn t use NetBSD then. I did try it periodically in the years since, but never found it compelling enough to justify a large switch from Debian. I never tried OpenBSD for various reasons, but one of them was that I didn t want to join a community that tolerates behavior such as Theo s from its leader.

Moving to FreeBSD Moving from OS/2 to FreeBSD was final. That is, I didn t have enough hard drive space to keep both. I also didn t have the backup capacity to back up OS/2 completely. My BBS, which ran Virtual BBS (and at some point also AdeptXBBS) was deleted and reincarnated in a different form. My BBS was a member of both FidoNet and VirtualNet; the latter was specific to VBBS, and had to be dropped. I believe I may have also had to drop the FidoNet link for a time. This was the biggest change of computing in my life to that point. The earlier experiences hadn t literally destroyed what came before. OS/2 could still run my DOS programs. Its command shell was quite DOS-like. It ran Windows programs. I was going to throw all that away and leap into the unknown. I wish I had saved a copy of my BBS; I would love to see the messages I exchanged back then, or see its menu screens again. I have little memory of what it looked like. But other than that, I have no regrets. Pursuing Free, Unixy operating systems brought me a lot of enjoyment and a good career. That s not to say it was easy. All the problems of not being in the Microsoft ecosystem were magnified under FreeBSD and Linux. In a day before EDID, monitor timings had to be calculated manually and you risked destroying your monitor if you got them wrong. Word processing and spreadsheet software was pretty much not there for FreeBSD or Linux at the time; I was therefore forced to learn LaTeX and actually appreciated that. Software like PageMaker or CorelDraw was certainly nowhere to be found for those free operating systems either. But I got a ton of new capabilities. I mentioned the BBS didn t shut down, and indeed it didn t. I ran what was surely a supremely unique oddity: a free, dialin Unix shell server in the middle of a small town in Kansas. I m sure I provided things such as pine for email and some help text and maybe even printouts for how to use it. The set of callers slowly grew over the time period, in fact. And then I got UUCP.

Enter UUCP Even throughout all this, there was no local Internet provider and things were still long distance. I had Internet Email access via assorted strange routes, but they were all strange. And, I wanted access to Usenet. In 1995, it happened. The local ISP I mentioned offered UUCP access. Though I couldn t afford the dialup shell (or later, SLIP/PPP) that they offered due to long-distance costs, UUCP s very efficient batched processes looked doable. I believe I established that link when I was 15, so in 1995. I worked to register my domain, complete.org, as well. At the time, the process was a bit lengthy and involved downloading a text file form, filling it out in a precise way, sending it to InterNIC, and probably mailing them a check. Well I did that, and in September of 1995, complete.org became mine. I set up sendmail on my local system, as well as INN to handle the limited Usenet newsfeed I requested from the ISP. I even ran Majordomo to host some mailing lists, including some that were surprisingly high-traffic for a few-times-a-day long-distance modem UUCP link! The modem client programs for FreeBSD were somewhat less advanced than for OS/2, but I believe I wound up using Minicom or Seyon to continue to dial out to BBSs and, I believe, continue to use Learning Link. So all the while I was setting up my local BBS, I continued to have access to the text Internet, consisting of chiefly Gopher for me.

Switching to Debian I switched to Debian sometime in 1995 or 1996, and have been using Debian as my primary OS ever since. I continued to offer shell access, but added the WorldVU Atlantis menuing BBS system. This provided a return to a more BBS-like interface (by default; shell was still an option) as well as some BBS door games such as LoRD and TradeWars 2002, running under DOS emulation. I also continued to run INN, and ran ifgate to allow FidoNet echomail to be presented in INN as Usenet-like newsgroups, and netmail to be gated to Unix email. This worked pretty well. The BBS continued to grow in these days, peaking at about two dozen total user accounts, and maybe a dozen regular users.

Dial-up access availability I believe it was in 1996 that dial-up PPP access finally became available in my small town. What a thrill! FINALLY! I could now FTP, use Gopher, telnet, and browse the web, all from home. Of course, it was at modem speeds, but still.

(Strangely, I have a memory of accessing the web using WebExplorer from OS/2. I don't know exactly why; it's possible that by this time I had upgraded to a 486 DX2/66 and was able to reinstall OS/2 on the old 25MHz 486, or maybe the timeline in my 25-year-old memories above is a bit off. Or perhaps I made the occasional long-distance call somewhere before I ditched OS/2.)

Gopher sites still existed at this point, and I could access them using Netscape Navigator, which likely became my standard Gopher client then. I don't recall using the UMN text-mode Gopher client locally at that time, though it's certainly possible I did.

The city Starting when I was 15, I took computer science classes at Wichita State University. The first one was a class on C++ in the summer of 1995. I remember worrying about whether I was good enough for it: I was, after all, just out of my freshman year of high school and had never taken the prerequisite C class. I loved it and got an A!

By 1996, I was taking more classes. In 1996 or 1997 I stayed in Wichita during the day because I had more than one class, so what would I do with the time but enjoy the computer lab? The CS department had two of them: one with NCD X terminals connected to a pair of SunOS servers, and another running Windows. I spent most of my time in the Unix lab with the NCDs; I'd use Netscape or pine, write code, enjoy the University's fast Internet connection, and so forth.

In 1997 I graduated from high school, and that summer I moved to Wichita to attend college. As was so often the case, I shut down the BBS at that time. It would be five years until I again dealt with the Internet at home in a rural community.

By the time I moved to my apartment in Wichita, I had stopped using OS/2 entirely; I have no memory of ever having OS/2 there. Along the way, I had bought a Pentium 166, and then the most expensive piece of computing equipment I have ever owned: a DEC Alpha, which, of course, ran Linux.

ISDN I must have used dial-up PPP for a time, but I eventually got a job working for the ISP I had used for UUCP and then PPP. While there, I got a 128Kbps ISDN line installed in my apartment, and they gave me a discount on the service for it. That was around 3x the speed of a modem and, crucially, it was always on and gave me a public IP. No longer did I have to use UUCP; now I got to host my own things! By at least 1998, I was running a web server on www.complete.org, and I had an FTP server going as well.

Even Bigger Cities In 1999 I moved to Dallas, and there got my first broadband connection: an ADSL link at, I think, 1.5Mbps! Now that was something! But it had some reliability problems. I eventually put together a server and had it hosted at the apartment of an acquaintance who had SDSL. Within a couple of years, I had switched to various kinds of proper hosting for it, but that is a whole other article. In Indianapolis, I got a cable modem for the first time, with even higher speeds but prohibitions on running servers on it. Yuck.

Challenges Being non-Microsoft continued to present challenges. Until the advent of Firefox, a web browser was one of the biggest: while Netscape supported Linux on i386, it didn't support Linux on Alpha. I hobbled along with various attempts at emulators, old versions of Mosaic, and so forth. And, until StarOffice was open-sourced as OpenOffice, reading Microsoft file formats was also a challenge, though WordPerfect was briefly available for Linux.

Over the years, I have become used to the Linux ecosystem. Perhaps I use GIMP instead of Photoshop, and digiKam instead of, well, whatever somebody would use on Windows. But I get ZFS, and containers, and so much that isn't available there. Yes, I know Apple never went away and is a thing, but for most of the time period I discuss in this article, at least after the rise of DOS, it was niche compared to the PC market.

Back to Kansas In 2002, I moved back to Kansas, to a rural home near a different small town in the county next to where I grew up. Out there, it was back to dial-up at home, though I had faster access at work. I didn't much care for this, and thus began a 20+-year effort to get broadband in the country.

At first, I got a wireless link, which worked well enough in the winter but had serious problems in the summer when the trees leafed out. Eventually DSL became available locally: highly unreliable, but still, it was something. Then I moved back to the community I grew up in, a few miles from my old home, and again got DSL; it was a bit better. But after some years, being at the end of the DSL run meant I had poor speeds and reliability problems. I eventually switched to various wireless ISPs, an arrangement that continues to the present day; while people in cities can get Gbps service, I can get, at best, about 50Mbps. Long-distance fees are gone, but the speed disparity remains.

Concluding Reflections I am glad I grew up where I did; the strong community has a lot of advantages I don't have room to discuss here. In a number of very real senses, having no local services made things a lot more difficult than they otherwise would have been. However, I also learned a lot through the need to come up with inventive solutions to those challenges.

To this day, I think a lot about computing in remote environments: partially because I live in one, and partially because I enjoy visiting places that are remote enough to have no Internet, phone, or cell service whatsoever. I have written articles like Tools for Communicating Offline and in Difficult Circumstances based on my own personal experience. I instinctively think about making protocols robust in the face of various kinds of connectivity failures, because I experience those failures myself.

(Almost) Everything Lives On In 2002, Gopher turned 10 years old. It had probably been about 9 or 10 years since I had first used it, and it was the first way I got onto the live Internet from my house. It was hard to believe. By that point, I had an always-on Internet link at home and at work. I had my Alpha, and probably also PCMCIA Ethernet for a laptop (many laptops had modems by the 90s as well). Despite its popularity in the early 90s, Gopher was mostly forgotten less than 10 years after it came on the scene and started to unify the Internet. And it was at that moment that I decided to try to resurrect it. The University of Minnesota finally released it under an Open Source license. I wrote the first new Gopher server in years, pygopherd, and introduced Gopher to Debian. Gopher lives on; there are now quite a few Gopher clients and servers out there, newly started post-2002. The Gemini protocol can be thought of as something akin to Gopher 2.0, and it too has a small but blossoming ecosystem.

Archie, the old FTP search tool, is dead, though. Same for WAIS and a number of the other pre-web search tools. But still, even FTP lives on today.

And BBSs? Well, they didn't go away either. Jason Scott's fabulous BBS documentary looks back at the history of the BBS, while Back to the BBS from last year covers the modern BBS scene. FidoNet somehow is still alive and kicking. UUCP still has its place and has inspired a whole string of successors. Some, like NNCP, are clearly direct descendants of UUCP. Filespooler lives in that ecosystem, and you can even see UUCP concepts in projects as far afield as Syncthing and Meshtastic. Usenet still exists, and you can now run Usenet over NNCP just as I ran Usenet over UUCP back in the day (which you can still do as well). Telnet, of course, has been largely supplanted by ssh, but the concept is more popular now than ever, as Linux has made ssh available on everything from the Raspberry Pi to Android.

And I still run a Gopher server, looking pretty much like it did in 2002. This post also has a permanent home on my website, where it may be periodically updated.
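For anyone wondering why Gopher has been so easy to keep alive: the protocol itself is nearly trivial. A client opens a TCP connection, sends a selector terminated by CRLF, and the server replies and closes the connection; menus are tab-separated lines ending with a lone dot. Here is a minimal sketch in Python, purely illustrative (this is not pygopherd, and the hostname, port, and menu contents are placeholders I made up):

    # Minimal Gopher-style server sketch (in the spirit of RFC 1436), for
    # illustration only. It serves one hard-coded menu; a real server such
    # as pygopherd maps selectors onto a document tree, item types, etc.
    import socketserver

    HOST, PORT = "localhost", 7070  # 70 is the standard port, but needs root

    MENU = (
        "iWelcome to a tiny Gopher server\tfake\t(NULL)\t0\r\n"
        "0About this server\t/about.txt\tlocalhost\t7070\r\n"
        ".\r\n"                     # a lone dot terminates a menu
    )

    class GopherHandler(socketserver.StreamRequestHandler):
        def handle(self):
            # The client sends a single selector line terminated by CRLF.
            selector = self.rfile.readline().strip().decode("utf-8", "replace")
            if selector in ("", "/"):
                self.wfile.write(MENU.encode("utf-8"))
            elif selector == "/about.txt":
                self.wfile.write(b"Just a demo.\r\n.\r\n")
            else:
                # Item type 3 signals an error to the client.
                self.wfile.write(b"3Not found\terror\t(NULL)\t0\r\n.\r\n")

    if __name__ == "__main__":
        with socketserver.TCPServer((HOST, PORT), GopherHandler) as server:
            server.serve_forever()

You can poke at something like this with nothing more than telnet or nc, which is a big part of the protocol's enduring charm.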
