Search Results: "metal"

4 January 2022

Jonathan McDowell: Upgrading from a CC2531 to a CC2538 Zigbee coordinator

I previously set up a CC2531 as a Zigbee coordinator for my home automation. This has turned out to be a good move, with the 4-gang wireless switch being particularly useful. However the range of the CC2531 is fairly poor; it has a simple PCB antenna. It's also a very basic device. I set about trying to improve the range and scalability and settled upon a CC2538 + CC2592 device, which features an MMCX antenna connector. This device also has the advantage that it's ARM based, which I'm hopeful means I might be able to build some firmware myself using a standard GCC toolchain. For now I fetched the JetHome firmware from https://github.com/jethome-ru/zigbee-firmware/tree/master/ti/coordinator/cc2538_cc2592 (JH_2538_2592_ZNP_UART_20211222.hex) - while it's possible to do USB directly with the CC2538 my board doesn't have those bits, so going the external USB UART route is easier. The device had some existing firmware on it, so I needed to erase this to force a drop into the boot loader. That meant soldering up the JTAG pins and hooking it up to my Bus Pirate for OpenOCD goodness.
OpenOCD config
source [find interface/buspirate.cfg]
buspirate_port /dev/ttyUSB1
buspirate_mode normal
buspirate_vreg 1
buspirate_pullup 0
transport select jtag
source [find target/cc2538.cfg]
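With that saved (as, say, cc2538-buspirate.cfg; the filename here is illustrative, not from the original setup), OpenOCD can be started in one terminal, where it listens on its default telnet port 4444:

$ openocd -f cc2538-buspirate.cfg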
Steps to erase
$ telnet localhost 4444
Trying ::1...
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Open On-Chip Debugger
> mww 0x400D300C 0x7F800
> mww 0x400D3008 0x0205
> shutdown
shutdown command invoked
Connection closed by foreign host.
(The two mww writes poke the CC2538's flash controller registers to trigger an erase of the main flash; the exact register layout is documented in TI's CC2538 user guide.) At that point I can switch to the UART connection (on PA0 + PA1) and flash using cc2538-bsl:
$ git clone https://github.com/JelmerT/cc2538-bsl.git
$ cc2538-bsl/cc2538-bsl.py -p /dev/ttyUSB1 -e -w -v ~/JH_2538_2592_ZNP_UART_20211222.hex
Opening port /dev/ttyUSB1, baud 500000
Reading data from /home/noodles/JH_2538_2592_ZNP_UART_20211222.hex
Firmware file: Intel Hex
Connecting to target...
CC2538 PG2.0: 512KB Flash, 32KB SRAM, CCFG at 0x0027FFD4
Primary IEEE Address: 00:12:4B:00:22:22:22:22
    Performing mass erase
Erasing 524288 bytes starting at address 0x00200000
    Erase done
Writing 524256 bytes starting at address 0x00200000
Write 232 bytes at 0x0027FEF8
    Write done
Verifying by comparing CRC32 calculations.
    Verified (match: 0x74f2b0a1)
I then wanted to migrate from the old device to the new without having to re-pair everything. So I shut down Home Assistant and backed up the CC2531 network information using zigpy-znp (which is already installed for Home Assistant):
python3 -m zigpy_znp.tools.network_backup /dev/zigbee > cc2531-network.json
I copied the backup to cc2538-network.json and modified the coordinator_ieee to be the new device's MAC address (rather than end up with 2 devices claiming the same MAC if/when I reuse the CC2531) and did:
python3 -m zigpy_znp.tools.network_restore --input cc2538-network.json /dev/ttyUSB1
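(The coordinator_ieee edit can be done in any editor; as a sketch of scripting it, assuming jq is installed and that coordinator_ieee is a top-level key in the backup JSON, as the description above suggests, something like:

$ jq '.coordinator_ieee = "00:12:4B:00:22:22:22:22"' cc2531-network.json > cc2538-network.json

using the new device's IEEE address as reported by cc2538-bsl during flashing.)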
The old CC2531 needed to be unplugged first, otherwise I got a RuntimeError: Network formation refused, RF environment is likely too noisy. Temporarily unscrew the antenna or shield the coordinator with metal until a network is formed. error. After that I updated my udev rules to map the CC2538 to /dev/zigbee (a sketch of the rule follows the SQL session below) and restarted Home Assistant. To my surprise it came up and detected the existing devices without any extra effort on my part. However that resulted in 2 coordinators being shown in the visualisation, with the old one turning up as unk_manufacturer. Fixing that involved editing /etc/homeassistant/.storage/core.device_registry and removing the entry which had the old MAC address, removing the device entry in /etc/homeassistant/.storage/zha.storage for the old MAC, and then finally firing up sqlite to modify the Zigbee database:
$ sqlite3 /etc/homeassistant/zigbee.db
SQLite version 3.34.1 2021-01-20 14:10:07
Enter ".help" for usage hints.
sqlite> DELETE FROM devices_v6 WHERE ieee = '00:12:4b:00:11:11:11:11';
sqlite> DELETE FROM endpoints_v6 WHERE ieee = '00:12:4b:00:11:11:11:11';
sqlite> DELETE FROM in_clusters_v6 WHERE ieee = '00:12:4b:00:11:11:11:11';
sqlite> DELETE FROM neighbors_v6 WHERE ieee = '00:12:4b:00:11:11:11:11' OR device_ieee = '00:12:4b:00:11:11:11:11';
sqlite> DELETE FROM node_descriptors_v6 WHERE ieee = '00:12:4b:00:11:11:11:11';
sqlite> DELETE FROM out_clusters_v6 WHERE ieee = '00:12:4b:00:11:11:11:11';
sqlite> .quit
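For reference, the udev mapping mentioned above can look something like the following sketch; the vendor/product IDs below are illustrative placeholders (check what your USB UART reports with lsusb or udevadm info), not values from this setup:

# /etc/udev/rules.d/99-zigbee.rules
SUBSYSTEM=="tty", ATTRS{idVendor}=="10c4", ATTRS{idProduct}=="ea60", SYMLINK+="zigbee"

After a udevadm control --reload and replugging the adapter, it appears as /dev/zigbee regardless of which ttyUSB number it gets.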
So far it all seems a bit happier than with the CC2531; I've been able to pair a light bulb that was previously detected but would not integrate, which suggests the range is improved. (This post is another in the set of things I should write down so I can just grep my own website when I forget what I did to do 'foo'.)

31 December 2021

Russell Coker: Links December 2021

Wired magazine has many short documentary films on YouTube; this one about How Photography is Affecting Our Brains is particularly good [1].

Matt Blaze wrote an informative blog post about Faraday cages for phones [2]. It seems that the commercial shielded bags are all pretty good, while doing it yourself with aluminium foil may get similar results or may get much worse results, with no obvious difference in the quality of the wrapping. Aluminium foil doesn't protect that well and doesn't protect consistently. A metal biscuit tin performed quite well and consistently, so that's a cheap option for reducing signals.

Umair Haque wrote an insightful article about the single word that describes most of the problems the world faces right now [3].

Forbes has an informative article about the early days of the Ford company when they doubled wages; it proves that they didn't do so to enable workers to afford cars but to avoid staff turnover (which is expensive) [4]. Also the Ford company had a fascistic approach to employees, controlling what they were allowed to do in their spare time if they wanted the bonus payment. The wages weren't doubled; there was a bonus payment that would double the salary if the employee was eligible for the bonus. One thing that Forbes gets wrong is the claim that only having higher pay than other companies provided a benefit and that a higher minimum wage wouldn't. The problem with that idea is that a higher minimum wage would discourage people from having multiple jobs and allow more families to not have the mother working (a condition for a man to get the Ford bonus was for his wife to not work).

The WSJ has an interesting article about Intel's datacenter for running all the different configurations of CPUs that they have supported over the last 10 years for security tests [5]. My Thinkpad (which is less than 10yo) is vulnerable to one of the SPECTRE family of exploits as Intel hasn't released microcode to fix it; getting fixed microcode out for all the systems from major vendors like Lenovo would be a good idea if they want to improve their security.

NPR has an interesting article about the correlation between support for Trump in counties of the US and lack of vaccination and Covid19 deaths [6]. No surprises, but it's good to see the graphs.

Cory Doctorow wrote an interesting article on the lack of slack in the current American education system [7]. It's not that bad in Australia, but we are unfortunately moving in the American direction.

Teen Vogue has an insightful article about the problems with the focus on resilience [8]; while resilience is good, we should make it a higher priority to avoid putting people in situations where they need to be resilient than to encourage resilience.

29 December 2021

Noah Meyerhans: When You Could Hear Security Scans

Have you ever wondered what a security probe of a computer sounded like? I'd guess probably not, because on the face of it that doesn't make a whole lot of sense. But there was a time when I could very clearly discern the sound of a computer being scanned. It sounded like a small mechanical heart beat: Click-click click-click click-click

Prior to 2010, I had a computer under my desk with what at the time were not unheard-of properties: its storage was based on a stack of spinning metal platters (a now-antiquated device known as a "hard drive"), and it had a publicly routable IPv4 address with an unfiltered connection to the Internet. Naturally it ran Linux and an ssh server. As was common in those days, service logging was handled by a syslog daemon. The syslog daemon would sort log messages based on various criteria and record them somewhere. In most simple environments, "somewhere" was simply a file on local storage. When writing to a local file, syslog daemons can optionally be configured to use the fsync() system call to ensure that writes are flushed to disk. Practically speaking, what this meant is that a page of disk-backed memory would be written to the disk as soon as an event occurred that triggered a log message. Because of the potential performance implications, fsync() was not typically enabled for most log files. However, due to the more sensitive nature of authentication logs, it was often enabled for /var/log/auth.log.

In the first decade of the 2000s, there was a fairly unsophisticated worm loose on the Internet that would probe sshd with some common username/password combinations. The worm would pause for a second or so between login attempts, most likely in an effort to avoid automated security responses. The effect was that a system being probed by this worm would generate a disk write every second, with a very distinct audible signature from the hard drive.

I think this situation is a fun demonstration of a side-channel data leak. It's primitive and doesn't leak very much information, but it was certainly enough to make some inference about the state of the system in question. Of course, side-channel leakage issues have been a concern for ages, but I like this one for its simplicity. It was something that could be explained and demonstrated easily, even to somebody with a relatively limited understanding of how computers work, unlike, for instance, measuring electromagnetic emanations from CPU power management units.

For a different take on the sounds of a computing infrastructure, Peep (The Network Auralizer) won an award at a USENIX conference long, long ago. I'd love to see a modern deployment of such a system. I'm sure you could build something for your cloud deployment using something like AWS EventBridge or Amazon SQS fairly easily. For more on research into actual real-world side-channel attacks, you can read A Survey of Microarchitectural Side-channel Vulnerabilities, Attacks and Defenses in Cryptography or A Survey of Electromagnetic Side-Channel Attacks and Discussion on their Case-Progressing Potential for Digital Forensics.
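A concrete footnote on the fsync() detail above: in the traditional syslog.conf format used by sysklogd (and kept by rsyslog for compatibility), a leading "-" on the file path disables the per-message sync, and omitting it forces one. A minimal sketch, in the style of the Debian defaults of that era:

# /etc/syslog.conf (excerpt)
# no leading "-": sync to disk after every message, hence the audible click
auth,authpriv.*                 /var/log/auth.log
# leading "-": buffered writes, no per-message sync
*.*;auth,authpriv.none          -/var/log/syslog

Every failed ssh login logged to auth.log thus became an immediate, and audible, disk write.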

12 December 2021

Andrej Shadura: Coffee gear upgrade

Two weeks ago I decided to make myself a combined birthday and Christmas present and upgrade my coffee gear. I got my first espresso machine back in 2013; it was a cheap Saeco Philips Poemia, which made reasonably drinkable coffee, but not being able to make good coffee made me increasingly unhappy about it. However, since it worked, I wasn't motivated enough to change anything until it stopped working. One day the nut holding the shower screen broke, and I couldn't replace it. Having no coffee machine is arguably worse than having a mediocre one, so I started looking for a new one in the budget range. Having spent about two months reading reviews of all sorts of manual espresso machines, I realised the best thing I could probably do for the money I was willing to spend at the time was to buy a second-hand Gaggia Classic. Which is what I did: I paid €260 to a person who apparently decided they prefer to press a button to get their espresso rather than have to prepare it themselves. My first attempts at making espresso weren't very successful, as my hand grinder couldn't produce the right grind for espresso (without using pressurised baskets), so I quickly upgraded it to the €50 De Longhi electric grinder, which was much better for espresso.
Gaggia Classic 2015 and De Longhi grinder
This setup worked for me for nearly 4 years, but over time the Gaggia started malfunctioning. See, this particular Gaggia Classic is the 2015 model, which resulted from the overhaul of the design after Gaggia was acquired by Philips. They replaced the boiler, changed the exterior design a bit, and, importantly for me, replaced the fully metal group head with the metal-and-plastic version typically found in cheap espresso machines like my old Poemia.
The Gaggia Classic 2015 group head
The trouble with this one is that the plastic bit (barely visible in the picture, but it's inserted into the notches on the sides of the group head) gets damaged over time, especially when the portafilter is inserted very tightly. The more damaged it gets, the tighter it is necessary to insert the portafilter to avoid leakage, the more damaged it gets, and so on. At one point, the Gaggia was leaking water every time I made coffee, affecting the quality of the brew and making a mess in the kitchen. I made a mistake and removed the plastic bit, only to realise it cannot be purchased separately and nobody knows how to put it back once it's been removed; I ended up paying more than a hundred euro to replace the group head as a whole. Once I got the Gaggia back, I became so conscious of the potential damage I could do by overengaging the portafilter that I decided it was probably time to get a new coffee machine.
Gaggia Classic 2015 vs 2019
The makers of Gaggia listened to the critics and undid the 2015 changes to the Gaggia Classic design, reverting to the previous one and fixing it: they basically merged the fixes many owners of the old Gaggia had made themselves. The group head is now without any plastic, so I don't have to worry that much about damaging it accidentally. A friend pointed out that my grinder was probably not good enough and recommended a couple of models to me; I checked Kev's Coffee Blog and found a grinder, the Sage Dose Control Pro, which was available on sale in my local shop for a reasonable price. I've also got a portafilter holder to make tamping more comfortable; I used to tamp against an edge of the sink:
The final setup:
Gaggia Classic 2019 and Sage BCG600 Dose Control Pro
What I learnt from this is that the grinder does indeed make a huge difference. I am now able to consistently produce brews I would only occasionally get with the old De Longhi grinder. There is one downside to the new grinder, though. Happy with my new purchase, I went to read a grinder review at Alza.sk and thought to myself: lucky me, my grinder is absolutely quiet! And then I realised that the noise in my kitchen is not, in fact, produced by the fridge, but by the grinder. Well, a cheap switched plug solved the issue completely (I wish sockets here each had a switch like they usually do in the UK!)

Switched plug

5 November 2021

Jonathan Dowland: 25 things I would like to 3D print

Last year I started collecting ideas of things I would like to 3D print one day, on Twitter. Twitter is fundamentally ephemeral, so I'll collect them here instead. I got up to 14 items on Twitter, and now I'm up to 25. I don't own a 3D printer, but I have access to one at the work office. Perhaps this list is my subconscious trying to convince me to buy one. What am I missing? What else should I be thinking of printing? Let me know!
  1. Some kind of 45° leaning prong to dry bottles and flasks on
  2. A tea tray and coasters
  3. a replacement prop arm/foot for my computer keyboard (something like this but for the Lenovo Ultranav)
  4. some attempted representation of Borges' Library of Babel, a la @jwz; The Library of Babel, again
  5. an exploration of the geometry of Susanna Clarke's Piranesi
  6. further iterations of my castle
  7. Small tins to keep loose-leaf tea in
  8. Who am I kidding, bound to be a map from DOOM. E1M1 perhaps, or something more regular (MAP07? E2M8?) See also this amazing print of Quake 3: Arena's "Camping Grounds"
  9. replacement bits for kids' toy sets, e.g. a bolt with long shank from the Early Learning Centre Build It Deluxe Set, without all 4 of which you can't build much of anything
  10. A stand for a decorative Christmas bauble (kid's hand print on it). A roll of Sellotape works pretty well in the meantime
  11. DIY bits-and-bobs sorter/storage (nuts and bolts etc)
  12. A space ship from Elite/Frontier. Probably a Cobra mk3 or maybe a Viper mk2. In a glow-in-the-dark PLA which I'd overpaint with gunmetal except for the fuselage
  13. A watch stand/holder/storage thing. Except it would look nicer in wood (and I'm more inclined to get rid of all but one watch instead)
  14. Little tabbed 7" dividers for vinyl records, with A-Z cut into the tabs. 12" ones might be a stretch (something like this)
  15. A low-profile custom trackpoint cover like the ones by SaotoTech (e.g.)
  16. A vinyl record. (Not sure that any 3D printer I would have access to would have the necessary resolution. I haven't done any research yet.)
  17. A free-standing inclined vinyl record display stand (e.g.)
  18. A "Settlers of Catan" set. I've got the travel edition which is great but it would be nicer to have a larger-sized set. There are some things I really like about the travel set that the full-size set lacks; so designing and printing a larger set myself could incorporate them. Also I don't feel inclined to buy the full-size set for 50 or so to end up with essentially the same game I already bought. No doubt I'd spent at least that much in PLA.
  19. Little kids' trinkets. Pacman ghosts, that sort of thing. Whatever my daughters come up with next.
  20. Lego storage/sorters.
  21. Some kind of lenticular picture. Perhaps a gift or Christmas card combined in one.
  22. A bracket to install a Gotek drive in my Amiga 500 (e.g.). I've managed without but the fit isn't great.
  23. An attempt at using the 3D printer for 2D drawing. I would never get the same kind of quality results you can get from a proper plotter, but still. Take a look at some proper plotter art!
  24. Garden decorations. I like the idea of porous geometric shapes that you can plant mosses or ferns into, but also things which might be taken over and "used" by nature in ways I hadn't thought of.
  25. Floor plans / 3D plans of my house (including variations if I remodelled)

5 August 2021

Ian Wienand: Lyte Portable Projector Investigation

I recently picked up this portable projector for a reasonable price. It might also be called an "M5" projector, but I can not find one canonical source. In terms of projection, it performs as well as a 5cm cube could be expected to. They made a poor choice to eschew an external video input, which severely limits the device's usefulness. The design is nice, but getting into it is quite an effort. There is no wasted space! After pulling off the rubber top covering and base, you have to pry the decorative metal shielding off all sides to access the screws to open it. This almost unavoidably bends it, so it will never quite be the same. To save you having to bother, some photos:

Lyte Projector

It is fairly locked down. I found a couple of ways in; installing the Disney+ app from the "Aptoide TV" store it ships with does not work, but the app prompts you to update it, which sends you to an action where you can then choose to open the Google Play store. From there, you can install things that work on its Android 7 OS. This allowed me to install a system-viewer app which revealed its specs. Another weird thing I found was that if you go into the custom launcher "About" page under settings and keep clicking the "OK" button on the version number, it will open the standard Android settings page. From there you can enable developer options. I could not get it connecting to ADB, although you perhaps need a USB OTG cable, which I didn't have. It has some sort of built-in Miracast app that I could not get anything to detect. It doesn't have the native Google app store; most of the apps in the provided system don't work. Somehow it runs Netflix via a webview or similar, which is hard to use. If it had HDMI input it would still be a useful little thing to plug things into. You could perhaps sideload some sort of apps to get the screensharing working, or it can play media files off a USB stick or network shares. I don't believe there is any practical way to get a more recent Android on this, leaving it on an accelerated path to e-waste for all but the most boutique users.

4 August 2021

Petter Reinholdtsen: Mechanic's words in five languages, English, Norwegian and Northern Sámi editions

Almost thirty years ago, some forward-looking teachers at Samisk videregående skole og reindriftsskole, teaching metal work and Northern Sámi, decided to create a list of words used in Northern Sámi metal work. After almost ten years this resulted in a dictionary database, published as the book "Mekanihkkársánit : Mekanikerord = Mekaanisen alan sanasto = Mechanic's words" in 1999. The story of this work is available from the pen of Svein Lund, one of the leading actors behind this effort. They even got the dictionary approved by the Sámi Language Council as the recommended metal work words to use. Fast forward twenty years: I came across this work when I recently became interested in metal work, and started watching educational and funny videos on the topic, like the ones from mrpete222 and This Old Tony. But they all talk English, and I wanted to know what the tools and techniques they used were called in Norwegian. Trying to track down a good dictionary from English to Norwegian, after much searching, I came across the database of words created almost thirty years ago, with translations into English, Norwegian, Northern Sámi, Swedish and Finnish. This gave me a lot of the Norwegian phrases I had been looking for. To make it easier for the next person trying to track down a good Norwegian dictionary for the metal worker, and because I knew the person behind the database from my Skolelinux / Debian Edu days, I decided to ask if the database could be released to the public without any usage limitations, in other words as a Creative Commons licensed data set. And happily, after consulting with the Sámi Parliament of Norway, the database is now available with the Creative Commons Attribution 4.0 International license from my gitlab repository. The dictionary entries look slightly different depending on the language in focus. This is the same entry in the different editions.

English
lathe
dreiebenk (nb) várve, várvenbeaŋka, jorahanbeaŋka, vátnanbeaŋka (se) svarv (sv) sorvi (fi)
Norwegian
dreiebenk
lathe (en) várve, várvenbeaŋka, jorahanbeaŋka, vátnanbeaŋka (se) svarv (sv) sorvi (fi)
(nb): sponskjærande bearbeidingsmaskin der ein med skjæreverktøy lausgjør spon frå eit roterande arbetsstykke
Northern Sámi
várve, várvenbeaŋka, jorahanbeaŋka, vátnanbeaŋka
dreiebenk (nb) lathe (en) svarv (sv) sorvi (fi)
(se): mašiidna mainna čuohppá vuolahasaid jorri bargoávdnasis
(nb): sponskjærande bearbeidingsmaskin der ein med skjæreverktøy lausgjør spon frå eit roterande arbetsstykke
The database included term descriptions in both Norwegian and Northern Sámi, but not English. Because of this, the Northern Sámi edition includes both descriptions, the Norwegian edition includes the Norwegian description, and the English edition lacks a description. Once the database was available without any usage restrictions, and armed with my experience in publishing books, I decided to publish a Norwegian/English dictionary as a book using the database, to make the data set available also on paper and as an ebook. Further into the project, it occurred to me that I could just as easily make an English dictionary, and talking to Svein and concluding that it was within reach, I decided to make a Northern Sámi dictionary too. Thus I suddenly find myself publishing a Northern Sámi dictionary, even though I do not understand the language myself. I hope it will be well received, and can help revive the impressive work done almost thirty years ago to document the vocabulary of metal workers. If I get some help, I might even extend it with some of the words I find missing, like collet, rotary broach, carbide, knurler, arbor press and others. But the first edition is built from a lightly edited version of the original database, with no new entries added. If you would like to check it out, visit my list of published books and consider buying a paper or ebook copy from lulu.com. The paper edition is only available in hardcover to increase its durability in the workshop. I am very happy to report that in the process, and thanks to help from both Svein Lund and Børre Gaup who understand the language, the docbook tools I use to create books, dblatex and docbook-xsl, now include support for Northern Sámi. Before I started, these lacked the needed locale settings for this language, but now the patches are included upstream. As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

22 July 2021

Junichi Uekawa: Added memory to ACER Chromebox CXI3 (fizz/sion).

Added memory to the ACER Chromebox CXI3 (fizz/sion). Got 2 16GB SO-DIMMs and installed them. I could not find correct information on how to open this box on the internet; the guides seem to describe similar boxes from HP or ASUS, which have simpler opening procedures. I had to pry out the 4 rubber pieces at the bottom, and then remove the 4 screws. Then I could pry open the front and back panels by applying force where the screws were. In the front panel there are two more, shorter, screws that need to be removed; after taking out those two screws (that's 4+2), I could open the box into two pieces. Be careful, they are still connected; I think there's an audio cable. After opening, you can access the memory slots. Pull open the metal clips on the left and right hand sides of the memory module so that it raises up. Make sure the metal clips latch closed when you insert the new memory; that should signify the memory is in place. I didn't do that at the beginning and the machine didn't boot. So far so good. No longer using zram.

21 June 2021

Shirish Agarwal: Accessibility, Freenode and American imperialism.

Accessibility

This is perhaps one of the strangest ways and yet also perhaps the straightest way to start the blog post. For the past weeks/months, I have had a strange experience. I have been using a Logitech wireless keyboard and mouse for almost a decade. Between us, my mum and I have a single desktop computer, so we take turns on it. At times, however, the system sits idle and goes to low-power/sleep mode after 30 minutes. Then, when you want to come back, you obviously have to give your login credentials. At times, the keyboard refuses to input any data on the login screen. Interestingly, the mouse still functions. Much more interesting is the fact that both the mouse and the keyboard use the same transceiver to send data. And I had changed batteries to ensure it was not a power issue, but still no input :(. While my mother uses and used the power switch (I did teach her how to hold it for a few seconds and then let it go), I tried another thing for myself. Using the mouse I logged off the session, thinking perhaps some race condition or something in the session was not letting the keystrokes be inputted into the system, and that having a new session might resolve it. But this was not to be. Luckily, on the screen you do have the option to reboot or power off. I did a reboot and, lo and behold, the system was able to input characters again. And this has happened time and again. I tried to find GOK, having failed to remember that GOK had been retired. I looked up the accessibility page on the Debian wiki. Very interesting, very detailed, but sadly it did not and does not provide the backup I needed. I tried out florence but found that the app is buggy. Moreover, the instructions provided on the lightdm screen do not work: I do not get the on-screen keyboard even though I followed the instructions. Just to be clear, this is all on Debian testing, which is gonna be Debian stable soonish. I even tried the same with xvkbd but to no avail. I do use mate as my desktop manager, so maybe the instructions need some refinement?

$ cat /etc/lightdm/lightdm-gtk-greeter.conf | grep keyboard
# a11y-states = states of accessibility features: 'name' - save state on exit, '-name' -
# disabled at start (default value for unlisted), '+name' - enabled at start. Allowed names: contrast, font, keyboard, reader.
keyboard=xvkbd no-gnome focus &
# keyboard-position = x y[;width height] ('50%,center -0;50% 25%' by default) Works only for onboard
#keyboard=

Interestingly, Debian does provide two more on-screen keyboards, matchbox as well as onboard, which comes from Ubuntu; I have both of them installed. I find xvkbd to be enough for my work, the only issue being that I cannot get it from the accessibility drop-down box at the login screen. Just to make sure that I had not switched to the GNOME display manager, I did run

$ sudo dpkg-reconfigure gdm3

only to find out that I am indeed running lightdm. So I am a bit confused about why it doesn't come up as an option when I have the login window/login manager running. FWIW I do run metacity as the window manager, as it plays nice with all the various desktop environments I have, almost all of them. So this is where I'm stuck. If I do get any help, I would probably also add those instructions to the wiki page, so it would be convenient for the next person who comes along with the same issue. I also need to figure out some way to know whether there is some race condition or something else happening; I have no clue how I would go about that without a whole lot of noise. I am sure there are others who may have more of an idea. FWIW, I did search unix.stackexchange as well as reddit/debian to see if I could find any meaningful posts, but came up empty.
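For anyone hitting the same wall, here is a minimal sketch of what the greeter config might look like for an on-screen keyboard. It uses onboard (the keyboard the bundled comments document for keyboard-position) and the '+keyboard' state from the a11y-states comment quoted above; these lines are my reading of those comments, not a tested configuration:

[greeter]
# command used to launch the on-screen keyboard
keyboard=onboard
# '+keyboard' should enable the on-screen keyboard at start
a11y-states=+keyboard

In theory, restarting lightdm (sudo systemctl restart lightdm) should then bring up onboard on the greeter.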

Freenode

I had not been using IRC for quite some time. The reasons have been multiple issues with Riot (now Element) taking up the whole space on my desktop. I got alerted to the whole thing about a week after it all went down. Somebody messaged me via DM. I *think* I put up a thread or a mini-thread about IRC or something in response to somebody praising telegram/WhatsApp or one of those apps. That probably triggered the DM. It took me a couple of minutes to hit upon this. I was angry and depressed, seeing the behavior of the new overlords of freenode. I did see that a lot of channels moved over to Libera. It was also interesting to see that some communities were thinking of moving to some other obscure platform, which again could be held hostage to the same thing. One could argue one way or the other, but that would be tiresome, and the fact is any network needs a lot of help to be grown and nurtured, whether it is online or offline. I also saw that Libera was using the software Solanum, which is ircv3 compliant. Having done this initial investigation, it was time to move to an IRC client. The Libera documentation is pretty helpful in telling which IRC clients would be good with their network. So I first tried hexchat. I installed it and tried to add the Libera server credentials, but it didn't work. I did see that they had fixed the bug in sid/unstable and now it's in testing. But at the time the fix was only in sid, and I wanted something which just ran the damn thing. I chanced upon quassel. I had played around with quassel quite a number of times before, so I knew I could use it. Hence, I installed it and was able to use it on the first try. I did use the encrypted server and just had to tweak some settings before I could use it, with some help from their documentation. Although, I have to say that even quassel upstream needs to get its documentation in order. It is just all over the place, and they haven't put any effort into streamlining the documentation so that finding things becomes easier. But that can be said of many projects upstream. There is one thing though that all of these IRC clients lack: a password manager. Until that is fixed it will suck, because you need another secure place to put your password(s). You either put it on your desktop somewhere (insecure) or store it in the cloud somewhere (somewhat secure, but again you need to remember that password); whatever you do is extra work. I am sure there will be a day when authenticating with Nickserv will be an automated task and people can just get on talking on channels and figuring out how to be part of the various communities. As can be seen, even now there is a bit of a learning curve for both newbies and people who know a bit about systems to get it working. Now, I know there are a lot of things that need to be fixed on the anonymity and security side, if I put on that sort of hat. For e.g., wouldn't it be cool if either the IRC client or one of its add-ons gave throwaway usernames and passwords? The passwords would be complex. This would make things easier for those who are paranoid about security, and many are and would be. As an example we can look at Fuchs. Now if the gentleman or lady is working in a professional capacity and someone were to come to know their real identity and perceive, rightly or wrongly, the role of that person, it will affect their career. Now, should it? I am sure a lot of people would be divided on the issue.
Personally, as far as I am concerned, I would say no, because whether right or wrong, whatever they were doing they were doing on their own time. Not on company time. So it doesn't concern the company at all. If we were to let companies police behavior outside working hours, individuals would be in a lot of trouble. Although, I have to say that is a trend that has been seen in companies that are firing people either on the left or right. A recent example that comes to mind is Emily Wilder, who was fired by the Associated Press. Interestingly, she was interviewed by Democracy Now, and it did come out that she is a Jew. As can be seen and understood, there is a lot of nuance to her story and not the way she was fired. It doesn't leave a good taste in the mouth, but then nobody likes getting fired. On a few forums, people shared stories of people getting fired from their jobs because they were dancing (cops). Again, it all depends; for me again, hats off to anybody who feels like dancing or whatever, because there are just so many depressing stories all around.

Banned and FOE

On a few forums I was banned because I was talking about Brexit and American imperialism, both of which seem to ruffle a few feathers in quite a few places. For instance, many people for obvious reasons do not like this video

Now I'm sorry I have not been able to give invidious links for the past few months. The reason being that invidious itself went through some changes, and the changes are good and bad. For e.g. now you need to share your Google ID with a third party, which at least to my mind is not a good idea. But that probably is another story altogether, and it probably needs its own place. Coming back to the video itself, it was shared by Anthony Hazard and the title is "The Atlantic slave trade: What too few textbooks told you". I did see this video quite a few years ago and still find it hard to swallow that tens of millions of Africans were brought as slaves to the Americas, although to be fair it does start with the Spanish settlement in the land which would be called the U.S., and they brought slaves with them. They even enslaved the American natives, i.e. people from the different tribes which made up America at that point. One point to note is that the U.S. got its independence on July 4, 1776, so all the people before that were called European settlers, for want of a better word. Some or many of these European settlers were convicts who were sent from the UK. But as shared in the article, that discussion will only happen when the U.S. itself is mature and open enough for it. Going back to the original point though, these European or American settlers brought a lot of slaves from Africa. The video also sheds light on some of the cruelty the Europeans or Americans inflicted on the slaves, men and women, in different ways. The most revelatory part, though, which I also forget many a time, is that because a lot of people were taken from Africa, and many of them men, it led to imbalances in African societies, not just in marriage but in economics in general. It also led to the development of a theory called Critical Race theory, which tries to paint Africans as an inferior race, otherwise how would Christianity work when their own good book says "All men are born equal"? That does in part explain why the African countries are still so far behind their European or American counterparts. But Africa can still be proud, as they are richer than us (yup, India). Sadly, I don't think America is ready to have that conversation anytime soon, or ever. And if it were to, it would have to outdo any truth and reconciliation commission the world has seen. A mere apology or two would just not cut it. The problems of America sadly are not limited to just Africans but also the natives of the land, for e.g. the Lakota people. In 1868, they signed a treaty stating "we will give the land back to the Lakota people forever", but then the gold rush happened. In 2007, when the Lakota stated their proposal for independence, the U.S. denied it through force. So much for the paper it was written on. Now, from what I came to know over the years, the American natives are called "First Nations". Time and time again the American Govt. has been foul towards them. Some of the examples include the Yucca Mountain nuclear waste repository. The same is and was the case with the Keystone pipeline, which is now dead. Now one could say that it is America's internal matter and I would fully agree, but when they speak of the internal matters of other countries, then we should have the same freedom. But this is not restricted to just internal matters, sadly. Since the 1950s, i.e. the advent of the Cold War, America's foreign policy has driven regime changes all around the world. Sharing some of the examples from the Cold War:

Iran 1953
Guatemala 1954
Democratic Republic of the Congo 1960
Republic of Ghana 1966
Iraq 1968
Chile 1973
Argentina 1976
Afghanistan 1978-1980s
Grenada
Nicaragua 1981-1990
1. Destabilization through CIA assets
2. Arming the Contras
El Salvador 1980-92
Philippines 1986

Even after the Cold War ended the situation was analogous, meaning they still continued with their old behavior. After the end of the Cold War:

Guatemala 1993
Serbia 2000
Iraq 2003-
Afghanistan 2001-ongoing

There is a helpful Wikipedia article titled History of CIA which basically lists most of the covert regime changes done by the U.S. The above is merely a subset of the actions done by the U.S. Now, are all the behaviors above those of a civilized nation? And if one cares to notice, one would notice that all the above countries which had a regime change had either oil or precious metals. So the U.S. is and was being what it accuses China of being: a profiteer. But this isn't just a U.S.-China story; it is more about the American abuse of its power. My own country, India, paid IMF loans till 1991, and we paid through the nose. There were economic sanctions against India. But then, this is again not just a U.S.-India story. Even with Europe, or more precisely Norway, which didn't want to side with America because their intelligence showed that no WMD were present in Iraq, the relationship still has issues.

Pandemic and the World

So I do find this whole blaming of China by the U.S. quite theatrical and full of double and triple standards. Very early during the debates, it came to light that the Spanish Flu actually originated in Kansas, U.S.

What was also interesting, as I found in the Pentagon Papers, which came out well before the Watergate scandal, is that the U.S. had realized that China would be more of a competitor than Russia. And this was in the 1960s. This shows the level of intelligence that the Americans had. From what I can recollect from whatever I have read of that era, China was still mostly an agri-based economy. So how the U.S. was able to deduce that China would surpass other economies is beyond me even now. They surely must have known something that even we today do not. One of the other interesting observations and understandings that I got while researching is that every year we transfer an average of 7,500 diseases from animals to humans, and that should be a scary figure. I think more than anything else, loss of habitat and the use of animals for food, clothing and medicine is probably the reason we are getting such diseases. I am also sure that there probably are and have been a similar number of transfers of diseases from humans to animals, but for well-known biases and whatnot those studies are either not done or are under-funded. There are and have been reports of something like 850,000 undiscovered viruses which various mammals and birds carry. Also I did find that the origins of most such pandemics are hard to identify; for e.g. SARS 1 took about 15 years, and for Ebola we don't know till date where it came from. Even HIV has questions for us. Hell, even why hearing goes away is a mystery to us. In all of this, we want to say China is culpable. And while China may or may not be culpable, only time will tell, this is surely the opportunity for all countries to invest in and build capacity in public health. Countries which take lessons from it and improve their public healthcare models will hopefully not suffer the way those who don't are suffering now. To those who feel that habitat loss for animals is not real, I would suggest they see Sherni, which depicts the human/animal conflict in all its brutality. I am gonna warn in advance that the ending is not nice, but what can you expect from a country in which forest cover has constantly declined and the Govt. itself is only interested in headline management?

The only positive story I can share from India is that the Modi Govt. has finally said it will do free vaccine immunization for everybody, although the pace is nothing to write home about. One additional thing they relaxed was that instead of going to Cowin or any other portal, people can simply walk in using their identity papers. Although, given the pace of vaccinations, it is going to take anywhere between 13 and 18 months or more, depending on the availability of vaccines.

Looking forward to any and all replies on how to have a virtual keyboard at the login screen, preferably xvkbd, as that is good enough for my use-case.

3 June 2021

John Goerzen: Roundup of Unique Data/Storage Hosting Options

Recently I have been taking another look at the services at rsync.net, and it got me thinking: what would I do with a lot of storage? What might I want to run with it, if it were fairly cheap? Let's start taking a look at what's out there. I'm going to try to focus on things that are unique for some reason: pricing, features, etc. Incidentally, good reviews are hard to find due to the proliferation of affiliate links. I have no affiliate relationships with anyone mentioned here and there are no affiliate links in this post. I'll start with the highest-end community and commercial options (though both are quite competitive on price for what they are), and then move on to the cheaper options.

Community option: SDF

SDF is somewhat hard to define. "What is SDF?" could prompt a lot of answers; there's a lot there. SDF lets you use things for yourself, of course, but you can also join a community. It's not a commercial service backed by SLAs, it's best-effort, but it's been around more than 30 years and has a great track record.

Top commercial option for backup storage: rsync.net

rsync.net offers storage broadly over SSH: sftp, rsync, scp, borg, rclone, restic, git-annex, git, and such. You do not get a shell, but you do get to run a few noninteractive commands via ssh. You can, for instance, run git clone on the rsync.net server. The rsync.net special sauce is in ZFS. They run raidz3 on their arrays (and also offer dual-location setups for an additional fee), offer both free and paid ZFS snapshots, etc. The service is designed to be extremely reliable, particularly for backups, and it seems to me to meet those goals. Basic storage is $0.025 per GB/mo, but with certain account types such as borg, it can be had for $0.015 per GB/mo. The minimum size is 400GB or $10/mo. There are no bandwidth charges. This makes it quite economical even compared to, say, S3. Additional discounts start at 10TB, so 10TB with rsync.net would cost $204.80/mo, or $81.92 on the borg plan. You won't run Nextcloud on this thing, but for backups that must be reliable, or even a photo collection or something, it makes perfect sense. When you look into other options, you'll find that other providers are a lot more vague about their storage setup than rsync.net.

Various offerings from Hetzner

Hetzner is one of Europe's large hosting companies, and they have several options of interest. Their Storage Box competes directly with the rsync.net service. Their per-GB storage cost is lower than rsync.net's, and although they do include a certain amount of free bandwidth with each account, bandwidth is not unlimited and could result in charges. Still, if you don't drive 2x or more your storage usage in bandwidth each month, it would be cheaper than rsync.net. The Storage Box also uses ZFS with some kind of redundancy, though they don't specify details. What differentiates them from rsync.net is the protocol support. They support sftp, scp, Borg, ssh, rsync, etc. just as rsync.net does. But then they also throw in Samba/CIFS, FTPS, HTTPS, and WebDAV, all optionally enabled or disabled by you. Although things like sshfs exist, they aren't particularly optimal for some use cases, and CIFS support may be just what you need in some situations. 10TB with Hetzner would cost EUR 39.90/mo, or about $48.84/mo. (This figure is higher for Europeans, who also have to pay VAT.) Hetzner also offers a Storage Share, which is a private Nextcloud instance. 10TB of that is exactly the same cost as 10TB of the Storage Box. You can add your own users, groups, etc.
to this, as you are the Nextcloud admin of your instance. Hetzner throws in automatic updates (which is great, as updates have been a pain in my side for a long time). Nextcloud is ideal for things like photo sharing, even has email and chat built in, etc. For about the same price as 2TB of Google One, you can have 2TB of Nextcloud with all those services for yourself. Not bad. You can also mount a Nextcloud instance with WebDAV. Interestingly, Nextcloud supports external storage as a backend for the data. It supports another Nextcloud instance, OpenStack or S3 object storage, and SFTP, SMB/CIFS, and WebDAV. If you're thinking you'd like both SFTP and Nextcloud access to a pool of storage, I imagine you could always get a large Storage Box from Hetzner (internal transfer is free), pair it with a small Nextcloud instance, and link the two with Nextcloud external storage.

Dedicated Servers

If you want a more DIY approach, you can find some interesting deals on actual dedicated server hardware; you get the entire machine to yourself. I've been using OVH's SoYouStart for a number of years, with good experiences, and they have a number of server configurations available. For instance, for $45.99, you can get a Xeon box with 4x2TB drives and 32GB RAM. With RAID5 or raidz1, that's 6TB of available space, and cheaper than the 6TB from rsync.net (though less redundant), plus you get the whole box to yourself too. OVH directly has some more storage servers; for instance, you can get a box with 4x4TB + 1x500GB SSD for $86.75/mo, giving you 12TB available with RAID5/raidz1, plus a 16GB server to do what you want with. Hetzner also has some larger options available, for instance 2x4TB at EUR 39 or 2x8TB at EUR 54, both with 64GB of RAM.

Bargain Corner

Yes, you can find 10TB for $25/mo. It's hosted on Ceph, by what appears to be mostly a single person (though with a lot of experience and a fair bit of transparency). You're not going to have the round-the-clock support experience of rsync.net, nor its raidz3 level of redundancy, but if you don't need that, there are quite a few options. Let's start with Lima Labs. Yes, 10TB is $25/mo, and they support sftp, rsync, borg, and even NFS mounts on storage backed by Ceph. The owner, Sam, seems to be a nice guy, but the service isn't going to be on the scale of rsync.net or Hetzner. That may or may not be OK for your needs; I mean, you can even get 1TB for $5/mo, so there are some fantastic deals to be had here. BorgBase does Borg hosting and Borg hosting only. You can get 1TB for $6.67/mo or, for instance, 10TB for $53.46. They don't say much about their infrastructure and it's hard to get a read on the company, but for Borg backups, it could be a nice option.

Bargain Corner Part 2: Seedboxes

There's a market out there of companies offering BitTorrent seeding and downloading services. Typically, these services offer you Unix ssh access to a shell, give you a bunch of space on completely non-redundant drives (the theory being that the data on them is transient), and lots of bandwidth, for a low price. Some people use them for BitTorrent, others for media serving and such. If you are willing to accept the lowest level of drive redundancy, there are some deals to be had. Whatbox is a popular leader here, and has an extensive wiki with info. Or you can find some seedbox.io shared storage plans; for instance, 12TB for $32.49/mo, but on completely non-redundant drives. Seedbox has a partner company, Walker Servers, with some interesting deals; for instance, 4x8TB for EUR 52.45 (see the arithmetic check below).
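Before wrapping up, a back-of-the-envelope check of the per-GB arithmetic used above, using the prices quoted in this post (they will drift over time; the bc invocations are mine, not from any provider):

$ echo "scale=2; 204.80 / (10 * 1024)" | bc    # rsync.net at the 10TB discount: $0.02/GB
.02
$ echo "scale=3; 81.92 / (10 * 1024)" | bc     # rsync.net borg plan at 10TB: $0.008/GB
.008
$ echo "scale=4; 48.84 / (10 * 1024)" | bc     # Hetzner Storage Box at 10TB: ~$0.0048/GB
.0047
$ echo "scale=4; 52.45 / (24 * 1024)" | bc     # Walker Servers 4x8TB as RAID5 (24TB): ~EUR 0.0021/GB
.0021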
As for Walker Servers: not bad for 24TB usable with RAID5, but they are completely unknown to me and don't publish a phone number. So, YMMV.

Conclusion

I'm sure I've left out many quality options here, but hopefully this is enough to give a general lay of the land. Leave other suggestions in the comments.

30 May 2021

Russ Allbery: Review: The Silver Chair

Review: The Silver Chair, by C.S. Lewis
Illustrator: Pauline Baynes
Series: Chronicles of Narnia #4
Publisher: Collier
Copyright: 1953
Printing: 1978
ISBN: 0-02-044250-5
Format: Mass market
Pages: 217
The Silver Chair is a sequel to The Voyage of the Dawn Treader and the fourth book of the Chronicles of Narnia in original publication order. (For more about publication order, see the introduction to my review of The Lion, the Witch and the Wardrobe.) Apart from a few references to The Voyage of the Dawn Treader at the start, it stands sufficiently on its own that you could read it without reading the other books, although I have no idea why you'd want to. We have finally arrived at my least favorite of the Narnia books and the one that I sometimes skipped during re-reads. (One of my objections to the new publication order is that it puts The Silver Chair and The Last Battle back-to-back, and I don't think you should do that to yourself as a reader.) I was hoping that there would be previously unnoticed depth to this book that would redeem it as an adult reader. Sadly, no; with one very notable exception, it's just not very good. MAJOR SPOILERS BELOW. The Silver Chair opens on the grounds of the awful school to which Eustace's parents sent him: Experiment House. That means it opens (and closes) with a more extended version of Lewis's rant about schools. I won't get into this in detail since it's mostly a framing device, but Lewis is remarkably vicious and petty. His snide contempt for putting girls and boys in the same school did not age well, nor did his emphasis at the end of the book that the incompetent head of the school is a woman. I also raised an eyebrow at holding up ordinary British schools as a model of preventing bullying. Thankfully, as Lewis says at the start, this is not a school story. This is prelude to Jill meeting Eustace and the two of them escaping the bullies via a magical door into Narnia. Unfortunately, that's the second place The Silver Chair gets off on the wrong foot. Jill and Eustace end up in what the reader of the series will recognize as Aslan's country and almost walk off the vast cliff at the end of the world, last seen from the bottom in The Voyage of the Dawn Treader. Eustace freaks out, Jill (who has a much better head for heights) goes intentionally close to the cliff in a momentary impulse of arrogance before realizing how high it is, Eustace tries to pull her back, and somehow Eustace falls over the edge. I do not have a good head for heights, and I wonder how much of it is due to this memorable scene. I certainly blame Lewis for my belief that pulling someone else back from the edge of a cliff can result in you being pushed off, something that on adult reflection makes very little sense but which is seared into my lizard brain. But worse, this sets the tone for the rest of the story: everything is constantly going wrong because Eustace and Jill either have normal human failings that are disproportionately punished or don't successfully follow esoteric and unreasonably opaque instructions from Aslan. Eustace is safe, of course; Aslan blows him to Narnia and then gives Jill instructions before sending her afterwards. (I suspect the whole business with the cliff was an authorial cheat to set up Jill's interaction with Aslan without Eustace there to explain anything.) She and Eustace have been summoned to Narnia to find the lost Prince, and she has to memorize four Signs that will lead her on the right path. Gah, the Signs. If you were the sort of kid that I was, you immediately went back and re-read the Signs several times to memorize them like Jill was told to. The rest of this book was then an exercise in anxious frustration. 
First, Eustace is an ass to Jill and refuses to even listen to the first Sign. They kind of follow the second but only with heavy foreshadowing that Jill isn't memorizing the Signs every day like she's supposed to. They mostly botch the third and have to backtrack to follow it. Meanwhile, the narrator is constantly reminding you that the kids (and Jill in particular) are screwing up their instructions. On re-reading, it's clear they're not doing that poorly given how obscure the Signs are, but the ominous foreshadowing is enough to leave a reader a nervous wreck. Worse, Eustace and Jill are just miserable to each other through the whole book. They constantly bicker and snipe, Eustace doesn't want to listen to her and blames her for everything, and the hard traveling makes it all worse. Lewis does know how to tell a satisfying redemption arc; one of the things I have always liked about Edmund's story is that he learns his lesson and becomes my favorite character in the subsequent stories. But, sadly, Eustace's redemption arc is another matter. He's totally different here than he was at the start of The Voyage of the Dawn Treader (to the degree that if he didn't have the same name in both books, I wouldn't recognize him as the same person), but rather than a better person he seems to have become a different sort of ass. There's no sign here of the humility and appreciation for friendship that he supposedly learned from his time as a dragon. On top of that, the story isn't very interesting. Rilian, the lost Prince, is a damp squib who talks in the irritating archaic accent that Lewis insists on using for all Narnian royalty. His story feels like Lewis lifted it from medieval Arthurian literature; most of it could be dropped into a collection of stories of knights of the Round Table without seeming out of place. When you have a country full of talking animals and weirdly fascinating bits of theology, it's disappointing to get a garden-variety story about an evil enchantress in which everyone is noble and tragic and extremely stupid. Thankfully, The Silver Chair has one important redeeming quality: Puddleglum. Puddleglum is a Marsh-wiggle, a bipedal amphibious sort who lives alone in the northern marshes. He's recruited by the owls to help the kids with their mission when they fail to get King Caspian's help after blowing the first Sign. Puddleglum is an absolute delight: endlessly pessimistic, certain the worst possible thing will happen at any moment, but also weirdly cheerful about it. I love Eeyore characters in general, but Puddleglum is even better because he gives the kids' endless bickering exactly the respect that it deserves.
"But we all need to be very careful about our tempers, seeing all the hard times we shall have to go through together. Won't do to quarrel, you know. At any rate, don't begin it too soon. I know these expeditions usually end that way; knifing one another, I shouldn't wonder, before all's done. But the longer we can keep off it "
It's even more obvious on re-reading that Puddleglum is the only effective member of the party. Jill has only a couple of moments where she gets the three of them past some obstacle. Eustace is completely useless; I can't remember a single helpful thing he does in the entire book. Puddleglum and his pessimistic determination, on the other hand, are right about nearly everything at each step. And he's the one who takes decisive action to break the Lady of the Green Kirtle's spell near the end. I was expecting a bit of sexism and (mostly in upcoming books) racism when re-reading these books as an adult given when they were written and who Lewis was, but what has caught me by surprise is the colonialism. Lewis is weirdly insistent on importing humans from England to fill all the important roles in stories, even stories that are entirely about Narnians. I know this is the inherent weakness of portal fantasy, but it bothers me how little Lewis believes in Narnians solving their own problems. The Silver Chair makes this blatantly obvious: if Aslan had just told Puddleglum the same information he told Jill and sent a Badger or a Beaver or a Mouse along with him, all the evidence in the book says the whole affair would have been sorted out with much less fuss and anxiety. Jill and Eustace are far more of a hindrance than a help, which makes for frustrating reading when they're supposedly the protagonists. The best part of this book is the underground bits, once they finally get through the first three Signs and stumble into the Lady's kingdom far below the surface. Rilian is a great disappointment, but the fight against the Lady's mind-altering magic leads to one of the great quotes of the series, on par with Reepicheep's speech in The Voyage of the Dawn Treader.
"Suppose we have only dreamed, or made up, all those things trees and grass and sun and moon and stars and Aslan himself. Suppose we have. Then all I can say is that, in that case, the made-up things seem a good deal more important than the real ones. Suppose this black pit of a kingdom of yours is the only world. Well, it strikes me as a pretty poor one. And that's a funny thing, when you come to think of it. We're just babies making up a game, if you're right. But four babies playing a game can make a play-world which licks your real world hollow. That's why I'm going to stand by the play world. I'm on Aslan's side even if there isn't any Aslan to lead it. I'm going to live as like a Narnian as I can even if there isn't any Narnia. So, thanking you kindly for our supper, if these two gentlemen and the young lady are ready, we're leaving your court at once and setting out in the dark to spend our lives looking for Overland. Not that our lives will be very long, I should think; but that's small loss if the world's as dull a place as you say."
This is Puddleglum, of course. And yes, I know that this is apologetics and Lewis is talking about Christianity and making the case for faith without proof, but put that aside for the moment, because this is still powerful life philosophy. It's a cynic's litany against cynicism. It's a pessimist's defense of hope. Suppose we have only dreamed all those things like justice and fairness and equality, community and consensus and collaboration, universal basic income and effective environmentalism. The dreary magic of the realists and the pragmatists says that such things are baby's games, silly fantasies. But you can still choose to live like you believe in them. In Alasdair Gray's reworking of a line from Dennis Lee, "work as if you live in the early days of a better nation." That's one moment that I'll always remember from this book. The other comes after they kill the Lady of the Green Kirtle: as her magic starts to fade, they have to escape from the underground caverns while surrounded by the Earthmen who served her and who they believe are hostile. It's a tense moment that turns into a delightful celebration when they realize that the Earthmen were just as much prisoners as the Prince was. They were forced from a far deeper land below, full of living metals and salamanders who speak from rivers of fire. It's the one moment in this book that I thought captured the magical strangeness of Narnia, that sense that there are wonderful things just out of sight that don't follow the normal patterns of medieval-ish fantasy. Other than a few great lines from Puddleglum and some moments in Aslan's country, the first 60% of this book is a loss and remarkably frustrating to read. The last 40% isn't bad, although I wish Rilian had any discernible character other than generic Arthurian knight. I don't know what Eustace is doing in this book at all other than providing a way for Jill to get into Narnia, and I wish Lewis had realized Puddleglum could be the protagonist. But as frustrating as The Silver Chair can be, I am still glad I re-read it. Puddleglum is one of the truly memorable characters of children's literature, and it's a shame he's buried in a weak mid-series book. Followed, in the original publication order, by The Horse and His Boy. Rating: 6 out of 10

29 May 2021

Joey Hess: the end of the olduse.net exhibit

Ten years ago I began the olduse.net exhibit, spooling out Usenet history in real time with a 30 year delay. My archive has reached its end, and ten years is more than long enough to keep running something you cobbled together overnight way back when. So, this is the end for olduse.net. The site will continue running for another week or so, to give you time to read the last posts. Find the very last one, if you can! The source code used to run it, and the content of the website have themselves been archived up for posterity at The Internet Archive. Sometime in 2022, a spammer will purchase the domain, but not find it to be of much value. The Utzoo archives that underlay it have currently sadly been censored off the Internet by someone. This will be unsuccessful; by now they have spread and many copies will live on.
I told a lie ten years ago.
You can post to olduse.net, but it won't show up for at least 30 years.
Actually, those posts drop right now! Here are the followups to 30-year-old Usenet posts that I've accumulated over the past decade. Mike replied in 2011 to JPM's post in 1981 on fa.arms-d "Re: CBS Reports"
A greeting from the future: I actually watched this yesterday (2011-06-10) after reading about it here.
Christian Brandt replied in 2011 to phyllis's post in 1981 on the "comments" newsgroup "Re: thank you rrg"
Funny, it will be four years until you post the first subnet post i ever read and another eight years until my own first subnet post shows up.
Bernard Peek replied in 2012 to mark's post in 1982 on net.sf-lovers "Re: luke - vader relationship"
i suggest that darth vader is luke skywalker's mother.
You may be on to something there.
Martijn Dekker replied in 2012 to henry's post in 1982 on the "test" newsgroup "Re: another boring test message" trentbuck replied in 2012 to dwl's post in 1982 on the "net.jokes" newsgroup "Re: A child hood poem" Eveline replied in 2013 to a post in 1983 on net.jokes.q "Re: A couple"
Ha!
Bill Leary replied in 2015 to Darin Johnson's post in 1985 on net.games.frp "Re: frp & artwork" Frederick Smith replied in 2021 to David Hoopes's post in 1990 on trial.rec.metalworking "Re: Is this group still active?"

Shirish Agarwal: Planes, Pandemic and Medical Devices I

The Great Electric Airplane Race It took me quite some time to write this, as I have been depressed about things. Then a few days back I saw Nova's The Great Electric Airplane Race. It was fabulous, and a pleasure to see and know that there are more than 200-odd startups in the race to make an electric airplane which works and has FAA certification. I was disappointed, though, that there was no coverage of any university projects. From what little I know, almost all the advanced materials the U.S. has made have first been researched mostly in universities, and when close to fruition either spun off as a startup or handed to some commercial organization/venture to make scalable and profitable. If they had covered universities, I am sure more people could be convinced to join the sciences and engineering in college. I actually do want to come back to this as part of both general medicine and vaccine development in the U.S., but that will come later. The idea that industry works alone should be discouraged, but that perhaps requires another article to articulate why I believe so.

Medical Device Ventilators in India Before the pandemic, most people probably didn't know what a ventilator is; at least I didn't, although I probably used one during my somewhat brief hospital stay a couple of years ago. It entered the Indian Twitter lexicon more so in the second wave, as the number of infected people grew and the ventilators serving them became scarcer, just due to the sheer mismatch of numbers and requirements. Rich countries donated/gifted ventilators to India, on which GOI put GST of 28%. Apparently they are a luxury item, just like my hearing aid. Last week the Delhi High Court passed a judgement that GST should not be imposed on a gift such as ventilators or oxygenators. The order can be found here. Even without reading the judgement, the shout from the right was "judicial activism"; after reading it, it is a good judgement which touches on several points. The first, in itself, states the dichotomy that if a commercial organization wanted to import a ventilator or an oxygenator the IGST payable is nil, while for an individual it is 12%. The State (here State refers to the State Government, in this case the Gujarat Govt.) did reduce the IGST from 12% to NIL for federal states, but only till 30.06.2021. No relief to individuals on that account. The Court also made use of Mr. Arvind Datar as Amicus Curiae, or friend of the court. The petitioner, an 85-year-old gentleman, put up broad assertions under Article 21 (right to life), and the court in its wisdom also added Article 14, which enshrines equality of everyone before the law. The Amicus Curiae, as his duty, guided the court through how the IGST law works and shared a brief history of the law and the changes happening before and after it. During his submissions, he also shared the Mega Exemption Notification no. 50/2017, under which several items are exempted from IGST. The Amicus Curiae noted that such exemptions also existed before the Mega Exemption Notification came into play. However, DGFT (Directorate General of Foreign Trade) on 30-04-2021 issued notification no. 4/2015-2020, through which oxygenators had been exempted from Customs Duty/BCD (Basic Customs Duty). In another notification, no. 30/2021 dated 01.05.2021, it reduced IGST from 28% to 12% for personal use. If, however, the oxygenator was procured by a canalising agency (bodies such as the State Trading Corporation of India (STC) and/or the Metals and Minerals Trading Corporation (MMTC) are defined as canalising agents), then it would be fully exempted from paying any sort of IGST, albeit subject to certain conditions. What the conditions are was not shared in open court. The Amicus Curiae further observed that it is contrary to practice that, where both BCD and IGST have been exempted for canalising agents and others, some IGST has to be paid for personal use. To stay within the narrow boundaries of the topic, he shared entry no. 607A of General Exemption no. 190, where duty and IGST in the case of life-saving drugs are zero, provided the life-saving drugs imported have been provided at zero cost by an overseas supplier for personal use. He further shared that the oxygen concentrator would fall under the same entry 607A, as it fulfils all the criteria shared for life-saving medicines and devices. He also used the help of the Drugs and Cosmetics Act 1940, which provides such relief.
The Amicus Curiae further noted that GOI amended its foreign trade policy (2015-2020) via notification no. 4/2015-2020, dated 30.04.2021, issued by DGFT, where Rakhi and life-saving drugs for personal use have been exempted from BCD till 30-07-2021. There is no reason not to give the same exemption to oxygenators, which fulfil the same purpose. The Amicus Curiae further observed that there are exceptional-circumstances provisions, as adverted to in sub-section (2) of Section 25 of the Customs Act, whereby for Covid-19, which is known and labelled as a pandemic, the distinctions between the two classes of individuals or agencies do not make any sense. While he did make the observation that exemption from duty is not a right, in the light of the pandemic and Article 14 it does not make sense to have distinctions between the two classes of importers. He further shared Circular no. 9/2014-Customs, dated 19.08.2014, by CBEC (Central Board of Excise and Customs), which gave broad exemptions under Section 25(2) of the same act in respect of goods and services imported for safety and rehabilitation of people suffering from and affected by natural disasters and epidemics. He further submitted that the impugned notification is irrational, as there is no intelligible differentia rule applied or observed in classifying the import of oxygen concentrators into two categories: one, by the State and its agencies; and the other, by an individual for personal use by way of gift. So there was an absence of an "adequate determining principle". To bolster his argument, he shared the judgements of

a) Union of India vs. N.S. Rathnam & Sons, (2015) 10 SCC 681 (N.S. Rathnam and Sons Case) b) Shayara Bano vs. Union of India, (2017) 9 SCC 1 (Shayara Bano Case). The Amicus Curiae also rightly observed that the right to life also encompasses within it the right to health. You cannot have one without the other, and within that is the right to affordable treatment. He further stated that the State does not only have a duty, but a positive obligation is cast upon it to ensure that the citizen's health is secured. He again cited Navtej Singh Johar vs. Union of India (Navtej Singh Johar Case) in defence of the right to life. Mr. Datar also shared that, unlike in normal circumstances, it is and should be enough to show distinct and noticeable burdensomeness which is directly attributable to the impugned/questionable tax. The gentleman cited Indian Express Newspapers (Bombay) Private Limited vs. Union of India, (1985) 1 SCC 641 (Indian Express Case, 1985), which touched on both Article 19(1)(a) and Article 21. Blogger's note: At this juncture I should point out why I am sharing the judgement: I will be sharing only the Amicus Curiae's POV and then the judges' final observations. While I was reading it, I was struck by the fact that the Amicus Curiae had cited 4 cases so far, 3 of them pretty well known both in the legal fraternity and among the public at large. Another 3, shared below, are also of great significance. Hence I felt the need to share the whole judgement. The Amicus Curiae further observed that this tax would have to be paid disproportionately by the old and the infirm, who might find it difficult to pay the customs duty/IGST as well as find an agent to pay it in this pandemic. Blogger's note: The situation with the elderly is something like this. There are a few things to note: only Central Govt. employees and pensioners get pensions, which have been frozen since last year. The rest of the elderly population does not. The rate of interest has fallen to record lows, from 5-6% on savings to 2%, and on fixed deposits to 4.9%, while nominal inflation is up by 6% and CPI and real inflation rates are and will be much more. And this is when there is absolutely no demand in the economy. To add to all this, RBI shared a couple of months ago that fraud of 5 trillion rupees was committed between 2015 and 2019 in banks. And this is separate from the record NPAs in both public and private sector banks. To get out of this, the banks have squeezed and are squeezing their customers, as well as asking GOI for bailouts. How much GOI is responsible for the frauds as well as the NPAs would probably require its own space. And even now, RBI and the banks have made heavy provisions, as lockdowns are still a facet and are supposed to remain one till the end of the year or even next year (all depending on when we get the vaccine). The Amicus Curiae further argued that the ventilators which are available locally are of bad quality. The result of all this is a huge amount of insurmountable pressure on hospitals, which they are unable to overcome. Therefore, the levy of IGST on oxygenators has a direct impact on the health of the citizen. So the examination of the law should be not by what intention it had but by how it affects citizens' rights now. For this he shared R.C. Cooper vs. Union of India (another famous case, R.C.
Cooper vs. Union of India), especially paragraph 49, and Federation of Hotel & Restaurant Association of India vs. Union of India, (1989), at paragraph 46 (Federation of Hotel Case). Mr. Datar further shared the Supreme Court order dated 18.12.2020, passed in Suo Moto Writ Petition (Civil) No. 7/2020, to buttress the plea that the right to health includes the right to affordable treatment. Blogger's note: For those who don't know, Suo Moto is when the Court, whether the Supreme Court or the High Courts, takes up a matter for the public good. It could be anything: law and order, banking, finance, public health, etc. This was the norm before 2014; the excesses of the executive were curtailed by both the higher and the lower judiciary. That is and was the reason the judiciary is and was known as the third pillar of Indian democracy. A good characterization of Suo Moto can be found here. Before ending his submission, the learned Amicus Curiae also shared Jeeja Ghosh vs. Union of India, (2016) (Jeeja Ghosh Case, an outstanding case as it deals with people with disabilities and their rights, and the observations made by the Division Bench of Hon'ble Mr. Justice A. K. Sikri as well as Hon'ble Mr. Justice R. K. Agrawal). After the Amicus Curiae completed his submissions, it was the turn of Mr. Sudhir Nandrajog, and he adopted the arguments and submissions made by the Amicus Curiae. The gentleman reiterated the facts of the case and how the impugned notification was violative of both Articles 14 and 21 of the Indian Constitution. Blogger's note: The High Court's judgement, which shows all the above arguments by the Amicus Curiae and the petitioner's lawyer, also shared the State's view. It is only on page 24 that the Delhi High Court starts to share its own observations on the arguments of both sides. Judgement continued: The first observation the Court makes is that while the petitioner demonstrated that the impugned tax imposition would have a distinct and noticeable burdensomeness, the State did not state or share in any way how much of a loss it would incur if such a tax were let go, or how much additional work would have to be done in order to receive this specific tax. It didn't need to do something down to the wire or mathematically precise, but it didn't even care to show, even theoretically, how many people would be affected by the above. The counter-affidavit by the State is silent on the whole issue. The Court also contended that the State failed to prove how collecting IGST from the concerned individuals would help in fighting the coronavirus in any substantial manner for the public at large. The High Court shared observations from the Navtej Singh Johar case, where it is observed that the State has both negative and positive obligations to ensure that its citizens are able to enjoy the right to health. The High Court further made the point that no respectable person likes to be turned into a charity case. If the State contends that those who obey the law should pay the taxes, then it is also obligatory on the State's part to lessen exactions such as taxes, at the very least in times of war, famine, floods, epidemics and pandemics. Such an approach would lead a person to live a life of dignity, which is part of Article 21 of the Constitution.
Another point made by the State, that only the GST council is able to make any changes as regards exemptions, was found to be false, as the State had made some exemptions without going to the GST council, using its own powers under Section 25 of the Customs Act. The Court also points out that it sets a discriminatory pattern when somebody like the petitioner has to pay the tax for personal use while those who are buying it for commercial use do not. The Court agreed with the view of the Amicus Curiae, Mr. Datar, that the oxygenator should be taxed at a NIL rate of IGST, as it is on par with life-saving drugs and fits the bill as medical equipment, being used in the treatment, mitigation and prevention of the spread of the coronavirus. Mr. Datar also showed that the oxygenator is placed at the same level as other life-saving drugs. The Court felt further emboldened by the observations of the Supreme Court in State of Andhra Pradesh vs. Linde India Limited, 2020 (State of Andhra Pradesh vs. Linde Ltd.). The Court further shared many subsequent notifications from the State, and various press releases by the State itself, which make the Court's point that oxygenators are indeed drugs as defined in the court case above. The State should have it as part of notification 190. This would preserve the start date of the notification, 03.05.2021, and the State would not have to issue a new notification. The Court further postulated that any person similar to the petitioner could avail of the same, if they furnish a letter of undertaking to an officer designated by the State that the medical equipment would not be put to commercial use. Till the State does that, in the interim the importer can give the same undertaking to the Joint Secretary, Customs, or their nominee can hand over the same to a customs officer. The Court also shared that it does not disagree with the State's arguments, but the challenges which have arisen are in a unique time period/circumstances, so it is basing its judgement on how the situation is. The Court also mentioned an order given by the Supreme Court, Diary No. 10669/2020, passed on 20.03.2020, where the SC has taken pains to understand the issues faced by citizens. The Court also mentioned the Small Scale Industrial Manufactures Association case (both of these cases I don't know). So, in conclusion, the Court holds the imposition of IGST on oxygenators which are imported by individuals as gifts from their relatives to be unconstitutional. It also shared that any taxes taken by GOI in the above scenario have to be returned. The relief to the State is that it will not have to pay interest on the same. To check misuse, the petitioner, or people in similar circumstances, would have to give a letter of undertaking to an officer designated by the State, within 7 days of the State notifying the patient or anybody authorized by him/her to act on their behalf, to share the letter of undertaking with the State. And till the State does not provide an officer, the above will continue. Hence, both the writ petition and the pending application are disposed of. The Registry is directed to release any money deposited by the petitioner along with any interest accrued on it (if any). At the end, they record appreciation of Mr. Arvind Datar, Mr. Zoheb Hossain, Mr. Sudhir Nandrajog as well as Mr. Siddharth Bambha. It is only due to their assistance that the Court could reach the conclusion it did.
For Delhi High Court: RAJIV SHAKDHER, J. TALWANT SINGH, J. May 21, 2021. Blogger's observations: Now, after the verdict, GOI has a few choices: either accept the verdict or appeal to the SC. A third choice is to make a committee and come to the same conclusions via the committee. GOI has done something similar in the past. If that happens and the same conclusions are reached as before, then the aggrieved may have no choice but to appeal in the highest court of law. And this will put the aggrieved in a much more vulnerable place than before, as SC court fees, lawyer fees etc. are quite high compared to the High Courts. So there is a possibility that the petitioner may not even approach it, unless and until some non-profit (NGO) decides to fight and put it up as a common cause or something similar. There is another judgement that I will share, probably tomorrow. Thankfully, that one is pretty short compared to this one, so it should be far easier to read. FWIW, I did learn about the whole Freenode stuff and the many channels that have shifted from Freenode to Libera. I will share my own experience of the same, but that will probably take a day or two.
Zeeshan of IYC (India Youth Congress) along with Salman Khan's non-profit Being Human getting oxygenators
The above is a picture of Zeeshan. A whole team of Indian Youth Congress workers (the main opposition party to the ruling party) have been doing a lot of relief effort. They have been buying oxygenators from abroad with the help of the Being Human Foundation, started by Salman Khan, an actor who works in A-grade movies in Bollywood.

25 May 2021

Vincent Bernat: Jerikan+Ansible: a configuration management system for network

There are many resources for network automation with Ansible. Most of them only expose the first steps or limit themselves to a narrow scope. They give no clue on how to expand from that. Real network environments may be large, versatile, heterogeneous, and filled with exceptions. The lack of real-world examples for Ansible deployments, unlike Puppet and SaltStack, leads many teams to build brittle and incomplete automation solutions. We have released our attempt to tackle this problem under an open-source license. Here is a quick demo to configure a new peering:
This work is the collective effort of Cédric Hascoët, Jean-Christophe Legatte, Loïc Pailhas, Sébastien Hurtel, Tchadel Icard, and Vincent Bernat. We are the network team of Blade, a French company operating Shadow, a cloud-computing product. In May 2021, our company was bought by Octave Klaba and the infrastructure is being transferred to OVHcloud, saving Shadow as a product, but making our team redundant. Our network was around 800 devices, spanning over 10 datacenters with more than 2.5 Tbps of available egress bandwidth. The released material is therefore a substantial example of managing a medium-scale network using Ansible. We have left out the handling of our legacy datacenters to make the final result more readable while keeping enough material to not turn it into a trivial example.

Jerikan The first component is Jerikan. As input, it takes a list of devices, configuration data, templates, and validation scripts. It generates a set of configuration files for each device. Ansible could cover this task, but it has the following limitations:
  • it is slow;
  • errors are difficult to debug;1 and
  • the hierarchy to look up a variable is rigid.
Jerikan inputs and outputs
Jerikan inputs and outputs
If you want to follow the examples, you only need to have Docker and Docker Compose installed. Clone the repository and you are ready!

Source of truth We use YAML files, versioned with Git, as the single source of truth instead of using a database, like NetBox, or a mix of a database and text files. This provides many advantages:
  • anyone can use their preferred text editor;
  • the team prepares changes in branches;
  • the team reviews changes using merge requests;
  • the merge requests expose the changes to the generated configuration files;
  • rollback to a previous state is easy; and
  • it is fast.
The first file is devices.yaml. It contains the device list. The second file is classifier.yaml. It defines a scope for each device. A scope is a set of keys and values. It is used in templates and to look up data associated with a device.
$ ./run-jerikan scope to1-p1.sk1.blade-group.net
continent: apac
environment: prod
groups:
- tor
- tor-bgp
- tor-bgp-compute
host: to1-p1.sk1
location: sk1
member: '1'
model: dell-s4048
os: cumulus
pod: '1'
shorthost: to1-p1
The device name is matched against a list of regular expressions and the scope is extended by the result of each match. For to1-p1.sk1.blade-group.net, the following subset of classifier.yaml defines its scope:
matchers:
  - '^(([^.]*)\..*)\.blade-group\.net':
      environment: prod
      host: '\1'
      shorthost: '\2'
  - '\.(sk1)\.':
      location: '\1'
      continent: apac
  - '^to([12])-[as]?p(\d+)\.':
      member: '\1'
      pod: '\2'
  - '^to[12]-p\d+\.':
      groups:
        - tor
        - tor-bgp
        - tor-bgp-compute
  - '^to[12]-(p|ap)\d+\.sk1\.':
      os: cumulus
      model: dell-s4048
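For illustration, here is a rough sketch in Python of how this matcher-driven scope construction can work (my illustration of the mechanism, not Jerikan's actual code; the matchers would be loaded from classifier.yaml):
import re

# Subset of the matchers above, expressed as (pattern, attributes) pairs.
MATCHERS = [
    (r'^(([^.]*)\..*)\.blade-group\.net',
     {'environment': 'prod', 'host': r'\1', 'shorthost': r'\2'}),
    (r'\.(sk1)\.', {'location': r'\1', 'continent': 'apac'}),
    (r'^to[12]-p\d+\.', {'groups': ['tor', 'tor-bgp', 'tor-bgp-compute']}),
]

def scope_for(device):
    scope = {}
    for pattern, attributes in MATCHERS:
        match = re.search(pattern, device)
        if not match:
            continue
        for key, value in attributes.items():
            # Expand backreferences such as \1 against the current match.
            scope[key] = match.expand(value) if isinstance(value, str) else value
    return scope

print(scope_for("to1-p1.sk1.blade-group.net"))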
The third file is searchpaths.py. It describes which directories to search for a variable. A Python function provides a list of paths to look up in data/ for a given scope. Here is a simplified version:2
def searchpaths(scope):
    paths = [
        "host/{scope[location]}/{scope[shorthost]}",
        "location/{scope[location]}",
        "os/{scope[os]}-{scope[model]}",
        "os/{scope[os]}",
        'common'
    ]
    for idx in range(len(paths)):
        try:
            paths[idx] = paths[idx].format(scope=scope)
        except KeyError:
            paths[idx] = None
    return [path for path in paths if path]
With this definition, the data for to1-p1.sk1.blade-group.net is looked up in the following paths:
$ ./run-jerikan scope to1-p1.sk1.blade-group.net
[…]
Search paths:
  host/sk1/to1-p1
  location/sk1
  os/cumulus-dell-s4048
  os/cumulus
  common
Variables are scoped using a namespace that should be specified when doing a lookup. We use the following ones:
  • system for accounts, DNS, syslog servers,
  • topology for ports, interfaces, IP addresses, subnets,
  • bgp for BGP configuration
  • build for templates and validation scripts
  • apps for application variables
When looking up a variable in a given namespace, Jerikan looks for a YAML file named after the namespace in each directory in the search paths. For example, if we look up a variable for to1-p1.sk1.blade-group.net in the bgp namespace, the following YAML files are processed: host/sk1/to1-p1/bgp.yaml, location/sk1/bgp.yaml, os/cumulus-dell-s4048/bgp.yaml, os/cumulus/bgp.yaml, and common/bgp.yaml. The search stops at the first match. The schema.yaml file allows us to override this behavior by asking to merge dictionaries and arrays across all matching files. Here is an excerpt of this file for the system namespace:
system:
  users:
    merge: hash
  sampling:
    merge: hash
  ansible-vars:
    merge: hash
  netbox:
    merge: hash
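The first-match lookup itself can be sketched as follows (an illustration assuming PyYAML and the data/ layout described above; Jerikan's real implementation also handles the merge behavior configured in schema.yaml):
import os
import yaml

def lookup(searchpaths, namespace, key, root="data"):
    for path in searchpaths:
        candidate = os.path.join(root, path, f"{namespace}.yaml")
        if not os.path.exists(candidate):
            continue
        with open(candidate) as f:
            content = yaml.safe_load(f) or {}
        if key in content:
            return content[key]  # the search stops at the first match
    raise KeyError(f"{key!r} not found in namespace {namespace!r}")

# For example:
# lookup(["host/sk1/to1-p1", "location/sk1", "os/cumulus-dell-s4048",
#         "os/cumulus", "common"], "bgp", "peers")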
The last feature of the source of truth is the ability to use Jinja2 templates for keys and values by prefixing them with "~":
# In data/os/junos/system.yaml
netbox:
  manufacturer: Juniper
  model: "~  model upper  "
# In data/groups/tor-bgp-compute/system.yaml
netbox:
  role: net_tor_gpu_switch
Looking up netbox in the system namespace for to1-p2.ussfo03.blade-group.net yields the following result:
$ ./run-jerikan scope to1-p2.ussfo03.blade-group.net
continent: us
environment: prod
groups:
- tor
- tor-bgp
- tor-bgp-compute
host: to1-p2.ussfo03
location: ussfo03
member: '1'
model: qfx5110-48s
os: junos
pod: '2'
shorthost: to1-p2
[…]
Search paths:
[…]
  groups/tor-bgp-compute
[…]
  os/junos
  common
$ ./run-jerikan lookup to1-p2.ussfo03.blade-group.net system netbox
manufacturer: Juniper
model: QFX5110-48S
role: net_tor_gpu_switch
This also works for structured data:
# In groups/adm-gateway/topology.yaml
interface-rescue:
  address: "~{{ lookup('topology', 'addresses').rescue }}"
  up:
    - "~ip route add default via {{ lookup('topology', 'addresses').rescue|ipaddr('first_usable') }} table rescue"
    - "~ip rule add from {{ lookup('topology', 'addresses').rescue|ipaddr('address') }} table rescue priority 10"
# In groups/adm-gateway-sk1/topology.yaml
interfaces:
  ens1f0: "~{{ lookup('topology', 'interface-rescue') }}"
This yields the following result:
$ ./run-jerikan lookup gateway1.sk1.blade-group.net topology interfaces
[…]
ens1f0:
  address: 121.78.242.10/29
  up:
  - ip route add default via 121.78.242.9 table rescue
  - ip rule add from 121.78.242.10 table rescue priority 10
When putting data in the source of truth, we use the following rules:
  1. Don t repeat yourself.
  2. Put the data in the most specific place without breaking the first rule.
  3. Use templates with parsimony, mostly to help with the previous rules.
  4. Restrict the data model to what is needed for your use case.
The first rule is important. For example, when specifying IP addresses for a point-to-point link, only specify one side and deduce the other value in the templates (see the sketch after the example below). The last rule means you do not need to mimic a BGP YANG model to specify BGP peers and policies:
peers:
  transit:
    cogent:
      asn: 174
      remote:
        - 38.140.30.233
        - 2001:550:2:B::1F9:1
      specific-import:
        - name: ATT-US
          as-path: ".*7018$"
          lp-delta: 50
  ix-sfmix:
    rs-sfmix:
      monitored: true
      asn: 63055
      remote:
        - 206.197.187.253
        - 206.197.187.254
        - 2001:504:30::ba06:3055:1
        - 2001:504:30::ba06:3055:2
    blizzard:
      asn: 57976
      remote:
        - 206.197.187.42
        - 2001:504:30::ba05:7976:1
      irr: AS-BLIZZARD
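As promised above, here is a sketch of how the other side of a point-to-point /31 can be deduced from the single address kept in the source of truth (my illustration; in Jerikan this kind of deduction lives in the templates and their filters):
import ipaddress

def peer_address(local):
    """Given one side of a /31 point-to-point link, return the other side."""
    iface = ipaddress.ip_interface(local)
    first, second = iface.network.hosts()  # a /31 has exactly two addresses
    peer = second if iface.ip == first else first
    return f"{peer}/{iface.network.prefixlen}"

print(peer_address("192.0.2.0/31"))  # -> 192.0.2.1/31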

Templates The list of templates to compile for each device is stored in the source of truth, under the build namespace:
$ ./run-jerikan lookup edge1.ussfo03.blade-group.net build templates
data.yaml: data.j2
config.txt: junos/main.j2
config-base.txt: junos/base.j2
config-irr.txt: junos/irr.j2
$ ./run-jerikan lookup to1-p1.ussfo03.blade-group.net build templates
data.yaml: data.j2
config.txt: cumulus/main.j2
frr.conf: cumulus/frr.j2
interfaces.conf: cumulus/interfaces.j2
ports.conf: cumulus/ports.j2
dhcpd.conf: cumulus/dhcp.j2
default-isc-dhcp: cumulus/default-isc-dhcp.j2
authorized_keys: cumulus/authorized-keys.j2
motd: linux/motd.j2
acl.rules: cumulus/acl.j2
rsyslog.conf: cumulus/rsyslog.conf.j2
Templates are using Jinja2. This is the same engine used in Ansible. Jerikan ships some custom filters but also reuses some of the useful filters from Ansible, notably ipaddr. Here is an excerpt of templates/junos/base.j2 to configure DNS and NTP servers on Juniper devices:
system {
  ntp {
{% for ntp in lookup("system", "ntp") %}
    server {{ ntp }};
{% endfor %}
  }
  name-server {
{% for dns in lookup("system", "dns") %}
    {{ dns }};
{% endfor %}
  }
}
The equivalent template for Cisco IOS-XR is:
{% for dns in lookup('system', 'dns') %}
domain vrf VRF-MANAGEMENT name-server {{ dns }}
{% endfor %}
!
{% for syslog in lookup('system', 'syslog') %}
logging {{ syslog }} vrf VRF-MANAGEMENT
{% endfor %}
!
There are three helper functions provided:
  • devices() returns the list of devices matching a set of conditions on the scope. For example, devices("location==ussfo03", "groups==tor-bgp") returns the list of devices in San Francisco in the tor-bgp group. You can also omit the operator if you want the specified value to be equal to the one in the local scope. For example, devices("location") returns devices in the current location.
  • lookup() does a key lookup. It takes the namespace, the key, and optionally, a device name. If not provided, the current device is assumed.
  • scope() returns the scope of the provided device.
Here is how you would define iBGP sessions between edge devices in the same location:
{% for neighbor in devices("location", "groups==edge") if neighbor != device %}
{%   for address in lookup("topology", "addresses", neighbor).loopback|tolist %}
protocols bgp group IPV{{ address|ipv }}-EDGES-IBGP {
  neighbor {{ address }} {
    description "IPv{{ address|ipv }}: iBGP to {{ neighbor }}";
  }
}
{%   endfor %}
{% endfor %}
We also have a global key-value store to save information to be reused in another template or device. This is quite useful to automatically build DNS records. First, capture the IP address inserted into a template with store() as a filter:
interface Loopback0
 description 'Loopback:'
{% for address in lookup('topology', 'addresses').loopback|tolist %}
 ipv{{ address|ipv }} address {{ address|store('addresses', 'Loopback0')|ipaddr('cidr') }}
{% endfor %}
!
Then, reuse it later to build DNS records by iterating over store():4
{% for device, ip, interface in store('addresses') %}
{%   set interface = interface|replace('/', '-')|replace('.', '-')|replace(':', '-') %}
{%   set name = '{}.{}'.format(interface|lower, device) %}
{{ name }}. IN {{ 'A' if ip|ipv4 else 'AAAA' }} {{ ip|ipaddr('address') }}
{% endfor %}
Templates are compiled locally with ./run-jerikan build. The --limit argument restricts the devices to generate configuration files for. Build is not done in parallel because a template may depend on the data collected by another template. Currently, it takes 1 minute to compile around 3000 files spanning over 800 devices.
Jerikan outputs when building templates
Output of Jerikan after building configuration files for six devices
When an error occurs, a detailed traceback is displayed, including the template name, the line number and the value of all visible variables. This is a major time-saver compared to Ansible!
templates/opengear/config.j2:15: in top-level template code
    config.interfaces.{{ interface }}.netmask {{ infos.adddress | ipaddr("netmask") }}
        continent  = 'us'
        device     = 'con1-ag2.ussfo03.blade-group.net'
        environment = 'prod'
        host       = 'con1-ag2.ussfo03'
        infos      = {'address': '172.30.24.19/21'}
        interface  = 'wan'
        location   = 'ussfo03'
        loop       = <LoopContext 1/2>
        member     = '2'
        model      = 'cm7132-2-dac'
        os         = 'opengear'
        shorthost  = 'con1-ag2'
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
value = JerkianUndefined, query = 'netmask', version = False, alias = 'ipaddr'
[…]
        # Check if value is a list and parse each element
        if isinstance(value, (list, tuple, types.GeneratorType)):
            _ret = [ipaddr(element, str(query), version) for element in value]
            return [item for item in _ret if item]
>       elif not value or value is True:
E       jinja2.exceptions.UndefinedError: 'dict object' has no attribute 'adddress'
We don't have general-purpose rules when writing templates. Like for the source of truth, there is no need to create generic templates able to produce any BGP configuration. There is a balance to be found between readability and avoiding duplication. Templates can become scary and complex: sometimes, it's better to write a filter or a function in jerikan/jinja.py. Mastering Jinja2 is a good investment. Take time to browse through our templates as some of them show interesting features.
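For instance, the ipv filter used in the templates above could plausibly be implemented like this (a sketch; the actual code in jerikan/jinja.py may differ):
import ipaddress
from jinja2 import Environment

def ipv(address):
    # Return 4 or 6 depending on the address family.
    return ipaddress.ip_interface(address).version

env = Environment()
env.filters["ipv"] = ipv

template = env.from_string("IPv{{ address|ipv }}: iBGP to {{ neighbor }}")
print(template.render(address="2001:db8::1", neighbor="edge2.sk1"))  # IPv6: iBGP to edge2.sk1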

Checks Optionally, each configuration file can be validated by a script in the checks/ directory. Jerikan looks up the key checks in the build namespace to know which checks to run:
$ ./run-jerikan lookup edge1.ussfo03.blade-group.net build checks
- description: Juniper configuration file syntax check
  script: checks/junoser
  cache:
    input: config.txt
    output: config-set.txt
- description: check YAML data
  script: checks/data.yaml
  cache: data.yaml
In the above example, checks/junoser is executed if there is a change to the generated config.txt file. It also outputs a transformed version of the configuration file which is easier to understand when using diff. Junoser checks a Junos configuration file using Juniper's XML schema definition for Netconf.5 On error, Jerikan displays:
jerikan/build.py:127: RuntimeError
-------------- Captured syntax check with Junoser call --------------
P: checks/junoser edge2.ussfo03.blade-group.net
C: /app/jerikan
O:
E: Invalid syntax:  set system syslog archive size 10m files 10 word-readable
S: 1
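Such checks can be very small. Here is a sketch of a YAML-validation check in the spirit of checks/data.yaml (the file-path argument and the non-zero-exit-on-failure convention are assumptions on my part, not the documented interface):
#!/usr/bin/env python3
import sys
import yaml

def main(path):
    # Fail (exit status 1) if the generated file is not valid YAML.
    try:
        with open(path) as f:
            yaml.safe_load(f)
    except yaml.YAMLError as err:
        print(f"Invalid YAML: {err}", file=sys.stderr)
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1]))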

Integration into GitLab CI The next step is to compile the templates using a CI. As we are using GitLab, Jerikan ships with a .gitlab-ci.yml file. When we need to make a change, we create a dedicated branch and a merge request. GitLab compiles the templates using the same environment we use on our laptops and stores them as an artifact.
Merge requests in GitLab for a change
Merge request to add a new port in USSFO03. The templates were compiled successfully but approval from another team member is still required to merge.
Before approving the merge request, another team member looks at the changes in data and templates but also the differences for the generated configuration files:
Differences for generated configuration files
The change configures a port on the Juniper device, adds records to DNS, and updates NetBox with the new IP addresses (not shown).

Ansible After Jerikan has built the configuration files, Ansible takes over. It is also packaged as a Docker image to avoid the trouble of maintaining the right Python virtual environment and to ensure everyone is using the same versions.

Inventory Jerikan has generated an inventory file. It contains all the managed devices, the variables defined for each of them and the groups converted to Ansible groups:
ob1-n1.sk1.blade-group.net ansible_host=172.29.15.12 ansible_user=blade ansible_connection=network_cli ansible_network_os=ios
ob2-n1.sk1.blade-group.net ansible_host=172.29.15.13 ansible_user=blade ansible_connection=network_cli ansible_network_os=ios
ob1-n1.ussfo03.blade-group.net ansible_host=172.29.15.12 ansible_user=blade ansible_connection=network_cli ansible_network_os=ios
none ansible_connection=local
[oob]
ob1-n1.sk1.blade-group.net
ob2-n1.sk1.blade-group.net
ob1-n1.ussfo03.blade-group.net
[os-ios]
ob1-n1.sk1.blade-group.net
ob2-n1.sk1.blade-group.net
ob1-n1.ussfo03.blade-group.net
[model-c2960s]
ob1-n1.sk1.blade-group.net
ob2-n1.sk1.blade-group.net
ob1-n1.ussfo03.blade-group.net
[location-sk1]
ob1-n1.sk1.blade-group.net
ob2-n1.sk1.blade-group.net
[location-ussfo03]
ob1-n1.ussfo03.blade-group.net
[in-sync]
ob1-n1.sk1.blade-group.net
ob2-n1.sk1.blade-group.net
ob1-n1.ussfo03.blade-group.net
none
in-sync is a special group for devices whose configuration should match the golden configuration. Daily and unattended, Ansible should be able to push configurations to this group. The mid-term goal is to cover all devices. none is a special device for tasks not related to a specific host. This includes synchronizing NetBox, IRR objects, and the DNS, updating the RPKI, and building the geofeed files.

Playbook We use a single playbook for all devices. It is described in the ansible/playbooks/site.yaml file. Here is a shortened version:
- hosts: adm-gateway:!done
  strategy: mitogen_linear
  roles:
    - blade.linux
    - blade.adm-gateway
    - done
- hosts: os-linux:!done
  strategy: mitogen_linear
  roles:
    - blade.linux
    - done
- hosts: os-junos:!done
  gather_facts: false
  roles:
    - blade.junos
    - done
- hosts: os-opengear:!done
  gather_facts: false
  roles:
    - blade.opengear
    - done
- hosts: none:!done
  gather_facts: false
  roles:
    - blade.none
    - done
A host executes only one of the plays. For example, a Junos device executes the blade.junos role. Once a play has been executed, the device is added to the done group and the other plays are skipped. The playbook can be executed with the configuration files generated by the GitLab CI using the ./run-ansible-gitlab command. This is a wrapper around Docker and the ansible-playbook command and it accepts the same arguments. To deploy the configuration on the edge devices for the SK1 datacenter in check mode, we use:
$ ./run-ansible-gitlab playbooks/site.yaml --limit='edge:&location-sk1' --diff --check
[…]
PLAY RECAP *************************************************************
edge1.sk1.blade-group.net  : ok=6    changed=0    unreachable=0    failed=0    skipped=3    rescued=0    ignored=0
edge2.sk1.blade-group.net  : ok=5    changed=0    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0
We have some rules when writing roles:
  • --check must detect if a change is needed;
  • --diff must provide a visualization of the planned changes;
  • --check and --diff must not display anything if there is nothing to change;
  • writing a custom module tailored to our needs is a valid solution;
  • the whole device configuration is managed;6
  • secrets must be stored in Vault;
  • templates should be avoided as we have Jerikan for that; and
  • avoid duplication and reuse tasks.7
We avoid using collections from Ansible Galaxy, the exception being collections to connect and interact with vendor devices, like the cisco.iosxr collection. The quality of Ansible Galaxy collections is quite random and they are an additional maintenance burden. It seems better to write roles tailored to our needs. The collections we use are in ci/ansible/ansible-galaxy.yaml. We use Mitogen to get a 10× speedup on Ansible executions on Linux hosts. We also have a few playbooks for operational purposes: upgrading the OS version, isolating an edge router, etc. We were also planning on adding operational checks in roles: are all the BGP sessions up? They could be used to validate a deployment and roll back if there is an issue. Currently, our playbooks are run from our laptops. To keep tabs, we are using ARA. A weekly dry-run on devices in the in-sync group also provides a dashboard on which devices we need to run Ansible on.

Configuration data and templates Jerikan ships with pre-populated data and templates matching the configuration of our USSFO03 and SK1 datacenters. They do not exist anymore but, we promise, all this was used in production back in the day!
Network architecture for Blade datacenter
The latest iteration of our network infrastructure for SK1, USSFO03, and future datacenters. The production network uses BGP to the host (BGPttH) over a spine-leaf fabric. The out-of-band network uses a simple L2 design with the spanning tree protocol, as well as a set of console servers.
Notably, you can find the configuration for:
our edge routers
Some are running on Junos, like edge2.ussfo03, the others on IOS-XR, like edge1.sk1. The implemented functionalities are similar in both cases and we could swap one for the other. It includes the BGP configuration for transits, peerings, and IX as well as the associated BGP policies. PeeringDB is queried to get the maximum number of prefixes to accept for peerings. bgpq3 and a containerized IRRd help to filter received routes. A firewall is added to protect the routing engine. Both IPv4 and IPv6 are configured.
our BGP-based fabric
BGP is used inside the datacenter8 and is extended on bare-metal hosts. The configuration is automatically derived from the device location and the port number.9 Top-of-the-rack devices use passive BGP sessions for the ports towards servers. They also serve a provisioning network to let servers boot using DHCP and PXE, and act as a DHCP server. The design is multivendor. Some devices are running Cumulus Linux, like to1-p1.ussfo03, while some others are running Junos, like to1-p2.ussfo03.
our out-of-band fabric
We are using Cisco Catalyst 2960 switches to build an L2 out-of-band network. To provide redundancy and save a few bucks on wiring, we build small loops and run the spanning-tree protocol. See ob1-p1.ussfo03. It is redundantly connected to our gateway servers. We also use OpenGear devices for console access. See con1-n1.ussfo03.
our administrative gateways
These Linux servers have multiple purposes: SSH jump boxes, rescue connection, direct access to the out-of-band network, zero-touch provisioning of network devices,10 Internet access for management flows, centralization of the console servers using Conserver, and API for autoconfiguration of BGP sessions for bare-metal servers. They are the first servers installed in a new datacenter and are used to provision everything else. Check both the generated files and the associated Ansible tasks.

  1. Ansible does not even provide a line number when there is an error in a template. You may need to find the problem by bisecting.
    $ ansible --version
    ansible 2.10.8
    […]
    $ cat test.j2
    Hello {{ name }}!
    $ ansible all -i localhost, \
    >  --connection=local \
    >  -m template \
    >  -a "src=test.j2 dest=test.txt"
    localhost | FAILED! => {
        "changed": false,
        "msg": "AnsibleUndefinedVariable: 'name' is undefined"
    }
    
  2. You may recognize the same concepts as in Hiera, the hierarchical key-value store from Puppet. At first, we were using Jerakia, a similar independent store exposing an HTTP REST interface. However, the lookup overhead is too large for our use. Jerikan implements the same functionality within a Python function.
  3. The list of available filters is mangled inside jerikan/jinja.py. This is a remnant of the fact that we do not maintain Jerikan as standalone software.
  4. This is a bit confusing: we have a store() filter and a store() function. With Jinja2, filters and functions live in two different namespaces.
  5. We are using a fork with some modifications to be able to validate our configurations and exposing an HTTP service to reduce the time spent on each configuration check.
  6. There is a trend in network automation to automate a configuration subset, for example by having a playbook to create a new BGP session. We believe this is wrong: with time, your configuration will get out-of-sync with its expected state, notably hand-made changes will be left undetected.
  7. See ansible/roles/blade.linux/tasks/firewall.yaml and ansible/roles/blade.linux/tasks/interfaces.yaml. They are meant to be called when needed, using import_role.
  8. We also have some datacenters using BGP EVPN VXLAN at medium-scale using Juniper devices. As they are still in production today, we didn t include this feature but we may publish it in the future.
  9. In retrospect, this may not be a good idea unless you are pretty sure everything is uniform (number of switches for each layer, number of ports). This was not our case. We now think it is a better idea to assign a prefix to each device and write it in the source of truth.
  10. Non-linux based devices are upgraded and configured unattended. Cumulus Linux devices are automatically upgraded on install but the final configuration has to be pushed using Ansible: we didn t want to duplicate the configuration process using another tool.

20 May 2021

Jonathan McDowell: Losing control to Kubernetes

GMK NucBox Kubernetes is about giving up control. As someone who likes to understand what's going on, that's made it hard for me to embrace it. I've also mostly been able to ignore it, which has helped. However I'm aware it's incredibly popular, and there's some infrastructure at work that uses it. While it's not my responsibility I always find having an actual implementation of something is useful in understanding it generally, so I decided it was time to dig in and learn something new. First up, I should say I understand the trade-off here about handing a bunch of decisions off to Kubernetes about the underlying platform allowing development/deployment to concentrate on a nice consistent environment. I get the analogy with the shipping container model where you can abstract out both sides knowing all you have to do is conform to the TEU API. In terms of the underlying concepts I've got some virtualisation and container experience, so I'm not coming at this as a complete newcomer. And I understand multi-site dynamically routed networks. That said, let's start with a basic goal. I'd like to understand k8s (see, I can be cool and use the short name) enough to be comfortable with what's going on under the hood and be able to examine a running instance safely (i.e. enough confidence about pulling logs, probing state etc without fearing I might modify state). That'll mean when I come across such infrastructure I have enough tools to be able to hopefully learn from it. To do this I figure I'll need to build myself a cluster and deploy some things on it, then poke it. I'll start by doing so on bare metal; that removes variables around cloud providers and virtualisation and gives me an environment I know is isolated from everything else. I happen to have a GMK NucBox available, so I'll use that. As a first step I'm aiming to get a single node cluster deployed running some sort of web accessible service that is visible from the rest of my network. That should mean I've covered the basics of a Kubernetes install, a running service and actually making it accessible. Of course I'm running Debian. I've got a Bullseye (Debian 11) install - not yet released as stable, but in freeze and therefore not a moving target. I wanted to use packages from Debian as much as possible but it seems that the bits of Kubernetes available in main are mostly just building blocks and not a great starting point for someone new to Kubernetes. So to do the initial install I did the following:
# Install docker + nftables from Debian
apt install docker.io nftables
# Add the Kubernetes repo and signing key
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg > /etc/apt/k8s.gpg
cat > /etc/apt/sources.list.d/kubernetes.list <<EOF
deb [signed-by=/etc/apt/k8s.gpg] https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt update
apt install kubelet kubeadm kubectl
That resulted in a 1.21.1-00 install, which is current at the time of writing. I then used kubeadm to create the cluster:
kubeadm init --apiserver-advertise-address 192.168.53.147 --apiserver-cert-extra-sans udon.mynetwork
The extra parameters were to make the API server externally accessible from the host. I don't know if that was a good idea or not at this stage. kubeadm spat out a bunch of instructions, but the key piece was about copying the credentials to my user account. So I did:
mkdir ~noodles/.kube
cp -i /etc/kubernetes/admin.conf ~noodles/.kube/config
chown -R noodles ~noodles/.kube/
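As an aside, those credentials are also enough to poke at the cluster read-only from Python, which fits my goal of examining state without modifying it. A minimal sketch using the official kubernetes Python client (assuming pip install kubernetes; the list calls don't change any state):
from kubernetes import client, config

config.load_kube_config()  # picks up ~/.kube/config
v1 = client.CoreV1Api()
for pod in v1.list_pod_for_all_namespaces().items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)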
I then was able to see my node:
noodles@udon:~$ kubectl get nodes
NAME   STATUS     ROLES                  AGE     VERSION
udon   NotReady   control-plane,master   4m31s   v1.21.1
Ooooh. But why's it NotReady? Seems like it's a networking issue and I need to install a networking provider. The documentation on this is appalling. Flannel gets recommended as a simple option but then turns out to need a --pod-network-cidr option passed to kubeadm and I didn't feel like cleaning up and running again (I've omitted all the false starts it took me to get to this point). Another pointer was to Weave so I decided to try that with the following magic runes:
mkdir -p /var/lib/weave
head -c 16 /dev/urandom | shasum -a 256 | cut -d " " -f1 > /var/lib/weave/weave-passwd
kubectl create secret -n kube-system generic weave-passwd --from-file=/var/lib/weave/weave-passwd
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version   base64   tr -d '\n')&password-secret=weave-passwd&env.IPALLOC_RANGE=192.168.0.0/24"
(I believe what that's doing is: the first 3 lines create a password and store it into the internal Kubernetes config so the weave pod can retrieve it. The final line then grabs a YAML config from Weaveworks to configure up weave. My intention is to delve deeper into what's going on here later; for now the primary purpose is to get up and running.) As I'm running a single node cluster I then had to untaint my control node so I could use it as a worker node too:
kubectl taint nodes --all node-role.kubernetes.io/master-
And then:
noodles@udon:~$ kubectl get nodes
NAME   STATUS   ROLES                  AGE   VERSION
udon   Ready    control-plane,master   15m   v1.21.1
Result. What's actually running? Nothing except the actual system stuff, so we need to ask for all namespaces:
noodles@udon:~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                           READY   STATUS    RESTARTS   AGE
kube-system   coredns-558bd4d5db-4nvrg       1/1     Running   0          18m
kube-system   coredns-558bd4d5db-flrfq       1/1     Running   0          18m
kube-system   etcd-udon                      1/1     Running   0          18m
kube-system   kube-apiserver-udon            1/1     Running   0          18m
kube-system   kube-controller-manager-udon   1/1     Running   0          18m
kube-system   kube-proxy-6d8kg               1/1     Running   0          18m
kube-system   kube-scheduler-udon            1/1     Running   0          18m
kube-system   weave-net-mchmg                2/2     Running   1          3m26s
These are all things I'm going to have to learn about, but for now I'll nod and smile and pretend I understand. Now I want to actually deploy something to the cluster. I ended up with a simple HTTP echoserver (though it's not entirely clear that's actually the source for what I ended up pulling):
$ kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.10
deployment.apps/hello-node created
$ kubectl get pod
NAME                          READY   STATUS    RESTARTS   AGE
hello-node-59bffcc9fd-8hkgb   1/1     Running   0          36s
$ kubectl expose deployment hello-node --type=NodePort --port=8080
$ kubectl get services
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
hello-node   NodePort    10.107.66.138   <none>        8080:31529/TCP   1m
Looks good. And to test locally:
curl http://10.107.66.138:8080/

Hostname: hello-node-59bffcc9fd-8hkgb
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=192.168.53.147
	method=GET
	real path=/
	query=
	request_version=1.1
	request_scheme=http
	request_uri=http://10.107.66.138:8080/
Request Headers:
	accept=*/*
	host=10.107.66.138:8080
	user-agent=curl/7.74.0
Request Body:
	-no body in request-
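Since my stated goal is to be able to poke at a cluster without changing anything, a few read-only commands turned out to be safe to lean on at this point (a sketch; the pod name comes from the output above):
kubectl describe pod hello-node-59bffcc9fd-8hkgb
kubectl logs hello-node-59bffcc9fd-8hkgb
kubectl get events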
Neat. But my external network is 192.168.53.0/24 and that's a 10.* address, so how do I actually make it visible to other hosts? What I seem to need is an Ingress Controller, which provides some sort of proxy between the outside world and pods within the cluster. Let's pick nginx, because at least I have some vague familiarity with it, and it seems like it should be able to do a bunch of HTTP redirection to different pods depending on the incoming request.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.46.0/deploy/static/provider/cloud/deploy.yaml
I then want to expose the hello-node to the outside world and I finally had to write some YAML:
cat > hello-ingress.yaml <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - host: udon.mynetwork
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-node
                port:
                  number: 8080
EOF
i.e. incoming requests to http://udon.mynetwork/ should go to the hello-node on port 8080. I applied this:
$ kubectl apply -f hello-ingress.yaml
ingress.networking.k8s.io/example-ingress created
$ kubectl get ingress
NAME              CLASS    HOSTS            ADDRESS   PORTS   AGE
example-ingress   <none>   udon.mynetwork             80      3m8s
No address? What have I missed? Let's check the nginx service, which apparently lives in the ingress-nginx namespace:
noodles@udon:~$ kubectl get services -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                    AGE
ingress-nginx-controller             LoadBalancer   10.96.9.41      <pending>     80:32740/TCP,443:30894/TCP 13h
ingress-nginx-controller-admission   ClusterIP      10.111.16.129   <none>        443/TCP                    13h
<pending> does not seem like something I want. Digging around it seems I need to configure the external IP. So I do:
kubectl patch svc ingress-nginx-controller -n ingress-nginx -p \
	'{"spec": {"type": "LoadBalancer", "externalIPs": ["192.168.53.147"]}}'
and things look happier:
noodles@udon:~$ kubectl get services -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                 AGE
ingress-nginx-controller             LoadBalancer   10.96.9.41      192.168.53.147   80:32740/TCP,443:30894/TCP   14h
ingress-nginx-controller-admission   ClusterIP      10.111.16.129   <none>           443/TCP                 14h
noodles@udon:~$ kubectl get ingress
NAME              CLASS    HOSTS           ADDRESS          PORTS   AGE
example-ingress   <none>   udon.mynetwork  192.168.53.147   80      14h
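(Patching externalIPs by hand like this is fine for a single-node cluster; my understanding is the more usual bare-metal answer is a LoadBalancer implementation such as MetalLB, which hands out addresses from a pool. A sketch of its layer-2 ConfigMap style of configuration, with the address range being my assumption:)
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.53.240-192.168.53.250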
Let's try a curl from a remote host:
curl http://udon.mynetwork/

Hostname: hello-node-59bffcc9fd-8hkgb
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=192.168.0.5
	method=GET
	real path=/
	query=
	request_version=1.1
	request_scheme=http
	request_uri=http://udon.mynetwork:8080/
Request Headers:
	accept=*/*
	host=udon.mynetwork
	user-agent=curl/7.64.0
	x-forwarded-for=192.168.53.136
	x-forwarded-host=udon.mynetwork
	x-forwarded-port=80
	x-forwarded-proto=http
	x-real-ip=192.168.53.136
	x-request-id=6aaef8feaaa4c7d07c60b2d05c45f75c
	x-scheme=http
Request Body:
	-no body in request-
Ok, so that seems like success. I've got a single node cluster running a single actual application pod (the echoserver) and exporting it to the outside world. That's enough to start poking under the hood. Which is for another post, as this one is already getting longer than I'd like. I'll just leave some final thoughts on things I need to work out.

16 May 2021

Carl Chenet: How to save up to 500€/year switching from Mailchimp to Open Source Mailtrain and AWS SES

My newsletter Le Courrier du hacker (3,800 subscribers, 176 issues) is 3 years old, and the Mailchimp costs were becoming unbearable for a small project ($50 a month, $600 a year) with still limited revenues nowadays. Switching to the Open Source Mailtrain plugged into the AWS Simple Email Service (SES) will dramatically reduce the associated costs. First things first, thanks a lot to Pierre-Gilles Leymarie for his own article about switching to Mailtrain/SES. I owe him (and soon you too) so much. This article will be a step-by-step guide to setting up Mailtrain/SES on a dedicated server running Linux. What's the purpose of this article? Mailchimp gets more and more expensive as your newsletter subscribers grow, and you need to leave it. You can use Mailtrain, a web app running on your own server, together with the AWS SES service to send emails in an efficient way, avoiding being flagged as a spammer by the other SMTP servers (very, very common; you can try, but you have been warned). Prerequisites You will need the following prerequisites: Steps This is a fairly straightforward setup if you know what you're doing. Otherwise, you may need the help of a professional sysadmin. You will need to complete the following steps in order to complete your setup: Configure AWS SES Verify your domain You need to configure DKIM to certify that the emails sent are indeed from your own domain. DKIM is mandatory; it's the de facto standard in the mail industry. Ask to verify your domain
Ask AWS SES to verify a domain
Generate the DKIM settings
Generate the DKIM settings
Use the DKIM settings
Now you have your DKIM settings and Amazon AWS is waiting to find the TXT record in your DNS zone. Configure your DNS zone to include the DKIM settings I can't be too specific for this section because it varies A LOT depending on your DNS provider. The key is: as indicated by the previous image, you have to create one TXT record and two CNAME records in your DNS zone (see the zone-file sketch after the image below). The names, the types and the values are indicated by AWS SES. If you don't understand what's going on here, there is a high probability you'll need a system administrator to apply these modifications and the next ones in this article. Am I okay for AWS SES? As long as the word verified does not appear for your domain, as shown in the image below, something is wrong. If it takes too long, you have a misconfiguration somewhere.
AWS SES pending verification
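To make the DNS step concrete, the records AWS asks for end up looking something like this in a BIND-style zone file (a sketch with hypothetical names and values; the real ones are dictated by the SES console):
_amazonses.example.com.        IN TXT   "pmBGN/7MjnfhTKUZ06Enqq1PeGUaOkw8lGhcfwefcHU="
token1._domainkey.example.com. IN CNAME token1.dkim.amazonses.com.
token2._domainkey.example.com. IN CNAME token2.dkim.amazonses.com.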
When your domain is verified, you'll also receive an email informing you of the successful verification. SMTP settings The last step is generating your credentials to use the AWS SES SMTP server. It is really straightforward: AWS provides the SMTP address to use, the port, and a pair of username/password credentials.
AWS SES SMTP settings and credentials
Just click on Create My SMTP Credentials and follow the instructions. Write the SMTP server address down somewhere and store the file with the credentials on your computer; we'll need them below. Configure your server As we said before, we need a bare-metal server or a virtual machine running a recent Linux. Configure your MySQL/MariaDB database We create a user mailtrain having all rights on a new database mailtrain.
MariaDB [(none)]> create database mailtrain;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> CREATE USER 'mailtrain' IDENTIFIED BY 'V3rYD1fF1cUlTP4sSW0rd!';
Query OK, 0 rows affected (0.01 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON mailtrain.* TO 'mailtrain'@localhost IDENTIFIED BY 'V3rYD1fF1cUlTP4sSW0rd!';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mailtrain          |
| mysql              |
| performance_schema |
+--------------------+
6 rows in set (0.00 sec)
MariaDB [(none)]> Bye
Configure your web server I use Nginx and I'll give you the complete setup for it, including generating the Let's Encrypt certificate. Configure Let's Encrypt You need to stop Nginx as root: systemctl stop nginx. Then get the certificate only (I'll give the Nginx vhost configuration below): certbot certonly -d mailtrain.toto.com Install Mailtrain On your server create the following directory: mkdir -p /var/www/
cd /var/www
wget https://github.com/Mailtrain-org/mailtrain/archive/refs/tags/v1.24.1.tar.gz
tar zxvf v1.24.1.tar.gz
# the tarball unpacks into mailtrain-1.24.1; the rest of this guide assumes /var/www/mailtrain
mv mailtrain-1.24.1 mailtrain
Modify the file /var/www/mailtrain/config/production.toml to use the MySQL settings:
[mysql]
host="localhost"
user="mailtrain"
password="V3rYD1fF1cUlTP4sSW0rd!"
database="mailtrain"
Now launch the Mailtrain process in a screen:
screen
cd /var/www/mailtrain
# presumably required on a fresh unpack: fetch the Node dependencies
npm install --production
NODE_ENV=production npm start
Now Mailtrain is launched and should be running. Yeah, I know it's ugly to launch it like this (a root process in a screen, etc.); you can improve security with the following commands:
groupadd mailtrain
useradd -g mailtrain mailtrain
chown -R mailtrain:mailtrain /var/www/mailtrain 
Now create the following file in /etc/systemd/system/mailtrain.service:
[Unit]
 Description=mailtrain
 After=network.target
[Service]
 Type=simple
 User=mailtrain
 WorkingDirectory=/var/www/mailtrain/
 Environment="NODE_ENV=production"
 Environment="PORT=3000"
 ExecStart=/usr/bin/npm run start
 TimeoutSec=15
 Restart=always
[Install]
 WantedBy=multi-user.target
To register the systemd unit and launch the new Mailtrain daemon, use the following commands (do not forget to kill your screen session if you used it before):
systemctl daemon-reload
systemctl start mailtrain.service
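If you want Mailtrain to come back by itself after a reboot, you will presumably also want to enable the unit:
systemctl enable mailtrain.service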
Now Mailtrain is running under the classic user mailtrain of the mailtrain system group. Configure the Nginx Vhost configuration for your domain Here is my configuration for the Mailtrain Nginx Vhost:
map $http_upgrade $connection_upgrade {
  default upgrade;
  ''      close;
}
server {
  listen 80;
  listen [::]:80;
  server_name mailtrain.toto.com;
  return 301 https://$host$request_uri;
}
server {
  listen 443 ssl;
  listen [::]:443 ssl;
  server_name mailtrain.toto.com;
  access_log /var/log/nginx/mailtrain.toto.com.access.log;
  error_log /var/log/nginx/mailtrain.toto.com.error.log;
  ssl_protocols TLSv1.2;
  ssl_ciphers EECDH+AESGCM:EECDH+AES;
  ssl_ecdh_curve prime256v1;
  ssl_prefer_server_ciphers on;
  ssl_session_cache shared:SSL:10m;
  ssl_certificate     /etc/letsencrypt/live/mailtrain.toto.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/mailtrain.toto.com/privkey.pem;
  keepalive_timeout    70;
  sendfile             on;
  client_max_body_size 0;
  root /var/www/mailtrain;
  location ~ /\.well-known\/acme-challenge {
    allow all;
  }
  gzip on;
  gzip_disable "msie6";
  gzip_vary on;
  gzip_proxied any;
  gzip_comp_level 6;
  gzip_buffers 16 8k;
  gzip_http_version 1.1;
  gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
  add_header Strict-Transport-Security "max-age=31536000";
  location / {
    try_files $uri @proxy;
  }
  location @proxy {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_pass http://127.0.0.1:3000;
  }
}
Now Nginx is ready.
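It's worth letting Nginx validate the configuration first:
nginx -t
Then just start it: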
systemctl start nginx
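(One caveat with the stop-Nginx-then-certonly approach above: the certificate will need renewing later, and renewal will need port 80 free again. A sketch of handling that with certbot's pre/post hooks, assuming the same standalone method is used for renewal:)
certbot renew --pre-hook "systemctl stop nginx" --post-hook "systemctl start nginx"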
This Nginx vhost redirects all HTTP requests to HTTPS and proxies them to the Mailtrain process running on port 3000. Now it's time to set up Mailtrain! Setup Mailtrain You should be able to access your Mailtrain instance at https://mailtrain.toto.com. Mailtrain is quite simple to configure; here is my mailer setup. Mailtrain just forwards emails to AWS SES, so we only have to plug Mailtrain into AWS SES.
Mailtrain mailer setup
The hostname is provided by AWS SES in the SMTP Settings section. Use port 465 and the USE TLS option. Next, provide the AWS SES username and password you generated above and stored somewhere on your computer. One of the issues I encountered is the AWS SES rate limit: sending too many emails too fast will get you flagged as a spammer, so I had to throttle Mailtrain. Because I'm a lazy man, I asked Pierre-Gilles Leymarie for his setup; much easier than determining the right values myself. Here is my setup. It works fine for my soon-to-be 4k subscribers. The idea is: if AWS SES lets you know you're sending too fast, then just slow down.
Mailtrain to throttle sending emails to AWS SES
Conclusion That's it! You're ready! Almost. You need an HTML template for your newsletter and a list of subscribers. But if you're not new to the newsletter field and are fleeing Mailchimp because of their expensive prices, you should have both already. After sending almost ten issues with this setup, I'm really happy with it. Open/click rates are the same. When leaving Mailchimp, do not leave any list of subscribers behind, because they'll charge you $8 for 0 to 500 contacts; that's crazy expensive! About the author The post How to save up to 500€/year switching from Mailchimp to Open Source Mailtrain and AWS SES appeared first on Carl Chenet's Blog.

Vincent Fourmond: Tutorial: analyze redox inactivations/reactivations

Redox-dependent inactivations are actually rather common in the field of metalloenzymes, and electrochemistry can be an extremely powerful tool to study them, provided one can analyze the data quantitatively. The point of this post is to teach the reader how to do so using QSoas. For more general information about redox inactivations and how to study them using electrochemical techniques, the reader is invited to read the review del Barrio and Fourmond, ChemElectroChem 2019. This post is a tutorial on analyzing data coming from the study of the redox-dependent substrate inhibition of the periplasmic nitrate reductase NapAB, which has the advantage of being relatively simple. The whole process is discussed in Jacques et al, BBA, 2014. What you need to know in order to follow this tutorial is the following: You can download the data files from the GitHub repository. Before fitting the data to determine the values of the rate constants at the potentials of the experiment, we will first subtract the background current, assuming that the respective contributions of the faradaic and non-faradaic currents are additive. Start QSoas, go to the directory where you saved the files, and load both the data file and the blank file thus:
QSoas> cd
QSoas> load 27.oxw
QSoas> load 27-blanc.oxw
QSoas> S 1 0
(after the first command, you have to manually select the directory in which you downloaded the data files). The S 1 0 command just subtracts dataset 1 (the first loaded) from dataset 0 (the last loaded); see more there. blanc is the French for blank... Then, we remove a bit of the beginning and the end of the data, corresponding to one half of the steps at \(E_0\), which we don't exploit much here (they are essentially only used to make sure that the irreversible loss is taken care of properly). This is done using strip-if:
QSoas> strip-if x<30 x>300
Then, we can fit! The fit used is called fit-linear-kinetic-system, which is used to fit kinetic models with only linear reactions (like here) and steps which change the values of the rate constants but do not instantly change the concentrations. The specific command to fit the data is:
QSoas> fit-linear-kinetic-system /species=2 /steps=0,1,2,1,0
The /species=2 option indicates that there are two species (A and I). The /steps=0,1,2,1,0 option indicates that there are 5 steps, with three different conditions (0 to 2) in the order 0,1,2,1,0. This fit needs a bit of setup before getting started. The species are numbered 1 and 2, and the conditions (potentials) are indicated by #0, #1 and #2 suffixes. For the sake of simplicity, you can also simply load the starting-parameters.params parameters to have everything set up the correct way. Then, just hit Fit and enjoy this moment when QSoas works and you don't have to... The screen should now look like this:
Now, it's done! The fit is actually pretty good, and you can read the values of the inactivation and reactivation rate constants from the fit parameters. You can also train on the 21.oxw and 21-blanc.oxw files. Usually, re-loading the best fit parameters from other potentials as starting parameters works really well. Gathering the results of several fits into a real curve of rate constants as a function of potential is left as an exercise for the reader (or maybe a later post), although you may find this series of posts useful in this context!
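For reference, the underlying two-species linear scheme being fitted here is just a pair of linear rate equations (my notation for the inactivation and reactivation rate constants, not necessarily the parameter names QSoas uses):
\[
\frac{\mathrm{d}[\mathrm{A}]}{\mathrm{d}t} = -k_i[\mathrm{A}] + k_a[\mathrm{I}],
\qquad
\frac{\mathrm{d}[\mathrm{I}]}{\mathrm{d}t} = k_i[\mathrm{A}] - k_a[\mathrm{I}],
\]
with a different pair \((k_i, k_a)\) for each of the three potential conditions #0, #1 and #2.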
About QSoas QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050-5052. The current version is 3.0. You can download its source code there (or clone from the GitHub repository) and compile it yourself, or buy precompiled versions for MacOS and Windows there.

26 February 2021

Ritesh Raj Sarraf: Wayland KDE X11

KDE Impressions These days, I often hear a lot about Wayland, and how much effort is being put into it; not just by the Embedded world but also the usual Desktop systems, namely KDE and GNOME. In the recent past, I switched back to KDE and have been (very) happy about the switch. Even though the KDE 4 (and initial KDE 5) debacle had burnt many, coming back to a usable KDE desktop is always a delight. It makes me feel at home with the elegance, while at the same time the flexibility, it provides. It feels so nice to draft this blog article from Kwrite + VI Input Mode. Thanks to the great work of the Debian KDE Team, and of Norbert Preining in particular, who has helped bring very up-to-date KDE packages into Debian. Right now, I'm on a Plasma 5.21.1 desktop, which is recent by all standards.

Wayland Almost everywhere in the Linux world these days, people are busy integrating Wayland as the primary display service. Not sure what the current status on the GNOME side is, but I definitely keep trying KDE + Wayland with every release. I keep trying with every release because it still is not ready for daily use. And it makes me get back to X11, no matter how dated some may call it. Fact is, X11 still shines to me as an end-user. Glitches with Wayland remain (based on this week's test on Plasma 5.21.1):
  • Horrible performance compared to X11
  • Very crashy, especially when hotplugging a secondary display; Plasma would just crash. X11 is very resilient to such things; part of the reason, I think, is the age of the codebase.
  • Many, many applications still need to be fixed for Wayland. Or Wayland needs to accommodate them in some way. XWayland does not really feel like the answer.
And while KDE keeps urging users to switch to Wayland, as that's where all the new enhancements and fixes are put in, someone like me still needs to stick to X11 for the time being. So to get my shiny new LG 27" 4K Monitor (3840x2160 60.00*+) to work without too many glitches, I had to live with an alias:
$ alias | grep xrandr
alias rrs_xrandr_lg='xrandr --output DP-1 --mode 3840x2160 --scale .75x.75'

Plasma 5.21 On the brighter side, the Plasma 5.21.1 release brings some nice enhancements in other areas.
  • I'm now able to make use of tighter integration with systemd/cgroups, with better organization and management of processes overall.
  • The new Plasma theme, Breeze Twilight, is a good blend of Light + Dark.
I also appreciate the work put in by Michail Vourlakos. The KDE project is lucky to have a developer/designer like him. His vision of, and work on, the KDE desktop go well beyond anything I could write up here.
$ usystemctl status plasma-plasmashell.service
● plasma-plasmashell.service - KDE Plasma Workspace
     Loaded: loaded (/usr/lib/systemd/user/plasma-plasmashell.service; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2021-02-26 18:34:23 IST; 13s ago
   Main PID: 501806 (plasmashell)
      Tasks: 21 (limit: 18821)
     Memory: 759.8M
        CPU: 13.706s
     CGroup: /user.slice/user-1000.slice/user@1000.service/session.slice/plasma-plasmashell.service
             └─501806 /usr/bin/plasmashell --no-respawn
Feb 26 18:35:00 priyasi plasmashell[501806]: qml: recreating buttons
Feb 26 18:35:21 priyasi plasmashell[501806]: qml: recreating buttons
Feb 26 18:35:49 priyasi plasmashell[501806]: qml: recreating buttons
Feb 26 18:35:57 priyasi plasmashell[501806]: qml: recreating buttons

OBS - Open Build Service I should also thank the OpenSUSE folks for the OBS work. It has enabled the close equivalent (or better, in my experience) of PPAs for Debian. And that is what has enabled developers like Norbert to easily and quickly be able to deliver the entire KDE suite.

OBS - Some detail Christian asked for some more details on the OBS side of things, from my point of view. I'm updating this article with it because the comment system may not always be reliable and I hate losing content. Having been using OBS myself, and knowing others in the Debian community who are making use of it, I certainly think we as a project should consider making use of OBS. Given that OBS is Free Software, it is a perfect fit for Debian. Gitlab is another example of what we've made available in Debian. OBS is divided into multiple parts:
  • OBS Server
  • OBS DoD service
  • OBS Publisher
  • OBS Workers
  • OBS Warden
  • OBS Rep Server
For every Debian release I care about, I add an OBS project per release. So I have OBS projects for Sid, Bullseye, Buster and Jessie. Now, say you have a package, foo. You prep your package and enable all the releases that you want to build the package for. The same package then gets built, in separate clean environments, for every release I mentioned as an example above. You don't have to manually trigger the build for every release and every architecture. You add the releases (as projects) in OBS, set their supported architectures, and then add those enabled release projects as build targets for your package (there is a sketch of the command-line workflow after the list below). Every build involves:
  • Creating a new chroot for each build
  • Building the package
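In practice, driving that from the command line looks something like this sketch with osc, the OBS command-line client (the project, package and repository names are hypothetical):
# fetch a working copy of the package from your OBS project
osc checkout home:rrs/foo
cd home:rrs/foo
# run a local test build against one enabled release/architecture pair
osc build Debian_11 x86_64
# push the change; OBS then rebuilds it for every enabled release
osc commit -m "update foo"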
Builds can be scattered across multiple hosts, known as workers in OBS terminology. Your workers are independent machine entities, supporting different architectures. The machines can be bare-metal ones, VMs, even containers. So this allows for very nice scale-in and scale-out. There may be auto-scaling too, but that is something worth investigating. Think of things like cross-architecture builds. Let's assume the cloud vendors decide to donate resources to the Debian project. We could enable OBS worker instances on the respective clouds (different architectures) and plug them into the master OBS instance that Debian hosts. Fully distributed. Similarly, big hardware vendors willing to donate compute resources could house them in their premises, and Debian could just easily establish a connection to them. All of this just a TCP connection away. So when I look at the features of OBS from the point of view of Debian, I like it more. Extensibility won't be an issue. Supporting a new Debian release would just be a matter of bootstrapping the Debian release as a project in OBS, and then all done. The single effort of setting up the target release project is a one-time job, and then all can leverage it. The PPA was a long-craved feature missing in Debian, in my opinion. OBS not only fills that gap but also extends it in a very easy way. Andrew Lee put together a nice video presentation about this at DebConf 20.

13 January 2021

Antoine Beaupr : New phone: Pixel 4a

I'm sorry to announce that I gave up on the Fairphone series and switched to a Google Phone (Pixel 4a) running CalyxOS.

Problems in fairy land My fairphone2, even if it is less than two years old, is having major problems:
  • from time to time, the screen flickers and loses "touch" until I squeeze it back together
  • the camera similarly disconnects regularly
  • even when it works, the camera is... pretty bad: low light is basically unusable, it's slow and grainy
  • the battery can barely keep up for one day
  • the cellular coverage is very poor, in Canada: I lose signal at the grocery store and in the middle of my house...
Some of those problems are known: the Fairphone 2 is old now. It was probably old even when I got it. But I can't help but feel a little sad to let it go: the entire point of that device was to make it easy to fix. But alas, because it's sold only in Europe, local stores don't carry replacement parts. To be fair, Fairphone did offer to fix the device, but with a 2-week turnaround, and I had to get another phone anyways. I did actually try to buy a fairphone3, from Clove. But they did some crazy validation routine. By email, they asked me to provide a photocopy of a driver's license and the credit card, arguing they need to do this to combat fraud. I found that totally unacceptable and asked them to cancel my order. And because I'm not sure the FP3 will fix the coverage issues, I decided to just give up on Fairphone until they officially ship to the Americas.

Do no evil, do not pass go, do not collect 200$ So I got a Google phone, specifically a Pixel 4a. It's a nice device, all small and shiny, but it's "plasticky" - I would have preferred metal, but it seems you need to pay much, much more to get that (in the Pixel 5). In any case, it's certainly a better form factor than the Fairphone 2: even though the screen is bigger, the device itself is actually smaller and thinner, which feels great. The OLED screen is beautiful, awesome contrast and everything, and preliminary tests show that the camera is much better than the one on the Fairphone 2. (To be fair, again, that is another thing the FP3 improved significantly. And that is with the stock Camera app from CalyxOS/AOSP, so not as good as the Google Camera app, which does AI stuff.)

CalyxOS: success The Pixel 4a is not supported by LineageOS: it seems every time I pick a device in that list, I manage to miss the right device by one (I bought a Samsung S9 before, which is also unsupported, even though the S8 is). But thankfully, it is supported by CalyxOS. That install was a breeze: I was hesitant about playing with installing a custom Android firmware on a phone again after fighting with this quite a bit in the past (e.g. htc-one-s, lg-g3-d852). But it turns out their install instructions, mostly using an AOSP alliance device-flasher, work absolutely great. It assumes you know about the commandline, and it does basically require you to curl | sudo (because you need to download their binary and run it as root), but it Just. Works. It reminded me of how great it was to get the Fairphone with TWRP preinstalled... Oh, and kudos to the people in #calyxos on Freenode: awesome tech support, super nice folks. An amazing improvement over the ambiance in #lineageos! :)

Migrating data Unfortunately, migrating the data was the usual pain in the back. This should improve the next time I do this: CalyxOS ships with seedvault, a secure backup system for Android 10 (or 9?) and later which backs up everything (including settings!) with encryption. Apparently it works great, and CalyxOS is also working on a migration system to switch phones. But, obviously, I couldn't use that on the Fairphone 2 running Android 7... So I had to, again, improvise. The first step was to install Syncthing, to have an easy way to copy data around. That's easily done through F-Droid, already bundled with CalyxOS (including the privileged extension!). Pair the devices and boom, a magic portal to copy stuff over. The other early step I took was to copy apps over using the F-Droid "find nearby" functionality. It's a bit quirky, but really helps in copying a bunch of APKs over. Then I set up a temporary keepassxc password vault on the Syncthing share so that I could easily copy-paste passwords into apps. I used to do this in a text file in Syncthing, but copy-pasting from the text file is much harder than from KeePassDX. (I just picked one, maybe KeePassDroid is better? I don't know.) Do keep a copy of the URL of the service to reduce typing as well. Then the following apps required special tweaks:
  • AntennaPod has an import/export feature: export on one end, into the Syncthing share, then import on the other. then go to the queue and select all episodes and download
  • the Signal "chat backup" does copy the secret key around, so you don't get the "security number change" warning (even if it prompts you to re-register) - external devices need to be relinked though
  • AnkiDroid, DSub, Nextcloud, and Wallabag required copy-pasting passwords
I tried to sync contacts with DAVx5, but that didn't work so well: the account was set up correctly, but contacts didn't show up. There's probably just this one thing I need to do to fix this, but since I don't really need synced contacts, it was easier to export a VCF file to Syncthing and import it again.

Known problems One problem with CalyxOS I found is that the fragile little microg tweaks didn't seem to work well enough for Signal. That was unexpected so they encouraged me to file that as a bug. The other "issue" is that the bootloader is locked, which makes it impossible to have "root" on the device. That's rather unfortunate: I often need root to debug things on Android. In particular, it made it difficult to restore data from OSMand (see below). But I guess that most things just work out of the box now, so I don't really need it and appreciate the extra security. Locking the bootloader means full cryptographic verification of the phone, so that's a good feature to have! OSMand still doesn't have a good import/export story. I ended up sharing the Android/data/net.osmand.plus/files directory and importing waypoints, favorites and tracks by hand. Even though maps are actually in there, it's not possible for Syncthing to write directly to the same directory on the new phone, "thanks" to the new permission system in Android which forbids this kind of inter-app messing around. Tracks are particularly a problem: my older OSMand setup had all those folders neatly sorting those tracks by month. This makes it really annoying to track every file manually and copy it over. I have mostly given up on that for now, unfortunately. And I'll still need to reconfigure profiles and maps and everything by hand. Sigh. I guess that's a good clearinghouse for my old tracks I never use... Update: turns out setting storage to "shared" fixed the issue, see comments below!

Conclusion Overall, CalyxOS seems like a good Android firmware. The install is smooth and the resulting install seems solid. The above problems are mostly annoyances and I'm very happy with the experience so far, although I've only been using it for a few hours so this is very preliminary.

10 December 2020

Jonathan Carter: CentOS Stream, or Debian?

It's the end of CentOS as we know it Earlier this week, the CentOS project announced the shift to CentOS Stream. In a nutshell, this means that they will discontinue being a close clone of RHEL with security updates, and instead CentOS will serve as a development branch of RHEL. As you can probably imagine (or glean from the comments in that post I referenced), a lot of people are unhappy about this. One particular quote got my attention this morning while catching up on this week's edition of Linux Weekly News, under the distributions quotes section:

I have been doing this for 17 years and CentOS is basically my life's work. This was (for me personally) a heart-wrenching decision. However, I see no other decision as a possibility. If there was, it would have been made. - Johnny Hughes
I feel really sorry for this person and can empathize; I've been in similar situations in my life before, where I've poured all my love and energy into something and then, due to some corporate or organisational decisions (and usually poor ones), the project got discontinued and all the work that went into it vanished into the ether. Also, 17 years is a really long time to be contributing to any one project, so I can imagine that this must have been especially gutting.

Throw me a freakin bone here I'm also somewhat skeptical of how successful CentOS Stream will really be as any form of a community project. It seems that Red Hat is expecting volunteers to contribute to their product development for free, and then when these contributors actually want to use the resulting product, they're expected to pay a corporate subscription fee to do so. This seems like a very lop-sided relationship to me, and I'm not sure it will be sustainable in the long term. In Red Hat's announcement of CentOS Stream, they kind of throw the community a bone by saying "In the first half of 2021, we plan to introduce low- or no-cost programs for a variety of use cases" - it seems likely that this will just be for experimental purposes, similar to the Windows Insider program, and won't be of much use for production users at all. Red Hat does point out that their Universal Base Image (UBI) is free to use and that users could just use that on any system in a container, but this doesn't add much comfort to the individuals and organisations who have contributed huge amounts of time and effort to CentOS over the years and who rely on a stable, general-purpose Linux system that can be installed on bare metal.

Way forward for CentOS users Where to from here? I suppose CentOS users could start coughing up for RHEL subscriptions. For many CentOS use cases that won't make much sense. They could move to another distribution, or fork/restart CentOS. The latter is already happening. One of the original founders of the CentOS project, Gregory Kurtzer, is now working on Rocky Linux, which aims to be a new free system built from the RHEL sources. Some people from Red Hat and Canonical are often a bit surprised or skeptical when I point out to them that binary licenses are also important. This whole saga is yet another data point, and it proves that yet again. If Red Hat had from the beginning released RHEL with free sources and unobfuscated patches, then none of this would've been necessary in the first place. And while I wish Rocky Linux all the success it aims to achieve, I do not think that working for free on a system that ultimately supports Red Hat's selfish eco-system is really productive or helpful. The fact is, Debian is a free enterprise-scale system already used by huge organisations like Google and many others, which has stable releases, LTS support and ELTS offerings from external organisations if someone really needs them. And while RHEL clones have come and gone through the years, Debian's mission and contract to its users is something that stays consistent, and I believe Debian and its ideals will be around for as long as people need Unixy operating systems to run anywhere (i.e. a very long time). While we sometimes fall short of some of our technical goals in Debian, and while we don't always agree on everything, we do tend to make great long-term progress, and usually in the right direction. We've proved that our method of building a system together is sustainable, that we can do so reliably and in a timely fashion, and that we can collectively support it. From there on it can only get even better when we join forces and work together, because when either individuals or organisations contribute to Debian, they can use the end result for both private and commercial purposes without having to pay any fee or be encumbered by legal gotchas. Don't get caught by greedy corporate motivations that will result in you losing years of your life's work for absolutely no good reason. Make your time and effort count: either contribute to Debian yourself or give your employees time to do so on company time. Many already do, reap the rewards of this, and don't look back. While Debian is a very container- and virtualization-friendly system, we've managed to remain a good general-purpose operating system that manages to span use cases so vast that I'd have to use a blog post longer than this one just to cover them. And while learning a whole new package build chain, package manager and organisational culture and so on can be, uhm, really rocky at the start, I'd say that it's a good investment with Debian and unlikely to be time that you'll ever feel was wasted. As Debian project leader, I'm personally available to help answer any questions that someone might have if they are interested in coming over to Debian. Feel free to mail leader_AT_debian.org (replace _AT_ with @) or find me on the oftc IRC network with the nick highvoltage. I believe that together, we can make Debian the de facto free enterprise system, and that it would be to the benefit of all its corporate users, instead of tilting all the benefit to just one or two corporations who certainly don't have your best interests in mind.
