Search Results: "sre"

18 May 2022

Reproducible Builds: Supporter spotlight: Jan Nieuwenhuizen on Bootstrappable Builds, GNU Mes and GNU Guix

The Reproducible Builds project relies on several projects, supporters and sponsors for financial support, but they are also valued as ambassadors who spread the word about our project and the work that we do. This is the fourth instalment in a series featuring the projects, companies and individuals who support the Reproducible Builds project. We started this series by featuring the Civil Infrastructure Platform project and followed this up with a post about the Ford Foundation as well as recent ones about ARDC and the Google Open Source Security Team (GOSST). Today, however, we will be talking with Jan Nieuwenhuizen about Bootstrappable Builds, GNU Mes and GNU Guix.
Chris Lamb: Hi Jan, thanks for taking the time to talk with us today. First, could you briefly tell me about yourself? Jan: Thanks for the chat; it's been a while! Well, I've always been trying to find something new and interesting that is just asking to be created but is mostly being overlooked. That's how I came to work on GNU Guix and create GNU Mes to address the bootstrapping problem that we have in free software. It's also why I have been working on releasing Dezyne, a programming language and set of tools to specify and formally verify concurrent software systems, as free software. Briefly summarised, compilers are often written in the language they are compiling. This creates a chicken-and-egg problem which leads users and distributors to rely on opaque, pre-built binaries of those compilers that they use to build newer versions of the compiler. To gain trust in our computing platforms, we need to be able to tell how each part was produced from source, and opaque binaries are a threat to user security and user freedom since they are not auditable. The goal of bootstrappability (and the bootstrappable.org project in particular) is to minimise the amount of these bootstrap binaries. Anyway, after studying Physics at Eindhoven University of Technology (TU/e), I worked for digicash.com, a startup trying to create a digital and anonymous payment system; sadly, however, a traditional account-based system won. Separate to this, as there was no software (either free or proprietary) to automatically create beautiful music notation, together with Han-Wen Nienhuys I created GNU LilyPond. Ten years ago, I took the initiative to co-found a democratic school in Eindhoven based on the principles of sociocracy. And last Christmas I finally went vegan, after being mostly vegetarian for about 20 years!
Chris: For those who have not heard of it before, what is GNU Guix? What are the key differences between Guix and other Linux distributions? Jan: GNU Guix is both a package manager and a full-fledged GNU/Linux distribution. In both forms, it provides state-of-the-art package management features such as transactional upgrades and package roll-backs, hermetically-sealed build environments, unprivileged package management as well as per-user profiles. One obvious difference is that Guix forgoes the usual Filesystem Hierarchy Standard (i.e. /usr, /lib, etc.), but there are other significant differences too, such as Guix being scriptable using Guile/Scheme, as well as Guix's dedication and focus on free software.
Chris: How does GNU Guix relate to GNU Mes? Or, rather, what problem is Mes attempting to solve? Jan: GNU Mes was created to address the security concerns that arise from bootstrapping an operating system such as Guix. Even if this process entirely involves free software (i.e. the source code is, at least, available), it commonly uses large and unauditable binary blobs. Mes is a Scheme interpreter written in a simple subset of C and a C compiler written in Scheme, and it comes with a small, bootstrappable C library. Twice, the Mes bootstrap has halved the size of opaque binaries that were needed to bootstrap GNU Guix. These reductions were achieved by first replacing GNU Binutils, GNU GCC and the GNU C Library with Mes, and then replacing Unix utilities such as awk, bash, coreutils, grep, sed, etc., with Gash and Gash-Utils. The final goal of Mes is to help create a full-source bootstrap for any interested UNIX-like operating system.
Chris: What is the current status of Mes? Jan: Mes supports all that is needed from R5RS and GNU Guile to run MesCC with Nyacc, the C parser written for Guile, for 32-bit x86 and ARM. The next step for Mes would be to become more compatible with Guile, e.g., to have Guile-module support and to support running Gash and Gash-Utils. In working to create a full-source bootstrap, I have disregarded the kernel and the Guix build system for now, but otherwise, all packages should be built from source, and obviously, no binary blobs should go in. We still need a Guile binary to execute some scripts, and it will take at least another one to two years to remove that binary. I'm using the 80/20 approach, cutting corners initially to get something working and useful early. Another metric would be how many architectures we have. We are quite a way along with ARM: tinycc now works, but there are still problems with GCC and Glibc. RISC-V is coming, too, which could be another metric. Someone has looked into picking up NixOS this summer. How many distros do anything about reproducibility or bootstrappability? The bootstrappability community is so small that we don't need metrics, sadly. The number of bytes of binary seed is a nice metric, but running the whole thing on a full-fledged Linux system is tough to put into a metric. Also, it is worth noting that I'm developing on a modern Intel machine (i.e. a platform with a management engine); that's another key component that doesn't have metrics.
Chris: From your perspective as a Mes/Guix user and developer, what does reproducibility mean to you? Are there any related projects? Jan: From my perspective, I'm more into the problem of bootstrapping, and reproducibility is a prerequisite for bootstrappability. Reproducibility clearly takes a lot of effort to achieve, however. It's relatively easy to install some Linux distribution and be happy, but if you look at communities that really care about security, they are investing in reproducibility and other ways of improving the security of their supply chain. Projects I believe are complementary to Guix and Mes include NixOS and Debian, and on the hardware side, the RISC-V platform shares many of our core principles and goals.
Chris: Well, what are these principles and goals? Jan: Reproducibility and bootstrappability often feel like the next step in the frontier of free software. If you have all the sources and you can't reproduce a binary, that just doesn't feel right anymore. We should start to desire (and demand) transparent, elegant and auditable software stacks. To a certain extent, that's always been a low-level intent since the beginning of free software, but something clearly got lost along the way. On the other hand, if you look at the NPM or Rust ecosystems, we see a world where people directly install binaries. As they are not as supportive of copyleft as the rest of the free software community, you can see that movement, and people in our area are doing more as a response to that, so that what we have continues to remain free, and to prevent us from falling asleep and waking up in a couple of years to see, for example, Rust in the Linux kernel and (more importantly) big binary blobs being required to use our systems. It's an excellent time to advance right now, so we should get a foothold in and ensure we don't lose any more.
Chris: What would be your ultimate reproducibility goal? And what would the key steps or milestones be to reach that? Jan: The ultimate goal would be to have a system built with open hardware, with all software on it fully bootstrapped from its source. This bootstrap path should be easy to replicate and straightforward to inspect and audit. All fully reproducible, of course! In essence, we would have solved the supply chain security problem. Our biggest challenge is ignorance. There is much unawareness about the importance of what we are doing. As it is rather technical and doesn't really affect everyday computer use, that is not surprising. This unawareness can be a great force driving us in the opposite direction. Think of Rust being allowed in the Linux kernel, or Python being required to build a recent GNU C Library (glibc). There is also the fact that companies like Google/Apple still want to play 'us vs. them', not willing to support GPL software; they are not ready yet to truly support user freedom. Take the infamous log4j bug: everyone is using open source these days, but nobody wants to take responsibility and help develop or nurture the community. Not 'ecosystem', as that's how it's being approached right now: live and let live/die, seeing what happens without taking any responsibility. We are growing and we are strong and we can do a lot, but if we have to work against those powers, it can become problematic. So, let's spread our great message and get more people involved!
Chris: What has been your biggest win? Jan: From a technical point of view, the full-source bootstrap has been our biggest win. A talk by Carl Dong at the 2019 Breaking Bitcoin conference stated that connecting Jeremiah Orians' Stage0 project to Mes would be the holy grail of bootstrapping, and we recently managed to achieve just that: in other words, starting from hex0, a 357-byte binary, we can now build the entire Guix system. This past year we have not made significant visible progress, however, as our funding was unfortunately not there. The Stage0 project has advanced in RISC-V. A month ago, though, I secured NLnet funding for another year, and thanks to NLnet, Ekaitz Zarraga and Timothy Sample will work on GNU Mes and the Guix bootstrap as well. Separate to this, the bootstrappable community has grown a lot from the two people it was six years ago: there are currently over 100 people in the #bootstrappable IRC channel, for example. The enlarged community is possibly an even more important win going forward.
Chris: How realistic is a 100% bootstrappable toolchain? And from someone who has been working in this area for a while, is solving the Trusting Trust problem actually feasible in reality? Jan: Two answers: yes and no; it really depends on your definition. One great thing is that the whole Stage0 project can also run on the Knight virtual machine, a hardware platform that was designed, I think, in the 1970s. I believe we can and must do better than we are doing today, and that there's a lot of value in it. The core issue is not the trust; we can probably all trust each other. On the other hand, we don't want to trust each other or even ourselves. I am not, personally, going to inspect my RISC-V laptop, and other people create the hardware and probably do not want to inspect the software. The answer comes back to being conscientious and doing what is right. Inserting GCC as a binary blob is not right. I think we can do better, and that's what I'd like to do. The security angle is interesting, but I don't like the paranoid part of that; I like the beauty of what we are creating together and stepwise improving on that.
Chris: Thanks for taking the time to talk to us today. If someone wanted to get in touch or learn more about GNU Guix or Mes, where might someone go? Jan: Sure! First, check out: I'm also on Twitter (@janneke_gnu) and on octodon.social (@janneke@octodon.social).
Chris: Thanks for taking the time to talk to us today. Jan: No problem. :)


For more information about the Reproducible Builds project, please see our website at reproducible-builds.org. If you are interested in ensuring the ongoing security of the software that underpins our civilisation and wish to sponsor the Reproducible Builds project, please reach out to the project by emailing contact@reproducible-builds.org.

15 March 2022

Kunal Mehta: How to mirror the Russian Wikipedia with Debian and Kiwix

It has been reported that the Russian government has threatened to block access to Wikipedia for documenting narratives that do not agree with the official position of the Russian government. One of the anti-censorship strategies I've been working on is Kiwix, an offline Wikipedia reader (and plenty of other content too). Kiwix is free and open source software developed by a great community of people that I really enjoy working with. With threats of censorship, traffic to Kiwix has increased fifty-fold, with users from Russia accounting for 40% of new downloads! You can download copies of every language of Wikipedia for offline reading and distribution, as well as host your own read-only mirror, which I'm going to explain today. Disclaimer: depending on where you live, it may be illegal or get you in trouble with the authorities to rehost Wikipedia content. Please be aware of your digital and physical safety before proceeding. With that out of the way, let's get started. You'll need a Debian (or Ubuntu) server with at least 30GB of free disk space. You'll also want to have a webserver like Apache or nginx installed (I'll share the Apache config here). First, we need to download the latest copy of the Russian Wikipedia.
$ wget 'https://download.kiwix.org/zim/wikipedia/wikipedia_ru_all_maxi_2022-03.zim'
If the download is interrupted or fails, you can use wget -c $url to resume it. Next let's install kiwix-serve and try it out. If you're using Ubuntu, I strongly recommend enabling our Kiwix PPA first.
$ sudo apt update
$ sudo apt install kiwix-tools
$ kiwix-serve -p 3004 wikipedia_ru_all_maxi_2022-03.zim
At this point you should be able to visit http://yourserver.com:3004/ and see the Russian Wikipedia. Awesome! You can use any available port; I just picked 3004. Now let's use systemd to daemonize it so it runs in the background. Create /etc/systemd/system/kiwix-ru-wp.service with the following:
[Unit]
Description=Kiwix Russian Wikipedia
[Service]
Type=simple
User=www-data
ExecStart=/usr/bin/kiwix-serve -p 3004 /path/to/wikipedia_ru_all_maxi_2022-03.zim
Restart=always
[Install]
WantedBy=multi-user.target
Now let's start it and enable it at boot:
$ sudo systemctl start kiwix-ru-wp
$ sudo systemctl enable kiwix-ru-wp
Since we want to expose this on the public internet, we should put it behind a more established webserver and configure HTTPS. Here's the Apache httpd configuration I used:
<VirtualHost *:80>
        ServerName ru-wp.yourserver.com
        ServerAdmin webmaster@localhost
        DocumentRoot /var/www/html
        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined
        <Proxy *>
                Require all granted
        </Proxy>
        ProxyPass / http://127.0.0.1:3004/
        ProxyPassReverse / http://127.0.0.1:3004/
</VirtualHost>
Put that in /etc/apache2/sites-available/kiwix-ru-wp.conf and run:
$ sudo a2ensite kiwix-ru-wp
$ sudo systemctl reload apache2
Finally, I used certbot to enable HTTPS on that subdomain and redirect all HTTP traffic over to HTTPS. This is an interactive process that is well documented, so I'm not going to go into it in detail. You can see my mirror of the Russian Wikipedia, following these instructions, at https://ru-wp.legoktm.com/. Anyone is welcome to use it or distribute the link, though I am not committing to running it long-term. This is certainly not a perfect anti-censorship solution: the copy of Wikipedia that Kiwix provides became out of date the moment it was created, and the setup described here will require you to manually update the service when the new copy is available next month. Finally, if you have some extra bandwidth, you can also help seed this as a torrent.
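Since new ZIM snapshots are published roughly monthly, a small script can take some of the manual work out of that update step. Here is a minimal sketch, assuming a hypothetical next snapshot filename and the paths used above; adapt both to your own setup:
#!/bin/sh
# Hypothetical updater: fetch the next monthly snapshot and swap it in.
set -e
NEW=wikipedia_ru_all_maxi_2022-04.zim  # adjust to the latest published file
cd /path/to
wget -c "https://download.kiwix.org/zim/wikipedia/$NEW"
# Point the systemd unit at the new file, then restart the service.
sudo sed -i "s/wikipedia_ru_all_maxi_[0-9-]*\.zim/$NEW/" /etc/systemd/system/kiwix-ru-wp.service
sudo systemctl daemon-reload
sudo systemctl restart kiwix-ru-wp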

31 January 2022

Russ Allbery: Review: The Story of the Treasure Seekers

Review: The Story of the Treasure Seekers, by E. Nesbit
Publisher: Amazon
Copyright: 1899
Printing: May 2012
ASIN: B0082ZBXSI
Format: Kindle
Pages: 136
The Story of the Treasure Seekers was originally published in 1899 and is no longer covered by copyright. I read the free Amazon Kindle version because it was convenient. My guess is that Amazon is republishing the Project Gutenberg version, but they only credit "a community of volunteers." There are six Bastable children: Dora, Oswald, Dicky, the twins Alice and Noel, and Horace Octavius (H.O.), the youngest. Their mother is dead and the family's finances have suffered in the wake of her death (or, as the first-person narrator puts it, "the fortunes of the ancient House of Bastable were really fallen"), which means that their father works long hours and is very absorbed with his business. That leaves the six kids largely to fend for themselves, since they can't afford school. Clearly the solution is to find treasure. This is a fix-up novel constructed from short stories that were originally published in various periodicals, reordered and occasionally rewritten for the collected publication. To be honest, calling it a fix-up novel is generous; there are some references to previous events, but the first fourteen chapters can mostly stand alone. The last two chapters are closely related and provide an ending. More on that in a moment. What grabs the reader's attention from the first paragraph is the writing style:
This is the story of the different ways we looked for treasure, and I think when you have read it you will see that we were not lazy about the looking. There are some things I must tell before I begin to tell about the treasure-seeking, because I have read books myself, and I know how beastly it is when a story begins, "Alas!" said Hildegarde with a deep sigh, "we must look our last on this ancestral home" and then some one else says something and you don't know for pages and pages where the home is, or who Hildegarde is, or anything about it.
The first-person narrator of The Story of the Treasure Seekers is one of the six kids.
It is one of us that tells this story but I shall not tell you which: only at the very end perhaps I will.
The narrator then goes on to elaborately praise one of the kids, occasionally accidentally uses "I" instead of their name, and then remembers and tries to hide who is telling the story again. It's beautifully done and had me snickering throughout the book. It's not much of a mystery (you will figure out who is telling the story very quickly), but Nesbit captures the writing style of a kid astonishingly well without making the story poorly written. Descriptions of events have a headlong style that captures a child's sense of adventure and heedless immortality mixed with quiet observations that remind the reader that kids don't miss as much as people think they do. I think the most skillful part of this book is the way Nesbit captures a kid's disregard of literary convention. The narrator in a book written by an adult tends to fit into a standard choice of story-telling style and follow it consistently. Even first-person narrators who break some of those rules feel like intentionally constructed characters. The Story of the Treasure Seekers is instead half "kid telling a story" and half "kid trying to emulate the way stories are told in books" and tends to veer wildly between the two when the narrator gets excited, as if they're vaguely aware of the conventions they're supposed to be following but are murky on the specifics. It feels exactly like the sort of book a smart and well-read kid would write (with extensive help from an editor). The other thing that Nesbit handles exceptionally well is the dynamic between the six kids. This is a collection of fairly short stories, so there isn't a lot of room for characterization. The kids are mostly sketched out with one or two memorable quirks. But Nesbit puts a lot of effort into the dynamics that arise between the children in a tight-knit family, properly making the group of kids as a whole and in various combinations a sort of character in their own right. Never for a moment does either the reader or the kids forget that they have siblings. Most adventures involve some process of sorting out who is going to come along and who is going to do other things, and there's a constant but unobtrusive background rhythm of bickering, making up, supporting each other, being frustrated by each other, and getting exasperated at each other's quirks. It's one of the better-written sibling dynamics that I've read. I somehow managed to miss Nesbit entirely as a kid, probably because she didn't write long series and child me was strongly biased towards books that were part of long series. (One book was at most a pleasant few hours; there needed to be a whole series attached to get any reasonable amount of reading out of the world.) This was nonetheless a fun bit of nostalgia because it was so much like the books I did read: kids finding adventures and making things up, getting into various trouble but getting out of it by being honest and kind, and only occasional and spotty adult supervision. Reading as an adult, I can see the touches of melancholy of loss that Nesbit embeds into this quest for riches, but part of the appeal of the stories is that the kids determinedly refuse to talk about it except as a problem to be solved. Nesbit was a rather famous progressive, but this is still a book of its time, which means there's one instance of the n-word and the kids have grown up playing the very racist version of cowboys and indians. The narrator also does a lot of stereotyping of boys and girls, although Nesbit undermines that a bit by making Alice a tomboy. 
I found all of this easier to ignore because the story is narrated by one of the kids who doesn't know any better, but your mileage may vary. I am always entertained by how anyone worth writing about in a British children's novel of this era has servants. You know the Bastables have fallen upon hard times because they only have one servant. The kids don't have much respect for Eliza, which I found a bit off-putting, and I wondered what this world looks like from her perspective. She clearly did a lot of the work of raising these motherless kids, but the kids view her as the hired help or an obstacle to be avoided, and there's not a lot of gratitude present. As the stories unfold, it becomes more and more clear that there's a quiet conspiracy of surrounding adults to watch out for these kids, which the kids never notice. This says good things about society, but it does undermine the adventures a little, and by the end of the book the sameness of the stories was wearing a bit thin. The high point of the book is probably chapter eight, in which the kids make their own newspaper, the entirety of which is reproduced in the book and is a note-perfect recreation of what an enterprising group of kids would come up with. In the last two stories, Nesbit tacks on an ending that was probably obligatory, but which I thought undermined some of the emotional subtext of the rest of the book. I'm not sure how else one could have put an ending on this book, but the ending she chose emphasized the degree to which the adventures really were just play, and the kids are rewarded in these stories for their ethics and their circumstances rather than for anything they concretely do. It's a bit unsatisfying. This is mostly a nostalgia read, but I'm glad I read it. If this book was not part of your childhood, it's worth reading if only for how well Nesbit captures a child's narrative voice. Rating: 7 out of 10

1 January 2022

Caleb Adepitan: Everybody Struggles

I often get intimidated by the brilliance of others, just looking at them from a distance and seeing how radiantly they gleam in their knowledge. I often feel like I'm not doing enough, or that I'm just meant to be mediocre and there's nothing I can do to be more than that. I often feel like everyone except me is a genius, impeccable unlike me. But maybe I'm just at a point these people had previously been at, and have now gone past. Maybe it's my time to steer the wheel they once steered, or maybe they aren't even what they seem to me to be and it's all just a misrepresentation cooked up by an imperfect mind. Hmm... it has to be that. A perfect portrait painted by an imperfect mind, which, when found out, renders every stroke in the portrait imperfect. It has to be that, or all of it. No one starts out having it all figured out. A king is born in a day, but a king is not nurtured in a day. We all have something we struggle with; it's just different all across. And because their current struggle isn't my current struggle, and mine not theirs, doesn't mean they are immune to the system. Most times, all that gleaming in their knowledge is a collective thing, and you only notice when you take a closer look that there are more heads behind the knowledge and brilliance than you think. We mostly see what people know and seldom see what they don't know. If it were the other way around, we'd see how ridiculous the things these highly esteemed people don't know are. What matters most is that they keep making it through their struggles. No man is an island! The sooner we realize this, the better. We can always take those things we struggle with to others and work collectively on them, learn from a lot of other experiences, and gleam collectively. It's like a tug of war. When what is on the other end is more than we can pull off, we invite many more hands to pull it off.

19 December 2021

Russ Allbery: Review: Raybearer

Review: Raybearer, by Jordan Ifueko
Series: Raybearer #1
Publisher: Amulet Books
Copyright: 2020
ISBN: 1-68335-719-1
Format: Kindle
Pages: 308
Tarisai was raised alone in Bhekina House by an array of servants and tutors who were not allowed to touch her. Glimpses of the world were fleeting and sometimes ended by nailed-shut windows. Her life revolved around her rarely-seen mother, The Lady, who treated her with deep affection but rarely offered a word of praise, instead only pushing her to study harder. The servants whispered behind her back (but still in her hearing) that she was not human. At the age of seven, in a child's attempt to locate her absent mother by sneaking out of the house, she finds her father and is told a piece of the truth: she is the daughter of the Lady and a captive ehru, a djinn. At the age of eleven she's sent with two guardians to Oluwan City, the capital of Aritsar, to enter a competition she knows nothing about, for reasons no one has ever explained. Raybearer is a young adult fantasy novel, the first of a duology. Like a lot of young adult novels, it is a coming of age story that follows Tarisai from the end of her highly manipulated childhood through her introduction to a world she was carefully never taught about. Like a lot of young adult fantasy novels, Tarisai has some unusual abilities. What those are, and why she has them, is perhaps less obvious than it may appear at first. Unlike a lot of young adult fantasy novels, Raybearer is not set in a facsimile of Western Europe, the structure of gods and religion is not obviously derived from Christianity or Greek or Norse mythology, and neither Tarisai nor most of the characters of this story are white. Some of the characters are; Ifueko draws from a grab bag of cultures that does include European as well as African, Middle Eastern, and Asian. But the food, the physical descriptions, the landscape, and the hair and hair styles feel primarily African, not in the sense of specific identifiable regions, but in the same way that most fantasy feels European even if the map isn't recognizable. That gives this story a freshness that I found delightful. The mythology of this world shares some similarities to standard fantasy tropes, including a bargain with the underworld that plays a similar role to fae bargains in some European fantasy, but it also goes in different directions and finds atypical balances, which gave the story room to catch me by surprise. The magical center of this book (and series), which Tarisai is carefully not told about until the story starts, is a system for anointing and protecting the emperor: selection of people who swear loyalty to him and each other and become his innermost circle, and thereby grant him magical protection. The emperor himself is the Raybearer, possessing an artifact that makes him invulnerable to one form of death for each member of his council he anoints. At eleven council members, he becomes invulnerable to anything but old age, or an attack from one of the council themselves. As the reader learns early in the book, that last part is important. Tarisai is an assassin; her mother's goal is for her to be selected as a member of the council for the prince, who will become the next emperor. But there is rather more to this system of magic than it may first appear, in a way that adds good depth to the mythology. And there is quite a bit more to Tarisai herself than anyone expects. Tarisai as a protagonist follows a more typical young adult pattern, but it's a formula that works for me.
Her upbringing isolated from any other children has left her craving connection, but it also made her self-reliant, stubborn, and good at keeping her own counsel. One of the things that I loved about this book is that she's not thrown into a nest of vipers and cynical politics. Some of that is happening in the background, but the first step of her mother's plan is for her to earn the trust of the prince in a competition with other potential council members, all of whom are, well, kids. They fight (some), but they also make friends, helped along by the goal and requirement that they join a cooperative council or be sent home. That gives the plot a more collaborative and social feel than one would otherwise expect from the setup. Ifueko does a great job juggling a challenging cast size by focusing on a few kids with whom Tarisai strikes up a friendship but giving the others distinct-enough personalities that their presence is still felt in the story. There are two character dynamics that stand out: Tarisai's relationship with Prince Ekundayo, and her friendship with Sanjeet. The first carries much of the weight of the plot, of course; Tarisai is supposed to gain his trust and then kill him, and the reader will be unsurprised that this takes twists and turns no one expected. But Ifueko, refreshingly, does not reach for the stock plot development of a romance to complicate matters, even though many of the characters expect that. To the contrary, this is a rare story that at least hints at an acknowledgment that some people are not interested in romance at all, and there are other forms that mutual respect can take. Tarisai's relationship with Sanjeet is a different type of depth: two kids with very different histories finding a common understanding in the ways that they were both abused, and creating space for each other. It's a great friendship that includes some deeply touching moments. It took me a bit to get into this book, but once Tarisai starts finding her feet and navigating her new relationships, I was engrossed. The story takes a sharp and nasty turn that was hard to read, but Ifueko chooses to turn it into a story of resiliency rather than survival, which makes it much easier to read than it could have been. She also pulls off the kind of plot that complicates and deepens the motives of the obvious villains in a way that gives the story much greater heft, but without disregarding the damage that they have done. I think the plot did fall apart a bit at the end of the book, with too much quick travel and world-building revelations at the cost of development of the relationships that were otherwise at the center of the book, but I'm hoping the sequel will pull those threads back together. And it's so refreshing to read a fantasy novel of this type with a different setting. It's not perfect: Ifueko falls back on Planet of the Hats regional characterization in a few places, and Songland is so obviously Korea that it felt jarring and out of place. Christianity also snuck its nose into the world-building tent near the end in ways that bugged me a bit, although it was subtle enough that I think most readers won't notice. But compared to most fantasy settings, it feels original and fresh. More of this! Ifueko starts this book with a wonderfully memorable dedication:
For the kid scanning fairy tales for a hero with a face like theirs. And for the girls whose stories we compressed into pities and wonders, triumphs and cautions, without asking, even once, for their names.
I think she was successful on both parts of that promise, and it makes for some great reading. Recommended. Followed by Redemptor. Rating: 8 out of 10

15 November 2021

Vincent Bernat: Git as a source of truth for network automation

The first step when automating a network is to build the source of truth. A source of truth is a repository of data that provides the intended state: the list of devices, the IP addresses, the network protocol settings, the time servers, etc. A popular choice is NetBox. Its documentation highlights its usage as a source of truth:
NetBox intends to represent the desired state of a network versus its operational state. As such, automated import of live network state is strongly discouraged. All data created in NetBox should first be vetted by a human to ensure its integrity. NetBox can then be used to populate monitoring and provisioning systems with a high degree of confidence.
When introducing Jerikan, common feedback we got was: you should use NetBox for this. Indeed, Jerikan's source of truth is a bunch of YAML files versioned with Git.

Why Git? If we look at how things are done with servers and services, in a datacenter or in the cloud, we are likely to find users of Terraform, a tool turning declarative configuration files into infrastructure. Declarative configuration management tools like Salt, Puppet,1 or Ansible take care of server configuration. NixOS is an alternative: it combines package management and configuration management with a functional language to build virtual machines and containers. When using a Kubernetes cluster, people use Kustomize or Helm, two other declarative configuration management tools. Taken together, these tools implement the infrastructure as code paradigm.
Infrastructure as code is an approach to infrastructure automation based on practices from software development. It emphasizes consistent, repeatable routines for provisioning and changing systems and their configuration. You make changes to code, then use automation to test and apply those changes to your systems. Kief Morris, Infrastructure as Code, O'Reilly.
A version control system is a central tool for infrastructure as code. The usual candidate is Git with a source code management system like GitLab or GitHub. You get:
Traceability and visibility
Git keeps a log of all changes: what, who, why, and when. With a bit of discipline, each change is explained and self-contained. It becomes part of the infrastructure documentation. When the support team complains about a degraded experience for some customers over the last two months or so, you quickly discover this may be related to a change to an incoming policy in New York.
Rolling back
If a change is defective, it can be reverted quickly, safely, and without much effort, even if other changes happened in the meantime. The policy change at the origin of the problem spanned over three routers. Reverting this specific change and deploying the configuration let you solve the situation until you find a better fix.
Branching, reviewing, merging
When working on a new feature or refactoring some part of the infrastructure, a team member creates a branch and works on their change without interfering with the work of other members. Once the branch is ready, a pull request is created and the change is ready to be reviewed by the other team members before merging. You discover the issue was related to diverting traffic through an IX where one ISP was connected without enough capacity. You propose and discuss a fix that includes a change of the schema and the templates used to declare policies to be able to handle this case.
Continuous integration
For each change, automated tests are triggered. They can detect problems and give more details on the effect of a change. Branches can be deployed to a test infrastructure where regression tests are executed. The results can be synthesized as a comment in the pull request to help the review. You check your proposed change does not modify the other existing policies.
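As a concrete illustration of the traceability and rollback points above (the file path and commit id are hypothetical):
$ git log --oneline -- policies/nyc-incoming.yaml  # what changed, who, and when
$ git revert f3a2b1c   # undo just the defective change, keeping history intact
$ git push             # let CI validate, then deploy the reverted configuration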

Why not NetBox? NetBox does not share these features. It is a database with a REST and a GraphQL API. Traceability is limited: changes are not grouped into a transaction and they are not documented. You cannot fork the database. Usually, there is one staging database to test modifications before applying them to the production database. It does not scale well and reviews are difficult. Applying the same change to the production database can be hazardous. Rolling back a change is non-trivial.

Update (2021-11) Nautobot, a fork of NetBox, will soon address this point by using Dolt, an SQL database engine allowing you to clone, branch, and merge, like a Git repository. Dolt is compatible with MySQL clients. See Nautobots, Roll Back! for a preview of this feature.

Moreover, NetBox is not usually the single source of truth. It contains your hardware inventory, the IP addresses, and some topology information. However, this is not the place you put authorized SSH keys, syslog servers, or the BGP configuration. If you also use Ansible, this information ends up in its inventory. The source of truth is therefore fragmented between several tools with different workflows. Since NetBox 2.7, you can append additional data with configuration contexts. This mitigates the point. The data is arranged hierarchically but the hierarchy cannot be customized.2 Nautobot can manage configuration contexts in a Git repository, while still allowing use of the API to fetch them. You get some additional perks, thanks to Git, but the remaining data is still in a database with a different lifecycle. Lastly, the schema used by NetBox may not fit your needs and you cannot tweak it. For example, you may have a rule to compute the IPv6 address from the IPv4 address for dual-stack interfaces. Such a relationship cannot be easily expressed and enforced in NetBox. When changing the IPv4 address, you may forget the IPv6 address. The source of truth should only contain the IPv4 address but you also want the IPv6 address in NetBox because this is your IPAM and you need it to update your DNS entries.
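To make such a derivation rule concrete: a template or validation hook over the YAML files can compute and enforce the IPv6 address mechanically. A minimal sketch, assuming a purely hypothetical addressing scheme that embeds the IPv4 octets:
$ ipv4=192.0.2.5
$ echo "2001:db8:cafe::${ipv4//./:}"   # prints 2001:db8:cafe::192:0:2:5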

Why not Git? There are some limitations when putting your source of truth in Git:
  1. If you want to expose a web interface to allow an external team to request a change, it is more difficult to do it with Git than with a database. Out-of-the-box, NetBox provides a nice web interface and a permission system. You can also write your own web interface and interact with NetBox through its API.
  2. YAML files are more difficult to query in different ways. For example, looking for a free IP address is complex if they are scattered in multiple places.
In my opinion, in most cases, you are better off putting the source of truth in Git instead of NetBox. You get a lot of perks by doing that and you can still use NetBox as a read-only view, usable by other tools. We do that with an Ansible module. In the remaining cases, Git could still fit the bill. Read-only access control can be done through submodules. Pull requests can restrict write access: a bot can check that the changes only modify allowed files before auto-merging. This still requires some Git knowledge, but many teams are now comfortable using Git, thanks to its ubiquity.
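Such a bot check can be a small diff filter in CI. A sketch, assuming a hypothetical layout where the external team may only touch files under customers/:
#!/bin/sh
# Hypothetical pre-merge gate: refuse changes outside the allowed directory.
if git diff --name-only origin/main...HEAD | grep -qv '^customers/'; then
    echo "Change modifies files outside customers/; refusing to auto-merge." >&2
    exit 1
fi
echo "Only allowed files changed; OK to auto-merge."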

  1. Wikimedia manages its infrastructure with Puppet. They publish everything on GitHub. Creative Commons uses Salt. They also publish everything on GitHub. Thanks to them for doing that! I wish I could provide more real-life examples.
  2. Being able to customize the hierarchy is key to avoiding repetition in the data. For example, if switches are paired together, some data should be attached to them as a group and not duplicated on each of them. Tags can be used to partially work around this issue but you lose the hierarchical aspect.

23 October 2021

Arthur Diniz: Dropdown for GitHub workflows input parameters

Dropdown for GitHub workflows input parameters Sometimes when we look at CI/CD tools embedded within git-based software repository managers like GitHub, GitLab or Bitbucket, we run into a lack of some features. This time my DevOps/SRE team and I were facing the pain of not being able to create drop-downs within GitHub workflows using input parameters. Although this functionality is already available on other platforms such as Bitbucket, the specific client we were working with stored the code inside GitHub. At first I thought that someone had already solved this problem somehow, but after an extensive search on the internet I found several angry GitHub users opening requests within the Support Community and even on Stack Overflow. So I decided to create a solution for this, always thinking about simplicity and in a way that makes it easy to get this missing functionality. I started by creating an input array pattern using commas, and using a tag (the selector), e.g. brackets, as the default value marker. Here is an example of what an input string would look like:
name: gh-action-dropdown-list-input
on:
  workflow_dispatch:
    inputs:
      environment:
        description: 'Environment'
        required: true
        default: 'dev,staging,[uat],prod'
Now came the final question, which would turn out to be the most complicated to deal with: how can I change the GitHub Actions interface to replace the input pattern we created earlier with a dropdown? The simplest answer I could think of was to create a Chrome and Firefox extension that would do all this logic behind the scenes and replace the HTML input element with a select tag containing the array values, leaving the tag value (the selector) as the default. All the code was developed in pure JavaScript, is open-source licensed under Apache 2.0 and is available at https://github.com/arthurbdiniz/gh-action-dropdown-list-input.
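For illustration, the convention itself is easy to parse: split the string on commas, and the bracket-wrapped entry is the default. A quick shell sketch of that logic (not the extension's actual code):
$ input='dev,staging,[uat],prod'
$ tr ',' '\n' <<< "$input"                     # the options, one per line
$ grep -o '\[.*\]' <<< "$input" | tr -d '[]'   # the default: uat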

Install extension Once installed, the extension is ready to use and the final result we see is the Actions interface with drop-downs. :)

Configuring selectors Go to the top right corner of the browser you are using and click on the extension logo. A screen will pop up with tag options. Choose the right tags for you and save.
This action might require reloading the GitHub workflow tab.

Have fun using drop-downs inside GitHub. If you liked this project, please share this post and, if possible, star the repository. Also feel free to connect with me on LinkedIn: https://www.linkedin.com/in/arthurbdiniz

References
  • https://github.community/t/can-workflow-dispatch-input-be-option-list/127338
  • https://stackoverflow.com/questions/69296314/dropdown-for-github-workflows-input-parameters

27 September 2021

Wouter Verhelst: SReview::Video is now Media::Convert

SReview, the video review and transcode tool that I originally wrote for FOSDEM 2017 but which has since been used for debconfs and minidebconfs as well, has long had a sizeable component for inspecting media files with ffprobe, and for generating ffmpeg command lines to convert media files from one format to another. This component, SReview::Video (plus a number of supporting modules), is really not tied very much to the SReview web interface or the transcoding backend. That is, the web interface and the transcoding backend obviously use the ffmpeg handling library, but they don't provide any services that SReview::Video could not live without. It did use the configuration API that I wrote for SReview, but disentangling that turned out to be very easy. As I think SReview::Video is actually an easy-to-use, flexible API, I decided to refactor it into Media::Convert, and have just uploaded the latter to CPAN itself. The intent is to refactor the SReview web interface and transcoding backend so that they will also use Media::Convert instead of SReview::Video in the near future -- otherwise I would end up maintaining everything twice, and then what's the point. This hasn't happened yet, but it will soon (it shouldn't be too difficult, after all). Unfortunately, Media::Convert doesn't currently install cleanly from CPAN, since I made it depend on Alien::ffmpeg, which currently doesn't work (I'm in communication with the Alien::ffmpeg maintainer in order to get that resolved), so if you want to try it out you'll have to do a few steps manually. I'll upload it to Debian soon, too.

15 September 2021

Ian Jackson: Get source to Debian packages only via dgit; "official" git links are beartraps

tl;dr dgit clone sourcepackage gets you the source code, as a git tree, in ./sourcepackage. cd into it and run dpkg-buildpackage -uc -b. Do not use: "VCS" links on official Debian web pages like tracker.debian.org; "debcheckout"; searching Debian's gitlab (salsa.debian.org). These are good for Debian experts only. If you use Debian's "official" source git repo links you can easily build a package without Debian's patches applied.[1] This can even mean missing security patches. Or maybe it can't even be built in a normal way (or at all). OMG WTF BBQ, why? It's complicated. There is History. Debian's "most-official" centralised source repository is still the Debian Archive, which is a system based on tarballs and patches. I invented the Debian source package format in 1992/3 and it has been souped up since, but it's still tarballs and patches. This system is, of course, obsolete, now that we have modern version control systems, especially git. Maintainers of Debian packages have invented ways of using git anyway, of course. But this is not standardised. There is a bewildering array of approaches. The most common approach is to maintain a git tree containing a pile of *.patch files, which are then often maintained using quilt. Yes, really, many Debian people are still using quilt, despite having git! There is machinery for converting this git tree containing a series of patches to an "official" source package. If you don't use that machinery, and just build from git, nothing applies the patches. [1] This post was prompted by a conversation with a friend who had wanted to build a Debian package, and didn't know to use dgit. They had got the source from salsa via a link on tracker.d.o, and built .debs without Debian's patches. This is not a theoretical unsoundness, but a very real practical risk. Future is not very bright In 2013 at the Debconf in Vaumarcus, Joey Hess, myself, and others came up with a plan to try to improve this which we thought would be deployable. (Previous attempts had failed.) Crucially, this transition plan does not force change onto any of Debian's many packaging teams, nor onto people doing cross-package maintenance work. I worked on this for quite a while, and at a technical level it is a resounding success. Unfortunately there is a big limitation. At the current stage of the transition, to work at its best, this replacement scheme hopes that maintainers who update a package will use a new upload tool. The new tool fits into their existing Debian git packaging workflow and has some benefits, but it does make things more complicated rather than less (like any transition plan must, during the transitional phase). When maintainers don't use this new tool, the standardised git branch seen by users is a compatibility stub generated from the tarballs-and-patches. So it has the right contents, but useless history. The next step is to allow a maintainer to update a package without dealing with tarballs-and-patches at all. This would be massively more convenient for the maintainer, so an easy sell. And of course it links the tarballs-and-patches to the git history in a proper machine-readable way. We held a "git packaging requirements-gathering session" at the Curitiba Debconf in 2019. I think the DPL's intent was to try to get input into the git workflow design problem. The session was a great success: my existing design was able to meet nearly everyone's needs and wants. The room was obviously keen to see progress. The next stage was to deploy tag2upload.
I spoke to various key people at the Debconf and afterwards in 2019, and the code has been basically ready since then. Unfortunately, deployment of tag2upload is mired in politics. It was blocked by a key team because of unfounded security concerns; positive opinions from independent security experts within Debian were disregarded. Of course it is always hard to get a team to agree to something when it's part of a transition plan which treats their systems as an obsolete setup retained for compatibility. Current status If you don't know about Debian's git packaging practices (eg, you have no idea what "patches-unapplied packaging branch without .pc directory" means), and don't want to learn about them, you must use dgit to obtain the source of Debian packages. There is a lot more information and detailed instructions in dgit-user(7). Hopefully either the maintainer did the best thing, or, if they didn't, you won't need to inspect the history. If you are a Debian maintainer, you should use dgit push-source to do your uploads. This will make sure that users of dgit will see a reasonable git history.
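For example (using the hello package purely as an illustration), the tl;dr workflow looks like this:
$ dgit clone hello           # source code as a git tree, Debian patches applied
$ cd hello
$ dpkg-buildpackage -uc -b   # build binary packages without signing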
edited 2021-09-15 14:48 Z to fix a typo



9 September 2021

Bits from Debian: DebConf21 online closes

DebConf21 group photo. On Saturday 28 August 2021, the annual Debian Developers and Contributors Conference came to a close. DebConf21 was held online for the second time, due to the coronavirus (COVID-19) disease pandemic. All of the sessions were streamed, with a variety of ways of participating: via IRC messaging, online collaborative text documents, and video conferencing meeting rooms. With 740 registered attendees from more than 15 different countries and a total of over 70 event talks, discussion sessions, Birds of a Feather (BoF) gatherings and other activities, DebConf21 was a large success. The setup made for former online events, involving Jitsi, OBS, Voctomix, SReview, nginx, Etherpad, and a web-based frontend for Voctomix, was improved and used successfully for DebConf21. All components of the video infrastructure are free software, and configured through the Video Team's public ansible repository. The DebConf21 schedule included a wide variety of events, grouped in several tracks. The talks were streamed using two rooms, and several of these activities were held in different languages: Telugu, Portuguese, Malayalam, Kannada, Hindi, Marathi and English, allowing a more diverse audience to enjoy and participate. Between talks, the video stream showed the usual sponsors in a loop, but also some additional clips including photos from previous DebConfs, fun facts about Debian, and short shout-out videos sent by attendees to communicate with their Debian friends. The Debian publicity team did the usual live coverage to encourage participation, with micronews announcing the different events. The DebConf team also provided several mobile options to follow the schedule. For those who were not able to participate, most of the talks and sessions are already available through the Debian meetings archive website, and the remaining ones will appear in the following days. The DebConf21 website will remain active for archival purposes and will continue to offer links to the presentations and videos of talks and events. Next year, DebConf22 is planned to be held in Prizren, Kosovo, in July 2022. DebConf is committed to a safe and welcoming environment for all participants. During the conference, several teams (Front Desk, Welcome team and Community team) were available to help participants get the best experience at the conference, and to find solutions to any issue that may arise. See the web page about the Code of Conduct on the DebConf21 website for more details. Debian thanks the commitment of numerous sponsors to support DebConf21, particularly our Platinum Sponsors: Lenovo, Infomaniak, Roche, Amazon Web Services (AWS) and Google. About Debian The Debian Project was founded in 1993 by Ian Murdock to be a truly free community project. Since then the project has grown to be one of the largest and most influential open source projects. Thousands of volunteers from all over the world work together to create and maintain Debian software. Available in 70 languages, and supporting a huge range of computer types, Debian calls itself the universal operating system. About DebConf DebConf is the Debian Project's developer conference. In addition to a full schedule of technical, social and policy talks, DebConf provides an opportunity for developers, contributors and other interested people to meet in person and work together more closely.
It has taken place annually since 2000 in locations as varied as Scotland, Argentina, and Bosnia and Herzegovina. More information about DebConf is available from https://debconf.org/. About Lenovo As a global technology leader manufacturing a wide portfolio of connected products, including smartphones, tablets, PCs and workstations as well as AR/VR devices, smart home/office and data center solutions, Lenovo understands how critical open systems and platforms are to a connected world. About Infomaniak Infomaniak is Switzerland's largest web-hosting company, also offering backup and storage services, solutions for event organizers, live-streaming and video on demand services. It wholly owns its datacenters and all elements critical to the functioning of the services and products provided by the company (both software and hardware). About Roche Roche is a major international pharmaceutical provider and research company dedicated to personalized healthcare. More than 100,000 employees worldwide work towards solving some of the greatest challenges for humanity using science and technology. Roche is strongly involved in publicly funded collaborative research projects with other industrial and academic partners and has supported DebConf since 2017. About Amazon Web Services (AWS) Amazon Web Services (AWS) is one of the world's most comprehensive and broadly adopted cloud platforms, offering over 175 fully featured services from data centers globally (in 77 Availability Zones within 24 geographic regions). AWS customers include the fastest-growing startups, largest enterprises and leading government agencies. About Google Google is one of the largest technology companies in the world, providing a wide range of Internet-related services and products such as online advertising technologies, search, cloud computing, software, and hardware. Google has been supporting Debian by sponsoring DebConf for more than ten years, and is also a Debian partner sponsoring parts of Salsa's continuous integration infrastructure within Google Cloud Platform. Contact Information For further information, please visit the DebConf21 web page at https://debconf21.debconf.org/ or send mail to press@debian.org.

10 August 2021

Shirish Agarwal: BBI, IP report, State Borders and Civil Aviation I

If I have seen further, it is by standing on the shoulders of Giants. (Isaac Newton, 1675; although it should be credited to the 12th-century Bernard of Chartres.) You will know why I have shared this: it probably applies to the very beginning of Civil Aviation history itself.

Comments on the BBI court case which happened in Kenya, and the subsequent appeal. I am not going to share much about the coverage of the BBI appeal, as Gautam Bhatia has shared his observations quite eloquently, both on the initial case and the subsequent appeal, which lasted 5 days in Kenya and was shown all around the world thanks to YouTube. One of the interesting points which stuck with me was that in Kenya, sign language is one of the official languages. And in fact, I was able to read quite a bit about the various sign languages which are there in Kenya. It just boggles the mind that there are countries that give importance to such things even though they are not as rich or as developed as what we call developed economies. I might give it more space and depth later, as it does carry some important judicial jurisprudence which is being, and will be, felt around the world. How India reacts or doesn't is probably another matter altogether. But yes, it needs its own space, maybe after some more time. Report on the Standing Committee on IP Regulation in India and the false promises. Again, I do not want to take much time in sharing details about what the report contains, as the report can be found here. I have uploaded it on WordPress, in case of an issue. An observation on the same subject can be found here. At least to me, and probably to those who have been following the IP space, either using or working on free software or even IP, the issues shared have been known since 1994. And it does benefit the industry rather than the country. This way, the rent-seekers and monopolists win. There is ample literature that shares how rich countries had weak IP regulation for decades and even centuries, till it was advantageous for them to have strong IP. One can look at the history of Europe and the United States for it. We can also look at the history of our neighbor China, which for the last 5 decades has used some provisions of IP and disregarded many others. But these words are of no use, as the policies made and shared are by the rich, for the rich.

Fighting between two State Borders Ironically, or because of it, two BJP-ruled states, Assam and Mizoram, fought between themselves, and 6 policemen died. While the history of the two states is complicated, it becomes a bit more complicated when one goes back into Assam and ULFA history and comes to know that ULFA could not have become that powerful had the Marwaris, people of my clan, not given generous donations to them. They thought it was a good investment, which later would turn out to be untrue. Those who think ULFA has declined, or whatever, still don't have answers to this or this. Interestingly, both Chief Ministers approached the Home Minister (Mr. Amit Shah) of the BJP. Mr. Shah was supposed to be the Chanakya, but in many instances, including this one, he decided to stay away. His statement was on the lines of "you guys figure it out yourself". There is a poem that was shared by the late poet Rahat Indori. I am sharing the same below as an image and will attempt a rough translation.
kisi ke baap ka hindustan todi hain Rahat Indori
Poets, whether in India or elsewhere, are known to speak truth to power and to be a bit of a rebel. This poem by Rahat Indori is provocatively titled "Kisi ke baap ka Hindustan todi hai". It challenges the majoritarian idea that Hindustan/India belongs only to the majoritarian religion. He also challenges, and asserts at the same time, that every Indian citizen, regardless of whatever his or her religion might be, is an Indian and can claim India as home. While the whole poem is compelling in itself, for me what hits home is the second stanza:

Lagegi Aag to aayege ghar kayi zad me, Yaha pe sirf hamara makan todi hai. The meaning is simple yet subtle: he uses Aag, or fire, as a symbol of hate, sharing that if hate spreads, it won't be his home alone that will be torched. If one wants to literally understand what he meant, I present to you the cult Russian movie No Escapes, or Ogon as it is known in Russian. If one were to decipher why the Russian film doesn't talk about climate change, one has to view it through the prism of what their leader Vladimir Putin has said and done over the years. As can be seen even there, the situation is far more complex than one imagines. It is interesting to note that he denied climate change was man-made till as late as last year, and was on the side of Trump throughout his presidency. This was in 2017, as well as perhaps this. Interestingly, there was a change in tenor and note just a couple of weeks back, but that could be only politicking, or much more. Statements that are not backed by legislation and application are usually just a whitewash. We would have to wait to see what concrete steps are taken by Putin, the Kremlin, and their Duma before saying either way.

Civil Aviation and the broad structure Civil Aviation is a large topic and I would not be able to do justice to it all in one article/blog post. For example, I will not be getting into aircraft (Boeing, Airbus, Comac, etc.) or the new electric aircraft, as that would just make the blog post long. I will also not be talking about cargo, or visas, or many such topics, as all of them need their own space. So this will be much more limited to airports, and to some extent airlines, as one cannot survive without the other. The primary reason for doing this is that there is and has been a lot of myth-making in India about Civil Aviation in general, whether it has to do with Civil Aviation history or whatever passes as policy in India.

A little early history Man has always looked at the stars and envisaged himself or herself as a bird, flying with gay abandon. In fact, there have been many painters and sculptors who imagined how we would fly. The steam engine itself was invented as early as 82 BCE. But the first attempt to fly was made by a monk called Brother Eilmer of Malmesbury, who attempted it around 1010, long after the birth of that rudimentary steam engine. The most famous of all would be Leonardo da Vinci, for his amazing sketches of flying machines in 1493. Cyrano de Bergerac apparently wrote two books, both sadly published after his death. Interestingly, you can find both the books and the gentleman in the Project Gutenberg archives. How much of M/s Cyrano's exploits were his own and how much were embellished by M/s Curtis, maybe a friend, maybe a lover, who knows; but it does give the air of the swashbuckling adventurer of the time, which many men aspired to. So, why not an author?

L'Autre Monde: ou les États et Empires de la Lune (Comical History of the States and Empires of the Moon) and Les États et Empires du Soleil (The States and Empires of the Sun). These two French books apparently had a lot of references to flying machines. Both were authored by Cyrano de Bergerac, and both were sadly published after his death, one apparently in 1656 and the other one a couple of years later. By the 17th century, while it had become easy to know and measure latitude, measuring longitude was a problem. In fact, it can be argued, and probably successfully, that India wouldn't have been under British rule, or the UK wouldn't have been a naval superpower, had the longitude problem not been solved. Over the years, the British Royal Navy suffered many blows; one of the most famous, or infamous, among them was the Scilly naval disaster of 1707, which led to the death of some 2,000 Royal Navy personnel and prompted the Parliament of Queen Anne, who was ruling over England at that time, to pass the Longitude Act, basically an open competition for anybody to fix the problem, carrying prize money of £20,000. While nobody could claim the whole prize, many did get smaller amounts depending upon their achievements. The one who came nearest was John Harrison, who made the first sea-watch; with modifications over the years it became miniaturized into the pocket-sized marine chronometer, although I doubt the ones used today look anything like those. But if that had not been invented, we surely would have been freed long ago, the assumption being that the East India Company would have dashed onto rocks so many times that the whole exercise would have been futile. The downside is that the maritime trade routes being used today, and the commerce, would not exist. Neither would aircraft or space flight, for that matter, or at the very least they would have been delayed by who knows how many years or decades. If one wants to read about the longitude problem, one can get the famous book Longitude by Dava Sobel.

Many mythologies, including Indian and Arabian tales, had the flying carpet, which would let its passengers go from one place to the next. Then there is also the mention of the Pushpak Vimana in ancient texts, but those secrets remain secrets. Think how much foreign exchange India could make by both using it and exporting it worldwide. And I'm being serious. There are many who believe in it, but sadly, the ones who know the secret don't seem to want India's progress. Just think of the carbon credits that India could have, which by itself would make India a superpower. And I'm being serious.

Western Ideas and Implementation. Even in the late 18th and early 19th centuries, there were many machines designed for controlled flight, but it was only the Wright Flyer that was able to demonstrate a controlled flight, in 1903. The ones who came pretty close to what the Wrights achieved were George Cayley and Samuel Langley. The Wrights studied what these pioneers had done. They also looked at the work of Otto Lilienthal, who had done a lot of hang-gliding and put a lot of literature in the public domain.

Furthermore, they also consulted Octave Chanute. The whole story is a bit complicated, but it does give a window into what happened then. So it won't be wrong to say that whatever the Wright brothers accomplished would probably not have been possible, or would have taken years or maybe even decades longer, if that literature, those experiments, drawings, etc. had not been available in the commons. So, while they did their own experimentation, they also looked at what other people were doing and had done, which was in the public domain/commons.

They also did a lot of testing, which gave them new insights. Even the propulsion system they used in the 1903 flight drew on a design by Nicolaus Otto. In fact, the aircraft would not have been born if the Chinese had not invented kites in the early sixth century A.D. One also has to credit Isaac Newton for the three laws of motion, without which none of the above could have happened. What is credited to the Wright brothers is not just that they flew at Kitty Hawk; they also made the design commercial, selling it and variations of it to the U.S. Army, and set up a pilot school where pilots were trained for warfighting. 119-odd pilots came out of that school. The Wrights thought that air supremacy would end the war early, but this turned out to be a false hope.

Competition and Those Magnificent Men and their flying machines One of the first competitions to unlock creativity was the English Channel crossing prize offered by the Daily Mail. It was won by the Frenchman Louis Blériot. You can read his account here. There were quite a few competitions before World War 1 broke out. There is a beautiful, humorous movie dedicated to imagining how things would have gone in that time. In fact, there have been two movies; this one and an earlier movie called Sky Riders made many a youth dream. The other movie sadly is not yet in the public domain, and when it will be, nobody knows; but if you see it, or even read about it, it gives you goosebumps.

World War 1 and Improvements to Aircraft World War 1 is remembered as the Great War or, in an attempt at irony, the War to end all wars. It did a lot of destruction of both people and property and, in fact, laid the foundation of World War 2. At the same time, if World War 1 hadn't happened, airpower and plane technology would have taken decades longer to develop. Even medicine and medical techniques advanced revolutionarily due to World War 1. In order to be brief, I am not sharing much about World War 1 here, otherwise it would become its own blog post; it had its heroes and villains, and the who, when and why could perhaps be tackled another time.

The Guggenheim Family and the birth of Civil Aviation If one has to credit one family with the birth of Civil Aviation, it has to be the Guggenheim family. Again, I would not like to dwell on it much, as much of their contribution has already been noted here. There are quite a few things that still need to be said and pointed out. First and foremost is the fact that they got lessons about flying into the syllabus from grade school to college and beyond, whereas in the Indian schooling system there is nothing like that to date. Here in India, even in engineering courses, you don't get much on the subject unless you go for professional aviation or aeronautical courses, and most of these courses cost a bomb, so only the very rich or the very determined (with loans) go for them; at least that's what my friends have shared. And there is no guarantee you will get a job after that, especially in today's climate.

Their fund also gave grants and prizes to various people so that improvements could be made to United States Civil Aviation. This, as shared in the report/blog post linked above, was in response to what the younger son saw as Europe having a large advantage in both military and Civil Aviation. They also made several grants to several universities, which would not only do notable work during the family's lifetime but carry on the legacy, researching different aspects of aircraft. One point that should be noted is that Europe was far ahead of the U.S. even then, which is what prompted the younger son. There had already been talk of civil/civilian flights on European routes, although much different from what either of us can imagine today. Even with everything that the U.S. had going for her, and still has, Europe has better airports, better facilities, better everything than the U.S. has even today. If you look at lists of airports ranked by value for money or facilities, you will find many airports from Europe, some from Asia, and only a few from the U.S., even though Americans are some of the most frequent users of the service. But that debate and those arguments I will have to leave for perhaps the next blog post, as there is still a lot to be covered between the 1930s, the 1950s, and today. The Guggenheim archives do a fantastic job of sharing part of the story till the 1950s, but there is also quite a bit they don't. I will probably start from there in the next blog post and carry on ahead.

Lastly, before I wind up, I have to share why I felt the need to write, capture and share this part of aviation history. The plain and simple reason is that many of the people I meet, whether on the web, on Twitter or even in real life, are just unaware of how this whole thing came about. The unawareness among my fellow brothers and sisters is just shocking, overwhelming. By sharing these articles, I would at least be able to guide them, or let them know how it all came to be and where things are going, and not leave them so clueless. Till later.

29 June 2021

Louis-Philippe Véronneau: New Winter Bicycle

Although I'm about 5 months early and we're in the middle of a terrible heat wave where I live, I've just finished building my new winter bicycle! I've been riding all year round for about 8 years now and the winters in Montreal are harsh enough that a second, winter-ready bicycle is required to have a fun and safe cold season. I'm saying "new bicycle" because a few months ago I totaled the frame of my previous winter bike. As you can see in the picture below (please disregard the salt crust), I broke the seat tube at the lug. I didn't notice it at first, and while riding I kept hearing a "bang bang" sound coming from the frame when going over bumps. To my horror, I eventually realised the sound was coming from the seat tube hitting the bottom bracket shell! A large crack in my old bike frame I'm sad to see this Lejeune French frame go, but it was probably built in the 70s or the early 80s: it had a good life. Another great example of how a high-quality steel frame can last a lifetime. Sparing no expense, I decided to replace it with a brand new Surly Cross-Check frameset. Surly is known for making great bike frames and the Cross-Check is a very versatile model. I'll finally be able to fit my Schwalbe Marathon Winter Plus tires at the front and the back, with full-length mud guards!!! Hard to ask for more. A picture of my new winter bike with CX Pro tires Summer has just started and I'm already looking forward to winter :)

Arturo Borrero González: Last couple of talks

Logos In the last few months I presented several talks. Topics ranged from a round table on free software to sharing some of my work as an SRE in the Cloud Services team at the Wikimedia Foundation. For some of them the videos are now published, so I would like to write a reference here, mostly as a way to collect such events for my own record. Isn't that what a blog is all about, after all? Before you continue reading, let me mention that the two talks I'll reference were given in my native Spanish. The videos are hosted on YouTube, and autogenerated subtitles should be available, though their quality is doubtful. Also, there was at least one additional private talk that I'm not allowed to comment on here today. I was invited to participate in a Docker community event called Kroquecon, which was aimed at pushing the Spanish-speaking Kubernetes community around the world. The event name is a word play on "Kubernetes", "conference" and "croqueta", a typical Spanish food. The talk happened on 2021-04-29, and I was part of a round table about free software, communities, and how to join and participate in such projects. I commented on my experience in both the Debian project and Netfilter, and on my several years in Google Summer of Code (3 as a student, 2 as a mentor). Video of the event:

The other event was the CNCF-supported Kubernetes Community Days Spain (KCD Spain). During Kroquecon I was encouraged to submit a talk proposal for this event, to talk about something related to our use of Kubernetes in the Wikimedia Cloud Services team at the Wikimedia Foundation. The proposal was originally rejected. Then, a couple of weeks before the event itself, I was contacted by the organizers with a green light to give it because another speaker couldn't make it. My coworker David Caro joined me in the presentation. It was titled "Conoce Wikimedia Toolforge, plataforma basada en Kubernetes" (or "Meet Wikimedia Toolforge, a Kubernetes-based platform"). We explained what Wikimedia Cloud Services is, focusing on Toolforge, and in particular how we use Kubernetes to enable the platform's most interesting use cases. We covered several interesting topics, including how we handle multi-tenancy and the problems we had with the etcd & ceph combo. The slides we used are available. Video of the event:

EOF

14 June 2021

Sergio Durigan Junior: I am not on Freenode anymore

This is a quick public announcement to say that I am not on the Freenode IRC network anymore. My nickname (sergiodj), which was more than a decade old, has just been deleted (along with every other nickname and channel in that network) from their database today, 2021-06-14. For your safety, you should assume that everybody you knew at Freenode is not there either, even if you see their nicknames online. Do not trust without verifying. In fact, I would strongly encourage that you do not join Freenode anymore: their new policies are absolutely questionable and their disregard for their users is blatant. If you would like to chat with me, you can find me at OFTC (preferred) and Libera.

27 May 2021

Wouter Verhelst: SReview and pandemics

The pandemic was a bit of a mess for most FLOSS conferences. The two conferences that I help organize -- FOSDEM and DebConf -- are no exception. In both conferences I do essentially the same work: as a member of both video teams, I manage the postprocessing of the video recordings of all the talks that happened at the respective conference(s). I do this by way of SReview, the online video review and transcode system that I wrote, which essentially crowdsources the manual work that needs to be done, and automates as much of the workflow as possible.

The original version of SReview consisted of a database, a (very basic) Mojolicious-based web interface, and a bunch of Perl scripts which would build and execute ffmpeg command lines using string interpolation. As a quick hack that I needed to get working while writing it in my spare time in half a year, that approach was workable: it resulted in successful postprocessing after FOSDEM 2017, and a significant improvement in turnaround time over previous years. However, I did not end development there. Since then, I've replaced the string interpolation with an object-oriented API for generating ffmpeg command lines, modularized the web interface, had help reworking the user interface into a system that is somewhat easier to use than my original interface, and slowly but surely added more features to make the system more flexible and to support more types of environments for it to run in.

One of the major issues that still remains with SReview is that the administrator's interface is pretty terrible. I had been planning to revamp that for 2020, but then massive amounts of people got sick, travel was banned, and both conferences that I work on were converted to online-only events. These have some very specific requirements. For example, both conferences allowed people to upload a prerecorded version of their talk rather than doing the talk live; since preprocessing a video is, technically, very similar to postprocessing it, I adapted SReview to allow people to upload a video file that it would then validate in terms of length, codec, and apparent resolution (a rough sketch of such a check follows below). This seems easy to do, but I decided to implement the functionality so that it would also allow future use for in-person conferences, where occasionally a speaker requests modifications to the video file that SReview is unable to make automatically. That made it marginally more involved, but it means that a feature I had planned to implement some years down the line is now already there. The new feature works quite well, and I'm happy with the way I've implemented it.

In order for the "upload" processing and the "post-event" processing not to be confused, however, I decided to import the conference schedules twice: once as the conference itself, and once as a shadow version of that conference for the prerecordings. That way, I could track the progress of the prerecordings through the system completely separately from the progress of the postprocessing of the video (which adds opening/closing credits, and transcodes to multiple variants of the same video). Schedule parsing had not been implemented in a generic way yet, however; since that made doubling the schedule rather complex, I decided to bite the bullet and (finally) implement schedule parsing in a generic way.
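SReview itself is written in Perl, but to make the upload validation step described above concrete, here is a rough sketch of the idea in Python, using ffprobe; the function names and thresholds are mine, not SReview's:
#!/usr/bin/env python3
# Rough sketch of the upload validation described above: check the
# length, codec, and apparent resolution of an uploaded file with
# ffprobe. Helper names and thresholds are illustrative, not SReview's.
import json
import subprocess

def probe(filename):
    """Return ffprobe's metadata for a media file as a dict."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-print_format", "json",
         "-show_format", "-show_streams", filename],
        capture_output=True, check=True, text=True).stdout
    return json.loads(out)

def validate_upload(filename, expected_seconds, tolerance=300):
    """Return a list of problems; an empty list means the file looks sane."""
    info = probe(filename)
    problems = []
    duration = float(info["format"]["duration"])
    if abs(duration - expected_seconds) > tolerance:
        problems.append(f"length {duration:.0f}s, expected ~{expected_seconds}s")
    video = [s for s in info["streams"] if s["codec_type"] == "video"]
    if not video:
        problems.append("no video stream found")
    else:
        if video[0]["codec_name"] not in ("h264", "vp8", "vp9"):
            problems.append(f"unexpected video codec {video[0]['codec_name']}")
        if int(video[0]["height"]) < 720:
            problems.append(f"resolution {video[0]['width']}x{video[0]['height']} below 720p")
    return problems

for problem in validate_upload("talk.webm", expected_seconds=1500):
    print("rejected:", problem)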
Currently, schedule parsers exist for two formats (Pentabarf XML and the Wafer variant of that same format, which is almost, but not quite, entirely the same). The API is quite flexible, and I'm happy with the way things have been implemented there. I've also implemented a set of "virtual" parsers, which allow mangling the schedule in various ways (by either filtering out talks that we don't want, or by generating the shadow version of the schedule that I talked about earlier).

While the SReview settings have reasonable defaults, occasionally the output of SReview is not entirely acceptable, due to more complicated matters that result in encoding artifacts. As a result, the DebConf video team has been doing a final review step, completely outside of SReview, to ensure that such encoding artifacts don't exist. That seemed suboptimal, so recently I've been working on integrating that into SReview as well. First tests have been run and seem acceptable, but there are still a few loose ends to finalize. As part of this, I've also reworked the way comments are entered into the system. Previously, the presence of a comment would signal that the video has some problems that an administrator needed to look at. Unfortunately, that was causing some confusion, with some people even thinking it's a good place to enter a "thank you for your work" style of comment... which it obviously isn't. Turning it into a "comment log" system instead fixes that, and also allows for better two-way communication between administrators and reviewers. Hopefully that'll improve things in that area as well.

Finally, the audio normalization in SReview -- for which I've long used bs1770gain -- is having problems. First of all, bs1770gain will sometimes alter the timing of the video or audio file that it's passed, which is very problematic if I want to process it further. There is an ffmpeg loudnorm filter which implements the same algorithm, so that should make things easier to use. Secondly, the author of bs1770gain is a strange character that I'd rather not be involved with. Before I knew about the loudnorm filter I didn't really have a choice, but now I can just rip bs1770gain out and replace it with the loudnorm filter. That will fix various other bugs in SReview too, because SReview relies on behaviour that isn't actually there (but which I didn't know about at the time when I wrote it).

All in all, the past year-and-a-bit has seen a lot of development for SReview, with multiple features being added and a number of long-standing problems being fixed. Now if only the pandemic would subside, allowing the whole "let's do everything online only" wave to cool down a bit, so that I can finally make time to implement the admin interface...
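For reference, swapping bs1770gain for ffmpeg's loudnorm filter amounts to the usual two-pass dance: measure first, then normalize using the measured values. A minimal sketch in Python, with illustrative target values (not SReview's actual settings):
#!/usr/bin/env python3
# Two-pass loudness normalization with ffmpeg's loudnorm filter, as a
# sketch of what replacing bs1770gain could look like. Target values
# are illustrative.
import json
import subprocess

TARGET = "I=-23:LRA=7:TP=-2"

def measure(infile):
    """First pass: run loudnorm in analysis mode and parse its JSON stats."""
    result = subprocess.run(
        ["ffmpeg", "-hide_banner", "-i", infile,
         "-af", f"loudnorm={TARGET}:print_format=json",
         "-f", "null", "-"],
        capture_output=True, text=True)
    # loudnorm prints a flat JSON block at the end of stderr.
    return json.loads(result.stderr[result.stderr.rindex("{"):])

def normalize(infile, outfile):
    """Second pass: feed the measured values back into loudnorm."""
    m = measure(infile)
    filt = (f"loudnorm={TARGET}"
            f":measured_I={m['input_i']}:measured_LRA={m['input_lra']}"
            f":measured_TP={m['input_tp']}:measured_thresh={m['input_thresh']}"
            f":offset={m['target_offset']}:linear=true")
    subprocess.run(["ffmpeg", "-i", infile, "-af", filt,
                    "-c:v", "copy", outfile], check=True)

normalize("talk.mkv", "talk-normalized.mkv")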

25 May 2021

Vincent Bernat: Jerikan+Ansible: a configuration management system for network

There are many resources for network automation with Ansible. Most of them only expose the first steps or limit themselves to a narrow scope. They give no clue on how to expand from that. Real network environments may be large, versatile, heterogeneous, and filled with exceptions. The lack of real-world examples for Ansible deployments, unlike Puppet and SaltStack, leads many teams to build brittle and incomplete automation solutions. We have released our attempt to tackle this problem under an open-source license. Here is a quick demo to configure a new peering:
This work is the collective effort of Cédric Hascoët, Jean-Christophe Legatte, Loïc Pailhas, Sébastien Hurtel, Tchadel Icard, and Vincent Bernat. We are the network team of Blade, a French company operating Shadow, a cloud-computing product. In May 2021, our company was bought by Octave Klaba and the infrastructure is being transferred to OVHcloud, saving Shadow as a product, but making our team redundant. Our network was around 800 devices, spanning over 10 datacenters with more than 2.5 Tbps of available egress bandwidth. The released material is therefore a substantial example of managing a medium-scale network using Ansible. We have left out the handling of our legacy datacenters to make the final result more readable while keeping enough material to not turn it into a trivial example.

Jerikan The first component is Jerikan. As input, it takes a list of devices, configuration data, templates, and validation scripts. It generates a set of configuration files for each device. Ansible could cover this task, but it has the following limitations:
  • it is slow;
  • errors are difficult to debug;1 and
  • the hierarchy to look up a variable is rigid.
Jerikan inputs and outputs
If you want to follow the examples, you only need to have Docker and Docker Compose installed. Clone the repository and you are ready!

Source of truth We use YAML files, versioned with Git, as the single source of truth instead of using a database, like NetBox, or a mix of a database and text files. This provides many advantages:
  • anyone can use their preferred text editor;
  • the team prepares changes in branches;
  • the team reviews changes using merge requests;
  • the merge requests expose the changes to the generated configuration files;
  • rollback to a previous state is easy; and
  • it is fast.
The first file is devices.yaml. It contains the device list. The second file is classifier.yaml. It defines a scope for each device. A scope is a set of keys and values. It is used in templates and to look up data associated with a device.
$ ./run-jerikan scope to1-p1.sk1.blade-group.net
continent: apac
environment: prod
groups:
- tor
- tor-bgp
- tor-bgp-compute
host: to1-p1.sk1
location: sk1
member: '1'
model: dell-s4048
os: cumulus
pod: '1'
shorthost: to1-p1
The device name is matched against a list of regular expressions and the scope is extended by the result of each match. For to1-p1.sk1.blade-group.net, the following subset of classifier.yaml defines its scope:
matchers:
  - '^(([^.]*)\..*)\.blade-group\.net':
      environment: prod
      host: '\1'
      shorthost: '\2'
  - '\.(sk1)\.':
      location: '\1'
      continent: apac
  - '^to([12])-[as]?p(\d+)\.':
      member: '\1'
      pod: '\2'
  - '^to[12]-p\d+\.':
      groups:
        - tor
        - tor-bgp
        - tor-bgp-compute
  - '^to[12]-(p|ap)\d+\.sk1\.':
      os: cumulus
      model: dell-s4048
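The matching logic itself is easy to picture. Here is a rough Python sketch of the idea; this is an illustration, not Jerikan's actual code, and in particular the merge policy for overlapping keys is a guess:
import re

def build_scope(device, matchers):
    """Extend a device's scope with the captures of each matching regex.

    `matchers` is a list of (regex, attributes) pairs mirroring
    classifier.yaml; backreferences like '\\1' in string values are
    replaced by the corresponding capture groups.
    """
    scope = {}
    for regex, attributes in matchers:
        match = re.search(regex, device)
        if not match:
            continue
        for key, value in attributes.items():
            if isinstance(value, str):
                value = match.expand(value)
            scope.setdefault(key, value)  # assume the first match wins
    return scope

matchers = [
    (r'^(([^.]*)\..*)\.blade-group\.net', {'environment': 'prod',
                                           'host': r'\1',
                                           'shorthost': r'\2'}),
    (r'\.(sk1)\.', {'location': r'\1', 'continent': 'apac'}),
]
print(build_scope("to1-p1.sk1.blade-group.net", matchers))
# {'environment': 'prod', 'host': 'to1-p1.sk1', 'shorthost': 'to1-p1',
#  'location': 'sk1', 'continent': 'apac'}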
The third file is searchpaths.py. It describes which directories to search for a variable. A Python function provides a list of paths to look up in data/ for a given scope. Here is a simplified version:2
def searchpaths(scope):
    paths = [
        "host/ scope[location] / scope[shorthost] ",
        "location/ scope[location] ",
        "os/ scope[os] - scope[model] ",
        "os/ scope[os] ",
        'common'
    ]
    for idx in range(len(paths)):
        try:
            paths[idx] = paths[idx].format(scope=scope)
        except KeyError:
            paths[idx] = None
    return [path for path in paths if path]
With this definition, the data for to1-p1.sk1.blade-group.net is looked up in the following paths:
$ ./run-jerikan scope to1-p1.sk1.blade-group.net
[ ]
Search paths:
  host/sk1/to1-p1
  location/sk1
  os/cumulus-dell-s4048
  os/cumulus
  common
Variables are scoped using a namespace that should be specified when doing a lookup. We use the following ones:
  • system for accounts, DNS, syslog servers, etc.;
  • topology for ports, interfaces, IP addresses, subnets, etc.;
  • bgp for BGP configuration;
  • build for templates and validation scripts; and
  • apps for application variables.
When looking up a variable in a given namespace, Jerikan looks for a YAML file named after the namespace in each directory in the search paths. For example, if we look up a variable for to1-p1.sk1.blade-group.net in the bgp namespace, the following YAML files are processed: host/sk1/to1-p1/bgp.yaml, location/sk1/bgp.yaml, os/cumulus-dell-s4048/bgp.yaml, os/cumulus/bgp.yaml, and common/bgp.yaml. The search stops at the first match. The schema.yaml file allows us to override this behavior by asking to merge dictionaries and arrays across all matching files. Here is an excerpt of this file for the system namespace:
system:
  users:
    merge: hash
  sampling:
    merge: hash
  ansible-vars:
    merge: hash
  netbox:
    merge: hash
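Putting the pieces together, a lookup is essentially a walk over the search paths that stops at the first match, unless schema.yaml requests a merge. A simplified Python sketch of that behaviour, assuming the YAML files are already parsed into a dict keyed by "path/namespace" (again, an illustration rather than Jerikan's code, and covering dictionary merges only):
def lookup(data, searchpaths, namespace, key, merge=False):
    """Return the value of `key` in `namespace` for a device.

    `data` maps "path/namespace" to the parsed YAML document.
    By default the first matching file wins; with merge=True (what a
    `merge: hash` entry in schema.yaml requests), dictionaries from all
    matching files are merged, the most specific file taking precedence.
    """
    result = None
    for path in searchpaths:            # most specific path first
        document = data.get(f"{path}/{namespace}", {})
        if key not in document:
            continue
        if not merge:
            return document[key]        # first match wins
        if result is None:
            result = dict(document[key])
        else:
            for k, v in document[key].items():
                result.setdefault(k, v)  # earlier (more specific) wins
    return result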
The last feature of the source of truth is the ability to use Jinja2 templates for keys and values by prefixing them with "~":
# In data/os/junos/system.yaml
netbox:
  manufacturer: Juniper
  model: "~  model upper  "
# In data/groups/tor-bgp-compute/system.yaml
netbox:
  role: net_tor_gpu_switch
Looking up netbox in the system namespace for to1-p2.ussfo03.blade-group.net yields the following result:
$ ./run-jerikan scope to1-p2.ussfo03.blade-group.net
continent: us
environment: prod
groups:
- tor
- tor-bgp
- tor-bgp-compute
host: to1-p2.ussfo03
location: ussfo03
member: '1'
model: qfx5110-48s
os: junos
pod: '2'
shorthost: to1-p2
[ ]
Search paths:
[ ]
  groups/tor-bgp-compute
[ ]
  os/junos
  common
$ ./run-jerikan lookup to1-p2.ussfo03.blade-group.net system netbox
manufacturer: Juniper
model: QFX5110-48S
role: net_tor_gpu_switch
This also works for structured data:
# In groups/adm-gateway/topology.yaml
interface-rescue:
  address: "~  lookup('topology', 'addresses').rescue  "
  up:
    - "~ip route add default via   lookup('topology', 'addresses').rescue ipaddr('first_usable')   table rescue"
    - "~ip rule add from   lookup('topology', 'addresses').rescue ipaddr('address')   table rescue priority 10"
# In groups/adm-gateway-sk1/topology.yaml
interfaces:
  ens1f0: "~  lookup('topology', 'interface-rescue')  "
This yields the following result:
$ ./run-jerikan lookup gateway1.sk1.blade-group.net topology interfaces
[ ]
ens1f0:
  address: 121.78.242.10/29
  up:
  - ip route add default via 121.78.242.9 table rescue
  - ip rule add from 121.78.242.10 table rescue priority 10
When putting data in the source of truth, we use the following rules:
  1. Don't repeat yourself.
  2. Put the data in the most specific place without breaking the first rule.
  3. Use templates with parsimony, mostly to help with the previous rules.
  4. Restrict the data model to what is needed for your use case.
The first rule is important. For example, when specifying IP addresses for a point-to-point link, only specify one side and deduce the other value in the templates (a sketch of this deduction follows the example below). The last rule means you do not need to mimic a BGP YANG model to specify BGP peers and policies:
peers:
  transit:
    cogent:
      asn: 174
      remote:
        - 38.140.30.233
        - 2001:550:2:B::1F9:1
      specific-import:
        - name: ATT-US
          as-path: ".*7018$"
          lp-delta: 50
  ix-sfmix:
    rs-sfmix:
      monitored: true
      asn: 63055
      remote:
        - 206.197.187.253
        - 206.197.187.254
        - 2001:504:30::ba06:3055:1
        - 2001:504:30::ba06:3055:2
    blizzard:
      asn: 57976
      remote:
        - 206.197.187.42
        - 2001:504:30::ba05:7976:1
      irr: AS-BLIZZARD
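As an illustration of the first rule, here is how "only specify one side" can work for a point-to-point /31 link, sketched with Python's standard ipaddress module. Jerikan does this kind of deduction with Jinja2 filters such as ipaddr; this sketch requires Python 3.8+, where hosts() understands /31 and /127 point-to-point networks:
import ipaddress

def peer_address(local):
    """Given one side of a point-to-point /31 or /127, deduce the other."""
    iface = ipaddress.ip_interface(local)
    a, b = iface.network.hosts()   # such a network has exactly two addresses
    other = b if iface.ip == a else a
    return f"{other}/{iface.network.prefixlen}"

print(peer_address("203.0.113.0/31"))   # -> 203.0.113.1/31
print(peer_address("2001:db8::1/127"))  # -> 2001:db8::/127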

Templates The list of templates to compile for each device is stored in the source of truth, under the build namespace:
$ ./run-jerikan lookup edge1.ussfo03.blade-group.net build templates
data.yaml: data.j2
config.txt: junos/main.j2
config-base.txt: junos/base.j2
config-irr.txt: junos/irr.j2
$ ./run-jerikan lookup to1-p1.ussfo03.blade-group.net build templates
data.yaml: data.j2
config.txt: cumulus/main.j2
frr.conf: cumulus/frr.j2
interfaces.conf: cumulus/interfaces.j2
ports.conf: cumulus/ports.j2
dhcpd.conf: cumulus/dhcp.j2
default-isc-dhcp: cumulus/default-isc-dhcp.j2
authorized_keys: cumulus/authorized-keys.j2
motd: linux/motd.j2
acl.rules: cumulus/acl.j2
rsyslog.conf: cumulus/rsyslog.conf.j2
Templates use Jinja2, the same engine used in Ansible. Jerikan ships with some custom filters but also reuses some of the useful filters from Ansible, notably ipaddr. Here is an excerpt of templates/junos/base.j2 to configure DNS and NTP servers on Juniper devices:
system {
  ntp {
{% for ntp in lookup("system", "ntp") %}
    server {{ ntp }};
{% endfor %}
  }
  name-server {
{% for dns in lookup("system", "dns") %}
    {{ dns }};
{% endfor %}
  }
}
The equivalent template for Cisco IOS-XR is:
{% for dns in lookup('system', 'dns') %}
domain vrf VRF-MANAGEMENT name-server {{ dns }}
{% endfor %}
!
{% for syslog in lookup('system', 'syslog') %}
logging {{ syslog }} vrf VRF-MANAGEMENT
{% endfor %}
!
There are three helper functions provided:
  • devices() returns the list of devices matching a set of conditions on the scope. For example, devices("location==ussfo03", "groups==tor-bgp") returns the list of devices in San Francisco in the tor-bgp group. You can also omit the operator if you want the specified value to be equal to the one in the local scope. For example, devices("location") returns devices in the current location.
  • lookup() does a key lookup. It takes the namespace, the key, and optionally, a device name. If not provided, the current device is assumed.
  • scope() returns the scope of the provided device.
Here is how you would define iBGP sessions between edge devices in the same location:
{% for neighbor in devices("location", "groups==edge") if neighbor != device %}
{%   for address in lookup("topology", "addresses", neighbor).loopback|tolist %}
protocols bgp group IPV{{ address|ipv }}-EDGES-IBGP {
  neighbor {{ address }} {
    description "IPv{{ address|ipv }}: iBGP to {{ neighbor }}";
  }
}
{%   endfor %}
{% endfor %}
We also have a global key-value store to save information to be reused in another template or device. This is quite useful to automatically build DNS records. First, capture the IP address inserted into a template with store() as a filter:
interface Loopback0
 description 'Loopback:'
{% for address in lookup('topology', 'addresses').loopback|tolist %}
 ipv{{ address|ipv }} address {{ address|store('addresses', 'Loopback0')|ipaddr('cidr') }}
{% endfor %}
!
Then, reuse it later to build DNS records by iterating over store():4
{% for device, ip, interface in store('addresses') %}
{%   set interface = interface|replace('/', '-')|replace('.', '-')|replace(':', '-') %}
{%   set name = '{}.{}'.format(interface|lower, device) %}
{{ name }}. IN {{ 'A' if ip|ipv4 else 'AAAA' }} {{ ip|ipaddr('address') }}
{% endfor %}
Templates are compiled locally with ./run-jerikan build. The --limit argument restricts the devices to generate configuration files for. Build is not done in parallel because a template may depend on the data collected by another template. Currently, it takes 1 minute to compile around 3000 files spanning over 800 devices.
Output of Jerikan after building configuration files for six devices
When an error occurs, a detailed traceback is displayed, including the template name, the line number and the value of all visible variables. This is a major time-saver compared to Ansible!
templates/opengear/config.j2:15: in top-level template code
    config.interfaces.{{ interface }}.netmask {{ adddress | ipaddr("netmask") }}
        continent  = 'us'
        device     = 'con1-ag2.ussfo03.blade-group.net'
        environment = 'prod'
        host       = 'con1-ag2.ussfo03'
        infos      = {'address': '172.30.24.19/21'}
        interface  = 'wan'
        location   = 'ussfo03'
        loop       = <LoopContext 1/2>
        member     = '2'
        model      = 'cm7132-2-dac'
        os         = 'opengear'
        shorthost  = 'con1-ag2'
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
value = JerkianUndefined, query = 'netmask', version = False, alias = 'ipaddr'
[ ]
        # Check if value is a list and parse each element
        if isinstance(value, (list, tuple, types.GeneratorType)):
            _ret = [ipaddr(element, str(query), version) for element in value]
            return [item for item in _ret if item]
>       elif not value or value is True:
E       jinja2.exceptions.UndefinedError: 'dict object' has no attribute 'adddress'
We don't have general-purpose rules when writing templates. Like for the source of truth, there is no need to create generic templates able to produce any BGP configuration. There is a balance to be found between readability and avoiding duplication. Templates can become scary and complex: sometimes, it's better to write a filter or a function in jerikan/jinja.py. Mastering Jinja2 is a good investment. Take time to browse through our templates as some of them show interesting features.

Checks Optionally, each configuration file can be validated by a script in the checks/ directory. Jerikan looks up the key checks in the build namespace to know which checks to run:
$ ./run-jerikan lookup edge1.ussfo03.blade-group.net build checks
- description: Juniper configuration file syntax check
  script: checks/junoser
  cache:
    input: config.txt
    output: config-set.txt
- description: check YAML data
  script: checks/data.yaml
  cache: data.yaml
In the above example, checks/junoser is executed if there is a change to the generated config.txt file. It also outputs a transformed version of the configuration file which is easier to understand when using diff. Junoser checks a Junos configuration file using Juniper's XML schema definition for Netconf.5 On error, Jerikan displays:
jerikan/build.py:127: RuntimeError
-------------- Captured syntax check with Junoser call --------------
P: checks/junoser edge2.ussfo03.blade-group.net
C: /app/jerikan
O:
E: Invalid syntax:  set system syslog archive size 10m files 10 word-readable
S: 1
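Check scripts are plain executables; judging from the output above, they are invoked with the device name and run against the generated files. As a purely hypothetical sketch of what a data check in the spirit of checks/data.yaml could look like in Python:
#!/usr/bin/env python3
# Hypothetical check script: exit non-zero if the generated data.yaml
# for the device does not parse as valid YAML. The invocation contract
# (device name as first argument, generated files in the current
# directory) is an assumption based on the output shown above.
import sys

import yaml  # PyYAML

device = sys.argv[1]
try:
    with open("data.yaml") as f:
        yaml.safe_load(f)
except (OSError, yaml.YAMLError) as err:
    print(f"{device}: invalid data.yaml: {err}", file=sys.stderr)
    sys.exit(1)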

Integration into GitLab CI The next step is to compile the templates using a CI. As we are using GitLab, Jerikan ships with a .gitlab-ci.yml file. When we need to make a change, we create a dedicated branch and a merge request. GitLab compiles the templates using the same environment we use on our laptops and stores them as artifacts.
Merge request to add a new port in USSFO03. The templates were compiled successfully but approval from another team member is still required to merge.
Before approving the merge request, another team member looks at the changes in data and templates but also the differences for the generated configuration files:
The change configures a port on the Juniper device, adds records to DNS, and updates NetBox with the new IP addresses (not shown).

Ansible After Jerikan has built the configuration files, Ansible takes over. It is also packaged as a Docker image to avoid the trouble of maintaining the right Python virtual environment and to ensure everyone is using the same versions.

Inventory Jerikan has generated an inventory file. It contains all the managed devices, the variables defined for each of them and the groups converted to Ansible groups:
ob1-n1.sk1.blade-group.net ansible_host=172.29.15.12 ansible_user=blade ansible_connection=network_cli ansible_network_os=ios
ob2-n1.sk1.blade-group.net ansible_host=172.29.15.13 ansible_user=blade ansible_connection=network_cli ansible_network_os=ios
ob1-n1.ussfo03.blade-group.net ansible_host=172.29.15.12 ansible_user=blade ansible_connection=network_cli ansible_network_os=ios
none ansible_connection=local
[oob]
ob1-n1.sk1.blade-group.net
ob2-n1.sk1.blade-group.net
ob1-n1.ussfo03.blade-group.net
[os-ios]
ob1-n1.sk1.blade-group.net
ob2-n1.sk1.blade-group.net
ob1-n1.ussfo03.blade-group.net
[model-c2960s]
ob1-n1.sk1.blade-group.net
ob2-n1.sk1.blade-group.net
ob1-n1.ussfo03.blade-group.net
[location-sk1]
ob1-n1.sk1.blade-group.net
ob2-n1.sk1.blade-group.net
[location-ussfo03]
ob1-n1.ussfo03.blade-group.net
[in-sync]
ob1-n1.sk1.blade-group.net
ob2-n1.sk1.blade-group.net
ob1-n1.ussfo03.blade-group.net
none
in-sync is a special group for devices whose configuration should match the golden configuration. Daily and unattended, Ansible should be able to push configurations to this group. The mid-term goal is to cover all devices. none is a special device for tasks not related to a specific host. This includes synchronizing NetBox, IRR objects, and the DNS, updating the RPKI, and building the geofeed files.

Playbook We use a single playbook for all devices. It is described in the ansible/playbooks/site.yaml file. Here is a shortened version:
- hosts: adm-gateway:!done
  strategy: mitogen_linear
  roles:
    - blade.linux
    - blade.adm-gateway
    - done
- hosts: os-linux:!done
  strategy: mitogen_linear
  roles:
    - blade.linux
    - done
- hosts: os-junos:!done
  gather_facts: false
  roles:
    - blade.junos
    - done
- hosts: os-opengear:!done
  gather_facts: false
  roles:
    - blade.opengear
    - done
- hosts: none:!done
  gather_facts: false
  roles:
    - blade.none
    - done
A host executes only one of the plays. For example, a Junos device executes the blade.junos role. Once a play has been executed, the device is added to the done group and the other plays are skipped. The playbook can be executed with the configuration files generated by the GitLab CI using the ./run-ansible-gitlab command. This is a wrapper around Docker and the ansible-playbook command, and it accepts the same arguments. To deploy the configuration on the edge devices for the SK1 datacenter in check mode, we use:
$ ./run-ansible-gitlab playbooks/site.yaml --limit='edge:&location-sk1' --diff --check
[ ]
PLAY RECAP *************************************************************
edge1.sk1.blade-group.net  : ok=6    changed=0    unreachable=0    failed=0    skipped=3    rescued=0    ignored=0
edge2.sk1.blade-group.net  : ok=5    changed=0    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0
We have some rules when writing roles:
  • --check must detect if a change is needed;
  • --diff must provide a visualization of the planned changes;
  • --check and --diff must not display anything if there is nothing to change;
  • writing a custom module tailored to our needs is a valid solution;
  • the whole device configuration is managed;6
  • secrets must be stored in Vault;
  • templates should be avoided as we have Jerikan for that; and
  • avoid duplication and reuse tasks.7
We avoid using collections from Ansible Galaxy, the exception being collections to connect and interact with vendor devices, like the cisco.iosxr collection. The quality of Ansible Galaxy collections is quite random and they are an additional maintenance burden. It seems better to write roles tailored to our needs. The collections we use are in ci/ansible/ansible-galaxy.yaml. We use Mitogen to get a 10× speedup on Ansible executions on Linux hosts. We also have a few playbooks for operational purposes: upgrading the OS version, isolating an edge router, etc. We were also planning to add operational checks to roles: are all the BGP sessions up? They could be used to validate a deployment and roll back if there is an issue. Currently, our playbooks are run from our laptops. To keep tabs, we are using ARA. A weekly dry-run on devices in the in-sync group also provides a dashboard of which devices we need to run Ansible on.

Configuration data and templates Jerikan ships with pre-populated data and templates matching the configuration of our USSFO03 and SK1 datacenters. They do not exist anymore but, we promise, all this was used in production back in the day!
The latest iteration of our network infrastructure for SK1, USSFO03, and future data centers. The production network uses BGPttH (BGP to the host) over a spine-leaf fabric. The out-of-band network uses a simple L2 design with the spanning tree protocol, as well as a set of console servers.
Notably, you can find the configuration for:
our edge routers
Some are running on Junos, like edge2.ussfo03, the others on IOS-XR, like edge1.sk1. The implemented functionalities are similar in both cases and we could swap one for the other. It includes the BGP configuration for transits, peerings, and IX as well as the associated BGP policies. PeeringDB is queried to get the maximum number of prefixes to accept for peerings. bgpq3 and a containerized IRRd help to filter received routes. A firewall is added to protect the routing engine. Both IPv4 and IPv6 are configured.
our BGP-based fabric
BGP is used inside the datacenter8 and is extended to bare-metal hosts. The configuration is automatically derived from the device location and the port number.9 Top-of-the-rack devices use passive BGP sessions for ports towards servers. They also serve a provisioning network to let servers boot using DHCP and PXE, and they act as DHCP servers. The design is multivendor. Some devices are running Cumulus Linux, like to1-p1.ussfo03, while some others are running Junos, like to1-p2.ussfo03.
our out-of-band fabric
We are using Cisco Catalyst 2960 switches to build an L2 out-of-band network. To provide redundancy and save a few bucks on wiring, we build small loops and run the spanning-tree protocol. See ob1-p1.ussfo03. It is redundantly connected to our gateway servers. We also use OpenGear devices for console access. See con1-n1.ussfo03.
our administrative gateways
These Linux servers have multiple purposes: SSH jump boxes, rescue connection, direct access to the out-of-band network, zero-touch provisioning of network devices,10 Internet access for management flows, centralization of the console servers using Conserver, and API for autoconfiguration of BGP sessions for bare-metal servers. They are the first servers installed in a new datacenter and are used to provision everything else. Check both the generated files and the associated Ansible tasks.

  1. Ansible does not even provide a line number when there is an error in a template. You may need to find the problem by bisecting.
    $ ansible --version
    ansible 2.10.8
    [ ]
    $ cat test.j2
    Hello {{ name }}!
    $ ansible all -i localhost, \
    >  --connection=local \
    >  -m template \
    >  -a "src=test.j2 dest=test.txt"
    localhost | FAILED! => {
        "changed": false,
        "msg": "AnsibleUndefinedVariable: 'name' is undefined"
    }
  2. You may recognize the same concepts as in Hiera, the hierarchical key-value store from Puppet. At first, we were using Jerakia, a similar independent store exposing an HTTP REST interface. However, the lookup overhead is too large for our use. Jerikan implements the same functionality within a Python function.
  3. The list of available filters is mangled inside jerikan/jinja.py. This is a remnant of the fact that we do not maintain Jerikan as standalone software.
  4. This is a bit confusing: we have a store() filter and a store() function. With Jinja2, filters and functions live in two different namespaces.
  5. We are using a fork with some modifications to be able to validate our configurations and exposing an HTTP service to reduce the time spent on each configuration check.
  6. There is a trend in network automation to automate a configuration subset, for example by having a playbook to create a new BGP session. We believe this is wrong: with time, your configuration will get out-of-sync with its expected state, notably hand-made changes will be left undetected.
  7. See ansible/roles/blade.linux/tasks/firewall.yaml and ansible/roles/blade.linux/tasks/interfaces.yaml. They are meant to be called when needed, using import_role.
  8. We also have some datacenters using BGP EVPN VXLAN at medium-scale using Juniper devices. As they are still in production today, we didn t include this feature but we may publish it in the future.
  9. In retrospect, this may not be a good idea unless you are pretty sure everything is uniform (number of switches for each layer, number of ports). This was not our case. We now think it is a better idea to assign a prefix to each device and write it in the source of truth.
  10. Non-linux based devices are upgraded and configured unattended. Cumulus Linux devices are automatically upgraded on install but the final configuration has to be pushed using Ansible: we didn t want to duplicate the configuration process using another tool.

4 May 2021

Steve Kemp: Password store plugin: env

Like many I use pass for storing usernames and passwords. This gives me easy access to credentials in a secure manner. I don't like the way that the metadata (i.e. filenames) is public, but that aside it is a robust tool I've been using for several years. The last time I talked about pass here, it was to show the age of my credentials via the integrated git support. That then became a pass-plugin:
  frodo ~ $ pass age
  6 years ago GPG/root@localhost.gpg
  6 years ago GPG/steve@steve.org.uk.OLD.gpg
  ..
  4 years, 8 months ago Domains/Domain.fi.gpg
  4 years, 7 months ago Mobile/dna.fi.gpg
  ..
  1 year, 3 months ago Websites/netlify.com.gpg
  1 year ago Financial/ukko.fi.gpg
  1 year ago Mobile/KiK.gpg
  4 days ago Enfuce/sre.tst.gpg
  ..
Anyway today's work involved writing another plugin, named env. I store my data in pass in a consistent form; each entry looks like this:
   username: steve
   password: secrit
   site: http://example.com/login/blah/
   # Extra data
The keys vary: sometimes I use "login", sometimes "username", other times "email", but I always label the fields in some way. Recently I was working with some CLI tooling that wants a username/password specified, and I patched it to read from the environment instead. Now I can run this:
     $ pass env internal/cli/tool-name
     export username="steve"
     export password="secrit"
That's ideal, because now I can source that from within a shell:
   $ source <(pass env internal/cli/tool-name)
   $ echo $username
   steve
Or I could directly execute the tool I want:
   $ pass env --exec=$HOME/ldap/ldap.py internal/cli/tool-name
   you are steve
   ..
TLDR: If you store your password entries in "key: value" form you can process them into "export key=value" statements, which allows them to be used without copying and pasting into command-line arguments (e.g. "~/ldap/ldap.py --username=steve --password=secrit").
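pass extensions are normally shell scripts, but the transformation is simple enough to sketch in Python; this is a rough equivalent of the idea, not Steve's actual plugin:
#!/usr/bin/env python3
# Rough sketch of what the env plugin does: turn the "key: value" lines
# of a pass entry into shell export statements. Not the actual plugin,
# which is a pass (shell) extension.
import shlex
import subprocess
import sys

entry = sys.argv[1]  # e.g. internal/cli/tool-name
shown = subprocess.run(["pass", "show", entry],
                       capture_output=True, check=True, text=True).stdout

for line in shown.splitlines():
    line = line.strip()
    if not line or line.startswith("#") or ":" not in line:
        continue  # skip blanks, comments, and unlabelled lines
    key, _, value = line.partition(":")  # split on the first colon only
    print(f"export {key.strip()}={shlex.quote(value.strip())}")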


28 March 2021

Norbert Preining: RMS, Debian, and the world

Too much has been written, a war of support letters is going on, and Debian is heading head-first, like lemmings, into the same war. And then there is this from the first female President of the American Civil Liberties Union (ACLU): Civil Rights Activist Nadine Strossen's Response To #CancelStallman:
I find it so odd that the strong zeal for revenge and punishment if someone says anything that is perceived to be sexist or racist or discriminatory comes from liberals and progressives. There are so many violations [in cases like Stallman s] of such fundamental principles to which progressives and liberals cling in general as to what is justice, what is fairness, what is due process.
One is proportionality: that the punishment should be proportional to the offense. Another one is restorative justice: that rather than retribution and punishment, we should seek to have the person constructively come to understand, repent, and make amends for an infraction. Liberals generally believe society to be too punitive, too harsh, not forgiving enough. They are certainly against the death penalty and other harsh punishments even for the most violent, the mass murderers. Progressives are right now advocating for the release of criminals, even murderers. To then have exactly the opposite attitude towards something that certainly is not committing physical violence against somebody, I don t understand the double standard!
Another cardinal principle is we shouldn t have any guilt by association. [To hold culpable] these board members who were affiliated with him and ostensibly didn t do enough to punish him for things that he said which by the way were completely separate from the Free Software Foundation is multiplying the problems of unwarranted punishment. It extends the punishment where the argument for responsibility and culpability becomes thinner and thinner to the vanishing point. That is also going to have an enormous adverse impact on the freedom of association, which is an important right protected in the U.S. by the First Amendment.
The Supreme Court has upheld freedom of association in cases involving organizations that were at the time highly controversial. It started with NAACP (National Association for the Advancement of Colored People) during the civil rights movement in the 1950s and 60s, but we have a case that s going to the Supreme Court right now regarding Black Lives Matter. The Supreme Court says even if one member of the group does commit a crime in both of those cases physical violence and assault that is not a justification for punishing other members of the group unless they specifically intended to participate in the particular punishable conduct.
Now, let s assume for the sake of argument, Stallman had an attitude that was objectively described as discriminatory on the basis on race and gender (and by the way I have seen nothing to indicate that), that he s an unrepentant misogynist, who really believes women are inferior. We are not going to correct those ideas, to enlighten him towards rejecting them and deciding to treat women as equals through a punitive approach! The only approach that could possibly work is an educational one! Engaging in speech, dialogue, discussion and leading him to re-examine his own ideas.
Even if I strongly disagree with a position or an idea, an expression of an idea, advocacy of an idea, and even if the vast majority of the public disagrees with the idea and finds it offensive, that is not a justification for suppressing the idea. And it's not a justification for taking away the equal rights of the person who espouses that idea, including the right to continue holding a tenured position or other prominent position for which that person is qualified.
But a number of the ideas for which Richard Stallman has been attacked and punished are ideas that I, as a feminist advocate of human rights, find completely correct and positive from the perspective of women's equality and dignity! So, for example, when he talks about the misuse and overuse and flawed use of the term sexual assault, I completely agree with that critique! People are indiscriminately using that term, or synonyms, to describe everything from the most appalling violent abuse of helpless, vulnerable victims (such as the rape of a minor) to any conduct or expression in the realm of gender or sexuality that they find unpleasant or disagreeable.
So we see the terms sexual assault and sexual harassment used, for example, when a guy asks a woman out on a date and she doesn't find that an appealing invitation. Maybe he used poor judgement in asking her out, maybe he didn't, but in any case that is NOT sexual assault or harassment. To call it that is to really demean the huge horror and violence and predation that does exist when you are talking about violent sexual assault. People use the terms sexual assault/sexual harassment to refer to any comment about gender or sexuality issues that they disagree with, or a joke that might not be in the best taste. Again, is that to be commended? No! But to condemn it and equate it with a violent sexual assault is really denying and demeaning the actual suffering that victims of sexual assault endure. It trivializes the serious infractions that are committed by people like Jeffrey Epstein and Harvey Weinstein. So that is one point that he made that I think is very important and that I strongly agree with.
Secondly, and relatedly, [Richard Stallman] never said that he endorses child pornography, which, by definition (the United States Supreme Court has defined it multiple times), is the sexual exploitation of an actual minor: coerced, forced sexual activity by that minor, with that minor, that happens to be filmed or photographed. That is the definition of child pornography. He never defends that! The point he makes, a very important one which the U.S. Supreme Court has also made, is mainly that we overuse and distort the term child pornography to refer to any depiction of any minor in any context that is even vaguely sexual.
So some people have not only denounced as child pornography, but prosecuted and jailed, loving, devoted parents who committed the crime of taking a nude or semi-nude picture of their own child in a bathtub or their own child in a bathing suit. Again, it is the hysteria that has totally refused to draw an absolutely critical distinction between actual violence and abuse, which is criminal and should be criminal, and any potentially sexual depiction of a minor. And I say potentially because I think if you look at a picture a parent has taken of a child in a bathtub and you see that as sexual, then I'd say there's something in your perspective that might be questioned or challenged! But don't foist that upon the parent who is lovingly documenting their beloved child's life and activities without seeing anything sexual in that image.
This is a decision that involves line drawing. We tend to have this hysteria where, once we hear terms like pedophilia, of course you are going to condemn anything that could possibly have that label. Of course you would. But societies around the world, throughout history, and various cultures, religions and moral positions have disagreed about at what age you respect the autonomy, individuality and freedom of choice of a young person around sexuality. And the U.S. Supreme Court held that in a case involving minors' right to choose to have an abortion.
By the way, [contraception and abortion] is a realm of sexuality where liberals and progressives and feminists have been saying: Yes! If you're old enough to have sex, you should have the right to contraception and access to it. You should have the right to have an abortion. You shouldn't have to consult with your parents and have their permission, or a judge's permission, because you're sufficiently mature. And the Supreme Court sided in accord with that position. The U.S. Supreme Court said constitutional rights do not magically mature and spring into being only when someone happens to attain the state-defined age of majority.
In other words, the constitution doesn't prevent anyone from exercising rights, including rights to sexual freedom, freedom of choice and autonomy, at a certain age! And so you can't have it both ways. You can't say, well, we're strongly in favor of minors having the right to decide what to do with their own bodies, to have an abortion (what is, in some people's minds, murder), but we're not going to trust them to decide to have sex with someone somewhat older than they are.
And I say somewhat older than they are because that's something where the law has also been subject to change. On all issues of when you attain the age of majority, states differ widely, and they choose different ages for different activities: when you're old enough to drive, to have sex with someone around your age, to have sex with someone much older than you. There is no magic objective answer to these questions. I think people need to take seriously the importance of sexual freedom and autonomy, and that certainly includes women and feminists. They have to take seriously the question of respecting a young person's autonomy in that area.
There have been famous cases of 18-year-olds who have gone to prison because they had consensual sex with girlfriends a couple of years younger. A lot of people would not consider that pedophilia, and yet under some strict laws and some absolute definitions it is. Romeo and Juliet laws make an exception to pedophilia laws when there is only a relatively small age difference. But what is relatively small? So to me, especially when he says he is re-examining his position, Stallman is just thinking through the very serious debate of how to be protective and respectful of young people. He is not being disrespectful, much less wishing harm upon young people, which seems to be what his detractors think he's doing.
Unfortunately, I don't think the Anti-Harassment Team of Debian and others of the usual group of warriors will ever read, let alone understand, what is written there. So sad.

27 March 2021

Neil Williams: Free and Open

A long time, no blog entries. What's prompted a new missive? Two things. First: all the time I've worked with software and computers, the lack of diversity among the contributors has irked me. I started my career in pharmaceutical sciences, where the mix of people, at university, appeared to be genuinely diverse. It was in the workplace that the problems started, especially in the retail portion of community pharmacy, with a particular gender imbalance between management and counter staff.
Then I started to pick up programming. I gravitated to free software for the simple reason that I could tinker with the source code. To make a career change and learn how to program with a background in a completely alien branch of science, the only realistic route into the field is to have access to the raw material - source code. Without that, I would have been seeking a sponsor, in much the same way as other branches of science need grants or sponsors to purchase the equipment and facilities to get a foot in the door. All it took from me was some time, a willingness to learn and a means to work alongside those already involved. That's it. No complex titration equipment, flasks, centrifuges, pipettes, spectrographs or petri dishes. It's hard to do pharmaceutical science from home. Software is so much more accessible - but only if the software itself is free.
Software freedom removes the main barrier to entry, and it enables the removal of other barriers too. Once the software is free, it becomes obvious that the compiler (indeed the entire toolchain) needs to be free as well, to turn the software into something other people can use. The same applies to interpreters, editors, kernels and the entire stack. I was in a different branch of science whilst all that was being created, and I am very glad that Debian Woody was available as a free cover disc on a software magazine just at the time I was looking to pick up programming.
That should be that. The only remaining step to being good enough to write free software was the "means to work alongside those already involved". That, it turns out, is much more than just a machine running Debian. It's more than just having an ISP account and working email (not commonplace in 2003). It's working alongside the people - all the people. It was my first real exposure to the toxicity of some parts of many scientific and technical arenas. Where was the diversity? OK, maybe it was just early days, and the other barriers (like an account with an ISP) were prohibitive for many parts of the world outside Europe & the USA in 2003, so there were few people from other countries - but the software world was massively dominated by the white Caucasian male. I'd been insulated by my degree course, and to a large extent by my university, which also had courses with much more diverse intakes - optics and pharmacy, business and human resources. Relocating from the insular world of a little town in Wales to Birmingham was also key.
Maybe things would improve as the technical barriers to internet connectivity were lowered. Sadly, no. The echo chamber of white Caucasian input has become more and more diluted as other countries have built the infrastructure to get their populace online. Debian has helped in this area, principally via DebConf. Yet only the ethnicity seemed to change, not the diversity. Recently, more is being done to at least make Debian more welcoming to those who are brave enough to increase the mix.
Progress is much slower than the gains seen in the ethnicity mix, probably because that was a benefit of a technological change, not a societal change. The attitudes so prevalent in the late 20th century are becoming less prevalent amongst, and increasingly abhorrent to, the next generation of potential community members. Diversity must come, or the pool of contributors will shrink to nil. Community members who cling to these attitudes are already dinosaurs, and increasingly unwelcome. This is a necessary step to retain access to new contributors as existing contributors age. To be able to increase the number of contributors, the community cannot afford to be dragged backwards by anyone, no matter how important or (previously) respected.
Debian, or even free software overall, cannot change all the problems with diversity in STEM, but we must not perpetuate the problems either. Those involved in free software need to be open to change, open to input from all portions of society, and welcoming. Puerile jokes and disrespectful attitudes must be a thing of the past. Technical contributions do not excuse behaviours that act to prevent new people joining the community. Debian is getting older - the community and the people. The presence of spouses at Debian social events does not fix the lack of diversity at Debian technical events. As contributors age, Debian must welcome new, younger people to continue the work. All the technical contributions made by people during the 20th century will not sustain Debian in the 21st century. Bit rot affects us all. If the people who provided those contributions are not encouraging a more diverse mix to sustain free software into the future, then all their contributions will be for nought and free software will die.
Second: so it comes to the FSF and RMS. I did hear Richard speak at an event in Bristol many years ago. I haven't personally witnessed the behavioural patterns that have been described by others, but my memories of that event only lend credence to those accounts: no attempts to be inclusive, jokes that focused on division and perpetuated the white male echo chamber. I'm not perfect; I struggle with some of this at times. Personally, I find the FSFE and Red Hat statements much more in line with my feelings on the matter than the open letter which is the basis of the GR. I deplore the preliminary FSF statement on governance as closed-minded, opaque and archaic. It only adds to my concerns that the FSF is not fit for the 21st century. The open letter has the advantage that it is a common text backed by many in the community, individuals and groups, who are fit for purpose and whom I have respected for a long time.
Free software must be open to contributions from all who are technically capable of doing the technical work required. Free software must equally require respect for all contributors; technical excellence is not an excuse for disrespect. Diversity is the lifeblood of all social groups, and without social cohesion, technical contributions alone do not support a community. People can change, and apologies are welcome when accompanied by modified behaviour.
The issues causing a lack of diversity in Debian are complex and mostly reflective of a wider problem with STEM. Debian can only survive by working to change those parts of the wider problem which come within our influence. Influence is key here: this is a soft issue, replete with unreliable emotions, unhelpful polemic and complexity.
Reputations are hard won and easily blemished. The problems are all about how Debian looks to those who are thinking about where to focus in the years to come. It looks like the FSFE should be the one to take the baton from the FSF - unless the FSF can adopt a truly inclusive and open governance.
My problem is not with RMS himself - what he has or has not done, which apologies are deemed sincere and whether his behaviour has changed. He is one man, and his contributions can be respected even as his behaviour is criticised. My problem is with the FSF, for behaving in a closed, opaque and divisive manner and then making a governance statement that makes things worse whilst purporting to be transparent. Institutions lay the foundations for the future of the community and must be expected to hold individuals to account.
Free and open have been contentious concepts for the FSF, with all the arguments about Open Source which is not Free Software. It is clear that the FSF do understand the implications of freedom and openness. It is absurd, then, to adopt a closed and archaic governance. A valid governance model for the FSF would never have allowed RMS back onto the board; instead, the FSF should be the primary institution to hold him, and others like him, to account for their actions. The FSF needs to be front and centre in promoting diversity and openness. The FSF could learn from the FSFE. The calls for the FSF to adopt a more diverse, inclusive board membership are not new. I can only echo the Red Hat statement:
in order to regain the confidence of the broader free software community, the FSF should make fundamental and lasting changes to its governance.
...
[There is] no reason to believe that the most recent FSF board statement signals any meaningful commitment to positive change.
And the FSFE statement:
The goal of the software freedom movement is to empower all people to control technology and thereby create a better society for everyone. Free Software is meant to serve everyone regardless of their age, ability or disability, gender identity, sex, ethnicity, nationality, religion or sexual orientation. This requires an inclusive and diverse environment that welcomes all contributors equally.
The FSF has not demonstrated the behaviour I expect from a free software institution, and I cannot respect an institution which proclaims a mantra of freedom and openness that is not reflected in its own governance. The preliminary statement by the FSF board is abhorrent. The FSF must now take up the offers from those institutions within the community which retain the community's respect. I'm only one voice, but I would implore the FSF to make substantive, positive and permanent change to its governance, practices and future - or face irrelevance.
