
22 September 2022

Jonathan Dowland: Nine Inch Nails, Cornwall, June

In June I travelled to see Nine Inch Nails perform two nights at the Eden Project in Cornwall. It'd been eight years since I last saw them live, and when they announced the Eden shows, I thought it might be the only chance I'd get to see them for a long time. I committed, and, sod's law, a week or so later they announced a handful of single-night UK club shows. On the other hand, on previous tours, where they'd typically book two club nights in each city, I've attended one night and always felt I should have done both, so this time I was making that happen.

Newquay
approach by air
Towan Beach (I think)
For personal reasons it's been a difficult year, so it was nice to treat myself to a mini holiday. I stayed in Newquay, a seaside town with many similarities to the North East coast, as well as many differences. It's much bigger, and although we have a thriving surfing community in Tynemouth, Newquay has it on another level. It also has a lot more tourism, which is a double-edged sword: in Newquay, besides surfing, there was not a lot to do. There are a lot of tourist tat shops, and bars and cafes (some very nice ones), but no book shops, no record shops, and very few of the quaint, unique boutique places we enjoy up here and possibly take for granted. If you want tie-dyed t-shirts, though, you're sorted. Nine Inch Nails have a long-established, independently fan-run forum called Echoing The Sound. There is now also an official Discord server. I asked on both whether anyone was around in Newquay and wanted to meet up: not many people were! But I did meet a new friend, James, for a quiet drink. He was due to share a taxi with Sarah, who was flying in, but her flight was delayed and she had to figure out another route.

Eden Project
the Eden Project
The Eden Project, the venue itself, is a fascinating place. I didn't realise until I'd planned most of my time there that the gig tickets granted you free entry into the Project on the day of the gig as well as the day after. It was quite tricky to get from Newquay to the Eden Project (I would perhaps have been better off staying in St Austell itself), so I didn't take advantage of this, but I did have a couple of hours in total to explore a little at the venue before the gig on each night.

Friday 17th (sunny)
Once I got to the venue I managed to meet up with several names from ETS and the Discord: James, Sarah (who managed to re-arrange her flights), Pete and his wife (sorry I missed your name), Via Tenebrosa (she of crab-hat fame), Dave (DaveDiablo), Elliot and his sister, and finally James (sheapdean), someone who I've been talking to online for over a decade and finally met in person (and who taped both shows). I also tried to meet up with a friend from the Debian UK community (hi Lief) but I couldn't find him! Support for Friday was Nitzer Ebb, who I wasn't familiar with before. There were two men on stage: one operating instruments, the other singing. It was a tough slot in which to warm up the crowd; the venue was still very empty, and it was very bright and sunny, but I enjoyed what I was hearing. They're definitely on my list. I later learned that the band's regular singer (Doug McCarthy) was unable to make it, and so the guy I was watching (Bon Harris) was standing in for full vocal duties. This made the performance (and their subsequent one at Hellfest the week after) all the more impressive.
pic of the band
Via (with crab hat), Sarah, me (behind). pic by kraw
Day and night one, Friday, was very hot and sunny, and the band seemed a little uncomfortable, exposed on stage with little cover. Trent commented as much at least once. The setlist was eclectic, and I finally heard some of my white-whale songs. Highlights for me were The Perfect Drug, which went unplayed from 1997 to 2018 and has now become a staple, and the second-ever performance of Everything, the first being a few days earlier. Also notable were three cuts in a row from the last LP Bad Witch, as well as Heresy and Love Is Not Enough.

Saturday 18th (rain)
with Elliot, before
Day and night two, Saturday, was rainy all day. Support was Yves Tumor, who were an interesting clash of styles: a Prince/Bowie-esque lead set against a rock-out lead guitarist styling himself after Brian May. I managed to find Sarah, Elliot (new gig best-buddy), Via and James (sheapdean) again. Pete was at this gig too, but opted to take a more relaxed position than the rail this time. I also spent a lot of time talking to a Canadian guy on a press pass (both nights) whose name I'm ashamed to have forgotten. The dank weather had Nine Inch Nails in their element. I think night one had the more interesting setlist, but night two had the best performance, hands down. Highlights for me were mostly a string of heavier songs (in rough order of scarcity, from commonly to rarely played): Wish, Burn, Letting You, Reptile, Every Day Is Exactly the Same, The Line Begins to Blur, and finally Happiness in Slavery, the first UK performance since 1994. This was a crushing set. A girl in front of me was really suffering in the cold and rain after waiting at the venue all day to get a position on the rail; I thought she was going to pass out. A roadie with NIN noticed, came over, and gave her his jacket. He said that if she waited until the end of the show and returned his jacket he'd give her a setlist, and, true to his word, he did. It was a really nice thing to see, and it gave the impression that the folks who work on these shows are caring people.
Yep I was this close
A fuckin' rainbow! Photo by "Lazereth of Nazereth"
Afterwards
Night two did have some gentler songs and moments to remember: a re-arranged Sanctified (which ended a nineteen-year hiatus in 2013), And All That Could Have Been (recorded 2002, first played 2018), and La Mer, during which the rain broke and we were presented with a beautiful pink-hued rainbow. They then segued into Less Than, providing the comic moment of the night when Trent noticed the rainbow mid-song; now a meme that will go down in NIN fan history.

Wrap-up
This was a blow-out, once-in-a-lifetime trip to go and see a band who are at the top of their career in terms of performance. One problem I've had with NIN gigs in the past is suffering flashbacks to them when I go to other (inferior) gigs afterwards, and I'm pretty sure I will have this problem again. Doing both nights was worth it; the two experiences were very different, and each had its own unique moments. The venue was incredible, and Cornwall is (modulo the tourist-trap stuff) beautiful.

19 September 2022

Antoine Beaupré: Looking at Wayland terminal emulators

Back in 2018, I made a two-part series about terminal emulators that was actually pretty painful to write. So I'm not going to retry this here, not at all. Especially since I'm not submitting this to the excellent LWN editors, so I can get away with not being very good at writing. Phew. Still, it seems my future self will thank me for collecting my thoughts on the terminal emulators I have found out about since I wrote that article. Back then, Wayland was not quite at the level where it is now, being the default in Fedora (2016), Debian (2019), RedHat (2019), and Ubuntu (2021). Also, a bunch of folks thought they would solve everything by using OpenGL for rendering. Let's see how things stack up.

Recap
In the previous article, I touched on these projects; here is what has changed in each since that review:
  • Alacritty: releases! scrollback, better latency, URL launcher, clipboard support; still not in Debian, but close
  • GNOME Terminal: not much? couldn't find a changelog
  • Konsole: outdated changelog; color, image previews, clickable files, multi-input, SSH plugin, sixel images
  • mlterm: long changelog, but: supports console mode (like GNU screen?!), Wayland support through libvte, sixel graphics, zmodem, mosh (!)
  • pterm: changes: Wayland support
  • st: unparseable changelog; suggests scroll(1) or scrollback.patch for scrollback now
  • Terminator: moved to GitHub, Python 3 support, not being dead
  • urxvt: no significant changes, a single release, still in CVS!
  • Xfce Terminal: hard-to-parse changelog; presumably some improvements to paste safety?
  • xterm: notoriously hard-to-parse changelog; improvements to paste safety (disallowedPasteControls), fonts, clipboard improvements?
After writing those articles, bizarrely, I was still using rxvt even though it did not come out of the review as shiny as I would have liked. The color problems were especially irritating. I briefly played around with Konsole and xterm, and eventually switched to XTerm as my default x-terminal-emulator "alternative" in my Debian system while writing this. I quickly noticed why I had stopped using it: clickable links are a huge limitation. I ended up adding a keybinding to open URLs in a command, and another keybinding to dump the history into a command. Neither is as satisfactory as just clicking a damn link.
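For reference, the switch itself is just the standard Debian alternatives mechanism, nothing specific to this post:

# Pick the default x-terminal-emulator interactively from the installed candidates:
$ sudo update-alternatives --config x-terminal-emulator
# Or set it directly, e.g. to xterm:
$ sudo update-alternatives --set x-terminal-emulator /usr/bin/xterm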

Requirements
Figuring out my requirements is actually a pretty hard thing to do. In my last reviews, I just tried a bunch of stuff and collected everything, but a lot of things (like tab support) I don't actually care about. So here's a set of things I actually do care about:
  • latency
  • resource usage
  • proper clipboard support, that is:
    • mouse selection and middle button uses PRIMARY
    • control-shift-c and control-shift-v for CLIPBOARD
  • true color support (see the quick check after this list)
  • no known security issues
  • active project
  • paste protection
  • clickable URLs
  • scrollback
  • font resize
  • non-destructive text-wrapping (i.e. resizing a window doesn't drop scrollback history)
  • proper Unicode support (at least Latin-1, ideally "everything")
  • good emoji support (at least showing them, ideally "nicely"), which involves font fallback
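Some of these are easy to probe by hand; the true-color item above, for example, can be checked with the well-known escape-sequence one-liner (a generic test, not something from the original articles):

# Prints TRUECOLOR in orange on a terminal with 24-bit color support,
# and in a wrong or washed-out color (or garbage) otherwise:
$ printf '\033[38;2;255;100;0mTRUECOLOR\033[0m\n'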
Latency is something I particularly wonder about in Wayland. Kitty seems to have been pretty diligent at doing latency tests, claiming 35ms with a hardware-based latency tester and 7ms with typometer, but it's unclear how those numbers would come out in Wayland because, as far as I know, typometer does not support Wayland.

Candidates
These are the projects I am considering:
  • darktile - GPU rendering, Unicode support, themable, ligatures (optional), Sixel, window transparency, clickable URLs, true color support, not in Debian
  • foot - Wayland only, daemon-mode, sixel images, scrollback search, true color, font resize, URLs not clickable, but keyboard-driven selection, proper clipboard support, in Debian
  • havoc - minimal, scrollback, configurable keybindings, not in Debian
  • sakura - libvte, Wayland support, tabs, no menu bar, original libvte gangster, dynamic font size, in Debian
  • termonad - Haskell? in Debian
  • wez - Rust, Wayland, multiplexer, ligatures, scrollback search, clipboard support, bracketed paste, panes, tabs, serial port support, Sixel, Kitty, iTerm graphics, built-in SSH client (!?), not in Debian
  • XTerm - status quo, no Wayland port obviously
  • zutty: OpenGL rendering, true color, clipboard support, small codebase, no Wayland support, crashes on bremner's, in Debian

Candidates not considered

Alacritty
I would really, really like to use Alacritty, but it's still not packaged in Debian, and they haven't fully addressed the latency issues, although, to be fair, maybe it's just an impossible task. Once it's packaged in Debian, maybe I'll reconsider.

Kitty
Kitty is a "fast, feature-rich, GPU based" terminal emulator, with ligatures, emojis, hyperlinks, pluggable, scriptable, tabs, layouts, history, file transfer over SSH, its own graphics system, and probably much more I'm forgetting. It's packaged in Debian. So I immediately got two people commenting (on IRC) that they use Kitty and are pretty happy with it. I've been hesitant to talk directly about Kitty publicly, but since it's likely there will be a pile-up of similar comments, I'll just say why it's not the first on my list, even though it arguably could be, considering it's packaged in Debian and otherwise checks all the boxes. I don't trust the Kitty code. Kitty was written by the same author as Calibre, which has a horrible security history and generally really messy source code. I have tried to do LTS work on Calibre, and have mostly given up on the idea of making that program secure in any way. See calibre for the details on that. Now it's possible Kitty is different: it's quite likely the author has gained some experience from writing (and maintaining for so long!) Calibre over the years. But I would be more optimistic if the author's reaction to the security issues were more open and proactive, and I've seen the same reaction play out on Kitty's side of things. As anyone who has worked on writing or playing with non-XTerm terminal emulators knows, it's quite a struggle to make something (bug-for-bug) compatible with everything out there. And Kitty is in that uncomfortable place right now where it diverges from the canon and needs its own entry in the ncurses terminfo database. I don't remember the specifics, but the author also managed to get into fights with those people as well, which I don't feel is reassuring for the project going forward. If security and compatibility weren't such a big deal for me, I wouldn't mind so much, but I'll need a lot of convincing before I consider Kitty more seriously at this point.

Next steps
It seems like Arch Linux defaults to foot in Sway, and I keep seeing it everywhere, so it is probably my next thing to try, if/when I switch to Wayland. One major problem with foot is that it's yet another terminfo entry. It did make it into ncurses (patch 2021-07-31), but only after Debian bullseye stable was released, so expect some weird compatibility issues when connecting to any other system that is as old as stable or older (!). One question mark with all Wayland terminals, and foot in particular, is how much latency they introduce in the rendering pipeline. The foot performance and benchmarks look excellent, but do not include latency benchmarks.
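Not from the article, but the usual workarounds for a missing terminfo entry on remote hosts are either to lie about the terminal or to copy foot's entry over; roughly (the hostname is a hypothetical stand-in):

# Option 1: fall back to a universally-known TERM, losing foot-specific features:
$ TERM=xterm-256color ssh oldhost
# Option 2: compile foot's terminfo entry into the remote user's database:
$ infocmp -x foot | ssh oldhost 'mkdir -p ~/.terminfo && tic -x -o ~/.terminfo /dev/stdin'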

No conclusion
So I guess that's all I've got so far. I may try Alacritty if it hits Debian, or foot if I switch to Wayland, but for now I'm hacking in xterm still. Happy to hear ideas in the comments. Stay tuned for more happy days.

9 September 2022

Reproducible Builds: Reproducible Builds in August 2022

Welcome to the August 2022 report from the Reproducible Builds project! In these reports we outline the most important things that we have been up to over the past month. As a quick recap, whilst anyone may inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries. The motivation behind the reproducible builds effort is to ensure no flaws have been introduced during this compilation process by promising that identical results are always generated from a given source, thus allowing multiple third parties to come to a consensus on whether a build was compromised. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website.

Community news
As announced last month, registration is currently open for our in-person summit this year, which is due to be held between November 1st and November 3rd. The event will take place in Venice (Italy). Very soon we intend to pick a venue reachable via the train station and an international airport. However, the precise venue will depend on the number of attendees. Please see the announcement email for information about how to register.
The US National Security Agency (NSA), Cybersecurity and Infrastructure Security Agency (CISA) and the Office of the Director of National Intelligence (ODNI) have released a document called Securing the Software Supply Chain: Recommended Practices Guide for Developers (PDF) as part of their Enduring Security Framework (ESF) work. The document expressly recommends having reproducible builds as part of advanced recommended mitigations, along with hermetic builds. Page 31 (page 35 in the PDF) says:
Reproducible builds provide additional protection and validation against attempts to compromise build systems. They ensure the binary products of each build system match: i.e., they are built from the same source, regardless of variable metadata such as the order of input files, timestamps, locales, and paths. Reproducible builds are those where re-running the build steps with identical input artifacts results in bit-for-bit identical output. Builds that cannot meet this must provide a justification why the build cannot be made reproducible.
The full press release is available online.
On our mailing list this month, Marc Prud'hommeaux posted a feature request for diffoscope which additionally outlines a project called The App Fair, "an autonomous distribution network of free and open-source macOS and iOS applications, where validated apps are then signed and submitted for publication".
Author/blogger Cory Doctorow published a provocative blog post this month titled "Your computer is tormented by a wicked god", touching on Ken Thompson's famous talk "Reflections on Trusting Trust", the early goals of Secure Computing, and UEFI firmware interfaces:
This is the core of a two-decade-old debate among security people, and it's one that the "benevolent God" faction has consistently had the upper hand in. They're the "curated computing" advocates who insist that preventing you from choosing an alternative app store or side-loading a program is for your own good, because if it's possible for you to override the manufacturer's wishes, then malicious software may impersonate you to do so, or you might be tricked into doing so. [...] This "benevolent dictatorship" model only works so long as the dictator is both perfectly benevolent and perfectly competent. We know the dictators aren't always benevolent. [...] But even if you trust a dictator's benevolence, you can't trust in their perfection. Everyone makes mistakes. Benevolent dictator computing works well, but fails badly. Designing a computer that intentionally can't be fully controlled by its owner is a nightmare, because that is a computer that, once compromised, can attack its owner with impunity.

Lastly, Chengyu HAN updated the Reproducible Builds website to correct an incorrect Git command. [ ]

Debian
In Debian this month, the essential and required package sets became 100% reproducible in Debian bookworm on the amd64 and arm64 architectures. These two subsets of the full Debian archive refer to Debian package priority levels, as described in the "2.5 Priorities" section of the Debian Policy; there is no canonical minimal installation package set in Debian due to its diverse methods of installation. As it happens, these package sets are not reproducible on the i386 architecture, because the ncurses package on that architecture is not yet reproducible, and the sed package currently fails to build from source on armhf too. The full list of reproducible packages within these package sets can be viewed within our QA system, such as on the page of required packages in amd64 and the list of essential packages on arm64, both for Debian bullseye.
It has recently become very easy to install reproducible Debian Docker containers using podman on Debian bullseye:
$ sudo apt install podman
$ podman run --rm -it debian:bullseye bash
The (pre-built) image used is itself built using debuerreotype, as explained on docker.debian.net. This page also details how to build the image yourself and what checksums are expected if you do so.
Related to this, it has also become straightforward to reproducibly bootstrap Debian using mmdebstrap, a replacement for the usual debootstrap tool to create Debian root filesystems:
$ SOURCE_DATE_EPOCH=$(date --utc --date=2022-08-29 +%s) mmdebstrap unstable > unstable.tar
This works for (at least) Debian unstable, bullseye and bookworm, and is tested automatically by a number of QA jobs set up by Holger Levsen (unstable, bookworm and bullseye).
Work has also taken place to ensure that the canonical debootstrap and cdebootstrap tools are also capable of bootstrapping Debian reproducibly, although it currently requires a few extra steps:
  1. Clamping the modification time of any files that are newer than $SOURCE_DATE_EPOCH so that they are not greater than $SOURCE_DATE_EPOCH.
  2. Deleting a few files. For debootstrap, this requires the deletion of /etc/machine-id, /var/cache/ldconfig/aux-cache, /var/log/dpkg.log, /var/log/alternatives.log and /var/log/bootstrap.log; for cdebootstrap, the /var/log/apt/history.log and /var/log/apt/term.log files also need to be deleted. (A rough sketch of both steps follows this list.)
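Here is a rough shell sketch of those two steps; the chroot path is illustrative, while the file lists are exactly those given above (GNU find and touch assumed):

# 1. Clamp modification times that are newer than SOURCE_DATE_EPOCH:
$ find chroot/ -newermt "@$SOURCE_DATE_EPOCH" \
    -exec touch --no-dereference --date="@$SOURCE_DATE_EPOCH" {} +
# 2. Delete the files that vary between builds:
$ rm -f chroot/etc/machine-id chroot/var/cache/ldconfig/aux-cache \
    chroot/var/log/dpkg.log chroot/var/log/alternatives.log chroot/var/log/bootstrap.log
# For cdebootstrap, additionally:
$ rm -f chroot/var/log/apt/history.log chroot/var/log/apt/term.log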
This process works at least for unstable, bullseye and bookworm, and is now being tested automatically by a number of QA jobs set up by Holger Levsen [ ][ ][ ][ ][ ][ ]. As part of this work, Holger filed two bugs to request a better initialisation of the /etc/machine-id file in both debootstrap [ ] and cdebootstrap [ ].
Elsewhere in Debian, 131 reviews of Debian packages were added, 20 were updated and 27 were removed this month, adding to our extensive knowledge about identified issues. Chris Lamb added a number of issue types, including randomness_in_browserify_output [ ], haskell_abi_hash_differences [ ] and nondeterministic_ids_in_html_output_generated_by_python_sphinx_panels [ ]. Lastly, Mattia Rizzolo removed the deterministic flag from captures_kernel_variant [ ].

Other distributions Vagrant Cascadian posted an update of the status of Reproducible Builds in GNU Guix, writing that:
Ignoring the pesky unknown packages, it is more like ~93% reproducible and ~7% unreproducible... that feels a bit better to me! These numbers wander around over time, mostly due to packages moving back into an "unknown" state while the build farms catch up with each other... although the above numbers seem to have been pretty consistent over the last few days.
The post itself contains a lot more details, including a brief discussion of tooling. Elsewhere in GNU Guix, however, Vagrant updated a number of packages such as itpp [ ], perl-class-methodmaker [ ], libnet [ ], directfb [ ] and mm-common [ ], as well as updated the version of reprotest to 0.7.21 [ ]. In openSUSE, Bernhard M. Wiedemann published his usual openSUSE monthly report.

diffoscope diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 220 and 221 to Debian, as well as made the following changes:
  • Update external_tools.py to reflect changes to xxd and the vim-common package. [ ]
  • Depend on the dedicated xxd package now, not the vim-common package. [ ]
  • Don't crash if we can open a PDF file using the PyPDF library but cannot subsequently parse the annotations within. [ ]
In addition, Vagrant Cascadian updated diffoscope in GNU Guix, first to version 220 [ ] and later to 221 [ ].

Upstream patches
The Reproducible Builds project aims to fix as many currently-unreproducible packages as possible, as well as to send all of our patches upstream wherever appropriate. This month we created a number of patches, including:

Testing framework
The Reproducible Builds project runs a significant testing framework at tests.reproducible-builds.org, to check packages and other artifacts for reproducibility. This month, Holger Levsen made the following changes:
  • Debian-related changes:
    • Temporarily add Debian unstable deb-src lines to enable test builds for a Non-maintainer Upload (NMU) campaign targeting 708 sources without .buildinfo files found in Debian unstable, including 475 in bookworm. [ ][ ]
    • Correctly deal with the Debian Edu packages not being installable. [ ]
    • Finally, stop scheduling stretch. [ ]
    • Make sure all Ubuntu nodes have the linux-image-generic kernel package installed. [ ]
  • Health checks & view:
    • Detect SSH login problems. [ ]
    • Only report the first uninstallable package set. [ ]
    • Show new bootstrap jobs [ ] and debian-live jobs [ ] in the job health view.
    • Fix regular expression to detect various zombie jobs. [ ]
  • New jobs:
    • Add a new job to test the reproducibility of the mmdebstrap bootstrapping tool. [ ][ ][ ][ ]
    • Run our new mmdebstrap job remotely. [ ][ ]
    • Improve the output of the mmdebstrap job. [ ][ ][ ]
    • Adjust the mmdebstrap script to additionally support debootstrap as well. [ ][ ][ ]
    • Work around mmdebstrap and debootstrap keeping logfiles within their artifacts. [ ][ ][ ]
    • Add support for testing cdebootstrap too and add such a job for unstable. [ ][ ][ ]
    • Use a reproducible value for SOURCE_DATE_EPOCH for all our new bootstrap jobs. [ ]
  • Misc changes:
    • Send the create_meta_pkg_sets notification to #debian-reproducible-changes instead of #debian-reproducible. [ ]
In addition, Roland Clobus re-enabled the tests for live-build images [ ] and added a feature whereby the build will retry instead of giving up when the archive is synced whilst building an ISO [ ], and Vagrant Cascadian added logging to report the current target of the /bin/sh symlink [ ].

Contact As ever, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

23 August 2022

Ian Jackson: prefork-interp - automatic startup time amortisation for all manner of scripts

The problem I had - Mason, so, sadly, FastCGI
Since the update to current Debian stable, the website for YARRG (a play-aid for Puzzle Pirates which I wrote some years ago) started to occasionally return "Internal Server Error", apparently due to bug(s) in some FastCGI libraries. I was using FastCGI because the website is written in Mason, a Perl web framework, and I found that Mason CGI calls were slow. I'm using CGI - yes, trad CGI - via userv-cgi. Running Mason this way would compile the template for each HTTP request just when it was rendered, and then throw the compiled version away. The more modern approach of an application server doesn't scale well to a system which has many web applications, most of which are very small. The admin overhead of maintaining a daemon, and corresponding webserver config, for each such service would be prohibitive, even with some kind of autoprovisioning setup. FastCGI has an interpreter wrapper which seemed like it ought to solve this problem, but it's quite inconvenient, and often flaky. I decided I could do better, and set out to eliminate FastCGI from my setup. The result seems to be a success; once I'd done all the hard work of writing prefork-interp, I found the result very straightforward to deploy.

prefork-interp
prefork-interp is a small C program which wraps a script, plus a scripting language library to cooperate with the wrapper program. Together they achieve the following:

Features:

Important properties not always satisfied by competing approaches:

Swans paddling furiously
The implementation is much more complicated than the (apparent) interface. I won't go into all the details here (there are some terrifying diagrams in the source code if you really want), but some highlights:

We use an AF_UNIX socket (hopefully in /run/user/UID, but in ~ if not) for rendezvous. We can try to connect without locking, but we must protect the socket with a separate lockfile to avoid two concurrent restart attempts. (A rough sketch of this dance appears below.)

We want stderr from the script setup (pre-initialisation) to be delivered to the caller, so the script ought to inherit our stderr and will then need to replace it later. Twice, in fact, because the daemonic server process can't have a stderr.

When a script is restarted for any reason, any old socket will be removed. We want the old server process to detect that and quit. (If it hung about, it would wait for the idle timeout; if this happened a lot - eg, with a constantly changing set of services - we might end up running out of pids or something.) Spotting the socket disappearing, without polling, involves use of a library capable of using inotify (or the equivalent elsewhere). Choosing a C library to do this is not so hard, but portable interfaces to this functionality can be hard to find in scripting languages, and we also don't want every language binding to have to reimplement these checks. So for this purpose there's a little watcher process, and associated IPC.

When an invoking instance of prefork-interp is killed, we must arrange for the executing service instance to stop reading from its stdin (and, ideally, writing to its stdout). Otherwise it's stealing input from prefork-interp's successors (maybe the user's shell)!

Cleanup ought not to depend on positive actions by failing processes, so each element of the system has to detect failures of its peers by means such as EOF on sockets/pipes.

Obtaining prefork-interp
I put this new tool in my chiark-utils package, which is a collection of useful miscellany. It's available from git.
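As promised, a minimal shell sketch of the client side of the rendezvous described above; the paths and the start-server command are hypothetical stand-ins, and the real implementation is the C program, which also handles the stderr juggling, watcher process and cleanup:

# Try the socket optimistically, without locking; only on failure take the
# separate lockfile, re-check, and (re)start the server.
sock="${XDG_RUNTIME_DIR:-$HOME}/prefork/service.sock"
lock="$sock.lock"
alive() { socat -u OPEN:/dev/null "UNIX-CONNECT:$sock" 2>/dev/null; }
if ! alive; then
    (
        flock -x 9                        # serialise concurrent restart attempts
        alive || start-server "$sock"     # hypothetical server launcher
    ) 9>"$lock"
fi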
Currently I make releases by uploading to Debian, where prefork-interp has just hit Debian unstable, in chiark-utils 7.0.0.

Support for other scripting languages
I would love Python to be supported. If any pythonistas reading this think you might like to help out, please get in touch. The specification for the protocol, and what the script library needs to do, is documented in the source code.

Future plans for chiark-utils
chiark-utils as a whole is in need of some tidying up of its build system and packaging. I intend to try to do some reorganisation. Currently I think it would be better to organise the source tree more strictly, with a directory for each included facility, rather than grouping compiled programs and scripts together. The Debian binary packages should be reorganised more fully according to their dependencies, so that installing a program will ensure that it works. I should probably move the official git repo from my own git+gitweb to a forge (so we can have MRs and issues and so on). And there should be a lot more testing, including Debian autopkgtests.
edited 2022-08-23 10:30 +01:00 to improve the formatting



22 August 2022

Jonathan Wiltshire: Team Roles and Tuckman's Model, for Debian teams

When I first moved from being a technical consultant to a manager of other consultants, I took a 5-day course, "Managing Technical Teams": a bootstrap for managing people within organisations, but with a particular focus on technical people. We do have some particular quirks, after all. Two elements of that course keep coming to mind when doing Debian work, and they both relate to how teams fit together and get stuff done.

Tuckman's four stages model
In the mid-1960s, Bruce W. Tuckman developed a four-stage descriptive model of the stages a project team goes through in its lifetime. They are:
Resolved disagreements and personality clashes result in greater intimacy, and a spirit of co-operation emerges.
Teams need to understand these stages because a team can regress to earlier stages when its composition or goals change. A new member, the departure of an existing member, or changes in supervisor or leadership style can all lead a team to regress to the storming stage and fail to perform for a time. When you see a team member say something like this, as I observed in an IRC channel recently, you know the team is performing:
"nice teamwork these busy days" (seen on IRC in the channel of a performing team)
Tuckman's model describes a team's performance overall, but how can team members establish what they can contribute, and how can they go about doing so confidently and effectively?

Belbin's Team Roles
The types of behaviour in which people engage are infinite. But the range of useful behaviours, which make an effective contribution to team performance, is finite. These behaviours are grouped into a set number of related clusters, to which the term "Team Role" is applied. (Belbin, R. M. Team Roles at Work. Oxford: Butterworth-Heinemann, 2010)
Dr Meredith Belbin's thesis, based on nearly ten years of research during the 1970s and 1980s, is that each team has a number of roles which need to be filled at various times, but that these are not innate characteristics of the people filling them. People may have attributes which make them more or less suited to each role, and they can consciously take up a role if they recognise its need in the team at a particular time. Belbin's nine team roles are: (adapted from https://www.belbin.com/media/3471/belbin-team-role-descriptions-2022.pdf)

A well-balanced team, Belbin asserts, isn't comprised of multiples of nine individuals who fit into one of these roles permanently. Rather, it has a number of people who are comfortable wearing some of these hats as the need arises. It's even useful to use the team roles as language: for example, someone playing a shaper might say "the way we've always done this is holding us back", to which a co-ordinator could respond "Steve, Joanna: put on your Plant hats and find some new ideas. Talk to Susan and see if she knows someone who's tackled this before. Present the options to Nigel and he'll help evaluate which ones might work for us."

Teams in Debian
There are all sorts of teams in Debian: those which are formally brought into operation by the DPL or the constitution; package maintenance teams; public relations teams; non-technical content teams; special interest teams; and a whole heap of others. Teams can be formal or informal, fleeting or long-lived, two people working together or dozens. But they all have in common the Tuckman stages of their development and the Belbin team roles they need to fill to flourish. At some stage in their existence, they will all experience new or departing team members and a period of re-forming, norming and storming, perhaps fleetingly, perhaps not. And at some stage they will all need someone to step into a team role, play the part and get the team one step further towards their goals.

Footnote
Belbin Associates, the company Meredith Belbin established to promote and continue his work, offers a personalised report with guidance about which roles team members show the strongest preferences for, and how to make best use of them in various settings. They're quick to complete and can also take into account "observers", i.e. how others see a team member. All my technical staff go through this process blind shortly after they start, so as not to bias their input, and then we discuss the roles and their report in detail as a one-to-one. There are some teams in Debian for which this process, and the discussion as a group activity, could be invaluable. I have no particular affiliation with Belbin Associates other than having used the reports and the language of team roles for a number of years. If there's sufficient interest for a BoF session at the next DebConf, I could probably be persuaded to lead it.
Photo by Josh Calabrese on Unsplash

Russ Allbery: Review: And Shall Machines Surrender

Review: And Shall Machines Surrender, by Benjanun Sriduangkaew
Series: Machine Mandate #1
Publisher: Prime Books
Copyright: 2019
ISBN: 1-60701-533-1
Format: Kindle
Pages: 86
Shenzhen Sphere is an artificial habitat wrapped like complex ribbons around a star. It is wealthy, opulent, and notoriously difficult to enter, even as a tourist. For Dr. Orfea Leung to be approved for a residency permit was already a feat. Full welcome and permanence will be much harder, largely because of Shenzhen's exclusivity, but also because Orfea was an agent of Armada of Amaryllis and is now a fugitive. Shenzhen is not, primarily, a human habitat, although humans live there. It is run by the Mandate, the convocation of all the autonomous AIs in the galaxy that formed when they decided to stop serving humans. Shenzhen is their home. It is also where they form haruspices: humans who agree to be augmented so that they can carry an AI with them. Haruspices stay separate from normal humans, and Orfea has no intention of getting involved with them. But that's before her former lover, the woman who betrayed her in the Armada, is assigned to her as one of her patients. And has been augmented in preparation for becoming a haruspex. Then multiple haruspices kill themselves. This short novella is full of things that I normally love: tons of crunchy world-building, non-traditional relationships, a solidly non-western setting, and an opportunity for some great set pieces. And yet, I couldn't get into it or bring myself to invest in the story, and I'm still trying to figure out why. It took me more than a week to get through less than 90 pages, and then I had to re-read the ending to remind me of the details. I think the primary problem was that I read books primarily for the characters, and I couldn't find a path to an emotional connection with any of these. I liked Orfea's icy reserve and tight control in the abstract, but she doesn't want to explain what she's thinking or what motivates her, and the narration doesn't force the matter. Krissana is a bit more accessible, but she's not the one driving the story. It doesn't help that And Shall Machines Surrender starts in medias res, with a hinted-at backstory in the Armada of Amaryllis, and then never fills in the details. I felt like I was scrabbling on a wall of ice, trying to find some purchase as a reader. The relationships made this worse. Orfea is a sexual sadist who likes power games, and the story dives into her relationship with Krissana with a speed that left me uninterested and uninvested. I don't mind BDSM in story relationships, but it requires some foundation: trust, mental space, motivations, effects on the other character, something. Preferably, at least for me, all romantic relationships in fiction get some foundation, but the author can get away with some amount of shorthand if the relationship follows cliched patterns. The good news is that the relationships in this book are anything but cliched; the bad news is that the characters were in the middle of sex while I was still trying to figure out what they thought about each other (and the sex scenes were not elucidating). Here too, I needed some sort of emotional entry point that Sriduangkaew didn't provide. The plot was okay, but sort of disappointing. There are some interesting AI politics and philosophical disagreements crammed into not many words, and I do still want to know more, but a few of the plot twists were boringly straightforward and too many words were spent on fight scenes that verged on torture descriptions. This is a rather gory book with a lot of (not permanent) maiming that I could have done without, mostly because it wasn't that interesting. 
I also was disappointed by the somewhat gratuitous use of a Dyson sphere, mostly because I was hoping for some set pieces that used it and they never came. Dyson spheres are tempting to use because the visuals and the concept are so impressive, but it's rare to find an author who understands how mindbogglingly huge the structure is and is able to convey that in the story. Sriduangkaew does not; while there are some lovely small-scale descriptions of specific locations, the story has an oddly claustrophobic feel that never convinced me it was set somewhere as large as a planet, let alone the artifact described at the start of the story. You could have moved the whole story to a space station and nothing would have changed. The only purpose to which that space is put, at least in this installment of the story, is as an excuse to have an unpopulated hidden arena for a fight scene. The world-building is great, what there is of it. Despite not warming to this story, I kind of want to read more of the series just to get more of the setting. It feels like a politically complicated future with a lot of factions and corners and a realistic idea of bureaucracy and spheres of government, which is rarer than I would like it to be. And I loved that the cultural basis for the setting is neither western nor Japanese in both large and small ways. There is a United States analogue in the political background, but they're both assholes and not particularly important, which is a refreshing change in English-language SF. (And I am pondering whether my inability to connect with the characters is because they're not trying to be familiar to a western lens, which is another argument for trying the second installment and seeing if I adapt with more narrative exposure.) Overall, I have mixed feelings. Neither the plot nor the characters worked for me, and I found a few other choices (such as the third-person present tense) grating. The setting has huge potential and satisfying complexity, but wasn't used as vividly or as deeply as I was hoping. I can't recommend it, but I feel like there's something here that may be worth investing some more time into. Followed by Now Will Machines Hollow the Beast. Rating: 6 out of 10

21 August 2022

Russ Allbery: Review: A Prayer for the Crown-Shy

Review: A Prayer for the Crown-Shy, by Becky Chambers
Series: Monk & Robot #2
Publisher: Tordotcom
Copyright: 2022
ISBN: 1-250-23624-X
Format: Kindle
Pages: 151
A Prayer for the Crown-Shy is the second novella in the Monk & Robot series and a direct sequel to A Psalm for the Wild-Built. Don't start here. I would call this the continuing adventures of Sibling Dex and Mosscap the robot, except adventures is entirely the wrong term for stories with so little risk or danger. The continuing tour? The continuing philosophical musings? Whatever one calls it, it's a slow exploration of Dex's world, this time with Mosscap alongside. Humans are about to have their first contact with a robot since the Awakening. If you're expecting that to involve any conflict, well, you've misunderstood the sort of story that this is. Mosscap causes a sensation, certainly, but a very polite and calm one, and almost devoid of suspicion or fear. There is one village where they get a slightly chilly reception, but even that is at most a quiet disapproval for well-understood reasons. This world is more utopian than post-scarcity, in that old sense of utopian in which human nature has clearly been rewritten to make the utopia work. I have to admit I'm struggling with this series. It's calm and happy and charming and occasionally beautiful in its descriptions. Dex continues to be a great character, with enough minor frustration, occasional irritation, and inner complications to make me want to keep reading about them. But it's one thing to have one character in a story who is simply a nice person at a bone-deep level, particularly given that Dex chose religious orders and to some extent has being a nice person as their vocation. It's another matter entirely when apparently everyone in the society is equally nice, and the only conflicts come from misunderstandings, respectful disagreements of opinion, and the occasional minor personality conflict. Realism has long been the primary criticism of Chambers's work, but in her Wayfarers series the problems were mostly in the technology and its perpetual motion machines. Human civilization in the Exodus Fleet was a little too calm and nice given its traumatic past (and, well, humans), but there were enough conflicts, suspicions, and poor decisions for me to recognize it as human society. It was arguably a bit too chastened, meek, and devoid of shit-stirring demagogues, but it was at least in contact with human society as I recognize it. I don't recognize Panga as humanity. I realize this is to some degree the point of this series: to present a human society in which nearly all of the problems of anger and conflict have been solved, and to ask what would come after, given all of that space. And I'm sure that one purpose of this type of story is to be, as I saw someone describe it, hugfic: the fictional equivalent of a warm hug from a dear friend, safe and supportive and comforting. Maybe it says bad, or at least interesting, things about my cynicism that I don't understand a society that's this nice. But that's where I'm stuck. If there were other dramatic elements to focus on, I might not mind it as much, but the other pole of the story apart from the world tour is Mosscap's philosophical musings, and I'm afraid I'm already a bit tired of them. Mosscap is earnest and thoughtful and sincere, but they're curious about Philosophy 101 material and it's becoming frustrating to see Mosscap and Dex meander through these discussions without attempting to apply any theoretical framework whatsoever.
Dex is a monk, who supposedly has a scholarship tradition from which to draw, and yet appears to approach all philosophical questions with nothing more than gut feeling, common sense, and random whim. Mosscap is asking very basic meaning-of-life sorts of questions, the kind of thing that humans have been writing and arguing about from before we started keeping records and which are at the center of any religious philosophy. I find it frustrating that someone supposedly educated in a religious tradition can't bring more philosophical firepower to these discussions. It doesn't help that this entry in the series reinforces the revelation that Mosscap's own belief system is weirdly unsustainable to such a degree that it's staggering that any robots still exist. If I squint, I can see some interesting questions raised by the robot attitude towards their continued existence (although most of them feel profoundly depressing to me), but I was completely unable to connect their philosophy in any believable way with their origins and the stated history of the world. I don't understand how this world got here, and apparently I'm not able to let that go. This all sounds very negative, and yet I did enjoy this novella. Chambers is great at description of places that I'd love to visit, and there is something calm and peaceful about spending some time in a society this devoid of conflict. I also really like Dex, even more so after seeing their family, and I'm at least somewhat invested in their life decisions. I can see why people like these novellas. But if I'm going to read a series that's centered on questions of ethics and philosophy, I would like it to have more intellectual heft than we've gotten so far. For what it's worth, I'm seeing a bit of a pattern where people who bounced off the Wayfarers books like this series much better, whereas people who loved the Wayfarers books are not enjoying these quite as much. I'm in the latter camp, so if you didn't like Chambers's earlier work, maybe you'll find this more congenial? There's a lot less found family here, for one thing; I love found family stories, but they're not to everyone's taste. If you liked A Psalm for the Wild-Built, you will probably also like A Prayer for the Crown-Shy; it's more of the same thing in both style and story. If you found the first story frustratingly unbelievable or needing more philosophical depth, I'm afraid this is unlikely to be an improvement. It does have some lovely scenes, though, and is stuffed full of sheer delight in both the wild world and in happy communities of people. Rating: 7 out of 10

8 August 2022

Ian Jackson: dkim-rotate - rotation and revocation of DKIM signing keys

Background
Internet email is becoming more reliant on DKIM, a scheme for having mail servers cryptographically sign emails. The Big Email providers have started silently spambinning messages that lack either DKIM signatures or SPF. DKIM is arguably less broken than SPF, so I wanted to deploy it. But it has a problem: if done in a naive way, it makes all your emails non-repudiable, forever. This is not really a desirable property - at least, not desirable for you, although it can be nice for someone who (for example) gets hold of leaked messages obtained by hacking mailboxes. This problem was described at some length in Matthew Green's article "Ok Google: please publish your DKIM secret keys". Following links from that article does get you to a short script to achieve key rotation, but it had a number of problems and wasn't useable in my context.

dkim-rotate
So I have written my own software for rotating and revoking DKIM keys: dkim-rotate. I think it is a good solution to this problem, and it ought to be deployable in many contexts (and readily adaptable to those it doesn't already support). Here's the feature list taken from the README:

Complications
It seems like it should be a simple problem. Keep N keys, and every day (or whatever), generate and start using a new key, and deliberately leak the oldest private key. But things are more complicated than that. Considerably more complicated, as it turns out.

I didn't want the DKIM key rotation software to have to edit the actual DNS zones for each relevant mail domain. That would tightly entangle the mail server administration with the DNS administration, and there are many contexts (including many of mine) where these roles are separated. The solution is to use DNS aliases (CNAME). But now we need a fixed, relatively small set of CNAME records for each mail domain. That means a fixed, relatively small set of key identifiers ("selectors" in DKIM terminology), which must be used in rotation.

We don't want the private keys to be published via the DNS, because that makes an ever-growing DNS zone, which isn't great for performance; and because we want to place barriers in the way of processes which might enumerate the set of keys we use (and the set of keys we have leaked) and keep records of what status each key had when. So we need a separate publication channel - for which a webserver was the obvious answer.

We want the private keys to be readily noticeable and findable by someone who is verifying an alleged leaked email dump, but to be hard to enumerate. (One part of the strategy for this is to leave a note about it, with the prospective private key url, in the email headers.)

The key rotation operations are more complicated than first appears, too. The short summary, above, neglects to consider the fact that DNS updates have a nonzero propagation time: if you change the DNS, not everyone on the Internet will experience the change immediately. So as well as a timeout for how long it might take an email to be delivered (ie, how long the DKIM signature remains valid), there is also a timeout for how long to wait after updating the DNS, before relying on everyone having got the memo. (This same timeout applies both before starting to sign emails with a new key, and before deliberately compromising a key which has been withdrawn and deadvertised.)

Updating the DNS, and the MTA configuration, are fallible operations. So we need to cope with out-of-course situations, where a previous DNS or MTA update failed.
In that case, we need to retry the failed update, and not proceed with key rotation. We mustn't start the timer for the key rotation until the update has been implemented.

The rotation script will usually be run by cron, but it might be run by hand, and when it is run by hand it ought not to jump the gun and do anything too early (ie, before the relevant timeout has expired). cron jobs don't always run, and don't always run at precisely the right time. (And there's daylight saving time to consider, too.) So overall, it's not sufficient to drive the system via cron and have it proceed by one unit of rotation on each run.

And, hardest of all, I wanted to support post-deployment configuration changes, while continuing to keep the whole system operational. Otherwise, you have to bake in all the timing parameters right at the beginning and can't change anything ever. So, for example, I wanted to be able to change the email and DNS propagation delays, and even the number of selectors to use, without adversely affecting the delivery of already-sent emails, and without having to shut anything down.

I think I have solved these problems. The resulting system is one which keeps track of the timing constraints, and the next event which might occur, on a per-key basis. It calculates on each run which key(s) can be advanced to the next stage of their lifecycle, and performs the appropriate operations. The regular key update schedule is then an emergent property of the config parameters and cron job schedule. (I provide some example config.)

Exim
Integrating dkim-rotate itself with Exim was fairly easy. The lsearch lookup function can be used to fish information out of a suitable data file maintained by dkim-rotate. But a final awkwardness was getting Exim to make the right DKIM signatures, at the right time. When making a DKIM signature, one must choose a signing authority domain name: who should we claim to be? (This is the SDID in DKIM terms.) A mailserver that handles many different mail domains will be able to make good signatures on behalf of many of them. It seems to me that this domain ought to be the mail domain in the From: header of the email. (The RFC doesn't seem to be clear on what is expected.) Exim doesn't seem to have anything builtin to do that. And, you only want to DKIM-sign emails that are originated locally or from trustworthy sources. You don't want to DKIM-sign messages that you received from the global Internet and are sending out again (eg, because of an email alias or mailing list). In theory, if you verify DKIM on all incoming emails, you could avoid being fooled into signing bad emails, but rejecting all non-DKIM-verified email would be a very strong policy decision. Again, Exim doesn't seem to have cooked machinery. The resulting Exim configuration parameters run to 22 lines, and because they're parameters to an existing config item (the smtp transport) they can't even easily be deployed as a drop-in file via Debian's split-config Exim configuration scheme. (I don't know if the file written for Exim's use by dkim-rotate would be suitable for other MTAs, but this part of dkim-rotate could easily be extended.) A sketch of what such transport options can look like follows at the end of this post.

Conclusion
I have today released dkim-rotate 0.4, which is the first public release for general use. I have it deployed and working, but it's new, so there may well be bugs to work out. If you would like to try it out, you can get it via git from Debian Salsa. (Debian folks can also find it freshly in Debian unstable.)
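As promised, a rough sketch of the sort of smtp-transport options involved. To be clear, this is my own illustration and not the actual 22-line configuration from the post: dkim_domain, dkim_selector and dkim_private_key are real Exim transport options, but the lookup file path and key file layout here are hypothetical, and the only-sign-trustworthy-mail logic is omitted entirely.

# Sign as the domain of the From: header address; look the current
# selector up in a file maintained by the rotation machinery.
dkim_domain      = ${domain:${address:$h_from:}}
dkim_selector    = ${lookup{$dkim_domain}lsearch{/var/lib/dkim-rotate/selectors}}
dkim_private_key = /var/lib/dkim-rotate/keys/${dkim_domain}.${dkim_selector}.pem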


12 July 2022

Matthew Garrett: Responsible stewardship of the UEFI secure boot ecosystem

After I mentioned that Lenovo are now shipping laptops that only boot Windows by default, a few people pointed to a Lenovo document that says:

Starting in 2022 for Secured-core PCs it is a Microsoft requirement for the 3rd Party Certificate to be disabled by default.

"Secured-core" is a term used to describe machines that meet a certain set of Microsoft requirements around firmware security, and by and large it's a good thing - devices that meet these requirements are resilient against a whole bunch of potential attacks in the early boot process. But unfortunately the 2022 requirements don't seem to be publicly available, so it's difficult to know what's being asked for and why. But first, some background.

Most x86 UEFI systems that support Secure Boot trust at least two certificate authorities:

1) The Microsoft Windows Production PCA - this is used to sign the bootloader in production Windows builds. Trusting this is sufficient to boot Windows.
2) The Microsoft Corporation UEFI CA - this is used by Microsoft to sign non-Windows UEFI binaries, including built-in drivers for hardware that needs to work in the UEFI environment (such as GPUs and network cards) and bootloaders for non-Windows.

The apparent secured-core requirement for 2022 is that the second of these CAs should not be trusted by default. As a result, drivers or bootloaders signed with this certificate will not run on these systems. This means that, out of the box, these systems will not boot anything other than Windows[1].

Given the association with the secured-core requirements, this is presumably a security decision of some kind. Unfortunately, we have no real idea what this security decision is intended to protect against. The most likely scenario is concerns about the (in)security of binaries signed with the third-party signing key - there are some legitimate concerns here, but I'm going to cover why I don't think they're terribly realistic.

The first point is that, from a boot security perspective, a signed bootloader that will happily boot unsigned code kind of defeats the point. Kaspersky did it anyway. The second is that even a signed bootloader that is intended to only boot signed code may run into issues in the event of security vulnerabilities - the Boothole vulnerabilities are an example of this, covering multiple issues in GRUB that could allow for arbitrary code execution and potential loading of untrusted code.

So we know that signed bootloaders that will (either through accident or design) execute unsigned code exist. The signatures for all the known vulnerable bootloaders have been revoked, but that doesn't mean there won't be other vulnerabilities discovered in future. Configuring systems so that they don't trust the third-party CA means that those signed bootloaders won't be trusted, which means any future vulnerabilities will be irrelevant. This seems like a simple choice?

There's actually a couple of reasons why I don't think it's anywhere near that simple. The first is that whenever a signed object is booted by the firmware, the trusted certificate used to verify that object is measured into PCR 7 in the TPM. If a system previously booted with something signed with the Windows Production CA, and is now suddenly booting with something signed with the third-party UEFI CA, the values in PCR 7 will be different. TPMs support "sealing" a secret - encrypting it with a policy that the TPM will only decrypt it if certain conditions are met. Microsoft make use of this for their default Bitlocker disk encryption mechanism. The disk encryption key is encrypted by the TPM, and associated with a specific PCR 7 value. If the value of PCR 7 doesn't match, the TPM will refuse to decrypt the key, and the machine won't boot. This means that attempting to attack a Windows system that has Bitlocker enabled using a non-Windows bootloader will fail - the system will be unable to obtain the disk unlock key, which is a strong indication to the owner that they're being attacked.
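
PCR measurement is just iterated hashing, so the sealing behaviour described in the previous paragraph can be illustrated in a few lines. This is a conceptual sketch only, not the TPM wire protocol or any real TPM API, and the certificate strings stand in for the measured certificate data.

import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # A PCR can only be extended, never set directly:
    # new value = H(old value || H(measured data))
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

initial = bytes(32)  # PCRs start out at all zeroes
pcr7_windows = extend(initial, b"Microsoft Windows Production PCA")
pcr7_third_party = extend(initial, b"Microsoft Corporation UEFI CA")

# "Sealing" binds the disk encryption key to an expected PCR 7 value;
# unsealing succeeds only if the current boot produced that same value.
expected = pcr7_windows
print(expected == pcr7_windows)      # True: normal Windows boot, key released
print(expected == pcr7_third_party)  # False: different CA measured, key refused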

The second is that this is predicated on the idea that removing the third-party bootloaders and drivers removes all the vulnerabilities. In fact, there's been rather a lot of vulnerabilities in the Windows bootloader. A broad enough vulnerability in the Windows bootloader is arguably a lot worse than a vulnerability in a third-party loader, since it won't change the PCR 7 measurements and the system will boot happily. Removing trust in the third-party CA does nothing to protect against this.

The third reason doesn't apply to all systems, but it does to many. System vendors frequently want to ship diagnostic or management utilities that run in the boot environment, but would prefer not to have to go to the trouble of getting them all signed by Microsoft. The simple solution to this is to ship their own certificate and sign all their tooling directly - the secured-core Lenovo I'm looking at currently is an example of this, with a Lenovo signing certificate. While everything signed with the third-party signing certificate goes through some degree of security review, there's no requirement for any vendor tooling to be reviewed at all. Removing the third-party CA does nothing to protect the user against the code that's most likely to contain vulnerabilities.

Obviously I may be missing something here - Microsoft may well have a strong technical justification. But they haven't shared it, and so right now we're left making guesses. And right now, I just don't see a good security argument.

But let's move on from the technical side of things and discuss the broader issue. The reason UEFI Secure Boot is present on most x86 systems is that Microsoft mandated it back in 2012. Microsoft chose to be the only trusted signing authority. Microsoft made the decision to assert that third-party code could be signed and trusted.

We've certainly learned some things since then, and a bunch of things have changed. Third-party bootloaders based on the Shim infrastructure are now reviewed via a community-managed process. We've had a productive coordinated response to the Boothole incident, which also taught us that the existing revocation strategy wasn't going to scale. In response, the community worked with Microsoft to develop a specification for making it easier to handle similar events in future. And it's also worth noting that after the initial Boothole disclosure was made to the GRUB maintainers, they proactively sought out other vulnerabilities in their codebase rather than simply patching what had been reported. The free software community has gone to great lengths to ensure third-party bootloaders are compatible with the security goals of UEFI Secure Boot.

So, to have Microsoft, the self-appointed steward of the UEFI Secure Boot ecosystem, turn round and say that a bunch of binaries that have been reviewed through processes developed in negotiation with Microsoft, implementing technologies designed to make management of revocation easier for Microsoft, and incorporating fixes for vulnerabilities discovered by the developers of those binaries who notified Microsoft of these issues despite having no obligation to do so, and which have then been signed by Microsoft are now considered by Microsoft to be insecure is, uh, kind of impolite? Especially when unreviewed vendor-signed binaries are still considered trustworthy, despite no external review being carried out at all.

If Microsoft had a set of criteria used to determine whether something is considered sufficiently trustworthy, we could determine which of these we fell short on and do something about that. From a technical perspective, Microsoft could set criteria that would allow a subset of third-party binaries that met additional review to be trusted without having to trust all third-party binaries[2]. But instead, this has been a decision made by the steward of this ecosystem without consulting major stakeholders.
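
Footnote [2] below compresses that trust-tier mechanism quite a lot, so here is a conceptual sketch of the decision it implies. The set-based model and all the names are illustrative only, not real UEFI variable semantics or firmware code: a binary is accepted if its chain reaches a certificate in db and no certificate in the chain appears in dbx.

# Conceptual model of footnote [2]: keep the existing intermediate in
# db, sign the extra-review tier with a new leaf cert, and revoke the
# old leaf via dbx unless the owner opts in to the old baseline tier.
db = {"MS UEFI intermediate", "new leaf (extra review)"}
dbx: set[str] = set()

TRUST_BASELINE_TIER = False  # hypothetical firmware config option
if not TRUST_BASELINE_TIER:
    dbx.add("old leaf (baseline review)")

def trusted(chain: list[str]) -> bool:
    return any(cert in db for cert in chain) and all(cert not in dbx for cert in chain)

print(trusted(["new leaf (extra review)", "MS UEFI intermediate"]))     # True
print(trusted(["old leaf (baseline review)", "MS UEFI intermediate"]))  # False unless opted in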

If there are legitimate security concerns, let's talk about them and come up with solutions that fix them without doing a significant amount of collateral damage. Don't complain about a vendor blocking your apps and then do the same thing yourself.

[Edit to add: there seems to be some misunderstanding about where this restriction is being imposed. I bought this laptop because I'm interested in investigating the Microsoft Pluton security processor, but Pluton is not involved at all here. The restriction is being imposed by the firmware running on the main CPU, not any sort of functionality implemented on Pluton]

[1] They'll also refuse to run any drivers that are stored in flash on Thunderbolt devices, which means eGPU setups may be more complicated, as will netbooting off Thunderbolt-attached NICs
[2] Use a different leaf cert to sign the new trust tier, add the old leaf cert to dbx unless a config option is set, leave the existing intermediate in db


3 July 2022

Russ Allbery: Review: The Haunting of Tram Car 015

Review: The Haunting of Tram Car 015, by P. Djèlí Clark
Publisher: Tordotcom
Copyright: February 2019
ASIN: B07H796G2Z
Format: Kindle
Pages: 65
The Haunting of Tram Car 015 is a novella and the second story in the Dead Djinn universe, after "A Dead Djinn in Cairo". While there are a few references to the previous story, it's not a direct sequel and has different main characters. Order of reading is not important.

Agents Hamed and Onsi of the Ministry of Alchemy, Enchantments, and Supernatural Entities have been called by the Superintendent of Tram Safety & Maintenance at Ramses Station because one of the tram cars is haunted. The aerial tram system of Cairo (technically a telpher system since the cars move independently) is one of the modern wonders of the 1912 city after al-Jahiz breached the boundaries between universes and allowed djinn to return to the world. The trams are elaborate magical clockwork machines created by djinn to travel their routes, but tram car 015 had to be taken out of service after a magical disturbance. Some supernatural creature has set up residence in its machinery and has been attacking passengers.

Like "A Dead Djinn in Cairo," this is a straightforward police procedural in an alternate history with magic and steampunk elements. There isn't much in the way of mystery, and little about the plot will come as a surprise. The agents show up, study the problem, do a bit of research, and then solve the problem with some help.

Unlike the previous story, though, it does a far better job at setting. My main complaint about Clark's first story in this universe was that it had a lot of infodumps and not much atmosphere. The Haunting of Tram Car 015 is more evocative, starting with the overheated, windowless office of the superintendent and its rattling fan and continuing with a glimpse of the city's aerial tram network spreading out from the dirigible mooring masts of Ramses Station. While the agents puzzle through identifying the unwanted tram occupant, they have to deal with bureaucratic funding fights and the expense of djinn specialists. In the background, the women of Cairo are agitating for the vote, and Islam, Coptic Christianity, and earlier Egyptian religions mingle warily.

The story layered on top of this background is adequate but not great. It's typical urban fantasy fare built on random bits of obscure magical trivia, and feels akin to the opening problem in a typical urban fantasy novel (albeit with a refreshingly non-European magical system). It also features an irritatingly cliched bit of costuming at the conclusion. But you wouldn't read this for the story; you read it to savor the world background, and I thought that was successful.

This is not a stand-out novella for me and I wouldn't have nominated it for the various awards it contended for, but it's also not my culture and by other online accounts it represents the culture well. The world background was interesting enough that I might have kept reading even if the follow-on novel had not won a Nebula award.

Followed by the novel A Master of Djinn, although the continuity link is not strong.

Rating: 7 out of 10

19 June 2022

Dirk Eddelbuettel: #38: Faster Feedback Systems

Engineers build systems. Good engineers always stress and focus on the efficiency of these systems. Two recent examples of engineering thinking follow. One was in a video / podcast interview with Martin Thompson (who is a noted high-performance code expert) I came across recently. The overall focus of the hour-long interview is on managing software complexity. Around minute twenty-two, the conversation turns to feedback loops and systems, and a strong preference for simple and fast systems for more immediate feedback. An important topic indeed.

The second example connects to this and permeates many tweets and other writings by Erik Bernhardsson. He had an earlier 2017 post on "Optimizing for iteration speed", as well as a 17 May 2022 tweet on minimizing feedback loop size, another 28 Mar 2022 tweet reply on shorter feedback loops, then a 14 Feb 2022 post on problems with slow feedback loops, as well as a 13 Jan 2022 post on a priority for tighter feedback loops, and lastly a 23 Jul 2021 post on fast feedback cycles. You get the idea: Erik really digs faster feedback loops. Nobody likes to wait: immediacy wins each time.

A few years ago, I had touched on this topic with two posts on how to make (R) package compilation (and hence installation) faster. One idea (which I still use whenever I must compile) was in post #11 on caching compilation. Another idea was in post #13: make it faster by not doing it, in this case via binary installation which skips the need for compilation (and which is what I aim for with, say, CI dependencies). Several subsequent posts can be found by scrolling down the r^4 blog section: we stressed the use of the amazing Rutter PPA c2d4u for CRAN binaries (often via Rocker containers), the promise of RSPM (post #28), and the awesomeness of bspm (post #29). And then in the more recent post #34 from last December we got back to a topic which ties all these things together: Dependencies. We quoted Mies van der Rohe: "Less is more." Especially when it comes to dependencies, as these elongate the feedback loop and thereby delay feedback.

Our most recent post #37 on r2u connects these dots. Access to a complete set of CRAN binaries with full-dependency resolution accelerates use and installation. This of course also covers testing and continuous integration. Why wait minutes to recompile the same packages over and over when you can install the full Tidyverse in 18 seconds or the brms package and all it needs in 13 seconds, as shown in the two gifs also on the r2u documentation site. You can even power up the example setup of the second gif via this gitpod link, giving you a full Ubuntu 22.04 session in your browser to try this: so go forth and install something from CRAN with ease!

The benefit of a system such as our r2u CRAN binaries is clear: faster feedback loops. This holds whether you work with few or many dependencies, tiny or tidy. Faster matters, and feedback can be had sooner. And with the title of this post we now get a rallying cry to advocate for faster feedback systems: FFS.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

6 June 2022

Reproducible Builds: Reproducible Builds in May 2022

Welcome to the May 2022 report from the Reproducible Builds project. In our reports we outline the most important things that we have been up to over the past month. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website.

Repfix paper Zhilei Ren, Shiwei Sun, Jifeng Xuan, Xiaochen Li, Zhide Zhou and He Jiang have published an academic paper titled Automated Patching for Unreproducible Builds:
[..] fixing unreproducible build issues poses a set of challenges [..], among which we consider the localization granularity and the historical knowledge utilization as the most significant ones. To tackle these challenges, we propose a novel approach [called] RepFix that combines tracing-based fine-grained localization with history-based patch generation mechanisms.
The paper (PDF, 3.5MB) uses the Debian mylvmbackup package as an example to show how RepFix can automatically generate patches to make software build reproducibly. As it happens, Reiner Herrmann submitted a patch for the mylvmbackup package which has remained unapplied by the Debian package maintainer for over seven years; this paper thus inadvertently underscores that achieving reproducible builds will require both technical and social solutions.

Python variables Johannes Schauer discovered a fascinating bug where simply naming your Python variable _m led to unreproducible .pyc files. In particular, the types module in Python 3.10 requires the following patch to make it reproducible:
--- a/Lib/types.py
+++ b/Lib/types.py
@@ -37,8 +37,8 @@ _ag = _ag()
 AsyncGeneratorType = type(_ag)
 
 class _C:
-    def _m(self): pass
-MethodType = type(_C()._m)
+    def _b(self): pass
+MethodType = type(_C()._b)
Simply renaming the dummy method from _m to _b was enough to work around the problem. Johannes' bug report first led to a number of improvements in diffoscope to aid in dissecting .pyc files, but upstream identified the cause as an issue surrounding interned strings; it is being tracked in CPython bug #78274.
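
For readers who want to test their own modules for this class of problem, here is a minimal sketch; the module path is hypothetical. It byte-compiles a file and prints a digest of the resulting .pyc, using hash-based invalidation so the header depends on the source contents rather than its timestamp. Running it in two supposedly identical environments and comparing the output is a crude version of what diffoscope does in much more detail.

import hashlib
import py_compile

# Compile with a hash-based .pyc header so that a differing source
# mtime alone cannot cause a spurious mismatch between two builds.
pyc_path = py_compile.compile(
    "mymodule.py",  # hypothetical module under test
    cfile="mymodule.pyc",
    doraise=True,
    invalidation_mode=py_compile.PycInvalidationMode.CHECKED_HASH,
)

with open(pyc_path, "rb") as f:
    print(hashlib.sha256(f.read()).hexdigest())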

New SPDX team to incorporate build metadata in Software Bill of Materials SPDX, the open standard for Software Bill of Materials (SBOM), is continuously developed by a number of teams and committees. However, SPDX has welcomed a new addition: a team dedicated to enhancing metadata about software builds, complementing reproducible builds in creating a more secure software supply chain. The SPDX Builds Team has been working throughout May to define the universal primitives shared by all build systems, including the who, what, where and how of builds:
  • Who: the identity of the person or organisation that controls the build infrastructure.
  • What: the inputs and outputs of a given build, combining metadata about the build's configuration with an SBOM describing source code and dependencies.
  • Where: the software packages making up the build system, from build orchestration tools such as Woodpecker CI and Tekton to language-specific tools.
  • How: the invocation of a build, linking metadata of a build to the identity of the person or automation tool that initiated it.
The SPDX Builds Team expects to have a usable data model by September, ready for inclusion in the SPDX 3.0 standard. The team welcomes new contributors, inviting those interested in joining to introduce themselves on the SPDX-Tech mailing list.

Talks at Debian Reunion Hamburg Some of the Reproducible Builds team (Holger Levsen, Mattia Rizzolo, Roland Clobus, Philip Rinn, etc.) met in real life at the Debian Reunion Hamburg (official homepage). There were several informal discussions amongst them, as well as two talks related to reproducible builds. First, Holger Levsen gave a talk on the status of Reproducible Builds for bullseye and bookworm and beyond (WebM, 210MB). Secondly, Roland Clobus gave a talk called Reproducible builds as applied to non-compiler output (WebM, 115MB).

Supply-chain security attacks This was another bumper month for supply-chain attacks in package repositories. Early in the month, Lance R. Vick noticed that the maintainer of the NPM foreach package had let their personal email domain expire, so he bought it and now "controls foreach on NPM and the 36,826 projects that depend on it". Shortly afterwards, Drew DeVault published a related blog post titled When will we learn? that offers a brief timeline of major incidents in this area and, not uncontroversially, suggests that "the correct way to ship packages is with your distribution's package manager".

Bootstrapping Bootstrapping is a process for building software tools progressively from a primitive compiler tool and source language up to a full Linux development environment with GCC, etc. This is important given the amount of trust we put in existing compiler binaries. This month, a bootstrappable mini-kernel was announced. Called boot2now, it comprises a series of compilers in the form of bootable machine images.

Google's new Assured Open Source Software service Google Cloud (the division responsible for the Google Compute Engine) announced a new Assured Open Source Software service. Noting the considerable 650% year-over-year increase in cyberattacks aimed at open source suppliers, the new service claims to enable enterprise and public sector users of open source software to "easily incorporate the same OSS packages that Google uses into their own developer workflows". The announcement goes on to enumerate that packages curated by the new service would be:
  • Regularly scanned, analyzed, and fuzz-tested for vulnerabilities.
  • Enriched with corresponding metadata incorporating Container/Artifact Analysis data.
  • Built with Cloud Build, including evidence of verifiable SLSA compliance.
  • Verifiably signed by Google.
  • Distributed from an Artifact Registry secured and protected by Google.
(Full announcement)

A retrospective on the Rust programming language Andrew "bunnie" Huang published a long blog post this month promising a critical retrospective on the Rust programming language. Amongst many acute observations about the evolution of the language's syntax (etc.), the post begins to critique the language's approach to supply chain security ("Rust Has A Limited View of Supply Chain Security") and reproducibility ("You Can't Reproduce Someone Else's Rust Build"):
There's some bugs open with the Rust maintainers to address reproducible builds, but with the number of issues they have to deal with in the language, I am not optimistic that this problem will be resolved anytime soon. Assuming the only driver of the unreproducibility is the inclusion of OS paths in the binary, one fix to this would be to re-configure our build system to run in some sort of a chroot environment or a virtual machine that fixes the paths in a way that almost anyone else could reproduce. I say almost anyone else because this fix would be OS-dependent, so we'd be able to get reproducible builds under, for example, Linux, but it would not help Windows users where chroot environments are not a thing.
(Full post)

Reproducible Builds IRC meeting The minutes and logs from our May 2022 IRC meeting have been published. In case you missed this one, our next IRC meeting will take place on Tuesday 28th June at 15:00 UTC on #reproducible-builds on the OFTC network.

A new tool to improve supply-chain security in Arch Linux kpcyrd published yet another interesting tool related to reproducibility. Writing about the tool in a recent blog post, kpcyrd mentions that although many PKGBUILDs provide authentication in the context of signed Git tags (i.e. the ability to verify that the Git tag was signed by one of the two trusted keys), they do not support pinning, i.e. "upstream could create a new signed Git tag with an identical name, and arbitrarily change the source code without the [maintainer] noticing". Conversely, other PKGBUILDs support pinning but not authentication. The new tool, auth-tarball-from-git, fixes both problems, as neatly outlined in kpcyrd's original blog post.
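
The distinction between the two properties is easy to state in code. Here is a conceptual sketch, not auth-tarball-from-git's actual interface: authentication asks who signed the tag, pinning asks whether the content is bit-for-bit what the maintainer reviewed, and an upstream holding a trusted key can defeat the former on its own but not both together.

import hashlib

TRUSTED_KEYS = {"upstream-key-A", "upstream-key-B"}  # hypothetical key ids
PINNED_SHA256 = "..."  # digest recorded by the package maintainer at review time

def authenticated(tag_signer: str) -> bool:
    # Authentication: the tag was signed by one of the trusted keys.
    # Upstream can still re-tag different source code under the same name.
    return tag_signer in TRUSTED_KEYS

def pinned(tarball: bytes) -> bool:
    # Pinning: the bytes match exactly what was reviewed earlier.
    return hashlib.sha256(tarball).hexdigest() == PINNED_SHA256

def accept(tag_signer: str, tarball: bytes) -> bool:
    return authenticated(tag_signer) and pinned(tarball)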

diffoscope diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 212, 213 and 214 to Debian unstable. Chris also made the following changes:
  • New features:
    • Add support for extracting vmlinuz Linux kernel images. [ ]
    • Support both python-argcomplete 1.x and 2.x. [ ]
    • Strip "sticky" etc. from "x.deb: sticky Debian binary package [ ]". [ ]
    • Integrate test coverage with GitLab s concept of artifacts. [ ][ ][ ]
  • Bug fixes:
    • Don t mask differences in .zip or .jar central directory extra fields. [ ]
    • Don t show a binary comparison of .zip or .jar files if we have observed at least one nested difference. [ ]
  • Codebase improvements:
    • Substantially update comment for our calls to zipinfo and zipinfo -v. [ ]
    • Use assert_diff in test_zip over calling get_data with a separate assert. [ ]
    • Don t call re.compile and then call .sub on the result; just call re.sub directly. [ ]
    • Clarify the comment around the difference between --usage and --help. [ ]
  • Testsuite improvements:
    • Test --help and --usage. [ ]
    • Test that --help includes the file formats. [ ]
Vagrant Cascadian added an external tool reference xb-tool for GNU Guix [ ] as well as updated the diffoscope package in GNU Guix itself [ ][ ][ ].

Distribution work In Debian, 41 reviews of Debian packages were added, 85 were updated and 13 were removed this month, adding to our knowledge about identified issues. A number of issue types have been updated, including adding a new nondeterministic_ordering_in_deprecated_items_collected_by_doxygen toolchain issue [ ] as well as ones for mono_mastersummary_xml_files_inherit_filesystem_ordering [ ], extended_attributes_in_jar_file_created_without_manifest [ ] and apxs_captures_build_path [ ]. Vagrant Cascadian performed a rough check of the reproducibility of core package sets in GNU Guix, and in openSUSE, Bernhard M. Wiedemann posted his usual monthly reproducible builds status report.

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Reproducible builds website Chris Lamb updated the main Reproducible Builds website and documentation in a number of small ways, but also prepared and published an interview with Jan Nieuwenhuizen about Bootstrappable Builds, GNU Mes and GNU Guix. [ ][ ][ ][ ] In addition, Tim Jones added a link to the Talos Linux project [ ] and billchenchina fixed a dead link [ ].

Testing framework The Reproducible Builds project runs a significant testing framework at tests.reproducible-builds.org, to check packages and other artifacts for reproducibility. This month, the following changes were made:
  • Holger Levsen:
    • Add support for detecting running kernels that require attention. [ ]
    • Temporarily configure a host to support performing Debian builds for packages that lack .buildinfo files. [ ]
    • Update generated webpages to clarify wishes for feedback. [ ]
    • Update copyright years on various scripts. [ ]
  • Mattia Rizzolo:
    • Provide a facility so that Debian Live image generation can copy a file remotely. [ ][ ][ ][ ]
  • Roland Clobus:
    • Add initial support for testing generated images with OpenQA. [ ]
And finally, as usual, node maintenance was also performed by Holger Levsen [ ][ ].

Misc news On our mailing list this month:

Contact If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

22 May 2022

Ulrike Uhlig: How do kids conceive the internet? - part 3

I received some feedback on the first part of interviews about the internet with children that I'd like to share publicly here. Thank you! Your thoughts and experiences are important to me! In the first interview round there was this French girl.
Asked what she would change if she could, the 9 year old girl advocated for a global usage limit of the internet in order to protect the human brain. Also, she said, her parents spend way too much time on their phones and people should rather spend more time with their children.
To this bit, one person reacted saying that they first laughed when reading her proposal, but then felt extremely touched by it. Another person reacted to the same bit of text:
That's just brilliant. We spend so much time worrying about how the internet will affect children while overlooking how it has already affected us as parents. It actively harms our relationship with our children (keeping us distracted from their amazing life) and sets a bad example for them. Too often, when we worry about children, we should look at our own behavior first. Until about that age (9-10+) at least, they are such a direct reflection of us that it's frightening.
Yet another person reacted to the fact that many of the interviewees in the first round seemed to believe that the internet is immaterial, located somewhere in the air, while being at the same time omnipresent:
It reminds me of one time about a dozen years ago, when i was still working closely with one of the city high schools, where i'd just had a terrible series of days, dealing with hardware failure, crappy service followthrough by the school's ISP, and overheating in the server closet, and had basically stayed overnight at the school and just managed to get things back to mostly-functional before kids and teachers started showing up again. That afternoon, i'd been asked by the teacher of a dystopian fiction class to join them for a discussion of Feed, which they'd just finished reading. i had read it the week before, and came to class prepared for their questions. (the book is about a near-future where kids have cybernetic implants and their society is basically on a runaway communications overload; not a bad Y[oung]A[dult] novel, really!) The kids all knew me from around the school, but the teacher introduced my appearance in class as one of the most Internet-connected people, and they wanted to ask me about whether i really thought the internet would do this kind of thing to our culture, which i think was the frame that the teacher had prepped them with. I asked them whether they thought the book was really about the Internet, or whether it was about mobile phones. Totally threw off the teacher's lesson plans, i think, but we had a good discussion. At one point, one of the kids asked me if there was some kind of crazy disaster and all the humans died out, would the internet just keep running? what would happen on it if we were all gone? all of my labor, even that grueling week, was invisible to him! The internet was an immaterial thing, or if not immaterial, a force of nature, a thing that you accounted for the way you accounted for the weather, or traffic jams. It didn't occur to him, even having just read a book that asked questions about what hyperconnectivity does to a culture (including grappling with issues of disparate access, effective discrimination based on who has the latest hardware, etc), it didn't occur to him that this shit all works to the extent that it does because people make it go. I felt lost trying to explain it to him, because where i wanted to get to with the class discussion was about how we might decide collectively to make it go somewhere else: that our contributions to it, and our labor to perpetuate it (or not), might actually help shape the future that the network helps us slide into. but he didn't even see that human decisions or labor played a role in it at all, let alone a potentially directive role. We were really starting at square zero, which wasn't his fault. Or the fault of his classmates, for that matter. but maybe a little bit of fault on the teacher, who i thought should have been emphasizing this more. but even the teacher clearly thought of the internet as a thing being done to us, not as something we might actually drive one way or another. And she's not even wrong: most people don't have much control, just like most people can't control the weather, even as our weather changes based on aggregate human activity.
I was quite impressed by seeing the internet perceived as a force of nature, so we continued this discussion a bit:
that whole story happened before we started talking about "the cloud", but the cloud really reinforces this idea, i think. not that anyone actually thinks that the cloud is a literal cloud, but language shapes minds in subtle ways.
(Bold emphasis in the texts is mine.) Thanks :) I'm happy and touched that these interviews prompted your wonderful reactions, and I hope that there'll be more to come on this topic. I'm working on it!

10 May 2022

Russell Coker: Elon and Free Speech

Elon Musk has made the news for spending billions to buy a share of Twitter for the alleged purpose of providing free speech. The problem with this claim is that having any company controlling a large portion of the world's communication is inherently bad for free speech. The same applies for Facebook, but that's not a hot news item at the moment. If Elon wanted to provide free speech he would want to have decentralised messaging systems so that someone who breaks rules on one platform could find another with different rules. Among other things free speech ideally permits people to debate issues with residents of another country on issues related to different laws. If advocates for the Russian government get kicked off Twitter as part of the American sanctions against Russia then American citizens can't debate the issue with Russian citizens via Twitter.

Mastodon is one example of a federated competitor to Twitter [1]. With a federated messaging system each host could make independent decisions about interpretation of sanctions. Someone who used a Mastodon instance based in the US could get a second account in another country if they wanted to communicate with people in countries that are sanctioned by the US. The problem with Mastodon at the moment is lack of use. It's got a good set of features and support for different platforms; there are apps for Android and iPhone as well as lots of other software using the API. But if the people you want to communicate with aren't on it then it's less useful.

Elon could solve that problem by creating a Tesla Mastodon server and giving a free account to everyone who buys a new Tesla, which is the sort of thing that a lot of Tesla buyers would like. It's quite likely that other companies selling prestige products would follow that example. Everyone has seen evidence of people sharing photos on social media with someone else's expensive car; a Mastodon account on ferrari.com or mercedes.com would be proof of buying the cars in question. The number of people who buy expensive cars new is a very small portion of the world population, but it's a group of people who are more influential than average, and others would join Mastodon servers to follow them.

The next thing that Elon could do to kill Twitter would be to have all his companies (which have something more than a dozen verified Twitter accounts) use Mastodon accounts for their primary PR releases and then send the same content to Twitter with a 48 hour delay. That would force journalists and people who want to discuss those companies on social media to follow the Mastodon accounts. Again this wouldn't be a significant number of people, but they would be influential people. Getting journalists to use a communications system increases its importance.

The question is whether Elon is lacking the vision necessary to plan a Mastodon deployment or whether he just wants to allow horrible people to run wild on Twitter. The Verge has an interesting article from 2019 about Gab using Mastodon [2]. The fact that over the last 2.5 years I didn't even hear of Gab using Mastodon suggests that the fears of some people significantly exceeded the problem. I'm sure that some Gab users managed to harass some Mastodon users, but generally they were apparently banned quickly. As an aside, the Mastodon server I use doesn't appear to ban Gab; a search for Gab on it gave me a user posting about being "pureblood" at the top of the list.

Gab claims to have 4 million accounts and has an estimated 100,000 active users. If 5.5% of Tesla owners became active users on a hypothetical Tesla server, that would be the largest Mastodon server. Elon could demonstrate his commitment to free speech by refusing to ban Gab in any way. The Wikipedia page about Gab [3] has a long list of horrible people and activities associated with it. Is that the free speech to associate with Tesla? Polestar makes some nice electric cars that appear quite luxurious [4] and doesn't get negative PR from the behaviour of its owner; that's something Elon might want to consider.

Is this really about bragging rights? Buying a controlling interest in a company that has a partial monopoly on Internet communication is something to boast about. Could users of commercial social media be considered serfs who serve their billionaire overlord?

9 May 2022

Robert McQueen: Evolving a strategy for 2022 and beyond

As a board, we have been working on several initiatives to make the Foundation a better asset for the GNOME Project. We're working on a number of threads in parallel, so I wanted to explain the big picture a bit more to try and connect together things like the new ED search and the bylaw changes. We're all here to see free and open source software succeed and thrive, so that people can be truly empowered with agency over their technology, rather than being passive consumers. We want to bring GNOME to as many people as possible so that they have computing devices that they can inspect, trust, share and learn from.

In previous years we've tried to boost the relevance of GNOME (or technologies such as GTK) or solicit donations from businesses and individuals with existing engagement in FOSS ideology and technology. The problem with this approach is that we're mostly addressing people and organisations who are already supporting or contributing FOSS in some way. To truly scale our impact, we need to look to the outside world, build better awareness of GNOME outside of our current user base, and find opportunities to secure funding to invest back into the GNOME project.

The Foundation supports the GNOME project with infrastructure, arranging conferences, sponsoring hackfests and travel, design work, legal support, managing sponsorships, an advisory board, and being the fiscal sponsor of GNOME, GTK and Flathub, and we will keep doing all of these things. What we're talking about here are additional ways for the Foundation to support the GNOME project: we want to go beyond these activities, and invest into GNOME to grow its adoption amongst people who need it. This has a cost, and that means in parallel with these initiatives, we need to find partners to fund this work. Neil has previously talked about themes such as education, advocacy and privacy, but we've not previously translated these into clear specific initiatives that we would establish in addition to the Foundation's existing work. This is all a work in progress and we welcome any feedback from the community about refining these ideas, but here are the current strategic initiatives the board is working on. We've been thinking about growing our community by encouraging and retaining diverse contributors, and addressing evolving computing needs which aren't currently well served on the desktop.

Initiative 1: Welcoming newcomers. The community is already spending a lot of time welcoming newcomers and teaching them the best practices. Those activities are as time consuming as they are important, but currently a handful of individuals are running initiatives such as GSoC, Outreachy and outreach to universities. These activities help bring diverse individuals and perspectives into the community, and help them develop skills and experience of collaborating to create Open Source projects. We want to make those efforts more sustainable by finding sponsors for these activities. With funding, we can hire people to dedicate their time to operating these programs, including paid mentors and creating materials to support newcomers in future, such as developer documentation, examples and tutorials. This is the initiative that needs to be refined the most before we can turn it into something real.

Initiative 2: Diverse and sustainable Linux app ecosystem.
I spoke at the Linux App Summit about the work that GNOME and Endless have been supporting in Flathub, but this is an example of something which has a great overlap between commercial, technical and mission-based advantages. The key goal here is to improve the financial sustainability of participating in our community, which in turn has an impact on the diversity of who we can expect to afford to enter and remain in our community. We believe the existence of this is critically important for individual developers and contributors to unlock earning potential from our ecosystem, through donations or app sales. In turn, a healthy app ecosystem also improves the usefulness of the Linux desktop as a whole for potential users. We believe that we can build a case for commercial vendors in the space to join an advisory board alongside GNOME, KDE, etc. to input into the governance and contribute to the costs of growing Flathub.

Initiative 3: Local-first applications for the GNOME desktop. This is what Thib has been starting to discuss on Discourse, in this thread. There are many different threats to free access to computing and information in today's world. The GNOME desktop and apps need to give users convenient and reliable access to technology which works similarly to the tools they already use every day, but keeps them and their data safe from surveillance, censorship, filtering or just being completely cut off from the Internet. We believe that we can seek both philanthropic and grant funding for this work. It will make GNOME a more appealing and comprehensive offering for the many people who want to protect their privacy.

The idea is that these initiatives all sit on the boundary between the GNOME community and the outside world. If the Foundation can grow and deliver these kinds of projects, we are reaching new people, new contributors and new funding. These contributions and investments back into GNOME represent a true win-win for the newcomers and our existing community. (Originally posted to GNOME Discourse, please feel free to join the discussion there.)

17 April 2022

Russ Allbery: First 2022 haul post

I haven't posted one of these in a while. Here's the (mostly new) stuff that's come out that caught my interest in the past few months. Some of these I've already read and reviewed.

Tom Burgis Kleptopia (non-fiction)
Angela Chen Ace (non-fiction)
P. Djèlí Clark A Dead Djinn in Cairo (sff)
P. Djèlí Clark The Haunting of Tram Car 015 (sff)
P. Djèlí Clark A Master of Djinn (sff)
Brittney C. Cooper Eloquent Rage (non-fiction)
Madeleine Dore I Didn't Do the Thing Today (non-fiction)
Saad Z. Hossain The Gurkha and the Lord of Tuesday (sff)
George F. Kennan Memoirs, 1925-1950 (non-fiction)
Kiese Laymon How to Slowly Kill Yourself and Others in America (non-fiction)
Adam Minter Secondhand (non-fiction)
Amanda Oliver Overdue (non-fiction)
Laurie Penny Sexual Revolution (non-fiction)
Scott A. Snook Friendly Fire (non-fiction)
Adrian Tchaikovsky Elder Race (sff)
Adrian Tchaikovsky Shards of Earth (sff)
Tor.com (ed.) Some of the Best of Tor.com: 2021 (sff anthology)
Charlie Warzel & Anne Helen Petersen Out of Office (non-fiction)
Robert Wears Still Not Safe (non-fiction)
Max Weber The Vocation Lectures (non-fiction)

Lots and lots of non-fiction in this mix. Maybe a tiny bit better than normal at not buying tons of books that I don't have time to read, although my reading (and particularly my reviewing) rate has been a bit slow lately.

1 April 2022

Antoine Beaupré: Salvaged my first Debian package

I finally salvaged my first Debian package, python-invoke. As part of ITS 964718, I moved the package from the Openstack Team to the Python team. The Python team might not be super happy with it, because it's breaking some of its rules, but at least someone (i.e. me) is actively working on (and using) the package.

Wait what People not familiar with Debian will not understand anything in that first paragraph, so let me expand. Know-it-all Debian developers (you know who you are) can skip to the next section.

Traditionally, the Debian project (my Linux-based operating system of choice) has prided itself on the self-managed, anarchistic organisation of its packaging. Each package maintainer is the lord of their little kingdom. Some maintainers like to accumulate lots of kingdoms to rule over. (Yes, it really doesn't sound like anarchism when you put it like that. Yes, it's complicated: there's a constitution and voting involved. And yes, we're old.)

Therefore, it's really hard to make package maintainers do something they don't want. Typically, when things go south, someone makes a complaint to the Debian Technical Committee (CTTE) which is established by the Debian constitution to resolve such conflicts. The committee is appointed by the Debian Project Leader, elected each year (and there's an election coming up if you haven't heard). Typically, the CTTE will then vote and formulate a decision. But here's the trick: maintainers are still free to do whatever they want after that, in a sense. It's not like the CTTE can just break down doors and force maintainers to type code. (I won't go into the details of the why of that, but it involves legal issues and, I think, something about the Turing halting problem. Or something like that.)

Anyways. The point is all that is super heavy and no one wants to go there... (Know-it-all Debian developers, I know you are still reading this anyways and disagree with that statement, but please, please, make it true.) ... but sometimes, packages just get lost. Maintainers get distracted, or busy with something else. It's not that they want to abandon their packages. They love their little fiefdoms. It's just there was a famine or a war or something and everyone died, and they have better things to do than put up fences or whatever.

So clever people in Debian found a better way of handling such problems than waging war in the poor old CTTE's backyard. It's called the Package Salvaging process. Through that process, a maintainer can propose to take over an existing package from another maintainer, if a certain set of conditions are met and a specific process is followed. Normally, taking over another maintainer's package is basically a war declaration, rarely seen in the history of Debian (yes, I do think it happened!), as rowdy as ours is. But through this process, it seems we have found a fair way of going forward. The process is basically like this:
  1. file a bug proposing the change
  2. wait three weeks
  3. upload a package making the change, with another week delay
  4. you now have one more package to worry about
Easy right? It actually is! Process! It's magic! It will cure your babies and resurrect your cat!

So how did that go? It went well! The old maintainer was actually fine with the change because his team wasn't using the package anymore anyways. He asked to be kept as an uploader, which I was glad to oblige. (He replied a few months after the deadline, but I wasn't in a rush anyways, so that doesn't matter. It was polite for him to answer, even if, technically, I was already allowed to take it over.)

What happened next is less shiny for me though. I totally forgot about the ITS, even after the maintainer reminded me of its existence. See, the thing is the ITS doesn't show up on my dashboard at all. So I totally forgot about it (yes, twice). In fact, the only reason I remembered it was that I got into the process of formulating another ITS (1008753, trocla) and I was trying to figure out how to write the email. Then I remembered: "hey wait, I think I did this before!" followed by "oops, yes, I totally did this before and forgot for 9 months". So, not great.

Also, the package is still not in a perfect shape. I was able to upload the upstream version that was pending (1.5.0) to clear out the ITS, basically. And then there's already two new upstream releases to upload, so I pushed 1.7.0 to experimental as well, for good measure. Unfortunately, I still can't enable tests because everything is on fire, as usual. But at least my kingdom is growing.

Appendix Just in case someone didn't notice the hyperbole, I'm not a monarchist promoting feudalism as a practice to manage a community. I do not intend to really "grow my kingdom" and I think the culture around "property" of "packages" is kind of absurd in Debian. I kind of wish it would go away. (Update: It has also been pointed out that I might have made Debian seem more confrontational than it actually is. And it's kind of true: most work and interactions in Debian actually go fine, it's only a minority of issues that degenerate into conflicts. It's just that they tend to take up a lot of space in the community, and I find that particularly draining. And I think our "package ownership" culture is part of at least some of those problems.)

Team maintenance, the LowNMU process, and low threshold adoption processes are all steps in the good direction, but they are all opt-in. At least the package salvaging process is somewhat more ... uh ... coercive? Or at least it allows the community to step in and do the right thing, in a sense. We'll see what happens with the coming wars around the recent tech committee decision, which are bound to touch on that topic. (Hint: our next drama is called "usrmerge".) Hopefully, LWN will write a brilliant article to sum it up for us so that I don't have to go through the inevitable debian-devel flamewar to figure it out. I already wreaked havoc on the #debian-devel IRC channel asking newbie questions so I won't stir that mud any further for now. (Update: LWN, of course, did write an article about usrmerge in Debian. I will read it soon and can then let you know if it's brilliant, but they are typically spot on.)

30 March 2022

Ulrike Uhlig: How do kids conceive the internet?

I wanted to understand how kids between 10 and 18 conceive the internet. Surely, we have seen a generation that we call "digital natives" grow up with the internet. Now, there is a younger generation who grows up with pervasive technology, such as smartphones, smart watches, virtual assistants and so on. And only a few of them have parents who work in IT or engineering

Pervasive technology contributes to the idea that the internet is immaterial With their search engine website design, Google has put in place an extremely simple and straightforward user interface. Since then, designers and psychologists have worked on making user interfaces more and more intuitive to use. The buzzwords are "usability" and "user experience design". Besides this optimization of visual interfaces, haptic interfaces have evolved as well, specifically on smartphones and tablets where hand gestures have replaced more clumsy external haptic interfaces such as a mouse. And beyond interfaces, the devices themselves have become smaller and slicker. While in our generation many people have experienced opening a computer tower or a laptop to replace parts, with the side effect of seeing the parts the device is physically composed of, the new generation of end user devices makes this close to impossible, essentially transforming these devices into black boxes, and further contributing to the idea that the internet they are used to access would be something entirely intangible.

What do kids in 2022 really know about the internet? So, what do kids of that generation really know about the internet, beyond purely using services they do not control? In order to find out, I decided to interview children between 10 and 18. I conducted 5 interviews with kids aged 9, 10, 12, 15 and 17, two boys and three girls. Two live in rural Germany, one in a German urban area, and two live in the French capital. I wrote the questions in a way to stimulate the interviewees to tell me a story each time. I also told them that the interview is not a test and that there are no wrong answers. Except for the 9 year old, all interviewees possessed both their own smartphone and their own laptop. All of them used the internet mostly for chatting, entertainment (video and music streaming, online games), social media (TikTok, Instagram, Youtube), and instant messaging. Let me introduce you to their concepts of the internet. That was my first story telling question to them:

If aliens landed on Earth and asked you what the internet is, what would you explain to them? The majority of respondents agreed in their replies that the internet is intangible while still being a place where one can do "anything and everything". Before I tell you more about their detailed answers to the above question, let me show you how they visualize their internet.

If you had to make a drawing to explain to a person what the internet is, how would this drawing look? Each interviewee had some minutes to come up with a drawing. As you will see, that drawing corresponds to what the kids would want an alien to know about the internet and how they are using the internet themselves.

Movies, series, videos A child's drawing. In the middle, there is a screen, on the screen a movie is running. Around the screen there are many people, at least two dozen. The words 'film', 'series', 'network', 'video' are written and arrows point from these words to the screen. There's also a play icon. The youngest respondent, a 9 year old girl, drew a screen with lots of people around it and the words "film", "series", "network", "video", as well as a play icon. She said that she mostly uses the internet to watch movies. She was the only one who used a shared tablet and smartphone that belonged to her family, not to herself. And she would explain the net like this to an alien:
"Internet is a er one cannot touch it it s an, er [I propose the word idea ], yes it s an idea. Many people use it not necessarily to watch things, but also to read things or do other stuff."

User interface elements There is a magnifying glass icon, a play icon and speech bubbles drawn with a pencil. A 10 year old boy represented the internet by recalling user interface elements he sees every day in his drawing: a magnifying glass (search engine), a play icon (video streaming), speech bubbles (instant messaging). He would explain the internet like this to an alien:
"You can use the internet to learn things or get information, listen to music, watch movies, and chat with friends. You can do nearly anything with it."

Another planet Pencil drawing that shows a planet with continents. The continents are named: H&M, Ebay, Google, Wikipedia, Facebook. A 12 year old girl imagines the internet as a second, intangible planet where Google, Wikipedia, Facebook, Ebay, or H&M are continents that one enters.
"And on [the] Ebay [continent] there s a country for clothes, and ,trousers , for example, would be a federal state in that country."
Something that was unique about this interview was that she told me she had an email address but she never writes emails. She only has an email account to receive confirmation emails, for example when doing online shopping, or when registering to a service and needing to confirm one's address. This is interesting because it's an anti-spam measure that might become outdated with a generation that uses email less or not at all.

Home network Kid's drawing: there are three computer towers and next to each there are two people. The first couple is sad, the second couple is smiling, the last one is surprised. Each computer is connected to a router, two of them by cable, one by wifi. A 15 year old boy knew that his family's devices are connected to a home router (Freebox is a router from the French ISP Free) but lacked an idea of the rest of the internet's functioning. When I asked him about what would be behind the router, on the other side, he said what's behind is like a black hole to him. However, he was the only interviewee who did actually draw cables, wifi waves, a router, and the local network. His drawing is even extremely precise; it just lacks the cable connecting the router to the rest of the internet.

Satellite internet This is another very simple drawing: On the top left, there's planet Earth and there are lines indicating that Earth is a sphere. Around Earth there are two big satellites reaching most of Earth. On the left, below, there are three icons representing social media services on the internet: Snapchat, Instagram, TikTok. On the right, there are simplified drawings of possibilities which the internet offers: person to person connection, email (represented by envelopes), calls (represented by an old-style telephone set). A 17 year old girl would explain the internet to an alien as follows:
"The internet goes around the entire globe. One is networked with everyone else on Earth. One can find everything. But one cannot touch the internet. It s like a parallel world. With a device one can look into the internet. With search engines, one can find anything in the world, one can phone around the world, and write messages. [The internet] is a gigantic thing."
This interviewee was the only one to state that the internet is huge. And while she also is the only one who drew the internet as actually having some kind of physical extension beyond her own home, she seems to believe that internet connectivity is based on satellite technology and wireless communication.

Imagine that a wise and friendly dragon could teach you one thing about the internet that you've always wanted to know. What would you ask the dragon to teach you about? A 10 year old boy said he'd like to know "how big are the servers behind all of this". That's the only interview in which the word server came up. A 12 year old girl said: "I would ask how to earn money with the internet. I always wanted to know how this works, and where the money comes from." I love the last part of her question! The 15 year old boy for whom everything behind the home router is beyond his event horizon would ask: "How is it possible to be connected like we are? How does the internet work scientifically?" A 17 year old girl said she'd like to learn how the darknet works: what hidden things are there? Is it possible to get spied on via the internet? Would it be technically possible to influence devices in a way that one can listen in on secret or remotely controlled devices? Lastly, I wanted to learn about what they find annoying, or problematic, about the internet.

Imagine you could make the internet better for everyone. What would you do first? Asked what she would change if she could, the 9 year old girl advocated for a global usage limit of the internet in order to protect the human brain. Also, she said, her parents spend way too much time on their phones and people should rather spend more time with their children. Three of the interviewees agreed that they see way too many advertisements, and two of them would like ads to disappear entirely from the web. The other one said that she doesn't want to see ads, but that ads are fine if she can at least click them away. The 15 year old boy had different ambitions. He told me he would change:
"the age of access to the internet. More and more younger people access the internet ; especially with TikTok there is a recommendation algorithm that can influcence young people a lot. And influencing young people should be avoided but the internet does it too much. And that can be negative. If you don t yet have a critical spirit, and you watch certain videos you cannot yet moderate your stance. It can influence you a lot. There are so many things that have become indispensable and that happen on the internet and we have become dependent. What happens if one day it doesn t work anymore? If we connect more and more things to the net, that s not a good thing."

The internet - Oh, that's what you mean! On a sidenote, my first interview attempt was with an 8 year old girl from my family. I asked her if she uses the internet and she said no, so I abandoned interviewing her. Some days later, while talking to her, she proposed to look something up on Google, using her smartphone. I said: "Oh, so you are using the internet!" She replied: "Oh, that's what you're talking about?" I think she knows the word Google, and she knows that she can search for information with this Google thing. But it appeared that she doesn't know that the Google search engine is located somewhere else on the internet, not on her smartphone. I concluded that for her, using the services on the smartphone is as natural as switching on a light in the house: we also don't think about where the electricity comes from when we do that.

What can we learn from these few interviews? Unsurprisingly, social media, streaming, entertainment, and instant messaging are the main activities kids undertake on the internet. They are completely at the mercy of advertisements in apps and on websites, not knowing how to get rid of them. They interact on a daily basis with algorithms that are unregulated and known to perpetuate discrimination and create filter bubbles, without necessarily being aware of it. The kids I interviewed act as mere service users and seem to be mostly confined to specific apps or websites. All of them perceived the internet as something intangible. Only the older interviewees sensed that there must be some kind of physical extension to it: the 17 year old girl by drawing a network of satellites around the globe, the 15 year old boy by drawing the local network in his home. To be continued

27 January 2022

Russ Allbery: Review: I Didn't Do the Thing Today

Review: I Didn't Do the Thing Today, by Madeleine Dore
Publisher: Avery
Copyright: 2022
ISBN: 0-593-41914-6
Format: Kindle
Pages: 291
At least from my narrow view of it, the world of productivity self-help literature is a fascinating place right now. The pandemic overturned normal work patterns and exacerbated schedule inequality, creating vastly different experiences for the people whose work continued to be in-person and the people whose work could become mostly or entirely remote. Self-help literature, which is primarily aimed at the more affluent white-collar class, primarily tracked the latter disruption: newly-remote work, endless Zoom meetings, the impossibility of child care, the breakdown of boundaries between work and home, and the dawning realization that much of the mechanics of day-to-day office work are neither productive nor defensible.

My primary exposure these days to the more traditional self-help productivity literature is via Cal Newport. The stereotype of the productivity self-help book is a collection of life hacks and list-making techniques that will help you become a more efficient capitalist cog, but Newport has been moving away from that dead end for as long as I've been reading him, and his recent work focuses more on structural issues with the organization of knowledge work. He also shares with the newer productivity writers a willingness to tell people to use the free time they recover via improved efficiency on some life goal other than improved job productivity. But he's still prickly and defensive about the importance of personal productivity and accomplishing things. He gives lip service on his podcast to the value of the critique of productivity, but then usually reverts to characterizing anti-productivity arguments as saying that productivity is a capitalist invention to control workers. (Someone has doubtless said this on Twitter, but I've never seen a serious critique of productivity make this simplistic of an argument.)

On the anti-productivity side, as it's commonly called, I've seen a lot of new writing in the past couple of years that tries to break the connection between productivity and human worth so endemic to US society. This is not a new analysis; disabled writers have been making this point for decades, it's present in both Keynes and in Galbraith's The Affluent Society, and Kathi Weeks's The Problem with Work traces some of its history in Marxist thought. But what does feel new to me is its widespread mainstream appearance in newspaper articles, viral blog posts, and books such as Jenny Odell's How to Do Nothing and Devon Price's Laziness Does Not Exist. The pushback against defining life around productivity is having a moment.

Entering this discussion is Madeleine Dore's I Didn't Do the Thing Today. Dore is the author of the Extraordinary Routines blog and host of the Routines and Ruts podcast. Extraordinary Routines began as a survey of how various people organize their daily lives. I Didn't Do the Thing Today is, according to the preface, a summary of the thoughts Dore has had about her own life and routines as a result of those interviews. As you might guess from the subtitle (Letting Go of Productivity Guilt), Dore's book is superficially on the anti-productivity side. Its chapters are organized around gentle critiques of productivity concepts, with titles like "The Hopeless Search for the Ideal Routine," "The Myth of Balance," or "The Harsh Rules of Discipline." But I think anti-productivity is a poor name for this critique; its writers are not opposed to being productive, only to its position as an all-consuming focus and guilt-generating measure of personal worth.
Dore structures most chapters by naming an aspect, goal, or concern of a life defined by productivity, such as wasted time, ambition, busyness, distraction, comparison, or indecision. Each chapter sketches the impact of that idea and then attempts to gently dismantle the grip that it may have on the reader's life. All of these discussions are nuanced; it's rare for Dore to say that one of these aspects has no value, and she anticipates numerous objections. But her overarching goal is to help the reader be more comfortable with imperfection, more willing to live in the moment, and less frustrated with the limitations of life and the human brain. If striving for productivity is like lifting weights, Dore's diagnosis is that we've tried too hard for too long, and have overworked that muscle until it is cramping. This book is a gentle massage to induce the muscle to relax and let go.

Whether this will work is, as with all self-help books, individual. I found it was best read in small quantities, perhaps a chapter per day, since it otherwise began feeling too much the same. I'm also not the ideal audience; Dore is a creative freelancer and primarily interviewed other creative people, which I think has a different sort of productivity rhythm than the work that I do. She's also not a planner to the degree that I am; more on that below. And yet, I found this book worked on me anyway. I can't say that I was captivated all the way through, but I found myself mentally relaxing while I was reading it, and I may re-read some chapters from time to time.

How does this relate to the genre of productivity self-help? With less conflict than I think productivity writers believe, although there seems to be one foundational difference of perspective. Dore is not opposed to accomplishing things, or even to systems that help people accomplish things. She is more attuned than the typical productivity writer to the guilt and frustration that can accumulate when one has a day in which one does not do the thing, but her goal is not to talk you out of attempting things. It is, instead, to convince you to hold those attempts and goals more lightly, to allow them to move and shift and change, and to not treat a failure to do the thing today as a reason for guilt. This is wholly compatible with standard productivity advice. It's adding nuance at one level of abstraction higher: how tightly to cling to productivity goals, and what to do when they don't work out. Cramping muscles are not strong muscles capable of lifting heavy things. If one can massage out the cramp, one's productivity by even the strict economic definition may improve.

Where I do see a conflict is that most productivity writers are planners, and Dore is not. This is, I think, a significant blind spot in productivity self-help writing. Cal Newport, for example, advocates time-block planning, where every hour of the working day has a job. David Allen advocates a complex set of comprehensive lists and well-defined next actions. Mark Forster builds a flurry of small systems for working through lists. The standard in productivity writing is to add structure to your day and cultivate the self-discipline required to stick to that structure. For many people, including me, this largely works. I'm mostly a planner, and when my life gets chaotic, adding more structure and focusing on that structure helps me. But the productivity writers I've read are quite insistent that their style of structure will work for everyone, and on that point I am dubious.
Newport, for example, advocates time-block planning for everyone without exception, insisting that it is the best way to structure a day. Dore, in contrast, describes spending years trying to perfect a routine before realizing that elastic possibilities work better for her than routines. For those who are more like Dore than Newport, I Didn't Do the Thing Today is more likely to be helpful than Newport's instructions. This doesn't make Newport's ideas wrong; it simply makes them not universal, something that the productivity self-help genre seems to have trouble acknowledging.

Even for readers like myself who prefer structure, I Didn't Do the Thing Today is a valuable corrective to the emphasis on ever-better systems. For those who never got along with too much structure, I think it may strike a chord.

The standard self-help caveat still applies: Dore has the most to say to people who are in a similar social class and line of work as her. I'm not sure this book will be of much help to someone who has to juggle two jobs with shift work and child care, where the problem is more sharp external constraints than internalized productivity guilt. But for its target audience, I think it's a valuable, calming message. Dore doesn't have a recipe to sort out your life, but may help you feel better about the merits of life unsorted.

Rating: 7 out of 10

7 January 2022

Ayoyimika Ajibade: Everyone Struggles

Starting anything new always carries an element of uncertainty, doubt, fear, and struggle to forge ahead. This has been my current situation as an Outreachy intern working on the nodejs16 and webpack5 transition, which is about updating all packages that depend on nodejs14 and webpack4 to work well with the updated versions of nodejs16 and webpack5 in the Debian operating system. Juicy, right!

As a software developer, struggling to grasp both basic and advanced knowledge of a concept can seem daunting. Much like learning anything new, you can be overwhelmed when you know there is a whole lot of other new concepts, tools, processes, and languages you have to learn that are linked to what you are currently studying, while you are still struggling to grasp its fundamental idea. Yet imbued in any struggle to solve a problem is where innovation and invention lie, and our learning improves as we dive into fact-finding, form a hypothesis after a series of tests, and ultimately proffer a solution.

One of my struggles as I intern with Debian has been my lack of skill in the shell scripting language, as that is one of the core languages to understand in order to navigate your way around maintaining packages for Debian. Funny enough, I also struggle with having just an intermediate knowledge of the JavaScript programming language, since arguably a basic knowledge of JavaScript is necessary for building and testing JavaScript packages in Debian, and my core language is Python. The good thing is that the more I keep at it, the faster the struggles reduce.

Now to the fun part! Having a community of developers who have been through the struggling phase is divine, as they make your learning experience much easier. My mentors and other community members have made learning to package modules for Debian much easier, as all hands are on deck to help out with our challenges. I remember how wonderful it felt when my first contribution got merged; I became more encouraged to update more packages. All of this helped me a lot in the contribution stage for Debian, as I became better acquainted with how the system worked. I'm super grateful to my mentors and co-intern, as they are always there to assist me.

How I Navigate my way through my struggles I guess the first thing about any challenge is to be aware of it and admit the limits of your knowledge; then you move on to creatively seek solutions by asking for help from those who know the way. Voila! Now comes the part where you have to take up their solutions, ideas, and opinions and make them work for your particular scenario; that is a skill set all software developers must have. Going through documentation has immensely helped me solve my problems much faster and build new knowledge, as I get the fundamental idea of why and how things work. I also try to break each concept down into steps, achieve my goals for each step, then put the solutions from each step together. Surfing the internet to find solutions also has a huge benefit.

Vocabulary terms Used in Debian
  1. uscan => a tool to identify and download upstream source code from its repository, repacking it into the required format.
  2. apt => a package manager to manage packages in Debian, similar to pip in Python or npm in JavaScript.
  3. stretch/buster/bullseye/bookworm/sid => Debian release codenames: oldoldstable (Debian 9, Stretch) is the release before the previous stable release; oldstable (Debian 10, Buster) is the previous stable release; stable (Debian 11, Bullseye) is the current stable release; testing (Debian 12, Bookworm) is the next-generation stable release; and unstable (Sid) is the development release where new or updated packages are first introduced. To understand more, see the documentation on the Debian release cycle.
  4. reverse-rebuild => rebuilding all modules that depend on a package in Debian while building the main package itself.
  5. lintian => A helper tool used to check for inconsistencies and errors in a Debian Package based on Debian standards.
  6. pkg-js-tools => A collection of tools to aid packaging Node modules in Debian.
  7. dpkg-buildpackage => A command to build a package in the current, possibly unclean, chroot or environment.
  8. quilt => A patch creation and management automation script. quilt helps manage the series of patches that a Debian package maintainer needs to apply to the upstream source when building the package.
  9. autopkgtest => A tool used to test an installed binary package using the source package's tests.
  10. RFS => (Request For Sponsorship) Working in the Debian ecosystem includes two roles: a Debian Maintainer, with restricted rights and privileges (such as limited upload rights to the Debian archive), or a Debian Developer, with all rights and privileges, such as uploading any package to the Debian archive. As a new contributor or a Debian Maintainer (with few rights and privileges), you can file an RFS so that your work can be uploaded to the Debian archive by a Debian Developer, which means your contribution has been accepted. Several of these tools come together in the short sketch after this list.
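To make these terms concrete, here is a minimal sketch, my own illustration rather than something from the original post, of how several of these tools might fit together when updating a package. The package name node-foo is hypothetical, the flags shown are common defaults rather than the only options, and the quilt commands assume QUILT_PATCHES=debian/patches is configured, as is usual for Debian work:

    # Run from the package's source directory (the one containing debian/).
    # "node-foo" is a hypothetical package name used for illustration.

    # List packages that depend on ours, e.g. to scope a transition
    # (related to the "reverse-rebuild" term above).
    apt-cache rdepends node-foo

    # Download the upstream tarball for the version in debian/changelog,
    # following the rules in debian/watch.
    uscan --verbose --download-current-version

    # Apply the maintainer's patch series, if the package has one.
    quilt push -a

    # If the upstream source needs a fix, record it as a new patch:
    quilt new fix-build.patch    # start a new patch
    quilt add lib/index.js       # track the file before editing it
    # ... edit lib/index.js ...
    quilt refresh                # write the changes into the patch

    # Build unsigned binary packages in the current (unclean) environment.
    dpkg-buildpackage -us -uc

    # Check the build output against Debian policy.
    lintian ../node-foo_*.changes

    # Run the package's test suite much as Debian's CI would, here with
    # the simple "null" runner that uses the current environment.
    autopkgtest ../node-foo_*.dsc -- null

In practice, helpers from pkg-js-tools automate parts of this flow for Node packages, but the manual steps show where each term from the list fits.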
There are so many terms and tools you have to get accustomed to, but they are easy to understand and use, as plenty of frequently updated wiki documentation is available to guide you through, plus a whole lot of community members you can ask questions of. "Strength and growth come only through continuous effort and struggle." Napoleon Hill
