Russ Allbery: Review: On Vicious Worlds
| Series: | Kindom Trilogy #2 |
| Publisher: | Orbit |
| Copyright: | October 2024 |
| ISBN: | 0-316-46362-0 |
| Format: | Kindle |
| Pages: | 444 |
$ git diff --shortstat v1.11.5
 493 files changed, 25015 insertions(+), 21135 deletions(-)
RawFlow object to decode protobuf-encoded flows from Kafka reduces pressure on
the garbage collector (8b580f).
The effect on Akvorado's overall performance was somewhat uncertain, but a
user reported 35% lower CPU usage after migrating from the previous
version, plus resolution of the long-standing BMP component issue.
/api/v0/inlet/metrics. With the introduction
of the outlet, many metrics moved. Some were also renamed (4c0b15) to match
Prometheus best practices. Kafka consumer lag was added as a new metric
(e3a778).
If you do not have your own observability stack, the Docker Compose setup
shipped with Akvorado provides one. You can enable it by activating the profiles
introduced for this purpose (529a8f).
The prometheus profile ships Prometheus to store metrics and Alloy
to collect them (2b3c46, f81299, and 8eb7cd). Redis and Kafka
metrics are collected through the exporter bundled with Alloy (560113).
Other metrics are exposed using Prometheus metrics endpoints and are
automatically fetched by Alloy with the help of some Docker labels, similar to
what is done to configure Traefik. cAdvisor was also added (83d855) to
provide some container-related metrics.
The loki profile ships Loki to store logs (45c684). While Alloy
can collect and ship logs to Loki, its parsing abilities are limited: I could
not find a way to preserve all metadata associated with structured logs produced
by many applications, including Akvorado. Vector replaces Alloy (95e201)
and features a domain-specific language, VRL, to transform logs. Annoyingly,
Vector currently cannot retrieve Docker logs from before it was
started.
Finally, the grafana profile ships Grafana, but the bundled dashboards are
currently broken. Fixing them is planned for a future version.
$ git diff --shortstat v1.11.5 -- console/data/docs
 10 files changed, 1873 insertions(+), 1203 deletions(-)


## Test inlet has received NetFlow flows
GET http://127.0.0.1:8080/prometheus/api/v1/query
[Query]
query: sum(akvorado_inlet_flow_input_udp_packets_total{job="akvorado-inlet",listener=":2055"})
HTTP 200
[Captures]
inlet_receivedflows: jsonpath "$.data.result[0].value[1]" toInt
[Asserts]
variable "inlet_receivedflows" > 10

## Test inlet has sent them to Kafka
GET http://127.0.0.1:8080/prometheus/api/v1/query
[Query]
query: sum(akvorado_inlet_kafka_sent_messages_total{job="akvorado-inlet"})
HTTP 200
[Captures]
inlet_sentflows: jsonpath "$.data.result[0].value[1]" toInt
[Asserts]
variable "inlet_sentflows" >= {{inlet_receivedflows}}
The Dockerfile uses multi-stage and multi-platform builds: one
stage builds the JavaScript part on the host platform, one stage builds the Go
part cross-compiled on the host platform, and the final stage assembles the
image on top of a slim distroless image (268e95 and d526ca).
# This is a simplified version
FROM --platform=$BUILDPLATFORM node:20-alpine AS build-js
RUN apk add --no-cache make
WORKDIR /build
COPY console/frontend console/frontend
COPY Makefile .
RUN make console/data/frontend

FROM --platform=$BUILDPLATFORM golang:alpine AS build-go
RUN apk add --no-cache make curl zip
WORKDIR /build
COPY . .
COPY --from=build-js /build/console/data/frontend console/data/frontend
RUN go mod download
RUN make all-indep
ARG TARGETOS TARGETARCH TARGETVARIANT VERSION
RUN make

FROM gcr.io/distroless/static:latest
COPY --from=build-go /build/bin/akvorado /usr/local/bin/akvorado
ENTRYPOINT [ "/usr/local/bin/akvorado" ]
When building with --platform
linux/amd64,linux/arm64,linux/arm/v7, the build steps until the highlighted
line execute only once for all platforms. This significantly speeds up the
build.
Akvorado now ships Docker images for these platforms: linux/amd64,
linux/amd64/v3, linux/arm64, and linux/arm/v7. When requesting
ghcr.io/akvorado/akvorado, Docker selects the best image for the current CPU.
On x86-64, there are two choices. If your CPU is recent enough, Docker
downloads linux/amd64/v3. This version contains additional optimizations and
should run faster than the linux/amd64 version. It would be interesting to
ship an image for linux/arm64/v8.2, but Docker does not support the same
mechanism for AArch64 yet (792808).
go
tool, were released. Akvorado 2.0 requires Go 1.25 (77306d) but can be
compiled with older toolchains by automatically downloading a newer one
(94fb1c). Users can still override GOTOOLCHAIN to revert this
decision. The recommended toolchain updates weekly through CI to ensure we get
the latest minor release (5b11ec). This change also simplifies updates to
newer versions: only go.mod needs updating.
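This mechanism is driven entirely by the go directive in go.mod; a minimal sketch (the module path and exact version are illustrative, not Akvorado's):

```
module example.com/app

go 1.25.0
```

With such a go line, an older go command (1.21 or later) running in the default GOTOOLCHAIN=auto mode downloads and runs the requested toolchain; setting GOTOOLCHAIN=local forces the locally installed one instead.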
Thanks to this change, Akvorado now uses wg.Go() (77306d) and I have
started converting some unit tests to the new testing/synctest package
(bd787e, 7016d8, and 159085).
Diff() to display the
differences when it fails:
got := input.Keys()
expected := []int{1, 2, 3}
if diff := helpers.Diff(got, expected); diff != "" {
	t.Fatalf("Keys() (-got, +want):\n%s", diff)
}
kylelemons/godebug. This package is
no longer maintained and has some shortcomings: for example, by default, it does
not compare struct private fields, which may cause unexpectedly successful
tests. I replaced it with google/go-cmp, which is stricter
and has better output (e2f1df).
kentik/patricia, an implementation of a patricia tree
focused on reducing garbage collection pressure.
gaissmai/bart is a more recent alternative using an
adaptation of Donald Knuth's ART algorithm that promises better
performance and delivers it: 90% faster lookups and 27% faster
insertions (92ee2e and fdb65c).
Unlike kentik/patricia, gaissmai/bart does not help with efficiently storing
values attached to each prefix. I adapted the approach from kentik/patricia to
store route lists for each prefix: store a 32-bit index for each prefix, and use
it to build a 64-bit index for looking up routes in a map. This leverages Go's
efficient map structure.
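As a rough sketch of that indexing scheme (the names are mine, not Akvorado's): pack two 32-bit indices into a single 64-bit key and use it with an ordinary Go map:

```go
package main

import "fmt"

// routeKey packs a 32-bit per-prefix index and a 32-bit list index into a
// single 64-bit map key, mirroring the scheme described above.
func routeKey(prefixIdx, listIdx uint32) uint64 {
	return uint64(prefixIdx)<<32 | uint64(listIdx)
}

func main() {
	// routes plays the role of the map storing route lists per prefix.
	routes := map[uint64][]string{
		routeKey(7, 1): {"route-a", "route-b"},
	}
	fmt.Println(routes[routeKey(7, 1)])
}
```

Because the key is a plain integer, lookups avoid any pointer chasing or hashing of composite structs.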
gaissmai/bart also supports a lockless routing table version, but this is not
simple because we would need to extend this to the map storing the routes and to
the interning mechanism. I also attempted to use Go's new unique package to
replace the intern package included in Akvorado, but performance was
worse.
planetscale/vtprotobuf (e49a74 and 8b580f).
Moreover, the dependency on protoc, a C++ program, was somewhat annoying.
Therefore, Akvorado now uses buf, written in Go, to convert a Protobuf
schema into Go code (f4c879).
Another small optimization to reduce the size of the Akvorado binary by
10 MB was to compress the static assets embedded in Akvorado in a ZIP file. It
includes the ASN database, as well as the SVG images for the documentation. A
small layer of code makes this change transparent (b1d638 and e69b91).
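A minimal sketch of such a transparent layer using only the standard library (the file names and helper functions are hypothetical, not Akvorado's actual code):

```go
package main

import (
	"archive/zip"
	"bytes"
	"fmt"
	"io"
)

// buildArchive creates a tiny in-memory ZIP standing in for the embedded assets.
func buildArchive() []byte {
	var buf bytes.Buffer
	zw := zip.NewWriter(&buf)
	w, _ := zw.Create("docs/logo.svg")
	w.Write([]byte("<svg/>"))
	zw.Close()
	return buf.Bytes()
}

// readAsset opens a ZIP member as if it were a plain embedded file: callers
// see only a name and its contents, not the compression underneath.
func readAsset(archive []byte, name string) (string, error) {
	zr, err := zip.NewReader(bytes.NewReader(archive), int64(len(archive)))
	if err != nil {
		return "", err
	}
	f, err := zr.Open(name)
	if err != nil {
		return "", err
	}
	defer f.Close()
	data, err := io.ReadAll(f)
	return string(data), err
}

func main() {
	content, _ := readAsset(buildArchive(), "docs/logo.svg")
	fmt.Println(content)
}
```

Since *zip.Reader implements fs.FS, the same archive can also be handed directly to any code expecting a file system.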
chalk and debug and another
impacting the popular package @ctrl/tinycolor. These attacks also
exist in other ecosystems, but JavaScript is a prime target due to heavy use of
small third-party dependencies. The previous version of Akvorado relied on 653
dependencies.
npm-run-all was removed (3424e8, 132 dependencies). patch-package was
removed (625805 and e85ff0, 69 dependencies) by moving missing
TypeScript definitions to env.d.ts. eslint was replaced with oxlint, a
linter written in Rust (97fd8c, 125 dependencies, including the plugins).
I switched from npm to Pnpm, an alternative package manager (fce383).
Pnpm does not run install scripts by default and prevents installing
packages that are too recent. It is also significantly faster. Node.js
does not ship Pnpm but it ships Corepack, which allows us to use Pnpm
without installing it. Pnpm can also list licenses used by each dependency,
removing the need for license-compliance (a35ca8, 42 dependencies).
For additional speed improvements, beyond switching to Pnpm and Oxlint, Vite
was replaced with its faster Rolldown version (463827).
After these changes, Akvorado only pulls 225 dependencies.
$ git show --shortstat ac68c5970e2c | tail -1
 231 files changed, 6474 insertions(+), 3877 deletions(-)
The intern package uses 32-bit integers, while unique uses 64-bit pointers.
See commit 74e5ac.
npm. See commit dab2f7.
I've noticed that procrastination and an inability to be consistently productive at work have become quite common in recent years. This is clearly visible in younger people who have grown up with an endless stream of entertainment literally at their fingertips, on their mobile phones. It is, however, a trap one can escape from with a little bit of help.
Procrastination is natural, they say; humans are lazy by nature, after all. Probably all of us have had moments when we chose to postpone a task we knew we should be working on, and instead spent our time doing secondary tasks. A classic example is cleaning your apartment when you should be preparing for an exam. Some may procrastinate by not doing any work at all, and just watching YouTube videos or the like. For some people, typically those who are in their 20s and early in their career, procrastination can be a big challenge, and finding the discipline to stick to planned work may need intentional extra effort, and perhaps even external help.
During my 20+ year career in software development, I've been blessed to work with engineers of various backgrounds, each with their own unique set of strengths. I have also helped many grow in various areas and overcome challenges, such as lack of intrinsic motivation and managing procrastination, and some are able to get it in check with some simple advice.
tl;dr: there is an attack in the wild which is triggering dangerous-but-seemingly-intended behaviour in the Oj JSON parser when used in the default and recommended manner, which can lead to everyone's favourite kind of security problem: object deserialization bugs! If you have the oj gem anywhere in your Gemfile.lock, the quickest mitigation is to make sure you have Oj.default_options = { mode: :strict } somewhere, and that no library is overwriting that setting to something else.
PG::UndefinedColumn exception, which looked something like this:
PG::UndefinedColumn: ERROR: column "xyzzydeadbeef" does not exist

This is weird on two fronts: firstly, this application has been running for a while, and if there was a schema problem, I'd expect it to have made itself apparent long before now. And secondly, while I don't profess to perfection in my programming, I'm usually better at naming my database columns than that. Something is definitely hinky here, so let's jump into the mystery mobile!
"name":":xyzzydeadbeef", ...

The leading colon looks an awful lot like the syntax for a Ruby symbol, but it's in a JSON string. Surely there's no way a JSON parser would be turning that into a symbol, right? Right?!? Immediately, I thought that that possibly was what was happening, because I use Sequel for my SQL database access needs, and Sequel treats symbols as database column names. It seemed like too much of a coincidence that a vaguely symbol-shaped string was being sent in, and the exact same name was showing up as a column name. But how the flying fudgepickles was a JSON string being turned into a Ruby symbol, anyway? Enter Oj.
oj (for "Optimized JSON"), touted as "The fastest JSON parser and object serializer".
Given the history, it's not surprising that people who wanted the best possible performance turned to Oj, leading to it being found in a great many projects, often as a sub-dependency of a dependency of a dependency (which is how it ended up in my project).
You might have noticed in Oj's description that, in addition to claiming "fastest", it also describes itself as an "object serializer".
Anyone who has kept an eye on the security bug landscape will recall that object deserialization is a rich vein of vulnerabilities to mine.
Libraries that do object deserialization, especially ones with a history that goes back to before the vulnerability class was well-understood, are likely to be trouble magnets.
And thus, it turns out to be with Oj.
By default, Oj will happily turn any string that starts with a colon into a symbol:
>> require "oj"
>> Oj.load('{"name":":xyzzydeadbeef","username":"bob","answer":42}')
=> {"name"=>:xyzzydeadbeef, "username"=>"bob", "answer"=>42}
How that gets exploited is only limited by the creativity of an attacker.
Which I'll talk about more shortly, but first, a word from my rant cortex.
0.0.0.0 with no password as soon as it's installed, or a library whose default behaviour is to permit arbitrary code execution, it all contributes to a software ecosystem that is an appalling security nightmare.
When a user (in this case, a developer who wants to parse JSON) comes across a new piece of software, they have by definition no idea what they're doing with that software.
They re going to use the defaults, and follow the most easily-available documentation, to achieve their goal.
It is unrealistic to assume that a new user of a piece of software is going to do things the "right way", unless that right way is the only way, or at least the by-far-the-easiest way.
Conversely, the developer(s) of the software is/are the domain experts.
They have knowledge of the problem domain, through their exploration while building the software, and unrivalled expertise in the codebase.
Given this disparity in knowledge, it is tantamount to malpractice for the experts (the developers) to off-load the responsibility for the safe and secure use of the software onto the party that has the least knowledge of how to do that (the new user).
To apply this general principle to the specific case, take the "Using" section of the Oj README.
The example code there calls Oj.load, with no indication that this code will, in fact, parse specially-crafted JSON documents into Ruby objects.
The brand-new user of the library, no doubt under pressure to Get Things Done, is almost certainly going to look at this "Using" example, get the apparent result they were after (a parsed JSON document), and call it a day.
It is unlikely that a brand-new user will, for instance, scroll down to the "Further Reading" section, find the second-last (of ten) listed documents, "Security.md", and carefully peruse it.
If they do, they'll find an oblique suggestion that parsing untrusted input is "never a good idea".
While that's true, it's also rather unhelpful, because I'd wager that by far the majority of JSON parsed in the world is "untrusted", in one way or another, given the predominance of JSON as a format for serializing data passing over the Internet.
This guidance is roughly akin to putting a label on a car's airbags warning that "driving at speed can be hazardous to your health": true, but unhelpful under the circumstances.
The solution is for default behaviours to be secure, and any deviation from that default that has the potential to degrade security must, at the very least, be clearly labelled as such.
For example, the current Oj.load function should be renamed Oj.unsafe_load, and Oj.load should instead behave as the Oj.safe_load function does presently.
By naming the unsafe function as explicitly unsafe, developers (and reviewers) have at least a fighting chance of recognising they're doing something risky.
We put warning labels on just about everything in the real world; the same should be true of dangerous function calls.
OK, rant over.
Back to the story.
# request_body has the JSON representation of the form being submitted
body = Oj.load(request_body)
DB[:users].where(id: user_id).update(name: body["name"])
In normal operation, this will issue an SQL query along the lines of UPDATE users SET name='Jaime' WHERE id=42.
If the name given is "Jaime O'Dowd", all is still good, because Sequel quotes string values, etc etc.
All s well so far.
But, imagine there is a column in the users table that normally users cannot read, perhaps admin_notes.
Or perhaps an attacker has gotten temporary access to an account, and wants to dump the user s password hash for offline cracking.
So, they send an update claiming that their name is :admin_notes (or :password_hash).
In JSON, that'll look like {"name":":admin_notes"}, and Oj.load will happily turn that into the Ruby object {"name"=>:admin_notes}.
When run through the "update the user" code fragment above, it'll produce the SQL UPDATE users SET name=admin_notes WHERE id=42.
In other words, it'll copy the contents of the admin_notes column into the name column, which the attacker can then read out just by refreshing their profile page.
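To make the mechanism concrete, here is a toy quoter (my own sketch, not Sequel's actual code) showing why a symbol bypasses the quoting that protects string values:

```ruby
# Toy SQL value renderer mimicking the behaviour described above:
# strings become quoted literals, symbols are treated as column names.
def sql_value(value)
  case value
  when Symbol then value.to_s                     # unquoted: column reference
  when String then "'#{value.gsub("'", "''")}'"   # quoted literal
  else value.to_s
  end
end

puts "UPDATE users SET name=#{sql_value('Jaime')} WHERE id=42"
# UPDATE users SET name='Jaime' WHERE id=42
puts "UPDATE users SET name=#{sql_value(:admin_notes)} WHERE id=42"
# UPDATE users SET name=admin_notes WHERE id=42
```

The type of the value, not its content, decides whether it is treated as data or as part of the query, which is exactly what the crafted JSON exploits.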
Oj.load, they've been handed remote code execution on a platter.
Check your Gemfile.lock (or SBOM, if that's your thing) to see if the oj gem is anywhere in your codebase.
Remember that even if you don t use it directly, it s popular enough that it is used in a lot of places.
If you find it in your transitive dependency tree anywhere, there's a chance you're vulnerable, limited only by the ingenuity of attackers to feed crafted JSON into a deeply-hidden Oj.load call. If you depend on oj directly and use it in your project, consider not doing that.
The json gem is acceptably fast, and JSON.parse won't create arbitrary Ruby objects. If you must keep using oj, find all calls to Oj.load in your code and switch them to call Oj.safe_load (compare the permitted_classes argument to Psych.load).
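For comparison, the standard library parser leaves a leading colon entirely alone:

```ruby
require "json"

# JSON.parse keeps the value as a plain String; no symbol is ever created.
doc = JSON.parse('{"name":":xyzzydeadbeef","answer":42}')
puts doc["name"]        # :xyzzydeadbeef
puts doc["name"].class  # String
puts doc["answer"]      # 42
```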
I'd make it a priority to move away from using Oj for that, and switch to something somewhat safer (such as the aforementioned Psych).
At the very least, audit and comment heavily to minimise the risk of user-provided input sneaking into those calls somehow, and pass { mode: :object } as the second argument to Oj.load, to make it explicit that you are opting in to this far more dangerous behaviour only when it's absolutely necessary. For calls to Oj.load hidden in your dependencies, consider setting the default Oj parsing mode to :strict, by putting Oj.default_options = { mode: :strict } somewhere in your initialization code (and make sure no dependencies are setting it to something else later!).
There is a small chance that this change of default might break something, if a dependency is using Oj to deliberately create Ruby objects from JSON, but the overwhelming likelihood is that Oj's just being used to parse ordinary JSON, and these calls are just RCE vulnerabilities waiting to give you a bad time.

This post is based on a presentation given at the Validos annual members' meeting on June 25th, 2025.

When I started getting into Linux and open source over 25 years ago, the majority of the software development in this area was done by academics and hobbyists. The number of companies participating in open source has since exploded, in parallel with the growth of mobile and cloud software, the majority of which is built on top of open source. For example, Android powers most mobile phones today and is based on Linux. Almost all software used to operate large cloud provider data centers, such as AWS or Google, is either open source or made in-house by the cloud provider.

Pretty much all companies, regardless of industry, have been using open source software at least to some extent for years. However, the degree to which they collaborate with the upstream origins of the software varies. I encourage all companies in a technical industry to start contributing upstream. There are many benefits to having a good relationship with your upstream open source software vendors, both for the short term and especially for the long term. Moreover, with the rollout of the CRA in the EU in 2025-2027, the law will require software companies to contribute security fixes upstream to the open source projects their products use. To ensure the process is well managed, business-aligned and legally compliant, there are a few dos and don'ts that are important to be aware of.
Ada Lovelace Day was
celebrated on October 8 in 2024, and on this occasion, to celebrate and
raise awareness of the contributions of women to the STEM fields we
interviewed some of the women in Debian.
Here we share their thoughts, comments, and concerns with the hope of inspiring
more women to become part of the Sciences, and of course, to work inside of
Debian.
This article was simulcasted to the debian-women mail list.
Beatrice Torracca
1. Who are you?
I am Beatrice, I am Italian. Internet technology and everything computer-related
is just a hobby for me, not my line of work or the subject of my academic
studies. I have too many interests and too little time. I would like to do lots
of things and at the same time I am too Oblomovian to do any.
2. How did you get introduced to Debian?
As a user I started using newsgroups when I had my first dialup connection and
there was always talk about this strange thing called
Linux. Since moving from DR DOS to Windows was a shock
for me (I felt like I had lost control of my machine), I tried Linux with
Debian Potato and I have never strayed
from Debian since then for my personal equipment.
3. How long have you been into Debian?
Define "into". As a user... since Potato, too many years to count. As a
contributor, a similar amount of time, since early 2000 I think. My first
archived email about contributing to the translation of the description of
Debian packages dates from 2001.
4. Are you using Debian in your daily life? If yes, how?
Yes!! I use testing. I have it on my desktop PC at home and I have it on my
laptop. The desktop is where I have a local IMAP server that fetches all the
mails of my email accounts, and where I sync and back up all my data. On both I
do day-to-day stuff (from email to online banking, from shopping to taxes), all
forms of entertainment, a bit of work if I have to work from home
(GNU R for statistics,
LibreOffice... the usual suspects). At work I am
required to have another OS, sadly, but I am working on setting up a
Debian Live system to use there too.
Plus if at work we start doing bioinformatics there might be a Linux machine in
our future... I will of course suggest and hope for a Debian system.
5. Do you have any suggestions to improve women's participation in Debian?
This is a tough one. I am not sure. Maybe, more visibility for the women already
in the Debian Project, and make the newcomers feel seen, valued and welcomed. A
respectful and safe environment is key too, of course, but I think Debian made
huge progress in that aspect with the
Code of Conduct. I am a big fan of
promoting diversity and inclusion; there is always room for improvement.
Ileana Dumitrescu (ildumi)
1. Who are you?
I am just a girl in the world who likes cats and packaging
Free Software.
2. How did you get introduced to Debian?
I was tinkering with a computer running Debian a few years ago, and I decided to
learn more about Free Software. After a search or two, I found
Debian Women.
3. How long have you been into Debian?
I started looking into contributing to Debian in 2021. After contacting Debian
Women, I received a lot of information and helpful advice on different ways I
could contribute, and I decided package maintenance was the best fit for me. I
eventually became a Debian Maintainer in 2023, and I continue to maintain a few
packages in my spare time.
4. Are you using Debian in your daily life? If yes, how?
Yes, it is my favourite GNU/Linux operating system! I use it for email,
chatting, browsing, packaging, etc.
5. Do you have any suggestions to improve women's participation in Debian?
The mailing list for Debian Women may
attract more participation if it is utilized more. It is where I started, and I
imagine participation would increase if it were more engaging.
Kathara Sasikumar (kathara)
1. Who are you?
I'm Kathara Sasikumar, 22 years old and a recent Debian user turned Maintainer
from India. I try to become a creative person through sketching or playing
guitar chords, but it doesn't work! xD
2. How did you get introduced to Debian?
When I first started college, I was that overly enthusiastic student who signed
up for every club and volunteered for anything that crossed my path just like
every other fresher.
But then, the pandemic hit, and like many, I hit a low point. COVID depression
was real, and I was feeling pretty down. Around this time, the
FOSS Club at my college suddenly became more active.
My friends, knowing I had a love for free software, pushed me to join the club.
They thought it might help me lift my spirits and get out of the slump I was in.
At first, I joined only out of peer pressure, but once I got involved, the club
really took off. FOSS Club became more and more active during the pandemic, and
I found myself spending more and more time with it.
A year later, we had the opportunity to host a
MiniDebConf at our college, where I got to
meet a lot of Debian developers and maintainers; attending their talks
and talking with them gave me a wider perspective on Debian, and I loved the
Debian philosophy.
At that time, I had been distro hopping but never quite settled down. I
occasionally used Debian but never stuck around. However, after the MiniDebConf,
I found myself using Debian more consistently, and it truly connected with me.
The community was incredibly warm and welcoming, which made all the difference.
3. How long have you been into Debian?
Now, I've been using Debian as my daily driver for about a year.
4. Are you using Debian in your daily life? If yes, how?
It has become my primary distro, and I use it every day for continuous learning
and working on various software projects with free and open-source tools. Plus,
I've recently become a Debian Maintainer (DM) and have taken on the
responsibility of maintaining a few packages. I'm looking forward to
contributing more to the Debian community.
Rhonda D'Vine (rhonda)
1. Who are you?
My name is Rhonda, my pronouns are she/her, or per/pers. I'm 51 years old,
working in IT.
2. How did you get introduced to Debian?
I was already looking into Linux because of university, first it was
SuSE. And people played around with gtk. But when they
packaged GNOME and it just didn't even install I
looked for alternatives. A working colleague from back then gave me a CD of
Debian. Though I couldn't install from it because
Slink didn't recognize the pcmcia
drive. I had to install it via floppy disks, but apart from that it was
quite well done. And the early GNOME was working, so I never looked back.
3. How long have you been into Debian?
Even before I was more involved, a colleague asked me whether I could help with
translating the release documentation. That was my first contribution to Debian,
for the slink release in early 1999. And I was using some other software before
on my SuSE systems, and I wanted to continue to use them on Debian obviously. So
that's how I got involved with packaging in Debian. But I continued to help with
translation work, for a long period of time I was almost the only person active
for the German part of the website.
4. Are you using Debian in your daily life? If yes, how?
Being involved with Debian has been a big part of the reason I got my jobs
for a long time now. I have always worked with maintaining Debian (or
Ubuntu) systems.
Privately I run Debian on my laptop, with occasionally switching to Windows in
dual boot when (rarely) needed.
5. Do you have any suggestions to improve women's participation in Debian?
There are factors that we can't influence, like that a lot of women are pushed
into care work because patriarchal structures work that way, and don't have the
time nor energy to invest a lot into other things. But we could learn to
appreciate smaller contributions better, and not focus so much on the quantity
of contributions. When we look at longer discussions on mailing lists, those
that write more mails actually don't contribute more to the discussion, they
often repeat themselves without adding more substance. Through working on our
own discussion patterns this could create a more welcoming environment for a lot
of people.
Sophie Brun (sophieb)
1. Who are you?
I'm a 44-year-old French woman. I'm married and I have 2 sons.
2. How did you get introduced to Debian?
In 2004 my boyfriend (now my husband) installed Debian on my personal computer
to introduce me to Debian. I knew almost nothing about Open Source. During my
engineering studies, a professor mentioned the existence of Linux,
Red Hat in particular, but without giving any details.
I learnt Debian by using and reading (in advance)
The Debian Administrator's Handbook.
3. How long have you been into Debian?
I've been a user since 2004. But I only started contributing to Debian in 2015:
I had quit my job and I wanted to work on something more meaningful. That's why
I joined my husband at Freexian, his company.
Unlike most people I think, I started contributing to Debian for my work. I only
became a DD in 2021 under gentle social pressure and when I felt confident
enough.
4. Are you using Debian in your daily life? If yes, how?
Of course I use Debian in my professional life for almost all the tasks: from
administrative tasks to Debian packaging.
I also use Debian in my personal life. I have very basic needs:
Firefox,
LibreOffice, GnuCash
and Rhythmbox are the main
applications I need.
Sruthi Chandran (srud)
1. Who are you?
A feminist, a librarian turned Free Software advocate and a Debian Developer.
Part of Debian Outreach team and
DebConf Committee.
2. How did you get introduced to Debian?
I got introduced to the free software world and Debian through my husband. I
attended many Debian events with him. During one such event, out of curiosity, I
participated in a Debian packaging workshop. Just after that I visited a Tibetan
community in India and they mentioned that there was no proper Tibetan font in
GNU/Linux. Tibetan font was my first package in Debian.
3. How long have you been into Debian?
I have been contributing to Debian since 2016 and Debian Developer since 2019.
4. Are you using Debian in your daily life? If yes, how?
I haven't used any other distro on my laptop since I got introduced to Debian.
5. Do you have any suggestions to improve women's participation in Debian?
I was involved with actively mentoring newcomers to Debian since I started
contributing myself. I specially work towards reducing the gender gap inside the
Debian and Free Software community in general. In my experience, I believe that
visibility of already existing women in the community will encourage more women
to participate. Also I think we should reintroduce mentoring through
debian-women.
Tássia Camões Araújo (tassia)
1. Who are you?
Tássia Camões Araújo, a Brazilian living in Canada. I'm a passionate learner who
tries to push myself out of my comfort zone and always find something new to
learn. I also love to mentor people on their learning journey. But I don't
consider myself a typical geek. My challenge has always been to not get
distracted by the next project before I finish the one I have in my hands. That
said, I love being part of a community of geeks and feel empowered by it. I love
Debian for its technical excellence, and it's always reassuring to know that
someone is taking care of the things I don't like or can't do. When I'm not
around computers, one of my favorite things is to feel the wind on my cheeks,
usually while skating or riding a bike; I also love music, and I'm always
singing a melody in my head.
2. How did you get introduced to Debian?
As a student, I was privileged to be introduced to FLOSS at the same time I was
introduced to computer programming. My university could not afford to have labs
in the usual proprietary software model, and what seemed like a limitation at
the time turned out to be a great learning opportunity for me and my colleagues.
I joined this student-led initiative to "liberate" our servers and build
LTSP-based labs - where a single powerful computer could power a few dozen
diskless thin clients. How revolutionary it was at the time! And what an
achievement! From students to students, all using Debian. Most of that group
became close friends; I've married one of them, and a few of them also found
their way to Debian.
3. How long have you been into Debian?
I first used Debian in 2001, but my first real connection with the community was
attending DebConf 2004. Since then, going to DebConfs has become a habit. It is
that moment in the year when I reconnect with the global community and my
motivation to contribute is boosted. And you know, in 20 years I've seen people
become parents, grandparents, children grow up; we've had our own child and had
the pleasure of introducing him to the community; we've mourned the loss of
friends and healed together. I'd say Debian is like family, but not the kind you
get at random when you're born; Debian is my family by choice.
4. Are you using Debian in your daily life? If yes, how?
These days I teach at Vanier College in Montréal. My favorite course to teach is
UNIX, which I have the pleasure of teaching mostly using Debian. I try to
inspire my students to discover Debian and other FLOSS projects, and we are
happy to run a FLOSS club with participation from students, staff and alumni. I
love to see these curious young minds put to the service of FLOSS. It is like
recruiting soldiers for a good battle, and one that can change their lives, as
it certainly did mine.
5. Do you have any suggestions to improve women's participation in Debian?
I think the most effective way to inspire other women is to give visibility to
active women in our community. Speaking at conferences, publishing content,
being vocal about what we do so that other women can see us and see themselves
in those positions in the future. It's not easy, and I don't like being in the
spotlight. It took me a long time to get comfortable with public speaking, so I
can understand the struggle of those who don't want to expose themselves. But I
believe that this space of vulnerability can open the way to new connections. It
can inspire trust and ultimately motivate our next generation. It's with this in
mind that I publish these lines.
Another point we can't neglect is that in Debian we work on a volunteer basis,
and this in itself puts us at a great disadvantage. In our societies, women
usually take a heavier load than their partners in terms of caretaking and other
invisible tasks, so it is hard to afford the free time needed to volunteer. This
is one of the reasons why I bring my son to the conferences I attend, and so far
I have received all the support I need to attend DebConfs with him. It is a way
to share the caregiving burden with our community - it takes a village to raise
a child. Besides allowing us to participate, it also serves to show other women
(and men) that you can have a family life and still contribute to Debian.
My feeling is that we are not doing super well in terms of diversity in Debian
at the moment, but that should not discourage us at all. That's the way it is
now, but that doesn't mean it will always be that way. I feel like we go through
cycles. I remember times when we had many more active female contributors, and
I'm confident that we can improve our ratio again in the future. In the
meantime, I just try to keep going, do my part, attract those I can, reassure
those who are too scared to come closer. Debian is a wonderful community, it is
a family, and of course a family cannot do without us, the women.
These interviews were conducted via email exchanges in October 2024. Thanks to
all the wonderful women who participated. We really appreciate your
contributions to Debian and to Free/Libre software.
Things have developed since my last post. Some lesions opened up on my ankle, which was initially good news: the pain substantially reduced. But they didn't heal fast enough, so the medics decided on surgical debridement. That was last night. It seems to have been successful, and I'm in recovery from surgery as I write. It's hard to predict the near future; a lot depends on how well and how fast I heal.
I've got a negative-pressure dressing on it, which is incredible: constantly maintained suction to aid debridement and healing. Modern medicine feels like a sci-fi novel.
The debridement operation was a success: nothing bad grew afterwards. I was
discharged after a couple of nights with crutches, instructions not to
weight-bear, a remarkable, portable negative-pressure "Vac" pump that lived by
my side, and some strong painkillers.
About two weeks later, I had a skin graft. The surgeon took some skin from my
thigh and stitched it over the debridement wound. I was discharged same-day,
again with the Vac pump, and again with instructions not to weight-bear, at
least for a few days.
This time I only kept the Vac pump for a week, and after a dressing change
(the first time I saw the graft), I was allowed to walk again. Doing so is
strangely awkward, and sometimes a little painful. I have physio exercises
to help me regain strength and understanding about what I can do.
The donor site remained bandaged for another week before I saw it. I was
expecting a stitched cut, but the surgeons have removed the top few layers
only, leaving what looks more like a graze or sun-burn. There are four
smaller, tentative-looking marks adjacent, suggesting they got it right on
the fifth attempt. I'm not sure but I think these will all fade away to
near-invisibility with time, and they don't hurt at all.
I've now been off work for roughly 12 weeks, but I think I am returning
very soon. I am looking forward to returning to some sense of normality.
It's been an interesting experience. I thought about writing more
about what I've gone through, in particular my experiences in Hospital,
dealing with the bureaucracy and things falling "between the gaps".
Hanif Kureishi has done a better job than I could.
It's clear that the NHS is staffed by incredibly passionate people, but
there are a lot of structural problems that interfere with care.
Dear Debian community,
these are my bits from the DPL for August.
Happy Birthday Debian
On the 16th of August, Debian celebrated its 31st birthday. Since I'm
unable to write a better text than our great publicity team, I'm
simply linking to their article for those who might have missed it:
https://bits.debian.org/2024/08/debian-turns-31.html
Removing more packages from unstable
Helmut Grohne argued for more aggressive package removal and
sought consensus on a way forward. He provided six examples of processes
where packages that are candidates for removal are consuming valuable
person-power. I'd like to add that the Bug of the Day initiative (see
below) also frequently encounters long-unmaintained packages with popcon
votes sometimes as low as zero, and often fewer than ten.
Helmut's email included a list of packages that would meet the suggested
removal criteria. There was some discussion about whether a popcon vote
should be included in these criteria, with arguments both for and
against it. Although I support including popcon, I acknowledge that
Helmut has a valid point in suggesting it be left out.
While I've read several emails in agreement, Scott Kitterman made
a valid point: "I don't think we need more process. We just need
someone to do the work of finding the packages and filing the bugs." I
agree that this is crucial to ensure an automated process doesn't lead
to unwanted removals. However, I don't see "someone" stepping up to file
RM bugs against other maintainers' packages. As long as we have strict
ownership of packages, many people are hesitant to touch a package, even
to fix it. Asking for its removal might be even less well-received.
Therefore, if an automated procedure were to create RM bugs based on
defined criteria, it could help reduce some of the social pressure.
In this aspect the opinion of Niels Thykier is interesting: "As
much as I want automation, I do not mind the prototype starting as a
semi-automatic process if that is what it takes to get started."
Charles Plessy put the urgency of the problem into words: "So as of
today, it is much less work to keep a package rotting than removing
it." My observation when trying to fix the Bug of the Day exactly fits
this statement.
I would love for this discussion to lead to more aggressive removals
that we can agree upon, whether they are automated, semi-automated, or
managed by a person processing an automatically generated list
(supported by an objective procedure). To use an analogy: I've found
that every image collection improves with aggressive pruning. Similarly,
I'm convinced that Debian will improve if we remove packages that no
longer serve our users well.
DEP14 / DEP18
There are two DEPs that affect our workflow for maintaining
packages, particularly for those who agree on using Git for Debian
packages. DEP-14 recommends a standardized layout for Git packaging
repositories, which benefits maintainers working across teams and makes
it easier for newcomers to learn a consistent repository structure.
DEP-14 stalled for various reasons. Sam Hartman suspected it might
be because "it doesn't bring sufficient value." However, the assumption
that git-buildpackage is incompatible with DEP-14 is incorrect, as
confirmed by its author, Guido Günther. git-buildpackage, one of the two
key tools for Debian Git repositories (besides dgit), fully supports
DEP-14, though the migration from the previous default layout is somewhat
complex.
Some investigation into mass-converting older formats to DEP-14 was
conducted by the Perl team, as Gregor Herrmann pointed out.
The discussion about DEP-14 resurfaced with the suggestion of DEP-18.
Guido Günther proposed the title "Encourage Continuous Integration and
Merge Request-Based Collaboration for Debian Packages", which more
accurately reflects the DEP's technical intent.
Otto Kekäläinen, who initiated DEP-18 (thank you, Otto), provided a good
summary of the current status. He also assembled a very helpful
overview of Git and GitLab usage in other Linux distros.
More Salsa CI
As a result of the DEP-18 discussion, Otto Kekäläinen suggested
implementing Salsa CI for our top popcon packages.
I believe it would be a good idea to enable CI by default across Salsa
whenever a new repository is created.
Progress in Salsa migration
In my campaign, I stated that I aim to reduce the number of
packages maintained outside Salsa to below 2,000. As of March 28, 2024,
the count was 2,368. Today, it stands at 2,187 (UDD query: SELECT DISTINCT
count(*) FROM sources WHERE release = 'sid' AND vcs_url NOT LIKE '%salsa%';).
After a third of my DPL term (OMG), we've made significant progress,
reducing the gap to that goal (369 packages) by nearly half. I'm
pleased with the support from the DDs who moved their packages to Salsa.
Some packages were transferred as part of the Bug of the Day initiative
(see below).
Bug of the Day
As announced in my 'Bits from the DPL' talk at DebConf, I started
an initiative called Bug of the Day. The goal is to train newcomers
in bug triaging by enabling them to tackle small, self-contained QA
tasks. We have consistently identified target packages and resolved at
least one bug per day, often addressing multiple bugs in a single
package.
In several cases, we followed the Package Salvaging procedure outlined
in the Developers Reference. Most instances were either welcomed by
the maintainer or did not elicit a response. Unfortunately, there was
one exception where the recipient of the Package Salvage bug expressed
significant dissatisfaction. The takeaway is to balance formal
procedures with consideration for the recipient s perspective.
I'm pleased to confirm that the Matrix channel has seen an increase
in active contributors. This aligns with my hope that our efforts would
attract individuals interested in QA work. I'm particularly pleased
that, within just one month, we have had help with both fixing bugs and
improving the code that aids in bug selection.
As I aim to introduce newcomers to various teams within Debian, I also
take the opportunity to learn about each team's specific policies
myself. I rely on team members' assistance to adapt to these policies. I
find that gaining this practical insight into team dynamics is an
effective way to understand the different teams within Debian as DPL.
Another finding from this initiative, which aligns with my goal as DPL,
is that many of the packages we addressed are already on Salsa but have
not been uploaded, meaning their VCS fields are not published. This
suggests that maintainers are generally open to managing their packages
on Salsa. For packages that were not yet on Salsa, the move was
generally welcomed.
Publicity team wants you
The publicity team has decided to resume regular meetings to coordinate
their efforts. Given my high regard for their work, I plan to attend
their meetings as frequently as possible, which I began doing with the
first IRC meeting.
During discussions with some team members, I learned that the team could
use additional help. If anyone interested in supporting Debian with
non-packaging tasks reads this, please consider introducing yourself to
debian-publicity@lists.debian.org. Note that this is a publicly archived
mailing list, so it's not the best place for sharing private
information.
Kind regards
Andreas.
To ease my nerves, I struck up a conversation with a man seated nearby who was
also traveling to Abu Dhabi for work. He provided helpful information about
safety and transportation in Abu Dhabi, which reassured me. With the boarding
process complete and my anxiety somewhat eased, I found my window seat on the
flight and settled in, excited for the journey ahead. Next to me was a young
man from Ranchi (Jharkhand, India), heading to Abu Dhabi for work at a mining
factory. We had an engaging conversation about work culture in Abu Dhabi and
recruitment from India.
Upon arriving in Abu Dhabi, I completed my transit, collected my luggage, and
began finding my way to the hotel, Premier Inn Abu Dhabi,
which was in the airport area. To my surprise, I ran into the same man from the
flight, now in a cab. He kindly offered to drop me at my hotel, which I gladly
accepted since navigating an unfamiliar city with a short acquaintance felt
safer.
At the hotel gate, he asked if I had local currency (Dirhams) for payment,
as sometimes online transactions can fail. That hadn't crossed my mind, and
I realized I might be left stranded if a transaction failed. Recognizing his
help as a godsend, I asked if he could lend me some Dirhams, promising to
transfer the amount later. He kindly assured me to pay him back once I
reached the hotel room. With that relief, I checked into the hotel, feeling
deeply grateful for the unexpected assistance and transferred the money to
him after getting to my room.
I reached Tirana, Albania after a six-hour flight, exhausted and
suffering from a headache. The air pressure had blocked my ears, and jet lag
added to my fatigue. After collecting my checked luggage, I headed to the first
ATM machine at the airport. Struggling to insert my card, I asked a nearby
gentleman for help. He tried his best, but my card got stuck inside the
machine. Panic set in as I worried about how I would survive without money.
Taking a deep breath, I found an airport employee and explained the situation.
The gentleman stayed with me, offering support and repeatedly apologizing for
his mistake. However, it wasn't his fault; the ATM was out of order, which I
hadn't noticed. My focus was solely on retrieving my ATM card. The airport
employee worked diligently, using a hairpin to carefully extract my card.
Finally, the card was freed, and I felt an immense sense of relief, grateful
for the help of these kind strangers. I used another ATM, successfully withdrew
money, and then went to an airport mobile SIM shop to buy a new SIM card for
local internet and connectivity.
I found my top bunk bed, only to realize I had booked a mixed-gender
dormitory. This detail had completely escaped my notice during the booking
process. I felt unsure about how to handle the situation. Coincidentally,
my experience mirrored what Kangana faced in the movie "Queen".
Feeling acidic due to an empty stomach and the exhaustion of heavy
traveling, I wasn't up to cooking in the hostel's kitchen.
I asked the front desk about the nearest restaurant. It was nearly 9:30 PM,
and the streets were deserted. To avoid any mishaps like in the movie
"Queen", I kept my passport securely locked in my bag, ensuring it wouldn't
be a victim of theft.
Venturing out for dinner, I felt uneasy on the quiet streets. I eventually
found a restaurant recommended by the hostel, but the menu was almost
entirely non-vegetarian. I struggled to ask about vegetarian options and was
uncertain if any dishes contained eggs, as some people consider eggs to be
vegetarian. Feeling frustrated and unsure, I left the restaurant without
eating.
I noticed a nearby grocery store that was about to close and managed to get
a few extra minutes to shop. I bought some snacks, wafers, milk, and tea
bags (though I couldn't find tea powder to make Indian-style tea). Returning
to the hostel, I made do with wafers, cookies, and milk for dinner. That day
was incredibly tough for me; filled with exhaustion and the struggle of being
in a new country, I was on the verge of tears.
I made a video call home before sleeping on the top bunk bed. It was a new
experience for me, sharing a room with both unknown men and women. I kept my
passport safe inside my purse and under my pillow while sleeping, staying
very conscious about its security.
I took a bus from Shkodër to the southern part of Albania, heading to
Sarandë. The journey lasted about five to six hours, and I had booked a stay
at Mona's Hostel. Upon arrival, I met Eliza from America, and we went
together to Ksamil Beach, spending a wonderful day there.


The rest of my photos from the event will be published next week. That will give me a bit more time to process
them correctly and also give all of you a chance to see these pictures with fresh eyes and stir up new memories from
the event.
Christian
This post was originally published in the Wikimedia Tech blog, authored by Arturo Borrero Gonzalez.
Summary: this article shares the experience and learnings of migrating away from Kubernetes PodSecurityPolicy into
Kyverno in the Wikimedia Toolforge platform.
Wikimedia Toolforge is a Platform-as-a-Service, built with
Kubernetes, and maintained by the Wikimedia Cloud Services team (WMCS). It is completely free and open, and we welcome
anyone to use it to build and host tools (bots, webservices, scheduled jobs, etc) in support of Wikimedia projects.
We provide a set of platform-specific services, command line interfaces, and shortcuts to help in the task of setting up
webservices, jobs, and stuff like building container images, or using databases. Using these interfaces makes the
underlying Kubernetes system pretty much invisible to users. We also allow direct access to the Kubernetes API, and some
advanced users do directly interact with it.
Each account has a Kubernetes namespace where they can freely deploy their workloads. We have a number of controls in
place to ensure performance, stability, and fairness of the system, including quotas, RBAC permissions, and up until
recently, PodSecurityPolicies (PSP). At the time of this writing, we had around 3,500 Toolforge tool accounts in the
system. We adopted PSP early, in 2019, as a way to make sure Pods had the correct runtime configuration. We needed Pods to
stay within the safe boundaries of a set of pre-defined parameters. Back when we adopted PSP there was already the
option to use 3rd party agents, like OpenPolicyAgent Gatekeeper,
but we decided not to invest in them, and went with a native, built-in mechanism instead.
In 2021 it was announced
that the PSP mechanism would be deprecated, and removed in Kubernetes 1.25. Even though we had been warned years in
advance, we did not prioritize the migration away from PSP until we were on Kubernetes 1.24 and blocked, unable to upgrade
further without taking action.
The WMCS team explored different alternatives for this migration, but eventually we decided to go with
Kyverno as a replacement for PSP. And with that decision began the
journey described in this blog post.
First, we needed a source code refactor for one of the key components of our Toolforge Kubernetes:
maintain-kubeusers. This custom piece of
software, built in-house, contains the logic to fetch accounts from LDAP and do the necessary instrumentation on
Kubernetes to accommodate each one: create the namespace, RBAC, quota, a kubeconfig file, etc. With the refactor, we
introduced a proper reconciliation loop, in a way that the software would have a notion of what needs to be done for
each account, what would be missing, what to delete, upgrade, and so on. This would allow us to easily deploy new
resources for each account, or iterate on their definitions.
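The reconciliation idea can be sketched in a few lines of Python. This is a minimal illustration of the pattern, not the actual maintain-kubeusers API; the function names and dict-shaped resources are assumptions:

```python
def reconcile(desired: dict, actual: dict):
    """Diff desired state against actual state and return the actions
    needed to converge: what to create, what to update, what to delete."""
    to_create = {k: v for k, v in desired.items() if k not in actual}
    to_update = {k: v for k, v in desired.items()
                 if k in actual and actual[k] != v}
    to_delete = [k for k in actual if k not in desired]
    return to_create, to_update, to_delete

# Example: an account missing its quota, with a stale RBAC definition
# and a leftover resource that should be cleaned up.
desired = {"namespace": "tool-foo", "rbac": "v2", "quota": "10Gi"}
actual = {"namespace": "tool-foo", "rbac": "v1", "stale-cm": "x"}
create, update, delete = reconcile(desired, actual)
```

Because the full desired state is re-derived per account on every pass, deploying a new resource type for all accounts becomes just another entry in the desired set.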
The initial version of the refactor had a number of problems, though. For one, the new version of maintain-kubeusers was
doing more filesystem interaction than the previous version, resulting in a slow reconciliation loop over all the
accounts. We used NFS as the underlying storage system for Toolforge, and it can be very slow for reasons
beyond the scope of this blog post. This was corrected in the days following the initial refactor rollout. A side note with an
implementation detail: we stored a configmap on each account namespace with the state of each resource. Storing more
state on this configmap was our solution to avoid additional NFS latency.
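One cheap way to keep such state is a content hash per resource in the configmap, so the loop can tell whether anything changed without touching slow storage. This sketch shows the idea only; the names and the on-configmap format are assumptions, not what maintain-kubeusers actually stores:

```python
import hashlib
import json

def state_entry(definition: dict) -> str:
    """Stable fingerprint of a resource definition."""
    blob = json.dumps(definition, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def needs_update(name: str, definition: dict, state: dict) -> bool:
    """Compare against the hash recorded in the per-namespace configmap."""
    return state.get(name) != state_entry(definition)

# data stored in the account's state configmap: resource name -> hash
state = {"kubeconfig": state_entry({"user": "tool-foo", "version": 1})}

assert not needs_update("kubeconfig", {"user": "tool-foo", "version": 1}, state)
assert needs_update("kubeconfig", {"user": "tool-foo", "version": 2}, state)
```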
I initially estimated this refactor would take me a week to complete, but unfortunately it took around three weeks
instead. Prior to the refactor, updating the definition of a resource required several manual steps and
cleanups. The process is now automated, more robust, performant, efficient, and clean. So in my opinion
it was worth it, even if it took more time than expected.
Then, we worked on the Kyverno policies themselves. Because we had a very particular PSP setup, in order to ease the
transition, we tried to replicate its semantics on a 1:1 basis as much as possible. This involved things like
transparent mutation of Pod resources followed by validation. Additionally, we had a different PSP definition for each
account, so we decided to create a different Kyverno namespaced policy resource for each account namespace (remember,
we had 3.5k accounts).
We created a Kyverno policy
template
that we would then render and inject for each account.
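Rendering one namespaced policy per account can be as simple as substituting the account's details into a template. The policy below is a hypothetical example: the rule, field values, and naming are illustrative, not the actual Toolforge template:

```python
from string import Template

# Hypothetical Kyverno namespaced Policy; one is rendered per tool namespace.
POLICY_TEMPLATE = Template("""\
apiVersion: kyverno.io/v1
kind: Policy
metadata:
  name: toolforge-pod-policy
  namespace: $namespace
spec:
  validationFailureAction: Enforce
  rules:
    - name: require-tool-uid
      match:
        any:
          - resources:
              kinds: ["Pod"]
      validate:
        message: "Pods must run as the tool's own UID."
        pattern:
          spec:
            securityContext:
              runAsUser: $uid
""")

def render_policy(tool: str, uid: int) -> str:
    """Render the per-account policy manifest for one tool."""
    return POLICY_TEMPLATE.substitute(namespace=f"tool-{tool}", uid=uid)

manifest = render_policy("mytool", 52503)
```

At 3.5k accounts, keeping the policy in a single template and injecting only the per-account fields keeps the fleet of policy resources uniform and easy to re-render.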
For developing and testing all this, maintain-kubeusers and the Kyverno bits, we had a project called
lima-kilo, which was a local Kubernetes setup
replicating production Toolforge. Each engineer used it on their laptop as a common development environment.
We had planned the migration from PSP to Kyverno policies in stages, like this:
/healthz HTTP endpoint, which more accurately reflected the health of each k8s apiserver.
This is the first article of a 5-episode blog post series written by Guido Berhörster, member of staff at my company Fre(i)e Software GmbH. Thanks, Guido, for being on the Polis project.
Enjoy the read on the work Guido has been doing over the past months.
| Publisher: | Ballantine Books |
| Copyright: | 2009 |
| ISBN: | 0-553-59217-3 |
| Format: | Mass market |
| Pages: | 438 |
| Publisher: | DAW |
| Copyright: | November 2022 |
| ISBN: | 0-7564-1543-8 |
| Format: | Kindle |
| Pages: | 676 |
The problem here is with Open Source Software. I want to say not only is that view so myopic that it pushes towards the incorrect, but also that it blinds us to more serious problems. Now, I don't pretend that there are no problems in the FLOSS community. There have been various pieces written about what this issue says about the FLOSS community (usually without actionable solutions). I'm not here to say those pieces are wrong. Just that there's a bigger picture. So with this xz issue, it may well be a state actor (aka "spy") that added this malicious code to xz. We also know that proprietary software and systems can be vulnerable. For instance, a Twitter whistleblower revealed that Twitter employed Indian and Chinese spies, some knowingly. A recent report pointed to preventable security lapses at Microsoft. According to the Wikipedia article on the SolarWinds attack, it was facilitated by various kinds of carelessness, including passwords being posted to GitHub and weak default passwords. They directly distributed malware-infested updates, encouraged customers to disable anti-malware tools when installing SolarWinds products, and so forth. It would be naive indeed to assume that there aren't black hat actors among the legions of programmers employed by companies that outsource work to low-cost countries, some of which have challenges with bribery. So, given all this, we can't really say the problem is Open Source. Maybe it's more broad:
The problem here is with software. Maybe that inches us closer, but is it really accurate? We have all heard of Boeing's recent issues, which seem to have some element of root causes in corporate carelessness, cost-cutting, and outsourcing. That sounds rather similar to the SolarWinds issue, doesn't it?
Well then, the problem is capitalism. Maybe it has a role to play, but isn't it a little too easy to just say "capitalism" and throw up our hands helplessly, just as some do with FLOSS at the start of this article? After all, capitalism has also brought us plenty of products of very high quality over the years. When we can point to successful, non-careless products (and I own some of them; for instance, my Framework laptop), we clearly haven't reached the root cause yet. And besides, what would you replace it with? All the major alternatives that have been tried have even stronger downsides. Maybe you replace it with "better regulated capitalism", but that's still capitalism.
Then the problem must be with consumers. As this argument would go, it's consumers' buying patterns that drive problems. Buyers, individual and corporate, seek flashy features and low cost, prizing those over quality and security. No doubt this is true in a lot of cases. Maybe greed or status-conscious societies foster it: Temu promises people to "shop like a billionaire", and unloads on them cheap junk, which "all but guarantees that shipments from Temu containing products made with forced labor are entering the United States on a regular basis". But consumers are also people, and some fraction of them are quite capable of writing fantastic software, and in fact, do so. So what we need is some way to seize control. Some way to do what is right, despite the pressures of consumers or corporations. Ah yes, dear reader, you have been slogging through all these paragraphs and now realize I have been leading you to this:
Then the solution is Open Source. Indeed. Faults and all, FLOSS is the most successful movement I know where people are bringing us back to the commons: working and volunteering for the common good, unleashing a thousand creative variants on a theme, iterating in every direction imaginable. We have FLOSS being a vital part of everything from $30 Raspberry Pis to space missions. It is bringing education and communication to impoverished parts of the world. It lets everyone write and release software. And, unlike the SolarWinds and Twitter issues, it exposes both clever solutions and security flaws to the world. If an authentication process in Windows got slower, we would all shrug and mutter "Microsoft" under our breath. Because, really, what else can we do? We have no agency with Windows. If an authentication process in Linux gets slower, anybody that's interested (anybody at all) can dive in, ask why, and trace it down to root causes. Some look at this and say FLOSS is responsible for this mess. I look at it and say, this would be so much worse if it wasn't FLOSS, and experience backs me up on this. FLOSS doesn't prevent security issues itself. What it does do is give capabilities to us all. The ability to investigate. The ability to fix. Yes, even the ability to break, and its cousin, the power to learn. And, most rewarding, the ability to contribute.
I ended 2022 with a musical retrospective and very much enjoyed writing
that blog post. As such, I have decided to do the same for 2023! From now on,
this will probably be an annual thing :)
Albums
In 2023, I added 73 new albums to my collection, nearly 1.5 albums a
week! I listed them below in the order in which I acquired them.
I purchased most of these albums when I could and borrowed the rest at
libraries. If you want to browse through them, I added links to the album covers
pointing either to websites where you can buy them or to Discogs when digital
copies weren't available.
Once again this year, it seems that Punk (mostly Oi!) and Metal dominate my
list, mostly fueled by Angry Metal Guy and the amazing Montréal
Skinhead/Punk concert scene.
Concerts
A trend I started in 2022 was to go to as many concerts of artists I like as
possible. I'm happy to report I went to around 80% more concerts in 2023 than
in 2022! Looking back at my list, April was quite a busy month...
Here are the concerts I went to in 2023:
| Series: | Discworld #34 |
| Publisher: | Harper |
| Copyright: | October 2005 |
| Printing: | November 2014 |
| ISBN: | 0-06-233498-0 |
| Format: | Mass market |
| Pages: | 434 |
Given a typical install of 3 generic kernel ABIs in the default configuration on a regular-sized VM (2 CPU cores, 8GB of RAM), the following metrics are achieved in Ubuntu 23.10 versus Ubuntu 22.04 LTS:
2x less disk space used (1,417MB vs 2,940MB, including initrd)
3x less peak RAM usage for the initrd boot (68MB vs 204MB)
0.5x increase in download size (949MB vs 600MB)
2.5x faster initrd generation (4.5s vs 11.3s)
approximately the same total time (103s vs 98s, hardware dependent)
For minimal cloud images that do not install either linux-firmware or modules extra the numbers are:
1.3x less disk space used (548MB vs 742MB)
2.2x less peak RAM usage for initrd boot (27MB vs 62MB)
0.4x increase in download size (207MB vs 146MB)
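The headline multipliers can be sanity-checked against the raw numbers; a quick sketch using the full-install figures above:

```python
# (old 22.04 value, new 23.10 value) from the full-install list above
full_install = {
    "disk MB": (2940, 1417),
    "initrd peak RAM MB": (204, 68),
    "initrd generation s": (11.3, 4.5),
}
ratios = {name: old / new for name, (old, new) in full_install.items()}
# disk ~2.1x, RAM 3.0x, generation ~2.5x, matching the rounded claims

download_increase = 949 / 600 - 1  # ~0.58, i.e. roughly the "0.5x increase"
```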
Hopefully, the compromise of download size, relative to the disk space & initrd savings is a win for the majority of platforms and use cases. For users on extremely expensive and metered connections, the likely best saving is to receive air-gapped updates or skip updates.
This was achieved by: precompressing kernel modules & firmware files with the maximum level of Zstd compression at package build time; making the actual .deb files uncompressed; assembling the initrd using split cpio archives, uncompressed for the pre-compressed files, whilst compressing only the userspace portions of the initrd; enabling in-kernel module decompression support with matching kmod; fixing bugs in all of the above; and landing all of these things in time for the feature freeze, whilst leveraging the experience and some of the design choices and implementations we have already been shipping on Ubuntu Core. Some of these changes are backported to Jammy, but only enough to support smooth upgrades to Mantic and later. The complete gains can only be experienced on Mantic and later.
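The split-archive trick relies on the kernel accepting several concatenated cpio segments in one initrd: an uncompressed segment carrying the already-compressed modules and firmware, followed by a compressed segment with the userspace. A toy illustration in Python, where gzip stands in for Zstd and the cpio writer is a minimal sketch, not the real initramfs tooling:

```python
import gzip

def _pad4(buf: bytes) -> bytes:
    """Pad to a 4-byte boundary, as the newc format requires."""
    return buf + b"\0" * ((4 - len(buf) % 4) % 4)

def _member(name: str, data: bytes, ino: int) -> bytes:
    """One archive member: 110-byte ASCII header, name, then data."""
    nameb = name.encode() + b"\0"
    fields = (
        ino, 0o100644, 0, 0, 1, 0,   # ino, mode, uid, gid, nlink, mtime
        len(data), 0, 0, 0, 0,       # filesize, dev/rdev major+minor
        len(nameb), 0,               # namesize (incl. NUL), checksum
    )
    hdr = b"070701" + b"".join(b"%08X" % f for f in fields)
    return _pad4(hdr + nameb) + _pad4(data)

def cpio_newc(entries) -> bytes:
    """Minimal SVR4 'newc' cpio archive from (name, data) pairs."""
    out = b"".join(_member(n, d, i) for i, (n, d) in enumerate(entries, 1))
    return out + _member("TRAILER!!!", b"", 0)

# segment 1: pre-compressed module shipped as-is, no recompression
modules = cpio_newc([("lib/modules/demo.ko.zst", b"(already zstd data)")])
# segment 2: userspace portion, compressed as a whole
userspace = gzip.compress(cpio_newc([("init", b"#!/bin/sh\nexec /sbin/init\n")]))

initrd = modules + userspace  # the kernel unpacks both segments in order
```

Because the module segment is stored uncompressed, the boot-time unpacker never re-decompresses data that Zstd already squeezed at package build time, which is where the peak-RAM and generation-time savings come from.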
The discovered bugs in the kernel module loading code likely affect systems that use the LoadPin LSM with kernel-space module decompression, as used on ChromeOS systems. Hopefully, Kees Cook or other ChromeOS developers will pick up the kernel fixes from the stable trees. Or, you know, just use Ubuntu kernels, as they do get fixes and features like these first.
The team that designed and delivered these changes is large: Benjamin Drung, Andrea Righi, Juerg Haefliger, Julian Andres Klode, Steve Langasek, Michael Hudson-Doyle, Robert Kratky, Adrien Nader, Tim Gardner, Roxana Nicolescu, and myself, Dimitri John Ledkov, ensuring the most optimal solution was implemented, that everything landed on time, and even implementing portions of the final solution.
Hi, it's me. I am a Staff Engineer at Canonical, and we are hiring: https://canonical.com/careers.
Lots of additional technical details and benchmarks on a huge range of diverse hardware and architectures, and bikeshedding all the things below:
Next.