A new version 0.1.10 of the RcppExamples package is now on CRAN, and marks the first release in five and a half years.
RcppExamples provides a handful of short examples detailing, by concrete working examples, how to set up basic R data structures in C++. It also provides a simple example of packaging with Rcpp. The package provides (generally fairly) simple examples; more (and generally longer) examples are at the Rcpp Gallery.
This release brings a bidirectional example of factor conversion, updates the Date example, removes the explicitly stated C++ compilation standard (which CRAN now nags about), and brings a number of small fixes and maintenance items that accrued since the last release. The NEWS extract follows:
Changes in
RcppExamples version 0.1.10 (2025-03-17)
Simplified DateExample by removing unused API
code
Added a new FactorExample with conversion to and
from character vectors
Updated and modernised continuous integrations multiple
times
A few months ago I explained that one reason why this blog has become more quiet is that all my work on Lean is covered elsewhere.
This post is an exception, because it is an observation that is (arguably) interesting, but does not lead anywhere, so where else to put it than my own blog?
Want to share your thoughts about this? Please join the discussion on the Lean community zulip!
Background
When defining a function recursively in Lean that has nested recursion, e.g. a recursive call that is in the argument to a higher-order function like List.map, then extra attention used to be necessary so that Lean can see that xs.map applies its argument only to elements of the list xs. The usual idiom is to write xs.attach.map instead, where List.attach attaches to the list elements a proof that they are in that list. You can read more about this in my Lean blog post on recursive definitions and our new shiny reference manual; look for "Example: Nested Recursion in Higher-order Functions".
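As a quick sketch of the idiom (this `Tree.size` example is mine, not from the post, and mirrors the `Tree` structure used further below):

```lean
-- Without `attach`, Lean cannot see that `t'` is an element of `t.cs`,
-- so the termination proof for the nested recursive call fails.
-- `t.cs.attach` pairs each element with a proof of membership in `t.cs`,
-- which the well-founded recursion machinery can then use.
structure Tree (α : Type u) where
  val : α
  cs  : List (Tree α)

def Tree.size (t : Tree α) : Nat :=
  1 + (t.cs.attach.map (fun ⟨t', _ht'⟩ => t'.size)).sum
```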
To make this step less tedious I taught Lean to automatically rewrite xs.map to xs.attach.map (where suitable) within the construction of well-founded recursion, so that nested recursion just works (issue #5471). We already do such a rewriting to change if c then _ else _ to the dependent if h : c then _ else _, but the attach-introduction is much more ambitious (the rewrites are not definitionally equal, there are higher-order arguments, etc.). Rewriting the terms in a way that we can still prove the connection later when creating the equational lemmas is hairy at best. Also, we want the whole machinery to be extensible by the user, setting up their own higher-order functions to add more facts to the context of the termination proof.
I implemented it like this (PR #6744) and it ships with 4.18.0, but in the course of this work I thought about a quite different and maybe better way to do this, and well-founded recursion in general:
WellFounded.fix : (hwf : WellFounded r) (F : (x : α) → ((y : α) → r y x → C y) → C x) (x : α) : C x
we have to rewrite the functorial of the recursive function, which naturally has type
F : ((y : α) → C y) → ((x : α) → C x)
to the one above, where all recursive calls take the termination proof r y x. This is a fairly hairy operation, mangling the type of the matcher's motives and whatnot.
Things are simpler for recursive definitions using the new partial_fixpoint machinery, where we use Lean.Order.fix, so the functorial's type is unmodified (the type being fixed here is ((x : α) → C x)), and everything else is in the propositional side-condition monotone F. For this predicate we have a syntax-guided compositional tactic, and it's easily extensible, e.g. by
theorem monotone_mapM (f : γ → α → m β) (xs : List α) (hmono : monotone f) :
    monotone (fun x => xs.mapM (f x))
Once given, we don't care about the content of that proof. In particular, proving the unfolding theorem only deals with the unmodified F that closely matches the function definition as written by the user. Much simpler!
Isabelle has it easier
Isabelle also supports well-founded recursion, and has great support for nested recursion. And it's much simpler!
There, all you have to do to make nested recursion work is to define a congruence lemma of the form, for List.map, something like our List.map_congr_left:
List.map_congr_left : (h : ∀ a ∈ l, f a = g a) :
    List.map f l = List.map g l
This is because in Isabelle, too, the termination proof is a side-condition that essentially states "the functorial F calls its argument f only on smaller arguments".
Can we have it easy, too?
I had wished we could do the same in Lean for a while, but that form of congruence lemma just isn't strong enough for us.
But maybe there is a way to do it, using an existential to give a witness that F can alternatively be implemented using the more restrictive argument. The following callsOn P F predicate can express that F calls its higher-order argument only on arguments that satisfy the predicate P:
section setup

variable {α : Sort u}
variable {β : α → Sort v}
variable {γ : Sort w}

def callsOn (P : α → Prop) (F : (∀ y, β y) → γ) :=
  ∃ (F' : (∀ y, P y → β y) → γ), ∀ f, F' (fun y _ => f y) = F f

variable (R : α → α → Prop)
variable (F : (∀ y, β y) → (∀ x, β x))

local infix:50 " ≺ " => R

def recursesVia : Prop := ∀ x, callsOn (· ≺ x) (fun f => F f x)

noncomputable def fix (wf : WellFounded R) (h : recursesVia R F) : (∀ x, β x) :=
  wf.fix (fun x => (h x).choose)

def fix_eq (wf : WellFounded R) h x :
    fix R F wf h x = F (fix R F wf h) x := by
  unfold fix
  rw [wf.fix_eq]
  apply (h x).choose_spec
This allows nice compositional lemmas to discharge callsOn predicates:
theorem callsOn_base (y : α) (hy : P y) :
    callsOn P (fun (f : ∀ x, β x) => f y) := by
  exists fun f => f y hy
  intros; rfl

@[simp]
theorem callsOn_const (x : γ) :
    callsOn P (fun (_ : ∀ x, β x) => x) :=
  ⟨fun _ => x, fun _ => rfl⟩
theorem callsOn_app
    {γ₁ : Sort uu} {γ₂ : Sort ww}
    (F₁ : (∀ y, β y) → γ₂ → γ₁) -- can this also support dependent types?
    (F₂ : (∀ y, β y) → γ₂)
    (h₁ : callsOn P F₁)
    (h₂ : callsOn P F₂) :
    callsOn P (fun f => F₁ f (F₂ f)) := by
  obtain ⟨F₁', h₁⟩ := h₁
  obtain ⟨F₂', h₂⟩ := h₂
  exists (fun f => F₁' f (F₂' f))
  intros; simp_all
theorem callsOn_lam
    {γ₁ : Sort uu}
    (F : γ₁ → (∀ y, β y) → γ) -- can this also support dependent types?
    (h : ∀ x, callsOn P (F x)) :
    callsOn P (fun f x => F x f) := by
  exists (fun f x => (h x).choose f)
  intro f
  ext x
  apply (h x).choose_spec
theorem callsOn_app2
    {γ₁ : Sort uu} {γ₂ : Sort ww}
    (g : γ₁ → γ₂ → γ)
    (F₁ : (∀ y, β y) → γ₁) -- can this also support dependent types?
    (F₂ : (∀ y, β y) → γ₂)
    (h₁ : callsOn P F₁)
    (h₂ : callsOn P F₂) :
    callsOn P (fun f => g (F₁ f) (F₂ f)) := by
  apply_rules [callsOn_app, callsOn_const]
With this setup, we can have the following, possibly user-defined, lemma expressing that List.map calls its argument only on elements of the list:
theorem callsOn_map (δ : Type uu) (γ : Type ww)
    (P : α → Prop) (F : (∀ y, β y) → δ → γ) (xs : List δ)
    (h : ∀ x, x ∈ xs → callsOn P (fun f => F f x)) :
    callsOn P (fun f => xs.map (fun x => F f x)) := by
  suffices callsOn P (fun f => xs.attach.map (fun ⟨x, h⟩ => F f x)) by
    simpa
  apply callsOn_app
  · apply callsOn_app
    · apply callsOn_const
    · apply callsOn_lam
      intro ⟨x', hx'⟩
      dsimp
      exact (h x' hx')
  · apply callsOn_const
end setup
So here is the (manual) construction of a nested map for trees:
section examples
structure Tree (α : Type u) where
  val : α
  cs : List (Tree α)

-- essentially
-- def Tree.map (f : α → β) : Tree α → Tree β :=
--   fun t => ⟨f t.val, t.cs.map Tree.map⟩
noncomputable def Tree.map (f : α → β) : Tree α → Tree β :=
  fix (sizeOf · < sizeOf ·) (fun map t => ⟨f t.val, t.cs.map map⟩)
    (InvImage.wf (sizeOf ·) WellFoundedRelation.wf) <| by
    intro ⟨v, cs⟩
    dsimp only
    apply callsOn_app2
    · apply callsOn_const
    · apply callsOn_map
      intro t' ht'
      apply callsOn_base
      -- ht' : t' ∈ cs -- !
      -- ⊢ sizeOf t' < sizeOf { val := v, cs := cs }
      decreasing_trivial
end examples
This makes me happy!
All details of the construction are now contained in a proof that can proceed by a syntax-driven tactic and that's easily and (likely robustly) extensible by the user. It also means that we can share a lot of code paths (e.g. everything related to equational theorems) between well-founded recursion and partial_fixpoint.
I wonder if this construction is really as powerful as our current one, or if there are certain (likely dependently typed) functions where this doesn't fit, but the above is dependent, so it looks good.
With this construction, functions defined by well-founded recursion will reduce even worse in the kernel, I assume. This may be a good thing.
The cake is a lie
What unfortunately kills this idea, though, is the generation of the functional induction principles, which I believe is not (easily) possible with this construction: The functional induction principle is proved by massaging F to return a proof, but since the extra assumptions (e.g. for ite or List.map) only exist in the termination proof, they are not available in F.
Oh wey, how anticlimactic.
PS: Path dependencies
Curiously, if we didn't have functional induction at this point yet, then very likely I'd change Lean to use this construction, and then we'd either not get functional induction, or it would be implemented very differently, maybe a more syntactic approach that would re-prove termination. I guess that's called path dependence.
The other day, I noted that the emacs integration with debputy stopped working.
After debugging for a while, I realized that emacs no longer sent the didOpen
notification that is expected of it, which confused debputy. At this point, I was
already several hours into the debugging and I noted there was some discussions on
debian-devel about emacs and byte compilation not working. So I figured I would
shelve the emacs problem for now.
But I needed an LSP capable editor and with my vi skills leaving much to be desired,
I skipped out on vim-youcompleteme. Instead, I pulled out kate, which I had not
been using for years. It had LSP support, so it would be fine, right?
Well, no. Turns out that debputy LSP support had some assumptions that worked for
emacs but not kate. Plus once you start down the rabbit hole, you stumble on
things you missed previously.
Getting started
First order of business was to tell kate about debputy. Conveniently, kate has
a configuration tab for adding language servers in a JSON format, right next to the tab where
you can see its configuration for built-in LSP servers (also in JSON format). So a quick bit of
copy-paste magic and that was done.
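For reference, a user-supplied entry has roughly this shape in kate's LSP client settings; the exact command line and highlighting regex below are my guesses for illustration, not the merged upstream configuration:

```json
{
    "servers": {
        "debputy": {
            "command": ["debputy", "lsp", "server"],
            "highlightingModeRegex": "^Debian.*$"
        }
    }
}
```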
Yesterday, I opened an MR against upstream to have the configuration added
(https://invent.kde.org/utilities/kate/-/merge_requests/1748) and they already merged it.
Today, I then filed a wishlist against kate in Debian to have the Debian maintainers
cherry-pick it, so it works out of the box for Trixie (https://bugs.debian.org/1099876).
So far so good.
Inlay hint woes
Since July (2024), debputy has support for Inlay hints. They are basically small
bits of text that the LSP server can ask the editor to inject into the text to provide
hints to the reader.
Typically, you see them used to provide typing hints, where the editor or the underlying
LSP server has figured out the type of a variable or expression that you did not
explicitly type. Another common use case is to inject the parameter name for positional
arguments when calling a function, so the user does not have to count the position to
figure out which value is passed as which parameter.
In debputy, I have been using the Inlay hints to show inherited fields in
debian/control. As an example, if you have a definition like:
Source: foo-src
Section: devel
Priority: optional
Package: foo-bin
Architecture: any
Then foo-bin inherits the Section and Priority fields since it does not supply
its own. Previously, debputy would show that by injecting the fields themselves and their
value just below the Package field as if you had typed them out directly. The editor
always renders Inlay hints distinctly from regular text, so there was no risk of
confusion and it made the text look like a valid debian/control file end to end. The
result looked something like:
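Approximated in plain text, since the screenshot does not reproduce here (the inlay-hint annotations are mine):

```
Package: foo-bin
Section: devel        <- inlay hint (inherited from the source stanza)
Priority: optional    <- inlay hint (inherited from the source stanza)
Architecture: any
```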
With the second instances of Section and Priority being rendered differently than
their surroundings (usually faded or colorless).
Unfortunately, kate did not like injecting Inlay hints with a newline in them,
which was needed for this trick. Reading into the LSP specs, it says nothing about
multi-line Inlay hints being a thing and I figured I would see this problem again
with other editors if I left it be.
I ended up changing the Inlay hints to be placed at the end of the Package field
and then included surrounding () for better visuals. So now, it looks like:
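Again approximated in plain text (annotation mine):

```
Package: foo-bin  (Section: devel, Priority: optional)  <- inlay hint
Architecture: any
```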
Unfortunately, it is no longer 1:1 with the underlying syntax which I liked about the
previous one. But it works in more editors and is still explicit. I also removed the
Inlay hint for the Homepage field. It takes too much space and I have yet to
meet someone missing it in the binary stanza.
If you have any better ideas for how to render it, feel free to reach out to me.
Spurious completion and hover
As I was debugging the Inlay hints, I wanted to do a quick restart of debputy after
each fix. Then I would trigger a small change to the document to ensure kate would
request an update from debputy to render the Inlay hints with the new code.
The full outgoing payloads are sent via the logs to the client, so it was really about
minimizing which LSP requests are sent to debputy. Notably, two cases would flood the
log:
Completion requests. These are triggered by typing anything at all, and since I wanted
to make a change, I could not avoid this. So here it was about making sure there would be
nothing to complete, so the result was as small as possible.
Hover doc requests. These are triggered by the mouse hovering over a field, so this was
mostly about ensuring my mouse movement did not linger over any field on the way
between restarting the LSP server and scrolling the log in kate.
In my infinite wisdom, I chose to make a comment line where I would do the change. I figured
it would neuter the completion requests completely and it should not matter if my cursor
landed on the comment as there would be no hover docs for comments either.
Unfortunately for me, debputy would ignore the fact that it was on a comment line.
Instead, it would find the next field after the comment line and try to complete based on
that. Normally you do not see this, because the editor correctly identifies that none of
the completion suggestions start with a #, so they are all discarded.
But it was pretty annoying for the debugging, so now debputy has been told to explicitly
stop these requests early on comment lines.
Hover docs for packages
I added a feature in debputy where you can hover over package names in your relationship
fields (such as Depends) and debputy will render a small snippet about it based on
data from your local APT cache.
This doc is then handed to the editor and tagged as markdown provided the editor supports
markdown rendering. Both emacs and kate support markdown. However, not all
markdown renderings are equal. Notably, emacs's rendering does not reformat the text
into paragraphs. In a sense, emacs rendering works a bit like <pre>...</pre> except
it does a bit of fancy rendering inside the <pre>...</pre>.
On the other hand, kate seems to convert the markdown to HTML and then throw the result
into an HTML render engine. Here it is important to remember that not all newlines are equal
in markdown. A Foo<newline>Bar is treated as one "paragraph" (<p>...</p>) and the HTML
render happily renders this as single line Foo Bar provided there is sufficient width to
do so.
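To illustrate, here is a small Python sketch (not debputy code) of how an HTML-oriented renderer treats newlines, collapsing a Markdown soft break inside a paragraph into a space:

```python
import re

def render_paragraphs(text):
    """Mimic how an HTML renderer treats Markdown newlines: blank lines
    separate paragraphs, while single newlines inside a paragraph are
    soft breaks that collapse into spaces."""
    paragraphs = re.split(r"\n\s*\n", text.strip())
    return ["<p>" + " ".join(p.split("\n")) + "</p>" for p in paragraphs]

print(render_paragraphs("Foo\nBar"))    # ['<p>Foo Bar</p>'] -- one "paragraph"
print(render_paragraphs("Foo\n\nBar"))  # ['<p>Foo</p>', '<p>Bar</p>']
```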
A couple of extra newlines made wonders for the kate rendering, but I have a feeling this
is not going to be the last time the hover docs will need some tweaking for prettification.
Feel free to reach out if you spot a weirdly rendered hover doc somewhere.
Making quickfixes available in kate
Quickfixes are treated as generic code actions in the LSP specs. Each code action has a "type"
(kind in the LSP lingo), which enables the editor to group the actions accordingly or
filter by certain types of code actions.
The design in the specs leads to the following flow:
The LSP server provides the editor with diagnostics (there are multiple ways to trigger
this, so we will keep this part simple).
The editor renders them to the user and the user chooses to interact with one of them.
The interaction makes the editor ask the LSP server which code actions are available
at that location (optionally with a filter to only see quickfixes).
The LSP server looks at the provided range and is expected to return the relevant
quickfixes here.
This flow is really annoying from an LSP server writer's point of view. When you do the diagnostics
(in step 1), you tend to already know what the possible quickfixes would be. The LSP spec
authors realized this at some point, so there are two features the editor provides to simplify
this.
In the editor request for code actions, the editor is expected to provide the diagnostics
that it received from the server. Side note: I cannot quite tell if this is optional or
required from the spec.
The editor can provide support for remembering a data member in each diagnostic. The
server can then store arbitrary information in that member, which it will see again in
the code actions request. Again, provided that the editor supports this optional feature.
All the quickfix logic in debputy so far has hinged on both of these two features.
As life would have it, kate provides neither of them.
Which meant I had to teach debputy to keep track of its diagnostics on its own. The plus side
is that it makes it easier to support "pull diagnostics" down the line, since that requires a similar
feature. Additionally, it also means that quickfixes are now available in more editors. For
consistency, debputy logic is now always used rather than relying on the editor support
when present.
The downside is that I had to spend hours coming up with and debugging a way to find the
diagnostics that overlap with the range provided by the editor. The most difficult part was keeping
the logic straight and getting the runes correct for it.
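The overlap test itself reduces to the classic interval check. A hedged Python sketch, with positions as (line, column) tuples and ranges end-exclusive (this is my illustration, not debputy's actual code):

```python
def ranges_overlap(a, b):
    """Return True if two LSP-style ranges overlap. Each range is a pair
    ((start_line, start_col), (end_line, end_col)), end-exclusive; tuple
    comparison orders positions by line first, then column."""
    (a_start, a_end), (b_start, b_end) = a, b
    # Two half-open ranges overlap iff each one starts before the other ends.
    return a_start < b_end and b_start < a_end

diagnostic = ((3, 0), (3, 10))  # diagnostic covering line 3, columns 0-10
request = ((3, 4), (3, 5))      # editor asks for code actions at line 3, column 4
print(ranges_overlap(diagnostic, request))  # True
```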
Making the quickfixes actually work
With all of that, kate would show the quickfixes for diagnostics from debputy and you could
use them too. However, they would always apply twice with suboptimal outcome as a result.
The LSP spec has multiple ways of defining what needs to be changed in response to activating a
code action. In debputy, all edits are currently done via the WorkspaceEdit type. It
has two ways of defining the changes. Either via changes or documentChanges with
documentChanges being the preferred one if both parties support this.
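For illustration, a minimal WorkspaceEdit using the changes form looks roughly like this (the file URI and range here are invented):

```json
{
    "changes": {
        "file:///home/user/pkg/debian/control": [
            {
                "range": {
                    "start": { "line": 7, "character": 0 },
                    "end": { "line": 8, "character": 0 }
                },
                "newText": ""
            }
        ]
    }
}
```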
I originally read that as meaning I was allowed to provide both and the editor would pick the one it
preferred. However, after seeing kate blindly use both when they are present, I reviewed
the spec and it does say "The edit should either provide changes or documentChanges",
so I think that one is on me.
None of the changes in debputy currently require documentChanges, so I went with just
using changes for now despite it not being preferred. I cannot figure
out the logic of whether an editor supports documentChanges. As I read the notes for this
part of the spec, my understanding is that kate does not announce its support for
documentChanges but it clearly uses them when present. Therefore, I decided to keep it
simple for now until I have time to dig deeper.
Remaining limitations with kate
There is one remaining limitation with kate that I have not yet solved. The kate
program uses KSyntaxHighlighting for its language detection, which in turn is the
basis for which LSP server is assigned to a given document.
This engine does not seem to support detection logic as complex as I had hoped. Concretely,
it either works by matching on an extension / a basename (same field for both cases) or a
mime type. This, combined with our habit in Debian of using extension-less files like
debian/control vs. debian/tests/control, or debian/rules, or
debian/upstream/metadata, makes things awkward at best.
Concretely, the syntax engine cannot tell debian/control from debian/tests/control as
they use the same basename. Fortunately, the syntax is close enough to work for both and
debputy is set to use filename based lookups, so this case works well enough.
However, for debian/rules and debian/upstream/metadata, my understanding is that if
I assign these in the syntax engine as Debian files, these rules will also trigger for any
file named foo.rules or bar.metadata. That seems a bit too broad for me, so I have
opted out of that for now. The downside is that these files will not work out of the box
with kate for now.
The current LSP configuration in kate does not recognize makefiles or YAML either. Ideally,
we would assign custom languages for the affected Debian files, so we do not steal the ID
from other language servers. Notably, kate has a built-in language server for YAML and
debputy does nothing for a generic YAML document. However, adding YAML as a supported
language for debputy would cause conflicts and regressions for users that are already
happy with their generic YAML language server from kate.
So there is certainly still work to be done. If you are good with KSyntaxHighlighting
and know how to solve some of this, I hope you will help me out.
Changes unrelated to kate
While I was working on debputy, I also added some other features that I want to mention.
The debputy lint command will now show related context for a diagnostic in its terminal
report when such information is available and is from the same file as the diagnostic
itself (cross-file cases are rendered without related information).
The related information is typically used to highlight a source of a conflict. As an
example, if you use the same field twice in a stanza of debian/control, then
debputy will add a diagnostic to the second occurrence. The related information
for that diagnostic would provide the position of the first occurrence.
This should make it easier to find the source of the conflict in the cases where
debputy provides it. Let me know if you are missing it for certain diagnostics.
The diagnostics analysis of debian/control will now identify and flag simple
duplicated relations (complex ones like OR relations are ignored for now). Thanks
to Matthias Geiger for suggesting the feature and Otto Kekäläinen for reporting
a false positive that is now fixed.
Closing
I am glad I tested with kate to weed out most of these issues in time before
the freeze. The Debian freeze will start within a week from now. Since debputy
is a part of the toolchain packages it will be frozen from there except for
important bug fixes.
To avoid needless typing, the fish shell features command abbreviations to
expand some words after pressing space. We can emulate such a feature with
Zsh:
# Definition of abbrev-alias for auto-expanding aliases
typeset -ga _vbe_abbrevations
abbrev-alias() {
    alias $1
    _vbe_abbrevations+=(${1%%\=*})
}
_vbe_zle-autoexpand() {
    local -a words; words=(${(z)LBUFFER})
    if (( ${#_vbe_abbrevations[(r)${words[-1]}]} )); then
        zle _expand_alias
    fi
    zle magic-space
}
zle -N _vbe_zle-autoexpand
bindkey -M emacs " " _vbe_zle-autoexpand
bindkey -M isearch " " magic-space

# Correct common typos
(( $+commands[git] ))  && abbrev-alias gti=git
(( $+commands[grep] )) && abbrev-alias grpe=grep
(( $+commands[sudo] )) && abbrev-alias suod=sudo
(( $+commands[ssh] ))  && abbrev-alias shs=ssh

# Save a few keystrokes
(( $+commands[git] )) && abbrev-alias gls="git ls-files"
(( $+commands[ip] ))  && abbrev-alias ip6='ip -6'
abbrev-alias ipb='ip -brief'

# Hard to remember options
(( $+commands[mtr] )) && abbrev-alias mtrr='mtr -wzbe'
Here is a demo where gls is expanded to git ls-files after pressing space:
Auto-expanding gls to git ls-files
I don't auto-expand all aliases. I keep using regular aliases when slightly
modifying the behavior of a command or for well-known abbreviations:
Welcome to the second report in 2025 from the Reproducible Builds project. Our monthly reports outline what we've been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. As usual, however, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website.
Table of contents:
Reproducible Builds at FOSDEM 2025
Similar to last year's event, there was considerable activity regarding Reproducible Builds at FOSDEM 2025, held on 1st and 2nd February this year in Brussels, Belgium. We count at least four talks related to reproducible builds. (You can also read our news report from last year's event, in which Holger Levsen presented in the main track.)
Jelle van der Waa, Holger Levsen and kpcyrd presented in the Distributions track on A Tale of several distros joining forces for a common goal. In this talk, three developers from two different Linux distributions (Arch Linux and Debian), discuss this goal which is, of course, reproducible builds. The presenters discuss both what is shared and different between the two efforts, touching on the history and future challenges alike. The slides of this talk are available to view, as is the full video (30m02s). The talk was also discussed on Hacker News.
Zbigniew Jędrzejewski-Szmek presented in the ever-popular Python track on Rewriting .pyc files for fun and reproducibility, i.e. the bytecode files generated by Python in order to speed up module imports: "It's been known for a while that those are not reproducible: on different architectures, the bytecode for exactly the same sources ends up slightly different." The slides of this talk are available, as is the full video (28m32s).
In the Nix and NixOS track, Julien Malka presented on the Saturday asking How reproducible is NixOS: "We know that the NixOS ISO image is very close to be perfectly reproducible thanks to reproducible.nixos.org, but there doesn't exist any monitoring of Nixpkgs as a whole. In this talk I'll present the findings of a project that evaluated the reproducibility of Nixpkgs as a whole by mass rebuilding packages from revisions between 2017 and 2023 and comparing the results with the NixOS cache." Unfortunately, no video of the talk is available, but there is a blog and article on the results.
Lastly, Simon Tournier presented in the Open Research track on the confluence of GNU Guix and Software Heritage: Source Code Archiving to the Rescue of Reproducible Deployment. Simon's talk describes the design and implementation they came up with, and reports on the archival coverage for package source code with data collected over five years. It opens onto some remaining challenges toward better open and reproducible research. The slides for the talk are available, as is the full video (23m17s).
Reproducible Builds at PyCascades 2025
Vagrant Cascadian presented at this year's PyCascades conference, which was held on February 8th and 9th in Portland, OR, USA. PyCascades is a regional instance of PyCon held in the Pacific Northwest. Vagrant's talk, entitled Re-Py-Ducible Builds, caught the audience's attention with the following abstract:
Crank your Python best practices up to 11 with Reproducible Builds! This talk will explore Reproducible Builds by highlighting issues identified in Python projects, from the simple to the seemingly inscrutable. Reproducible Builds is basically the crazy idea that when you build something, and you build it again, you get the exact same thing; or, even more important, if someone else builds it, they get the exact same thing too.
reproduce.debian.net updates
The last few months have seen the introduction of reproduce.debian.net. Announced first at the recent Debian MiniDebConf in Toulouse, reproduce.debian.net is an instance of rebuilderd operated by the Reproducible Builds project.
Powering this work is rebuilderd, our server which monitors the official package repositories of Linux distributions and attempts to reproduce the observed results there. This month, however, Holger Levsen:
Split packages that are not specific to any architecture away from the amd64.reproducible.debian.net service into a new all.reproducible.debian.net page.
Increased the number of riscv64 nodes to a total of 4, and added a new amd64 node thanks to our (now 10-year) sponsor, IONOS.
Uploaded the devscripts package, incorporating changes from Jochen Sprickerhof to the debrebuild script, specifically to fix the handling of the Rules-Requires-Root header in Debian source packages.
Uploaded a number of Rust dependencies of rebuilderd (rust-libbz2-rs-sys, rust-actix-web, rust-actix-server, rust-actix-http, rust-actix-web-codegen and rust-time-tz) after they were prepared by kpcyrd.
Jochen Sprickerhof also updated the sbuild package to:
Obey requests from the user/developer for a different temporary directory.
Use the root/superuser for some values of Rules-Requires-Root.
Don't pass --root-owner-group to old versions of dpkg.
Upstream patches
The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:
go (clear GOROOT for func ldShared when -trimpath is used)
Distribution work
There has been the usual work in various distributions this month, such as:
In Debian, 17 reviews of Debian packages were added, 6 were updated and 8 were removed this month adding to our knowledge about identified issues.
Fedora developers Davide Cavalca and Zbigniew Jędrzejewski-Szmek gave a talk on Reproducible Builds in Fedora (PDF), touching on SRPM-specific issues as well as the current status and future plans.
Thanks to an investment from the Sovereign Tech Agency, the FreeBSD project's work on unprivileged and reproducible builds continued this month. Notable fixes include:
The Yocto Project has been struggling to upgrade to the latest Go and Rust releases due to reproducibility problems in the newer versions. Hongxu Jia tracked down the issue with Go which meant that the project could upgrade from the 1.22 series to 1.24, with the fix being submitted upstream for review (see above). For Rust, however, the project was significantly behind, but has made recent progress after finally identifying the blocking reproducibility issues. At time of writing, the project is at Rust version 1.82, with patches under review for 1.83 and 1.84 and fixes being discussed with the Rust developers. The project hopes to improve the tests for reproducibility in the Rust project itself in order to try and avoid future regressions.
Yocto continues to maintain its ability to binary reproduce all of the recipes in OpenEmbedded-Core, regardless of the build host distribution or the current build path.
Finally, Douglas DeMaio published an article on the openSUSE blog on announcing that the Reproducible-openSUSE (RBOS) Project Hits [Significant] Milestone. In particular:
The Reproducible-openSUSE (RBOS) project, which is a proof-of-concept fork of openSUSE, has reached a significant milestone after demonstrating a usable Linux distribution can be built with 100% bit-identical packages.
diffoscope & strip-nondeterminism
diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 288 and 289 to Debian:
Add asar to DIFFOSCOPE_FAIL_TESTS_ON_MISSING_TOOLS in order to address Debian bug #1095057. []
Catch a CalledProcessError when calling html2text. []
Additionally, Vagrant Cascadian updated diffoscope in GNU Guix to version 287 [][] and 288 [][] as well as submitted a patch to update to 289 []. Vagrant also fixed an issue that was breaking reprotest on Guix [][].
strip-nondeterminism is our sister tool to remove specific non-deterministic results from a completed build. This month version 1.14.1-2 was uploaded to Debian unstable by Holger Levsen.
Website updates
There were a large number of improvements made to our website this month, including:
Holger Levsen clarified the name of a link to our old Wiki pages on the History page [] and added a number of new links to the Talks & Resources page [][].
James Addison updated the website's own README file to document a couple of additional dependencies [][], and did more work on a future Getting Started guide page [][].
Reproducibility testing framework
The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In January, a number of changes were made by Holger Levsen, including:
Fix /etc/cron.d and /etc/logrotate.d permissions for Jenkins nodes. []
Add support for riscv64 architecture nodes. [][]
Grant Jochen Sprickerhof access to the o4 node. []
Disable the janitor-setup-worker. [][]
In addition:
kpcyrd fixed the /all/api/ API endpoints on reproduce.debian.net by altering the nginx configuration. []
James Addison updated reproduce.debian.net to display the so-called bad reasons hyperlink inline [] and merged the Categorized issues links into the Reproduced builds column [].
Jochen Sprickerhof also made some reproduce.debian.net-related changes, adding support for detecting a bug in the mmdebstrap package [] as well as updating some documentation [].
Roland Clobus continued their work on reproducible live images for Debian, making changes related to new clustering of jobs in openQA. []
And finally, both Holger Levsen [][][] and Vagrant Cascadian performed significant node maintenance. [][][][][]
If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:
Dear Debian community,
this is bits from DPL for February.
Ftpmaster team is seeking new team members
In December, Scott Kitterman announced his retirement from the project.
I personally regret this, as I vividly remember his invaluable support
during the Debian Med sprint at the start of the COVID-19 pandemic. He
even took time off to ensure new packages cleared the queue in under 24
hours. I want to take this opportunity to personally thank Scott for his
contributions during that sprint and for all his work in Debian.
With one fewer FTP assistant, I am concerned about the increased
workload on the remaining team. I encourage anyone in the Debian
community who is interested to consider reaching out to the FTP masters
about joining their team.
If you're wondering about the role of the FTP masters, I'd like to share
a fellow developer's perspective:
"My read on the FTP masters is:
In truth, they are the heart of the project.
They know it.
They do a fantastic job."
I fully agree and see it as part of my role as DPL to ensure this
remains true for Debian's future.
If you're looking for a way to support Debian in a critical role where
many developers will deeply appreciate your work, consider reaching out
to the team. It's a great opportunity for any Debian Developer to
contribute to a key part of the project.
Project Status: Six Months of Bug of the Day
In my Bits from the DPL talk at DebConf24, I announced the Tiny Tasks
effort, which I intended to start with a Bug of the Day project.
Another idea was an Autopkgtest of the Day, but this has been postponed
due to limited time; I cannot run both projects in parallel.
The original goal was to provide small, time-bound examples for
newcomers. To put it bluntly: in terms of attracting new contributors,
it has been a failure so far. My offer to explain individual bug-fixing
commits in detail, if needed, received no response, and despite my
efforts to encourage questions, none were asked.
However, the project has several positive aspects: experienced
developers actively exchange ideas, collaborate on fixing bugs, assess
whether packages are worth fixing or should be removed, and work
together to find technical solutions for non-trivial problems.
So far, the project has been engaging and rewarding every day, bringing
new discoveries and challenges, not just technical but also social.
Fortunately, in the vast majority of cases, I receive positive responses
and appreciation from maintainers. Even in the few instances where help
was declined, it was encouraging to see that in two cases, maintainers
used the ping as motivation to work on their packages themselves. This
reflects the dedication and high standards of maintainers, whose work is
essential to the project's success.
I once used the metaphor that this project is like wandering through a
dark basement with a lone flashlight, exploring aimlessly and discovering
a wide variety of things that have accumulated over the years. Among
them are true marvels with popcon >10,000, ingenious tools, and
delightful games that I only recently learned about. There are also some
packages whose time may have come to an end, but each of them reflects
the dedication and effort of those who maintained them, and that
deserves the utmost respect.
Leaving aside the challenge of attracting newcomers, what have we
achieved since August 1st last year?
Fixed more than one package per day, typically addressing multiple bugs.
Added and corrected numerous Homepage fields and watch files.
The most frequently patched issue was "Fails To Cross-Build From Source"
(all of these bugs included patches).
Migrated several packages from cdbs/debhelper to dh.
Rewrote many d/copyright files to DEP5 format and thoroughly reviewed them.
Integrated all affected packages into Salsa and enabled Salsa CI.
Approximately half of the packages were moved to appropriate teams,
while the rest are maintained within the Debian or Salvage teams.
Regularly performed team uploads, ITS, NMUs, or QA uploads.
Filed several RoQA bugs to propose package removals where appropriate.
Reported multiple maintainers to the MIA team when necessary.
With some goodwill, you can see a slight impact on the trends.debian.net
graphs (thank you Lucas for the graphs), but I would never claim that
this project alone is responsible for the progress. What I have also
observed is the steady stream of daily uploads to the delayed queue,
demonstrating the continuous efforts of many contributors. This ongoing
work often remains unseen by most-including myself, if not for my
regular check-ins on this list. I would like to extend my sincere thanks
to everyone pushing fixes there, contributing to the overall quality and
progress of Debian's QA efforts.
If you examine the graphs for "Version Control System" and "VCS Hosting"
with the goodwill mentioned above, you might notice a positive trend
since mid-last year. The "Package Smells" category has also seen
reductions in several areas: "no git", "no DEP5 copyright", "compat <9",
and "not salsa". I'd also like to acknowledge the NMUers who have been
working hard to address the "format != 3.0" issue. Thanks to all their
efforts, this specific issue never surfaced in the Bug of the Day
effort, but their contributions deserve recognition here.
The experience I gathered in this project taught me a lot and inspired
some follow-up ideas that we should discuss at a sprint at DebCamp this year.
Finally, if any newcomer finds this information interesting, I'd be
happy to slow down and patiently explain individual steps as needed. All
it takes is asking questions on the Matrix channel to turn this into
a "teaching by example" session.
By the way, for newcomers who are interested, I used quite a few
abbreviations-all of which are explained in the Debian Glossary.
Sneak Peek at Upcoming Conferences
I will join two conferences in March; feel free to talk to me if you spot
me there.
Most of my Debian contributions this month were
sponsored by Freexian.
You can also support my work directly via
Liberapay.
OpenSSH
OpenSSH upstream released
9.9p2 with fixes for
CVE-2025-26465 and CVE-2025-26466. I got a heads-up on this in advance from
the Debian security team, and prepared updates for all of testing/unstable,
bookworm (Debian 12), bullseye (Debian 11), buster (Debian 10, LTS), and
stretch (Debian 9, ELTS). jessie (Debian 8) is also still in ELTS for a few
more months, but wasn't affected by either vulnerability.
Although I'm not particularly active in the Perl team, I fixed a
libnet-ssleay-perl build failure because
it was blocking openssl from migrating to testing, which in turn was
blocking the above openssh fixes.
I also sent a minor sshd -T
fix upstream, simplified
a number of autopkgtests using the newish Restrictions:
needs-sudo facility, and prepared for
removing the obsolete slogin symlink.
PuTTY
I upgraded to the new upstream version
0.83.
GCC 15 build failures
I fixed build failures with GCC
15 in a few packages:
Python team
A lot of my Python team work is driven by its maintainer
dashboard.
Now that we've finished the transition to Python 3.13 as the default
version, and inspired by a recent debian-devel thread started by
Santiago, I
thought it might be worth spending a bit of time on the uscan error
section. uscan is typically
scraping upstream web sites to figure out whether new versions are
available, and so it's easy for its configuration to become outdated or
broken. Most of this work is pretty boring, but it can often reveal
situations where we didn't even realize that a Debian package was out of
date. I fixed these packages:
cssutils (this in particular was very out of date; it has had a new and
active upstream maintainer since 2021)
In bookworm-backports, I updated python-django to 3:4.2.18-1 (issuing
BSA-121)
and added new backports of python-django-dynamic-fixture and
python-django-pgtrigger, all of which are dependencies of
debusine.
I went through all the build failures related to python-click 8.2.0 (which
was confusingly tagged but not fully released upstream) and posted an
analysis.
I fixed or helped to fix various other build/test failures:
MiniDebConf Belo Horizonte 2024 - a brief report
by Paulo Henrique de Lima Santana (phls), published 2025-03-01
From April 27th to 30th, 2024,
MiniDebConf Belo Horizonte 2024 was held at
the Pampulha Campus of
UFMG - Federal University of Minas Gerais, in Belo
Horizonte city.
This was the fifth time that a MiniDebConf (as an exclusive in-person event
about Debian) took place in Brazil. Previous editions were in Curitiba
(2016,
2017, and
2018), and in
Brasília (2023). We had other MiniDebConf
editions held within Free Software events such as
FISL and Latinoware, and other
online events. See our
event history.
Parallel to MiniDebConf, on the 27th (Saturday),
FLISOL - Latin American Free Software Installation Festival took place. It's the largest event in Latin America to promote Free Software,
and it has been held since 2005 simultaneously in several cities.
MiniDebConf Belo Horizonte 2024 was a success (as were previous editions) thanks to the participation of everyone, regardless of their level of knowledge about
Debian. We value the presence of both beginner users who are familiarizing
themselves with the system and the official project developers. The spirit of
welcome and collaboration was present throughout the event.
2024 edition numbers
During the four days of the event, several activities took place for all
levels of users and collaborators of the Debian project. The official schedule
was composed of:
06 rooms in parallel on Saturday;
02 auditoriums in parallel on Monday and Tuesday;
30 talks/BoFs of all levels;
05 workshops for hands-on activities;
09 lightning talks on general topics;
01 Live Electronics performance with Free Software;
Install fest to install Debian on attendees' laptops;
BSP (Bug Squashing Party);
Uploads of new or updated packages.
The final numbers for MiniDebConf Belo Horizonte 2024 show that we had a
record number of participants.
Total people registered: 399
Total attendees in the event: 224
Of the 224 participants, 15 were official Brazilian contributors,
10 being DDs (Debian Developers) and 05 DMs (Debian Maintainers), in addition to
several unofficial contributors.
The organization was carried out by 14 people who started working at the end of
2023, including Prof. Loïc Cerf from the Computing Department, who made the event possible at UFMG, and 37 volunteers who helped during the event.
As MiniDebConf was held at UFMG facilities, we had the help of more than
10 University employees.
See the list with the
names of people who helped in some way in organizing MiniDebConf Belo Horizonte
2024.
The difference between the number of people registered and the number of
attendees in the event is probably explained by the fact that there is no
registration fee, so if the person decides not to go to the event, they will
not suffer financial losses.
The 2024 edition of MiniDebconf Belo Horizonte was truly grand and shows the
result of the constant efforts made over the last few years to attract more
contributors to the Debian community in Brazil. With each edition the numbers
only increase, with more attendees, more activities, more rooms, and more
sponsors/supporters.
Activities
The MiniDebConf schedule was intense and diverse. On the 27th, 29th and 30th
(Saturday, Monday and Tuesday) we had talks, discussions, workshops and many
practical activities.
On the 28th (Sunday), the Day Trip took place, a day dedicated to sightseeing
around the city. In the morning we left the hotel and went, on a chartered bus,
to the
Belo Horizonte Central Market. People took
the opportunity to buy various things such as cheeses, sweets, cachaças and
souvenirs, as well as tasting some local foods.
After a 2-hour tour of the Market, we got back on the bus and hit the road for
lunch at a typical Minas Gerais food restaurant.
With everyone well fed, we returned to Belo Horizonte to visit the city's
main tourist attraction: Lagoa da Pampulha and Capela São Francisco de Assis,
better known as
Igrejinha da Pampulha.
We went back to the hotel and the day ended in the hacker space that we set up
in the events room for people to chat, do some packaging, and eat pizzas.
Crowdfunding
For the third time we ran a crowdfunding campaign and it was incredible how
people contributed! The initial goal was to raise the amount equivalent to a
gold tier of R$ 3,000.00. When we reached this goal, we defined a new one,
equivalent to one gold tier + one silver tier (R$ 5,000.00). And again we
achieved this goal. So we proposed as a final goal the value of a gold + silver
+ bronze tiers, which would be equivalent to R$ 6,000.00. The result was that
we raised R$7,239.65 (~ USD 1,400) with the help of more than 100 people!
Thank you very much to the people who contributed any amount. As a thank you, we list the names of the people who donated.
Food, accommodation and/or travel grants for participants
Each edition of MiniDebConf brought some innovation, or some different benefit
for the attendees. In this year's edition in Belo Horizonte, as with DebConfs, we offered bursaries for food, accommodation and/or travel to help those people who would like to come to the event but who would need
some kind of help.
In the registration form, we included the option for the person to request a
food, accommodation and/or travel bursary, but to do so, they would have to
identify themselves as a contributor (official or unofficial) to Debian and
write a justification for the request.
Number of people benefited:
Food: 69
Accommodation: 20
Travel: 18
The food bursary provided lunch and dinner every day. The lunches included
attendees who live in Belo Horizonte and the region. Dinners were paid for
attendees who also received accommodation and/or travel. The accommodation was
held at the BH Jaraguá Hotel. And the
travel grants covered airplane or bus tickets, or fuel (for those who came by car or
motorbike).
Much of the money to fund the bursaries came from the Debian Project, mainly
for travel. We sent a budget request to the former Debian leader Jonathan
Carter, and he promptly approved our request.
In addition to this event budget, the leader also approved individual requests
sent by some DDs who preferred to request directly from him.
The experience of offering the bursaries was really good because it allowed
several people to come from other cities.
Photos and videos
You can watch recordings of the talks at the links below:
Thanks
We would like to thank all the attendees, organizers, volunteers, sponsors and
supporters who contributed to the success of MiniDebConf Belo Horizonte 2024.
Sponsors
Gold:
by Andrew Gonçalves
Debian Day in Santa Maria - RS 2024 was held after a 5-year hiatus since the previous edition of the event. It took
place on the morning of August 16, in the Blue Hall of the
Franciscan University (UFN) with support from the
Debian community and the Computing Practices Laboratory of UFN.
The event was attended by students from all semesters of the Computer Science,
Digital Games and Information Systems courses, and we had the opportunity to talk to the participants.
Around 60 students attended a lecture introducing them to Free and Open Source
Software and Linux, and were introduced to the Debian project, covering both the
philosophy of the project and how it works in practice, as well as the
opportunities that being part of Debian has opened up for participants.
After the talk, a packaging demonstration was given by local DD Francisco
Vilmar, who demonstrated in practice how software packaging works in Debian.
I would like to thank all the people who helped us:
Debian Project
Professor Ana Paula Canal (UFN)
Professor Sylvio André Garcia (UFN)
Laboratory of Computing Practices
Francisco Vilmar (local DD)
And thanks to all the participants who attended this event asking intriguing
questions and taking an interest in the world of Free Software.
Photos:
Since my motivation boost in the beginning of the month caused me
to wrap up a new release of
liboggz, I have used the
same boost to wrap up new editions of
libfishsound,
liboggplay
and
libkate
too. These have been tagged in upstream git, but not yet published on
the Xiph download location. I am waiting for someone with access to
have time to move the tarballs there, I hope it will happen in a few
days. The same is the case for a minor update of liboggz too.
As I was looking at Xiph packages lacking updates, it occurred to
me that there are packages in Debian that have not received a new
upload in a long time. Looking for a way to identify them, I came
across the ltnu script from the
devscripts
package. It can sort packages maintained by a single user/group by last
upload date, and is useful to figure out which packages a single
maintainer should have a look at. But I wanted an archive-wide
summary. Based on the UDD SQL
query used by ltnu, I ended up with the following command:
#!/bin/sh
env PGPASSWORD=udd-mirror psql --host=udd-mirror.debian.net --user=udd-mirror udd --command="
select source,
max(version) as ver,
max(date) as uploaded
from upload_history
where distribution='unstable' and
source in (select source
from sources
where release='sid')
group by source
order by max(date) asc
limit 50;"
This will sort all source packages in Debian by upload date, and
list the 50 oldest ones. The end result is a list of packages I
suspect could use some attention:
So there are 8 packages last uploaded to unstable in 2011, 12
packages in 2012 and 26 packages in 2013. I suspect their maintainers
need help and we should all offer our assistance. I already contacted
two of them and hope the rest of the Debian community will chip in to
help too. We should ensure any Debian specific patches are passed
upstream if they still exist, that the package is brought up to speed
with the latest Debian policy, as well as ensure the source can built
with the current compiler set in Debian.
As usual, if you use Bitcoin and want to show your support of my
activities, please send Bitcoin donations to my address
15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
I have been testing fish for a couple months now (this file started on
2025-01-03T23:52:15-0500 according to stat(1)), and these are my
notes. I suspect people will have Opinions about my comments here. Do
not comment unless you have some Constructive feedback to provide: I
don't want to know if you think I am holding it Wrong. Consider that I
might have used UNIX shells for longer than you have lived.
I'm not sure I'll keep using fish, but so far it's the first shell
that survived heavy use outside of zsh(1) (unless you count
tcsh(1), but that was in another millennium).
My normal shell is bash(1), and it's still the shell I use
everywhere other than my laptop, as I haven't switched all the
servers I manage, although fish is available since August 2022 on
torproject.org servers. I first got interested in fish because they
ported to Rust, making it one of the rare shells out there
written in a "safe" and modern programming language, released after an
impressive ~2 years of work with Fish 4.0.
Cool things
Current directory gets shortened,
~/wikis/anarc.at/software/desktop/wayland shows up as
~/w/a/s/d/wayland
Autocompletion rocks.
Default prompt rocks. Doesn't seem vulnerable to command injection
assaults, at least it doesn't trip on the git-landmine.
It even includes pipe status output, which was a huge pain to
implement in bash. Made me realize that if the last command succeeds,
we don't see other failures, which is the case with my current prompt
anyway! Signal reporting is better than my bash implementation too.
So far the only modification I have made to the prompt is to add a
printf '\a' to output a bell.
By default, fish keeps a directory history (but separate from the
pushd stack), which can be navigated with cdh, prevd, and
nextd; dirh shows the history.
Less cool
I feel there's visible latency in the prompt creation.
POSIX-style functions (foo() { true; }) are unsupported. Instead,
fish uses whitespace-sensitive definitions like this:
function foo
true
end
This means my (modest) collection of POSIX functions need to be ported
to fish. Workaround: simple functions can be turned into aliases,
which fish supports (but implements using functions).
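To make the porting effort concrete, here is a trivial POSIX helper (mkcd is a hypothetical example of mine, not something from the fish docs) together with, in comments, what its fish equivalent would look like:

```shell
# POSIX sh version of a hypothetical "mkdir and cd into it" helper:
mkcd() {
    mkdir -p "$1" && cd "$1"
}

# The fish port must use the function/end syntax instead of braces,
# and $argv[1] instead of $1:
#
#   function mkcd
#       mkdir -p $argv[1]; and cd $argv[1]
#   end

# Usage: create and enter a scratch directory.
base=$(mktemp -d)
mkcd "$base/demo"
```

Nothing here is hard to translate, it is just one more thing to do for every function in the collection.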
EOF heredocs are considered to be "minor syntactic sugar". I find
them frigging useful.
Process substitution is split on newlines, not whitespace. You need
to pipe through string split -n " " to get the equivalent.
<(cmd) doesn't exist: they claim you can use cmd foo - as a
replacement, but that's not correct: I used <(cmd) mostly where
foo does not support - as a magic character to say 'read from
stdin'.
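For reference, this is the bash construct I'm missing (as far as I can tell, fish's psub command is the closest analogue, but it doesn't cover the plain-stdin case above):

```shell
# bash process substitution: <(cmd) exposes cmd's output as a readable
# file path, so commands that insist on filenames can consume pipelines.
# Run through bash explicitly, since <() is not POSIX sh.
common=$(bash -c 'comm -12 <(printf "a\nb\nc\n") <(printf "b\nc\nd\n")')
# common now holds the lines present in both inputs: "b" and "c"
```

comm(1) is exactly the kind of command where "cmd foo -" doesn't help: it wants two file arguments, and there is only one stdin.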
Documentation is... limited. It seems mostly geared towards the web
docs which are... okay (but I couldn't find out about
~/.config/fish/conf.d there!), but this is really inconvenient when
you're trying to browse the manual pages. For example, fish thinks
there's a fish_prompt manual page, according to its own completion
mechanism, but man(1) cannot find that manual page. I can't find the
manual for the time command (which is actually a keyword!)
Fish renders multi-line commands with newlines. So if your terminal
looks like this, say:
Note that this is an issue specific to foot(1); alacritty(1) and
gnome-terminal(1) don't suffer from it. I have already filed
it upstream in foot and it is apparently fixed already.
Globbing is driving me nuts. You can't pass a * to a command
unless fish agrees it's going to match something. You need to escape
it if it doesn't immediately match, and then you need the called
command to actually support globbing. 202[345] doesn't match
folders named 2023, 2024, 2025, it will send the string
202[345] to the command.
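For contrast, this is the POSIX behaviour I keep expecting, where an unmatched glob falls through to the command as a literal string (a quick sketch in plain sh):

```shell
# In POSIX shells, a glob that matches nothing is passed through
# verbatim, instead of being an error as it is in fish.
tmp=$(mktemp -d)
cd "$tmp"
touch 2023 2024 2025

matched=$(echo 202[345])   # expands to all three file names
rm 2024 2025
partial=$(echo 202[345])   # still expands, now to just 2023
rm 2023
literal=$(echo 202[345])   # nothing matches: the pattern itself is passed

cd / && rmdir "$tmp"
```

So in sh the called command only ever sees the literal pattern when nothing matched, whereas fish makes me escape it up front.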
Blockers
() is like $(): it's process substitution, and not a
subshell. This is really impractical: I use ( cd foo ; do_something)
all the time to avoid losing the current directory... I guess I'm
supposed to use pushd for this, but ouch. This wouldn't be so bad if
it was just for cd though. Clean constructs like this:
... which fails and suggests using begin/end, at which point: why
not just support the curly braces?
FOO=bar is not allowed. It's actually recognized syntax, but creates
a warning. We're supposed to use set foo bar instead. This really
feels like a needless divergence from the standard.
Aliases are... peculiar. Typical constructs like alias mv="\mv -i"
don't work because fish treats aliases as a function definition, and
\ is not magical there. This can be worked around by specifying the
full path to the command, with e.g. alias mv="/bin/mv -i". Another
problem is trying to override a built-in, which seems completely
impossible. In my case, I like the time(1) command the way it
is, thank you very much, and fish provides no way to bypass that
builtin. It is possible to call time(1) with command time, but
it's not possible to replace the command keyword so that means a lot
of typing.
Again: you can't use \ to bypass aliases. This is a huge annoyance
for me. I would need to learn to type command in long form, and I
use that stuff pretty regularly. I guess I could alias command to
c or something, but this is one of those huge muscle memory challenges.
Alt-. doesn't always work the way I expect.
A Little Vice is a stand-alone self-published
magical girl novel. It
is the author's first novel.
C is a high school student and frequent near-victim of monster attacks.
Due to the nefarious work of Avaritia Wolf and her allies, his high school
is constantly attacked by Beasts, who are magical corruptions of some
internal desire taken to absurd extremes. Standing in their way are the
Angelic Saints: magical girls who transform into Saint Castitas, Saint
Diligentia, and Saint Temperantia and fight the monsters. The monsters for
some reason seem disposed to pick C as their victim for hostage-taking,
mind control, use as a human shield, and other rather traumatic
activities. He's always rescued by the Saints before any great harm is
done, but in some ways this makes the situation worse.
It is obvious to C that the Saints are his three friends Inessa, Ida, and
Temperance, even though no one else seems able to figure this out despite
the blatant clues. Inessa has been his best friend since childhood when
she was awkward and needed his support. Now, she and his other friends
have become literal heroes, beautiful and powerful and capable, constantly
protecting the school and innocent people, and C is little more than a
helpless burden to be rescued. More than anything else, he wishes he could
be an Angelic Saint like them, but of course the whole idea is impossible.
Boys don't get to be magical girls.
(I'm using he/him pronouns for C in this review because C uses them for
himself for most of the book.)
This is a difficult book to review because it is deeply focused on
portraying a specific internal emotional battle in all of its
sometimes-ugly complexity, and to some extent it prioritizes that
portrayal over conventional story-telling. You have probably already
guessed that this is a transgender coming-out story (Elkin's choice of
the magical girl genre was made with deep understanding of its role in
transgender narratives) but more than that, it is a transgender
coming-out story of a very specific and closely-observed type. C knows who
he wishes he was, but he is certain that this transformation is absolutely
impossible. He is very deep in a cycle of self-loathing for wanting
something so manifestly absurd and insulting to people who have the
virtues that C does not.
A Little Vice is told in the first person from C's perspective, and
most of this book is a relentless observation of C's anxiety and shame
spiral and reflexive deflection of any possibility of a way out. This is
very well-written: Elkin knows the reader is going to disagree with C's
internalized disgust and hopelessness, knows the reader desperately wants
C to break out of that mindset, and clearly signals in a myriad of adroit
ways that Elkin is on the reader's side and does not agree with C's
analysis. C's friends are sympathetic, good-hearted people, and while
sometimes oblivious, it is obvious to the reader that they're also on the
reader's side and would help C in a heartbeat if they saw an opening. But
much of the point of the book is that it's not that easy, that breaking
out of the internal anxiety spiral is nearly impossible, and that C is
very good at rejecting help, both because he cannot imagine what form it
could take but also because he is certain that he does not deserve it.
In other words, much of the reading experience of this book involves
watching C torture and insult himself. It's all the more effective because
it isn't gratuitous. C's internal monologue sounds exactly like how an
anxiety spiral feels, complete with the sort of half-effective coping
mechanisms, deflections, and emotional suppression one develops to blunt
that type of emotional turmoil.
I normally hate this kind of book. I am a happy ending and competence porn
reader by default. The world is full of enough pain that I don't turn to
fiction to read about more pain. It says a lot about how well-constructed
this book is that I stuck with it. Elkin is going somewhere with the
story, C gets moments of joy and delight along the way to keep the reader
from bogging down completely, and the best parts of the book feel like a
prolonged musical crescendo with suspended chords. There is a
climax coming, but Elkin is going to make you wait for it for far longer
than you want to.
The main element that protects A Little Vice from being too grim is
that it is a genre novel that is very playful about both magical girls and
superhero tropes in general. I've already alluded to one of those
elements: Elkin plays with the Mask Principle (the inability of people to
see through entirely obvious secret identities) in knowing and
entertaining ways. But there are also villains, and that leads me to the
absolutely delightful Avaritia Wolf, who for me was the best character in
this book.
The Angelic Saints are not the only possible approach to magical girl
powers in this universe. There are villains who can perform a similar
transformation, except they embrace a vice rather than a virtue. Avaritia
Wolf embraces the vice of greed. They (Avaritia's pronouns change over the
course of the book) also have a secret identity, which I suspect will be
blindingly obvious to most readers but which I'll avoid mentioning since
it's still arguably a spoiler.
The primary plot arc of this book is an attempt to recruit C to the side
of the villains. The Beasts are drawn to him because he has magical
potential, and the villains are less picky about gender. This initially
involves some creepy and disturbing mind control, but it also brings C
into contact with Avaritia and Avaritia's very specific understanding of
greed. As far as Avaritia is concerned, greed means wanting whatever they
want, for whatever reason they feel like wanting it, and there is
absolutely no reason why that shouldn't include being greedy for their
friends to be happy. Or doing whatever they can to make their friends
happy, whether or not that looks like villainy.
Elkin does two things with this plot that I thought were remarkably
skillful. The first is that she directly examines and then undermines the
"easy" transgender magical girl ending. In a world of transformation
magic, someone who wants to be a girl could simply turn into a girl and
thus apparently resolve the conflict in a way that makes everyone happy. I
think there is an important place for that story (I am a vigorous defender
of escapist fantasy and happy endings), but that is not the story that
Elkin is telling. I won't go into the details of why and how the story
complicates and undermines this easy ending, but it's a lot of why this
book feels both painful and honest to a specific, and very not easy,
transgender experience, even though it takes place in an utterly
unrealistic world.
But the second, which is more happy and joyful, is that Avaritia gleefully
uses a wholehearted embrace of every implication of the vice of greed to
bulldoze the binary morality of the story and question the classification
of human emotions into virtues and vices. They are not a hero, or even all
that good; they have some serious flaws and a very anarchic attitude
towards society. But Avaritia provides the compelling, infectious thrill
of the character who looks at the social construction of morality that is
constraining the story and decides that it's all bullshit and refuses to
comply. This is almost the exact opposite of C's default emotional
position at the start of the book, and watching the two characters play
off of each other in a complex friendship is an absolute delight.
The ending of this book is complicated, messy, and incomplete. It is the
sort of ending that I think could be incredibly powerful if it hits
precisely the right chords with the reader, but if you're not that reader,
it can also be a little heartbreaking because Elkin refuses to provide an
easy resolution. The ending also drops some threads that I wish Elkin
hadn't dropped; there are some characters who I thought deserved a
resolution that they don't get. But this is one of those books where the
author knows exactly what story they're trying to tell and tells it
whether or not that fits what the reader wants. Those books are often not
easy reading, but I think there's something special about them.
This is not the novel for people who want detailed world-building that
puts a solid explanation under events. I thought Elkin did a great job
playing with the conventions of an episodic anime, including starting the
book on Episode 12 to imply C's backstory with monster attacks and hinting
at a parallel light anime story by providing TV-trailer-style plot
summaries and teasers at the start and end of each chapter. There is a
fascinating interplay between the story in which the Angelic Saints are
the protagonists, which the reader can partly extrapolate, and the novel
about C that one is actually reading. But the details of the
world-building are kept at the anime plot level: There's an arch-villain,
a World Tree, and a bit of backstory, but none of it makes that much sense
or turns into a coherent set of rules. This is a psychological novel; the
background and rules exist to support C's story.
If you do want that psychological novel... well, I'm not sure whether to
recommend this book or not. I admire the construction of this book a great
deal, but I don't think appealing to the broadest possible audience was
the goal. C's anxiety spiral is very repetitive, because anxiety spirals
are very repetitive, and you have to be willing to read for the grace
notes on the doom loop if you're going to enjoy this book. The
sentence-by-sentence writing quality is fine but nothing remarkable, and
is a bit shy of the average traditionally-published novel. The main appeal
of A Little Vice is in the deep and unflinching portrayal of a
specific emotional journey. I think this book is going to work if you're
sufficiently invested in that journey that you are willing to read the
brutal and repetitive parts. If you're not, there's a chance you will
bounce off this hard.
I was invested, and I'm glad I read this, but caveat emptor. You
may want to try a sample first.
One final note: If you're deep in the book world, you may wonder, like I
did, if the title is a reference to Hanya Yanagihara's (in)famous A
Little Life. I do not know for certain, as I have not read that book
(because I am not interested in being emotionally brutalized), but if it
is, I don't think there is much similarity. Both books are to some extent
about four friends, but I couldn't find any other obvious connections from
some Wikipedia reading, and A Little Vice, despite C's emotional
turmoil, seems to be considerably more upbeat.
Content notes: Emotionally abusive parent, some thoughts of self-harm,
mind control, body dysmorphia, and a lot (a lot) of shame and
self-loathing.
Rating: 7 out of 10
Introduction
This is a quick note about how I worked on upgrading the Mozc package last year to get it ready for the upcoming trixie release (with many restrictions).
Maybe Mozc 2.29.5160.102+dfsg-1.3 will be shipped for Debian 13 (trixie).
FTBFS with Mozc 2.28.4715.102+dfsg-2.2
In May 2024, I found that Mozc had been removed from testing and was still FTBFS.
#1068186 - mozc: FTBFS with abseil 20230802: ../../base/init_mozc.cc:90:29: error: absl::debian5::flags_internal::ArgvListAction has not been declared - Debian Bug report logs
That FTBFS was fixed in the Mozc upstream, but the fix was not applied in Debian for a while.
Not only the upstream patch but also an additional linkage patch was required to fix it.
Mozc is the de facto standard input method editor for Japanese.
Most Japanese users use it by default on the Linux desktop.
(Even though the frontend input method framework differs, the backend engine is Mozc in most cases -
uim-mozc for task-japanese-desktop, ibus-mozc for task-japanese-gnome-desktop in Debian.)
There are also cases where Mozc is re-built locally with an integrated external dictionary
to improve the vocabulary. If the FTBFS had kept ongoing, it would have blocked such usage.
So I sent patches to fix it, and they were merged.
Motivation to update Mozc
While fixing #1068186, I also found that the Mozc version had not been synced to upstream for a long time.
At that time, Mozc in unstable was version 2.28.4715.102+dfsg, but upstream had already released 2.30.5544.102.
It seems that Mozc's maintainer was too busy to update it, so I tried to do it myself.
The blockers for updating Mozc
But it was not such an easy task.
If you wanted to package the latest Mozc, there were many blockers.
Newer Mozc requires Bazel to build, but there is no fitting Bazel package (there is bazel-bootstrap 4.x, but it is old; v6.x or newer is required)
Newer abseil and protobuf were required
The renderer was changed to Qt; the GTK renderer was removed
Existing patchsets (e.g. for UIM, for Fcitx) needed to be revised
And that was not all.
Road to latest Mozc
First, I knew of the existence of debian-bazel, so I posted about the Bazel packaging progress.
Any updates about bazel packaging effort?
Sadly, there was no response.
Thus, it was not realistic to adopt Bazel as the build toolchain.
In other words, we needed to keep the GYP patch and maintain it.
As another topic, upstream changed the renderer from GTK+ to Qt.
Here are the major topics for each release of Mozc.
The internal renderer change was too big, and even before the GYP deprecation
in 2.29.5544.102, GYP support was already being removed gradually.
As a result, targeting 2.29.5160.102 was the practical approach to move forward.
Revisit existing patchsets for 2.28.4715.102+dfsg
Second, I needed to revisit the existing patchsets to triage them.
The UIM patch was maintained in a third-party repository,
and its directory structure was quite different from Mozc's.
Its maintenance activity seemed too low, so just picking changes
from macuim was not enough; additional fixes for the FTBFS were required.
The Fcitx patch was also maintained in fcitx/mozc.
But it tracks only the master branch, so it was hard to pick a patchset for a specific version of Mozc.
Finally, I managed to refresh the patchset for 2.29.5160.102.
OT: Hardware breakage
There was another blocker for this task.
I hit a situation where g++ randomly caused a SEGV while building Mozc.
At first I wondered why it failed, but digging further, I finally found that
a memory module was corrupted. Thus I lost 32GB of memory modules. :-<
Unexpected behaviour in uim-mozc
When I uploaded Mozc 2.29.5160.102+dfsg-1 to experimental,
I found a case where uim-mozc behaves weirdly:
the candidate words were shown with flickering.
But it was not a regression in this upload;
uim-mozc with Wayland causes that problem.
Thus GNOME and derivatives might not be affected, because ibus-mozc is used there.
Mozc 2.29.5160.102+dfsg-1
As the patchset had matured, I uploaded 2.29.5160.102+dfsg-1 with the --delayed 15 option.
$ dput --delayed 15 mozc_2.29.5160.102+dfsg-1_source.changes
Uploading mozc using ftp to ftp-master (host: ftp.upload.debian.org; directory: /pub/UploadQueue/DELAYED/15-day)
running allowed-distribution: check whether a local profile permits uploads to the target distribution
running protected-distribution: warn before uploading to distributions where a special policy applies
running checksum: verify checksums before uploading
running suite-mismatch: check the target distribution for common errors
running gpg: check GnuPG signatures before the upload
signfile dsc mozc_2.29.5160.102+dfsg-1.dsc 719EB2D93DBE9C4D21FBA064F7FB75C566ED20E3
fixup_buildinfo mozc_2.29.5160.102+dfsg-1.dsc mozc_2.29.5160.102+dfsg-1_amd64.buildinfo
signfile buildinfo mozc_2.29.5160.102+dfsg-1_amd64.buildinfo 719EB2D93DBE9C4D21FBA064F7FB75C566ED20E3
fixup_changes dsc mozc_2.29.5160.102+dfsg-1.dsc mozc_2.29.5160.102+dfsg-1_source.changes
fixup_changes buildinfo mozc_2.29.5160.102+dfsg-1_amd64.buildinfo mozc_2.29.5160.102+dfsg-1_source.changes
signfile changes mozc_2.29.5160.102+dfsg-1_source.changes 719EB2D93DBE9C4D21FBA064F7FB75C566ED20E3
Successfully signed dsc, buildinfo, changes files
Uploading mozc_2.29.5160.102+dfsg-1.dsc
Uploading mozc_2.29.5160.102+dfsg-1.debian.tar.xz
Uploading mozc_2.29.5160.102+dfsg-1_amd64.buildinfo
Uploading mozc_2.29.5160.102+dfsg-1_source.changes
Mozc 2.29.5160.102+dfsg-1 landed in unstable on 2024-12-20.
Additional bug fixes
Additionally, the following bugs were also fixed;
these were fixed in 2.29.5160.102+dfsg-1.1.
I also found that salsa CI succeeds even when the pristine-tar branch commit is missing.
I sent an MR for this issue, and it has already been merged.
Note that protobuf 3.25.4 in experimental depends on the older absl
20230802, so it must be rebuilt against absl 20240722.0.
We also need to consider how to migrate from the GTK renderer to the
Qt renderer in the future.
I can't remember exactly the joke I was making at the time in my
work's slack instance (I'm sure it wasn't particularly
funny, though; and not even worth re-reading the thread to work out), but it
wound up with me writing a UEFI binary for the punchline. Not to spoil the
ending, but it worked - no pesky kernel, no messing around with userland. I
guess the only part of this you really need to know for the setup here is that
it was a Severance joke,
which is some fantastic TV. If you haven't seen it, this post will seem perhaps
weirder than it actually is. I promise I haven't joined any new cults. For
those who have seen it, the payoff to my joke is that I wanted my machine to
boot directly to an image of
Kier Eagan.
As for how to do it, I figured I'd give the uefi
crate a shot, and see how it is to use,
since this is a low-stakes way of trying it out. In general, this isn't the
sort of thing I'd usually post about, except this wound up being easier and
way cleaner than I thought it would be. That alone is worth sharing, in the
hopes someone comes across this in the future and feels like they, too, can
write something fun targeting the UEFI.
First things first: gotta create a Rust project (I'll leave that part to you
depending on your life choices), and add the uefi crate to your
Cargo.toml. You can either use cargo add or add a line like this by hand:
uefi = { version = "0.33", features = ["panic_handler", "alloc", "global_allocator"] }
We also need to teach cargo about how to go about building for the UEFI target,
so we need to create a rust-toolchain.toml with one (or both) of the UEFI
targets we're interested in:
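The file's contents were lost in this copy; a minimal sketch of such a rust-toolchain.toml, assuming the standard Rust UEFI target triples, might look like:

```toml
# rust-toolchain.toml: pre-install the cross-compilation targets
# we want rustup to keep available for this project.
[toolchain]
targets = ["x86_64-unknown-uefi", "aarch64-unknown-uefi"]
```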
Unfortunately, I wasn't able to use the
image crate,
since it won't build against the uefi target. This looks like it's
because rustc had no way to compile the required floating point operations
within the image crate without hardware floating point instructions
specifically. Rust tends to punt a lot of that to libm usually, so this isn't
entirely shocking given we're no_std for a non-hardfloat target.
So-called softening requires a software floating point implementation that
the compiler can use to polyfill (feels weird to use the term polyfill here,
but I guess it's spiritually right?) the lack of hardware floating point
operations, which rust hasn't implemented for this target yet. As a result, I
changed tactics, and figured I'd use ImageMagick to pre-compute the pixels
from a jpg, rather than doing it at runtime. A bit of a bummer, since I need
to do more out-of-band pre-processing and hardcoding, and updating the image
kinda sucks as a result, but it's entirely manageable.
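A sketch of that pre-processing step with ImageMagick, assuming the file names used below (kier.jpg, kier.full.jpg, kier.bin); the target resolution is an assumption:

```shell
# Resize to fit within 1280x720 (aspect ratio is preserved), then dump
# the pixels as a flat array of 8-bit RGBA values.
convert kier.jpg -resize 1280x720 kier.full.jpg
convert kier.full.jpg -depth 8 rgba:kier.bin
```

Since -resize preserves the aspect ratio, check the actual dimensions of kier.full.jpg (e.g. with identify) before hardcoding them.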
This will take our input file (kier.jpg), resize it to get as close to the
desired resolution as possible while maintaining the aspect ratio, then convert it
from a jpg to a flat array of 4-byte RGBA pixels. Critically, it's also
important to remember that the size of the kier.full.jpg file may not actually
be the requested size (it will not change the aspect ratio), so be sure to
make a careful note of the resulting size of the kier.full.jpg file.
Last step with the image is to compile it into our Rust binary, since we
don't want to struggle with trying to read this off disk, which is thankfully
real easy to do.
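A self-contained sketch of that embedding; the dimensions are placeholders, and the stand-in buffer below takes the place of the real include_bytes!("../kier.bin") so the snippet compiles on its own:

```rust
// Placeholder dimensions; in the real program these match kier.full.jpg.
const KIER_WIDTH: usize = 2;
const KIER_HEIGHT: usize = 2;
// 4 bytes per pixel: RGBA.
const KIER_PIXEL_SIZE: usize = 4;

// Stand-in for the embedded image bytes. In the real program this is:
//     static KIER: &[u8] = include_bytes!("../kier.bin");
static KIER: &[u8] = &[0; KIER_WIDTH * KIER_HEIGHT * KIER_PIXEL_SIZE];

fn main() {
    // Sanity check: buffer length must match the declared geometry.
    assert_eq!(KIER.len(), KIER_WIDTH * KIER_HEIGHT * KIER_PIXEL_SIZE);
}
```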
Remember to use the width and height from the final kier.full.jpg file as the
values for KIER_WIDTH and KIER_HEIGHT. KIER_PIXEL_SIZE is 4, since we
have 4-byte wide values for each pixel as a result of our conversion step into
RGBA. We'll only use RGB, and if we ever drop the alpha channel, we can drop
that down to 3. I don't entirely know why I kept alpha around, but I figured it
was fine. My kier.full.jpg image winds up shorter than the requested height
(which is also qemu's default resolution for me), which means we'll get a
semi-annoying black band under the image when we go to run it, but it'll
work.
Anyway, now that we have our image as bytes, we can get down to work, and
write the rest of the code to handle moving bytes around from in-memory
as a flat block of pixels, and request that they be displayed using the
UEFI GOP. We'll just need to hack up a container
for the image pixels and teach it how to blit to the display.
/// RGB Image to move around. This isn't the same as an
/// image::RgbImage, but we can associate the size of
/// the image along with the flat buffer of pixels.
struct RgbImage {
    /// Size of the image as a tuple, as the
    /// (width, height)
    size: (usize, usize),
    /// raw pixels we'll send to the display.
    inner: Vec<BltPixel>,
}

impl RgbImage {
    /// Create a new RgbImage.
    fn new(width: usize, height: usize) -> Self {
        RgbImage {
            size: (width, height),
            inner: vec![BltPixel::new(0, 0, 0); width * height],
        }
    }

    /// Take our pixels and request that the UEFI GOP
    /// display them for us.
    fn write(&self, gop: &mut GraphicsOutput) -> Result {
        gop.blt(BltOp::BufferToVideo {
            buffer: &self.inner,
            src: BltRegion::Full,
            dest: (0, 0),
            dims: self.size,
        })
    }
}

impl Index<(usize, usize)> for RgbImage {
    type Output = BltPixel;

    fn index(&self, idx: (usize, usize)) -> &BltPixel {
        let (x, y) = idx;
        &self.inner[y * self.size.0 + x]
    }
}

impl IndexMut<(usize, usize)> for RgbImage {
    fn index_mut(&mut self, idx: (usize, usize)) -> &mut BltPixel {
        let (x, y) = idx;
        &mut self.inner[y * self.size.0 + x]
    }
}
We also need to do some basic setup to get a handle to the UEFI
GOP via the UEFI crate (using
uefi::boot::get_handle_for_protocol
and
uefi::boot::open_protocol_exclusive
for the GraphicsOutput
protocol), so that we have the object we need to pass to RgbImage in order
for it to write the pixels to the display. The only trick here is that the
display on the booted system can really be any resolution, so we need to do
some capping to ensure that we don't write more pixels than the display can
handle. Writing fewer than the display's maximum seems fine, though.
fn praise() -> Result {
    let gop_handle = boot::get_handle_for_protocol::<GraphicsOutput>()?;
    let mut gop = boot::open_protocol_exclusive::<GraphicsOutput>(gop_handle)?;

    // Get the (width, height) that is the minimum of
    // our image and the display we're using.
    let (width, height) = gop.current_mode_info().resolution();
    let (width, height) = (width.min(KIER_WIDTH), height.min(KIER_HEIGHT));

    let mut buffer = RgbImage::new(width, height);
    for y in 0..height {
        for x in 0..width {
            let idx_r = ((y * KIER_WIDTH) + x) * KIER_PIXEL_SIZE;
            let pixel = &mut buffer[(x, y)];
            pixel.red = KIER[idx_r];
            pixel.green = KIER[idx_r + 1];
            pixel.blue = KIER[idx_r + 2];
        }
    }
    buffer.write(&mut gop)?;
    Ok(())
}
Not so bad! A bit tedious; we could solve some of this by turning
KIER into an RgbImage at compile-time using some clever Cow and
const tricks and implement blitting a sub-image of the image, but this
will do for now. This is a joke, after all, let's not go nuts. All that's
left with our code is for us to write our main function and try and boot
the thing!
#[entry]
fn main() -> Status {
    uefi::helpers::init().unwrap();
    praise().unwrap();
    boot::stall(100_000_000);
    Status::SUCCESS
}
If you're following along at home and so interested, the final source is over at
gist.github.com.
We can go ahead and build it using cargo (as is our tradition) by targeting
the UEFI platform.
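Assuming the x86_64 target from the toolchain file, the build boils down to something like:

```shell
# Cross-compile for the UEFI target; the .efi binary lands under
# target/x86_64-unknown-uefi/debug/.
cargo build --target x86_64-unknown-uefi
```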
Testing the UEFI Blob
While I can definitely get my machine to boot these blobs to test, I figured
I'd save myself some time by using QEMU to test without a full boot.
If you've not done this sort of thing before, we'll need two packages,
qemu and ovmf. It's a bit different than most invocations of qemu you
may see out there, so I figured it'd be worth writing this down, too.
$ doas apt install qemu-system-x86 ovmf
qemu has a nice feature where it'll create us an EFI partition as a drive and
attach it to the VM off a local directory, so let's construct an EFI
partition file structure, and drop our binary into the conventional location.
If you haven't done this before, and are only interested in running this in a
VM, don't worry too much about it; a lot of it is convention and this layout
should work for you.
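Assuming the binary is named boot2kier.efi (an assumption; use whatever your crate produces), the conventional layout can be set up like this:

```shell
# The removable-media fallback path UEFI firmware looks for is
# /efi/boot/bootx64.efi on the EFI system partition.
mkdir -p esp/efi/boot
cp target/x86_64-unknown-uefi/debug/boot2kier.efi esp/efi/boot/bootx64.efi
```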
With all this in place, we can kick off qemu, booting it in UEFI mode using
the ovmf firmware, attaching our EFI partition directory as a drive to
our VM to boot off of.
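A typical invocation for this setup might look like the following; the OVMF firmware path is an assumption and varies by distro and version:

```shell
# Boot a VM with OVMF (UEFI) firmware, exposing the esp/ directory
# as a FAT-formatted drive for the firmware to boot from.
qemu-system-x86_64 \
    -m 512 \
    -drive if=pflash,format=raw,readonly=on,file=/usr/share/OVMF/OVMF_CODE.fd \
    -drive format=raw,file=fat:rw:esp
```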
If all goes well, soon you'll be met with the all-knowing gaze of the
Chosen One, Kier Eagan. The thing that really impressed me about all
this is that this program worked first try; it all went so boringly
normal. Truly, kudos to the uefi crate maintainers, it's incredibly
well done.
Booting a live system
Sure, we could stop here, but anyone can open up an app window and see a
picture of Kier Eagan, so I knew I needed to finish the job and boot a real
machine up with this. In order to do that, we need to format a USB stick.
BE SURE /dev/sda IS CORRECT IF YOU RE COPY AND PASTING. All my drives
are NVMe, so BE CAREFUL if you use SATA, it may very well be your
hard drive! Please do not destroy your computer over this.
$ doas fdisk /dev/sda
Welcome to fdisk (util-linux 2.40.4).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): n
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-4014079, default 2048):
Last sector, +/-sectors or +/-size K,M,G,T,P (2048-4014079, default 4014079):
Created a new partition 1 of type 'Linux' and of size 1.9 GiB.
Command (m for help): t
Selected partition 1
Hex code or alias (type L to list all): ef
Changed type of partition 'Linux' to 'EFI (FAT-12/16/32)'.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
Once that looks good (depending on your flavor of udev you may or
may not need to unplug and replug your USB stick), we can go ahead
and format our new EFI partition (BE CAREFUL THAT /dev/sda IS YOUR
USB STICK) and write our EFI directory to it.
$ doas mkfs.fat /dev/sda1
$ doas mount /dev/sda1 /mnt
$ cp -r esp/efi /mnt
$ find /mnt
/mnt
/mnt/efi
/mnt/efi/boot
/mnt/efi/boot/bootx64.efi
Of course, naturally, devotion to Kier shouldn't mean backdooring your system.
Disabling Secure Boot runs counter to the Core Principles, such as Probity, and
not doing this would surely run counter to Verve, Wit and Vision. This bit does
require that you've taken the step to enroll a
MOK and know how
to use it; right about now is when we can use sbsign to sign the UEFI binary
we want to boot from, to continue enforcing Secure Boot. The details of how
this command should be run specifically are likely something you'll need to work
out depending on how you've decided to manage your MOK.
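A sketch of the signing step with sbsign, assuming your MOK key and certificate live in the hypothetical files MOK.key and MOK.crt:

```shell
# Sign the UEFI binary with the enrolled Machine Owner Key so it
# passes Secure Boot verification.
sbsign --key MOK.key --cert MOK.crt \
    --output boot2kier-signed.efi \
    target/x86_64-unknown-uefi/debug/boot2kier.efi
```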
I figured I'd leave a signed copy of boot2kier at
/boot/efi/EFI/BOOT/KIER.efi on my Dell XPS 13, with Secure Boot enabled
and enforcing; it just took a matter of going into my BIOS to add the right
boot option, which was no sweat. I'm sure there is a way to do it using
efibootmgr, but I wasn't smart enough to do that quickly. I let 'er rip,
and it booted up and worked great!
It was a bit hard to get a video of my laptop, though; but lucky for me, I
have a Minisforum Z83-F sitting around (which, until a few weeks ago, was running
the annual http server to control my christmas tree),
so I grabbed it out of the christmas bin, wired it up to a video capture
card I have sitting around, and figured I'd grab a video of me booting a
physical device off the boot2kier USB stick.
Attentive readers will notice the image of Kier is smaller than on the qemu-booted
system, which just means our real machine has a larger GOP display
resolution than qemu, which makes sense! We could write some fancy resize code
(sounds annoying), center the image (can't be assed, but should be the easy way
out here) or resize the original image (pretty hardware-specific workaround).
Additionally, you can make out the image being written to the display before us
(the Minisforum logo) behind Kier, which is really cool stuff. If we were real
fancy we could write blank pixels to the display before blitting Kier, but,
again, I don't think I care to do that much work.
But now I must away
If I wanted to keep this joke going, I'd likely try and find a copy of the
original
video when Helly 100%s her file
and boot into that, or maybe play a terrible midi PC speaker rendition of
Kier, Chosen One, Kier after
rendering the image. I, unfortunately, don't have any friends involved with
production (yet?), so I reckon all that's out for now. I'll likely stop playing
with this; the joke was done, and I'm only writing this post because of how
great everything was along the way.
All in all, this reminds me so much of building a homebrew kernel to boot a
system into, but like, good, though; and it's a nice reminder of both how
fun this stuff can be, and how far we've come. UEFI protocols are light-years
better than how we did it in the dark ages, and the tooling for this is SO
much more mature. Booting a custom UEFI binary is miles ahead of trying to
boot your own kernel, and I can't believe how good the uefi crate is
specifically.
Praise Kier! Kudos to everyone involved in making this so delightful.
I'm going to FOSDEM 2025!
As usual, I'll be in the Java Devroom for most of that day, which this
time around is Saturday.
Please recommend me any talks!
This is my shortlist so far:
Tired of waiting for apt to finish installing packages? Wish there were a way to make your installations blazingly fast without caring about minor things like, oh, data integrity? Well, today is your lucky day!
I'm thrilled to introduce apt-eatmydata, now available for Debian and all supported Ubuntu releases!
What Is apt-eatmydata?
If you've ever used libeatmydata, you know it's a nifty little hack that disables fsync() and friends, making package installations way faster by skipping unnecessary disk writes. Normally, you'd have to remember to wrap apt commands manually, like this:
eatmydata apt install texlive-full
But who has time for that? apt-eatmydata takes care of this automagically by integrating eatmydata seamlessly into apt itself! That means every package install is now turbocharged, no extra typing required.
How to Get It
Debian
If you're on Debian unstable/testing (or possibly soon in stable-backports), you can install it directly with:
sudo apt install apt-eatmydata
Ubuntu
Ubuntu users already enjoy faster package installation thanks to zstd-compressed packages, and to switch into an even higher gear I've backported apt-eatmydata to all supported Ubuntu releases. Just add this PPA and install:
And boom! Your apt install times are getting a serious upgrade. Let's run some tests:
# pre-download package to measure only the installation
$ sudo apt install -d linux-headers-6.8.0-53-lowlatency
...
# installation time is 9.35s without apt-eatmydata:
$ sudo time apt install linux-headers-6.8.0-53-lowlatency
...
2.30user 2.12system 0:09.35elapsed 47%CPU (0avgtext+0avgdata 174680maxresident)k
32inputs+1495216outputs (0major+196945minor)pagefaults 0swaps
$ sudo apt install apt-eatmydata
...
$ sudo apt purge linux-headers-6.8.0-53-lowlatency
# installation time is 3.17s with apt-eatmydata:
$ sudo time eatmydata apt install linux-headers-6.8.0-53-lowlatency
2.30user 0.88system 0:03.17elapsed 100%CPU (0avgtext+0avgdata 174692maxresident)k
0inputs+205664outputs (0major+198099minor)pagefaults 0swaps
apt-eatmydata just made installing Linux headers 3x faster!
But Wait, There's More!
If you're automating CI builds, there's even a GitHub Action to make your workflows faster, essentially doing what apt-eatmydata does, just setting it up in less than a second! Check it out here: GitHub Marketplace: apt-eatmydata
Should You Use It?
Warning: apt-eatmydata is not for all production environments. If your system crashes mid-install, you might end up with a broken package database. But for throwaway VMs, containers, and CI pipelines? It's an absolute game-changer. I use it on my laptop, too.
So go forth and install recklessly fast!
If you run into any issues, feel free to file a bug or drop a comment. Happy hacking!
(To accelerate your CI pipeline or local builds, check out Firebuild, which speeds up your builds, too!)
It is a while since I posted a summary of the free software and
open culture activities and projects I have worked on. Here is a
quick summary of the major ones from last year.
I guess the biggest project of the year has been migrating orphaned
packages in Debian without a version control system to have a git
repository on salsa.debian.org. When I started in April, around 450
orphaned packages needed git. I've since migrated around 250 of
the packages to a salsa git repository, and around 40 packages were
left when I took a break. Not sure who did the around 160 conversions
I was not involved in, but I am very glad I got some help on the
project. I stopped partly because some of the remaining packages
needed more disk space to build than I have available on my
development machine, and partly because some had a strange build setup
I could not figure out. I had a time budget of 20 minutes per
package; if a package proved problematic and likely to take longer,
I moved on to another one. I might continue later, if I manage to free
up some disk space.
Another rather big project was the translation to Norwegian Bokmål
and publishing of the first book ever published by a Sámi woman, the
Møter vi liv eller død? book by Elsa Laula, with a PD0 and CC-BY
license. I released it during the summer, and to my surprise it has
already sold several copies. As I suck at marketing, I did not expect
to sell any.
A smaller, but more long term project (for more than 10 years now),
and related to orphaned packages in Debian, is my project to ensure a
simple way to install hardware related packages in Debian when the
relevant hardware is present in a machine. It made a fairly big
advance forward last year, partly because I have been poking and
begging package maintainers and upstream developers to include
AppStream metadata XML in their packages. I've also released a few
new versions of the isenkram system with some robustness improvements.
Today 127 packages in Debian provide such information, allowing
isenkram-lookup to propose them. I will keep pushing until the
around 35 package names currently hard-coded in the isenkram package
are down to zero, so that only information provided by individual
packages is used for this feature.
As part of the work on AppStream, I have sponsored several packages
into Debian where the maintainer wanted to fix the issue but lacked
direct upload rights. I've also sponsored a few other packages, when
approached by the maintainer.
I would also like to mention two hardware related packages in
particular where I have been involved, the megactl and mfi-util
packages. Both work with the hardware RAID systems in several Dell
PowerEdge servers, and the first one is already available in Debian
(and of course, proposed by isenkram when used on the appropriate Dell
server); the other has been waiting for NEW processing since this autumn. I
manage several such Dell servers and would like the tools needed to
monitor and configure these RAID controllers to be available from
within Debian out of the box.
Vaguely related to hardware support in Debian, I have also been
trying to find ways to help out the Debian ROCm team, to improve the
support in Debian for my artificial idiocy (AI) compute node. So far
I have only uploaded one package, helped test the initial packaging of
llama.cpp, and tried to figure out how to get good speech recognition
like Whisper into Debian.
I am still involved in the LinuxCNC project, and organised a
developer gathering in Norway last summer. A new one is planned the
summer of 2025. I've also helped evaluate patches and uploaded new
versions of LinuxCNC into Debian.
After a 10 years long break, we managed to get a new and improved
upstream version of lsdvd released just before Christmas. As
I use it regularly to maintain my DVD archive, I was very happy to
finally get out a version supporting DVDDiscID useful for uniquely
identifying DVDs. I am dreaming of a Internet service mapping DVD IDs
to IMDB movie IDs, to make life as a DVD collector easier.
My involvement in Norwegian archive standardisation and the free
software implementation of the vendor neutral Noark 5 API continued
for the entire year. I've been pushing patches into both the API and
the test code for the API, participated in several editorial meetings
regarding the Noark 5 Tjenestegrensesnitt specification, submitted
several proposals for improvements to it. We also organised a
small seminar for people interested in Noark 5, and are organising a new
seminar in a month.
Part of the year was spent working on and coordinating a Norwegian
Bokmål translation of the marvellous children's book
Ada and
Zangemann, which focuses on the right to repair and control your own
property, and the value of controlling the software on the devices you
own. The translation is mostly complete, and is now waiting for a
transformation of the project and manuscript to use Docbook XML
instead of a home-made semi-text-based format. Great progress is
being made and the new book build process is almost complete.
I have also been looking at how companies in Norway can use free
software to report their accounting summaries to the Norwegian
government. Several new regulations make it very hard for companies
to use free software for accounting, and I would like to change
this. I have found a few drafts for opening up the reporting process, and
have read up on some of the specifications, but nothing much is
working yet.
These were just the tip of the iceberg, but I guess this blog post
is long enough now. If you would like to help with any of these
projects, please get in touch, either directly on the project mailing
lists and forums, or with me via email, IRC or Signal. :)
As usual, if you use Bitcoin and want to show your support of my
activities, please send Bitcoin donations to my address
15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
20 years ago, I got my Debian Developer account. I was 18 at the time, it was Shrove Tuesday and - as is customary - I was drunk when I got the email. There was so much that I did not know - which is also why the process took 1.5 years from the time I applied. I mostly only maintained a package or two. I'm still amazed that Christian Perrier and Joerg Jaspert put sufficient trust in me at that time. Nevertheless, now feels like a good time for a personal reflection on my involvement in Debian.
During my studies I took on more things. In January 2008 I joined the Release Team as an assistant, which taught me a lot about code review. I have been an Application Manager on the side.
Going to my first Debconf was really a turning point. My first one was Mar del Plata in Argentina in August 2008, when I was 21. That was quite an excitement, traveling that far from Germany for the first time. The personal connections I made there made quite the difference. It was also a big boost for motivation. I attended 8 (Argentina), 9 (Spain), 10 (New York), 11 (Bosnia and Herzegovina), 12 (Nicaragua), 13 (Switzerland), 14 (Portland), 15 (Germany), 16 (South Africa), and hopefully I'll make it to this year's in Brest. At all of them I did not see much of the countries, as I prioritized spending my time on Debian, even skipping some of the day trips in favor of team meetings. Yet I am very grateful to the project (and to my employer) for shipping me there.
I ended up as Stable Release Manager for a while, from August 2008 - when Martin Zobel-Helas moved into DSA - until I got dropped in March 2020. I think my biggest achievements were pushing for the creation of -updates in favor of a separate volatile archive, and a change of the update policy to allow for more common-sense updates in the main archive vs. the very strict "breakage or security" policy we had previously. I definitely need to call out Adam D. Barratt for being the partner in crime, holding up the fort for even longer.
In 2009 I got too annoyed at the existing wanna-build team not being responsive anymore and pushed for the system to be given to a new team. I did not build it, and significant contributions were made by other people (like Andreas Barth and Joachim Breitner, and later Aurelien Jarno). I mostly reworked the way the system was triggered, investigated when it broke and was around when people wanted things merged.
In the meantime I worked sys/netadmin jobs while at university, both paid and as a volunteer with the students' council. For a year or two I was the administrator of a System z mainframe IBM donated to my university. We had a mainframe course and I attended two related conferences. That's where my s390(x) interest came from, although credit for the port needs to go to Aurelien Jarno.
Since completing university in 2013 I have been working for a company for almost 12 years. Debian experience was very relevant to the job, and I went on maintaining a Linux distro or two at work - before venturing off into security hardening. People in megacorps - in my humble opinion - disappear from volunteer projects because a) they might previously have been studying and thus had a lot more time on their hands, and b) the job is too similar to the volunteer work, so the same brain cells used for work are exhausted and can't easily be reused for volunteer work. I kept maintaining a couple of things (buildds, some packages) - mostly out of a sense of commitment and responsibility - but otherwise scaled down my involvement. I also felt less connected as I dropped off IRC.
Last year I finally made it to Debian events again: MiniDebconf in Berlin, where we discussed the aftermath of the xz incident, and the Debian BSP in Salzburg. I rejoined IRC using the Matrix bridge. That also rekindled my involvement, with me guiding a new DD through NM and ending up in DSA. To be honest, only in the last two or three years have I felt like a (more) mature old-timer. I have a new gig at work lined up to start soon, and next to that I have sysadmining for Debian. It is pretty motivating to me that I can just get things done - something that is much harder to achieve at work due to organizational complexities. It balances out some frustration I'd otherwise have. The work is different enough to be enjoyable and the people I work with are great.
The future
I still think the work we do in Debian is important, as much as I see a lack of appreciation in a world full of containers. We are reaping most of the benefits of standing on the shoulders of giants and of great decisions made in the past (e.g. the excellent Debian policy, but also the organizational model) that made Debian what it is today.
Given the increase in size and complexity of what Debian ships - and the somewhat dwindling resource of developer time - it would benefit us to have better processes for large-scale changes across all packages. I greatly respect the horizontal efforts that are currently being driven and that suck up a lot of energy.
A lot of our infrastructure is also aging and not super well maintained. Many take it for granted that the services we have keep existing, but most are only maintained by a person or two, if even. Software stacks are aging and it is even a struggle to have all necessary packages in the next release. Hopefully I can contribute a bit or two to these efforts in the future.
Well, 2024 will be remembered, won't it? I guess 2025 already wants to
make its mark too, but let's not worry about that right now, and
instead let's talk about me.
A little over a year ago, I was gloating
over how I had such a great blogging year in 2022, and was considering
2023 to be average, then went on to gather more stats and traffic
analysis... Then I said, and I quote:
I hope to write more next year. I've been thinking about a few posts I
could write for work, about how things work behind the scenes at Tor,
that could be informative for many people. We run a rather old setup,
but things hold up pretty well for what we throw at it, and it's worth
sharing that with the world...
What a load of bollocks.
A bad year for this blog
2024 was the second worst year ever in my blogging history, tied with
2009 at a measly 6 posts for the year.
Loads of drafts
It's not that I have nothing to say: I have no less than five drafts
in my working tree here, not counting three actual drafts recorded
in the Git repository.
I just don't have time to wrap those things up. I think part of me is
disgusted by seeing my work stolen by large corporations to build
proprietary large language models while my idols have been pushed
to suicide for trying to share science with the world.
Another part of me wants to make those things just right. The
"tagged drafts" above are nothing more than a huge pile of chaotic
links, far from being useful for anyone other than me, and even
then.
The on-dying article, in particular, is becoming my nemesis. I've
been wanting to write that article for over 6 years now, I think. It's
just too hard.
Writing elsewhere
There's also the fact that I write for work already. A lot. Here are
the top-10 contributors to our team's wiki:
anarcat@angela:help.torproject.org$ git shortlog --numbered --summary --group="format:%al" | head -10
4272 anarcat
423 jerome
117 zen
116 lelutin
104 peter
58 kez
45 irl
43 hiro
18 gaba
17 groente
... but that's a bit unfair, since I've been there half a
decade. Here's the last year:
anarcat@angela:help.torproject.org$ git shortlog --since=2024-01-01 --numbered --summary --group="format:%al" | head -10
827 anarcat
117 zen
116 lelutin
91 jerome
17 groente
10 gaba
8 micah
7 kez
5 jnewsome
4 stephen.swift
So I still write the most commits! But to truly get a sense of the
amount I wrote in there, we should count actual changes. Here it is by
number of lines (from commandlinefu.com):
anarcat@angela:help.torproject.org$ git ls-files | xargs -n1 git blame --line-porcelain | sed -n 's/^author //p' | sort -f | uniq -ic | sort -nr | head -10
99046 Antoine Beaupré
6900 Zen Fu
4784 Jérôme Charaoui
1446 Gabriel Filion
1146 Jerome Charaoui
837 groente
705 kez
569 Gaba
381 Matt Traudt
237 Stephen Swift
That, of course, is the entire history of the git repo, again. We
should take only the last year into account, and probably ignore the
tails directory, as sneaky Zen Fu imported the entire docs from
another wiki there...
anarcat@angela:help.torproject.org$ find [d-s]* -type f -mtime -365 | xargs -n1 git blame --line-porcelain 2>/dev/null | sed -n 's/^author //p' | sort -f | uniq -ic | sort -nr | head -10
75037 Antoine Beaupré
2932 Jérôme Charaoui
1442 Gabriel Filion
1400 Zen Fu
929 Jerome Charaoui
837 groente
702 kez
569 Gaba
381 Matt Traudt
237 Stephen Swift
Pretty good! 75k lines. But those are the files that were modified in
the last year. If we go a little more nuts, we find that:
I wrote 126,116 words in that wiki, only in the last year. I also
deleted 37k words, so the final total is more like 89k words, but
still: that's about forty (40!) articles of the average size (~2k) I
wrote in 2022.
(And yes, I did go nuts and write a new log parser, essentially from
scratch, to figure out those word diffs. I did get the courage only
after asking GPT-4o for an example first, I must admit.)
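For the curious, the core of such a word-diff counter can be sketched in a one-liner. This is a hypothetical reconstruction, not the actual parser I wrote: git's --word-diff=porcelain output prefixes added tokens with + and removed tokens with - (while file headers use +++ and ---), so an awk filter can tally both. A real run would pipe `git log --since=2024-01-01 -p --word-diff=porcelain` into the awk below; here a tiny hand-written sample stands in:

```shell
# Hypothetical sketch: count words added/removed in git's
# --word-diff=porcelain output. Added tokens start with '+',
# removed ones with '-'; '+++'/'---' file headers are excluded.
# In a real run, replace the printf with:
#   git log --since=2024-01-01 -p --word-diff=porcelain
printf '%s\n' '+added words here' '-removed word' '~' ' context' \
  | awk '/^\+[^+]/ {a += NF} /^-[^-]/ {r += NF} END {printf "added=%d removed=%d\n", a, r}'
```

On this toy sample it prints `added=3 removed=2`; subtracting the two totals gives the net word count, as in the 126k minus 37k arithmetic above.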
Let's celebrate that again: I wrote 90 thousand words in that wiki
in 2024. According to Wikipedia, a "novella" is 17,500 to 40,000
words, which would mean I wrote about a novella and a novel, in the
past year.
But interestingly, looking at the repository analytics, I
certainly didn't write that much more in the past year. So that
alone cannot explain the lull in my production here.
Arguments
Another part of me is just tired of the bickering and arguing on the
internet. I have at least two articles in there that I suspect are
going to get me a lot of push-back (NixOS and Fish). I know how to
deal with this: you need to write well, consider the controversy,
spell it out, and defuse things before they happen. But that's hard
work and, frankly, I don't really care that much about what people
think anymore.
I'm not writing here to convince people. I stopped evangelizing a
long time ago. Now, I'm more into documenting, and teaching. And,
while teaching, there's a two-way interaction: when you give a
talk or workshop, people can ask questions, or respond, and you all
learn something. When you document, you quickly get told "where is
this? I couldn't find it" or "I don't understand this" or "I tried
that and it didn't work" or "wait, really? shouldn't we do X instead",
and you learn.
Here, it's static. It's my little soapbox where I scream in the
void. The only thing people can do is scream back.
Collaboration
So.
Let's see if we can work together here.
If you don't like something I say, disagree, or find something wrong
or to be improved, instead of screaming on social media or ignoring
me, try contributing back. This site here is backed by a git
repository and I promise to read everything you send there,
whether it is an issue or a merge request.
I will, of course, still read comments sent by email or IRC or social
media, but please, be kind.
You can also, of course, follow the latest changes on the TPA
wiki. If you want to catch up with the last year, some of the
"novellas" I wrote include:
TPA-RFC-71: Emergency email deployments, phase B: deploy a new
sender-rewriting mail forwarder, migrate mailing lists off the
legacy server to a new machine, migrate the remaining Schleuder list
to the Tails server, upgrade eugeni.
Hi
Cookiecutter is a tool for building coding project templates. It's often used to provide scaffolding to build lots of similar projects. I've seen it used to create Symfony projects and several cloud infrastructures deployed with Terraform. This tool was useful to accelerate the creation of new projects.
Since these templates were bound to evolve, the teams providing them relied on cruft to update the code provided by the template in their users' code. In other words, they wanted their users to apply a diff of the template modifications to their code.
At the beginning, all was fine. But problems began to appear during the lifetime of these projects.
What went wrong?
In both cases, we had the following scenario:
user team:
creates new project with cookiecutter template
makes modifications to their code, including code provided by the template
meanwhile, provider team:
makes modifications to cookiecutter template
releases new template version
asks its users to update code brought by the template using cruft
user team then:
runs cruft to update template code
discovers a lot of code conflicts (similar to git merge conflicts)
often rolls back the cruft update and gives up on template updates
The user team giving up on updates is a major problem, because these updates may bring security or compliance fixes.
Note that code conflicts seen with cruft are similar to git merge conflicts, but harder to resolve because, unlike with a git merge, there's no common ancestor, so 3-way merges are not possible.
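To see why the common ancestor matters, here is a small self-contained git session (the repository, branch and file names are made up for the demo; it assumes git >= 2.28 for `init -b`). Because git records the base version, two branches that each edit a different line merge without any conflict; cruft has no such base to reason from:

```shell
set -e
# Create a throwaway repository with one committed base version.
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main repo && cd repo
git config user.email demo@example.invalid
git config user.name demo
printf 'a\nb\nc\nd\ne\n' > f.txt
git add f.txt && git commit -qm base          # the common ancestor
# One branch edits the last line...
git checkout -qb feature
printf 'a\nb\nc\nd\ne-feature\n' > f.txt
git commit -qam feature
# ...the other edits the first line.
git checkout -q main
printf 'a-main\nb\nc\nd\ne\n' > f.txt
git commit -qam main
# The 3-way merge keeps both edits, with no conflict.
git merge -q --no-edit feature
cat f.txt
```

With only the template's new version and the user's diverged copy to compare, cruft is effectively doing a 2-way merge, which is why every divergence surfaces as a conflict.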
From an organisational point of view, the main problem is the ambiguous ownership of the functionality brought by template code: who owns this code? The provider team who writes the template, or the user team who owns the repository of the code generated from the template? Conflicts are bound to happen.
Possible solutions to get out of this tar pit:
Assume that templates are one-shot. Template updates are not practical in the long run.
Make sure that templates are as thin as possible. They should contain minimal logic.
Move most if not all logic into separate libraries or scripts that are owned by the provider team. This way, updates coming from the provider team can be managed like external dependencies, by upgrading the version of a dependency.
Of course your users won't be happy to be faced with a manual migration from the old big template to the new one with external dependencies. On the other hand, this may be easier to sell than updates based on cruft, since the painful work will happen only once. Further updates will be done by incrementing dependency versions (which can be automated with renovate).
If many projects are to be created with this template, it may be more practical to provide a CLI that will create a skeleton project. See for instance the terragrunt scaffold command.
My name is Dominique Dumont, I'm a devops freelancer. You can find the devops and audit services I propose on my website, or reach out to me on LinkedIn.
All the best