Like each month, have a look at the work funded by Freexian's Debian LTS offering.
Debian project funding
Folks from the LTS team, along with members of the Debian Android Tools team and Phil Morrel, have proposed work on the Java build tool, gradle, which is currently blocked due to the need to build with a plugin not available in Debian. The LTS team reviewed the project submission and it has been approved. After approval, we created a Request for Bids, which is active now.
You'll hear more about this through official Debian channels, but in the meantime, if you feel you can help with this project, please submit a bid. Thanks!
This September, Freexian set aside 2550 EUR to fund Debian projects.
We're looking forward to receiving more projects from various Debian teams! Learn more about the rationale behind this initiative in this article.
Debian LTS contributors
In September, 15 contributors were paid to work on Debian LTS; their reports are available:
Abhijith PA has returned his hours and marked himself inactive, at least for the time being. He did 0h out of 14h assigned, carried over 14h, and returned 28h.
Adrian Bunk did 19.5h (out of 24.75h assigned and 12.75h carried over from August), carrying over 18h to October.
Utkarsh Gupta did 24.75h (out of 24.75h assigned) but has not yet published his report.
Ola Lundqvist did 2h (out of 21h carried over from previous months), and is thus carrying over 19h to October.
Evolution of the situation
In September we released 30 DLAs. September was also the second month of Jeremiah coordinating LTS contributors.
Also, we would like to say that we are always looking for new contributors to LTS. Please contact Jeremiah if you are interested!
The security tracker currently lists 33 packages with a known CVE and the dla-needed.txt file has 26 packages needing an update.
Thanks to our sponsors
Sponsors that joined recently are in bold.
This month I accepted 224 packages and rejected 47. That is almost three times as many rejects as last month. Please be more careful and check your package twice before uploading. The overall number of packages that got accepted was 233.
This was my eighty-seventh month doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.
This month my overall workload was 24.75h. During that time I did LTS and normal security uploads of:
[DLA 2755-1] btrbk security update for one CVE
[DLA 2762-1] grilo security update for one CVE
[DLA 2766-1] openssl security update for one CVE
[DLA 2774-1] openssl1.0 security update for one CVE
[DLA 2773-1] curl security update for two CVEs
I also started to work on exiv2 and faad2.
Last but not least I did some days of frontdesk duties.
This month was the thirty-ninth ELTS month.
Unfortunately, during my allocated time I could not process any uploads. I worked on openssl, curl and squashfs-tools, but for one reason or another the prepared packages didn't pass all tests. In order to avoid regressions, I postponed the uploads (meanwhile, an ELA for curl was published).
Last but not least I did some days of frontdesk duties.
On my neverending golang challenge I again uploaded some packages either for NEW or as source upload.
As OdyX took a break from all Debian activities, I volunteered to take care of the printing packages. Please be merciful when something breaks after I do an upload. My first printing upload was hplip.
Let's say you're someone who happens to discover an AWS account number, and would
like to take a stab at guessing what IAM users might be valid in that account.
Tricky problem, right? Not with this One Weird Trick!
In your own AWS account, create a KMS key and try to reference an ARN
representing an IAM user in the other account as the principal. If the policy
is accepted by PutKeyPolicy, then that IAM user exists; if the error
says "Policy contains a statement with one or more invalid principals", then the
user doesn't exist.
As an example, say you want to guess at IAM users in AWS account 111111111111.
Then make sure a statement along these lines is in your key policy:

{
    "Sid": "Test existence of user",
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::111111111111:user/bob" },
    "Action": "kms:DescribeKey",
    "Resource": "*"
}
If that policy is accepted, then the account has an IAM user named bob.
Otherwise, the user doesn't exist. Scripting this is left as an exercise for
the reader.
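As a starting point for that exercise, here is a minimal sketch using boto3. The key ID, the "own account" number (222222222222), the candidate usernames, and the kms:DescribeKey action are all placeholder assumptions; the second policy statement merely keeps your own account as key administrator so PutKeyPolicy's lockout safety check doesn't object.

```python
import json


def probe_policy(own_account: str, target_account: str, username: str) -> str:
    """Build a KMS key policy naming the candidate IAM user as a principal."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [
            {
                # Keep our own account as admin so we don't lock ourselves out.
                "Sid": "Keep ourselves as admin",
                "Effect": "Allow",
                "Principal": {"AWS": f"arn:aws:iam::{own_account}:root"},
                "Action": "kms:*",
                "Resource": "*",
            },
            {
                # The actual probe: AWS validates this principal on PutKeyPolicy.
                "Sid": "Test existence of user",
                "Effect": "Allow",
                "Principal": {"AWS": f"arn:aws:iam::{target_account}:user/{username}"},
                "Action": "kms:DescribeKey",
                "Resource": "*",
            },
        ],
    })


def user_exists(kms, key_id: str, own_account: str,
                target_account: str, username: str) -> bool:
    """Probe a candidate IAM user by attempting to set the key policy."""
    from botocore.exceptions import ClientError  # only needed when talking to AWS
    try:
        kms.put_key_policy(
            KeyId=key_id,
            PolicyName="default",
            Policy=probe_policy(own_account, target_account, username),
        )
        return True  # policy accepted: the principal resolved, so the user exists
    except ClientError as err:
        if "invalid principals" in err.response["Error"]["Message"]:
            return False  # AWS could not resolve the principal
        raise


if __name__ == "__main__":
    import boto3
    kms = boto3.client("kms")
    for name in ("bob", "alice", "admin"):
        print(name, user_exists(kms, "your-key-id", "222222222222",
                                "111111111111", name))
```

Run against a throwaway KMS key in your own account; each candidate name costs one PutKeyPolicy call.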
Sadly, wildcards aren't accepted in the username portion of the ARN, otherwise
you could do some funky searching with ...:user/a*, ...:user/b*, etc. You
can't have everything; where would you put it all?
I did mention this to AWS as an account enumeration risk. They're of the opinion
that it's a good thing you can know what users exist in random other AWS accounts.
I guess that means this is a technique you can put in your toolbox, safe in the
knowledge it'll work forever.
Given this is intended behaviour, I assume you don't need to use a key policy
for this, but that's where I stumbled over it. Also, you can probably use it
to enumerate roles and anything else that can be a principal, but since I
don't see as much use for that, I didn't bother exploring it.
There you are, then. If you ever need to guess at IAM users in another AWS account,
now you can!
The goal behind reproducible builds is to ensure that no deliberate flaws have been introduced during compilation processes by promising or mandating that identical results are always generated from a given source. This allows multiple third parties to come to an agreement on whether a build was compromised or not via a system of distributed consensus.
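As a toy illustration of the idea (not part of the report itself): two independent "builds" of the same source should be bit-identical, which typically requires normalizing away incidental inputs such as timestamps. gzip makes a handy stand-in for a compiler here, since its -n flag omits the embedded file name and timestamp:

```shell
# Two runs of the same 'build' should produce bit-identical artifacts.
# gzip embeds the input's name and mtime by default; -n omits them,
# a typical normalization needed for reproducibility.
printf 'int main(void){return 0;}\n' > src.c
gzip -nc src.c > build1.gz
sleep 1                        # pretend the second build happens later
gzip -nc src.c > build2.gz
sha256sum build1.gz build2.gz  # identical checksums
```

Without -n, the two archives would differ in their embedded timestamps even though the source is unchanged.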
In these reports we outline the most important things that have been happening in the world of reproducible builds in the past month:
First mentioned in our March 2021 report, Martin Heinz published two blog posts on sigstore, a project that endeavours to offer software signing as a public good, "[the] software-signing equivalent to Let's Encrypt". The first post, entitled Sigstore: A Solution to Software Supply Chain Security, outlines more about the project and justifies its existence:
Software signing is not a new problem, so there must be some solution already, right? Yes, but signing software and maintaining keys is very difficult, especially for non-security folks, and the UX of existing tools such as PGP leaves much to be desired. That's why we need something like sigstore: an easy-to-use software/toolset for signing software artifacts.
Some time ago I checked Signal's reproducibility and it failed. I asked others to test in case I did something wrong, but nobody made any reports. Since then I have tried to test the Google Play Store version of the apk against one I compiled myself, and that doesn't match either.
BitcoinBinary.org was announced this month, which aims to be a repository of "Reproducible Build Proofs for Bitcoin Projects":
Most users are not capable of building from source code themselves, but we can at least get them able enough to check signatures and shasums. When reputable people can tell everyone they were able to reproduce the project's build, others at least have a secondary source of validation.
On our website this month, Bernhard M. Wiedemann fixed some broken links and Holger Levsen made a number of changes to the Who is Involved? page. On our mailing list, Magnus Ihse Bursie started a thread with the subject Reproducible builds on Java, which begins as follows:
I'm working for Oracle in the Build Group for OpenJDK, which is primarily responsible for creating a built artifact of the OpenJDK source code. […] For the last few years, we have worked on a low-effort, background-style project to make the build of OpenJDK itself reproducible. We've come far, but there are still issues I'd like to address.
diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can also provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 183, 184 and 185, as well as performing significant triaging of merge requests and other issues, in addition to making the following changes:
Support a newer format version of the R language s .rds files. 
Don t call close_archive when garbage collecting Archive instances, unless open_archive definitely returned successfully. This prevents, for example, an AttributeError where PGPContainer s cleanup routines were rightfully assuming that its temporary directory had actually been created. 
Fix (and test) the comparison of R language s .rdb files after refactoring temporary directory handling. 
Ensure that RPM archives exists in the Debian package description, regardless of whether python3-rpm is installed or not at build time. 
Use our assert_diff routine in tests/comparators/test_rdata.py. 
Move diffoscope.versions to diffoscope.tests.utils.versions. 
Reformat a number of modules with Black. 
In addition, the following changes were also made:
Fix an autopkgtest caused by the androguard module not being in the (expected) python3-androguard Debian package. 
Appease a shellcheck warning in debian/tests/control.sh. 
Ignore a warning from h5py in our tests that doesn t concern us. 
Drop a trailing .1 from the Standards-Version field, as it's not required.
Zbigniew Jędrzejewski-Szmek:
Stop using the deprecated distutils.spawn.find_executable utility. 
Adjust an LLVM-related test for LLVM version 13. 
Update invocations of llvm-objdump. 
Adjust a test with a one-byte text file for file version 5.40. 
And, finally, Benjamin Peterson added a --diff-context option to control the unified diff context size, and Jean-Romain Garnier fixed the Mach-O comparator for architectures other than x86-64.
The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:
#990246 filed against vlc (forwarded upstream)
The Reproducible Builds project runs a testing framework at tests.reproducible-builds.org, to check packages and other artifacts for reproducibility. This month, the following changes were made:
Drop my package rebuilder prototype as it s not useful anymore. 
Schedule old packages in Debian bookworm. 
Stop scheduling packages for Debian buster. 
Don t include PostgreSQL debug output in package lists. 
Detect Python library mismatches during build in the node health check. 
Update a note on updating the FreeBSD system. 
Silence a warning from Git. 
Update a setting to reflect that Debian bookworm is the new testing. 
Upgrade the PostgreSQL database to version 13. 
Roland Clobus (Debian live image generation):
Workaround non-reproducible config files in the libxml-sax-perl package. 
Use the new DNS for the snapshot service. 
Note that the armhf architecture also varies systematically by kernel.
If you are interested in contributing to the Reproducible Builds project, please visit the Contribute page on our website. You can also get in touch with us via:
The fifteenth release of littler as a CRAN package just landed, following the now fifteen-year history (!!) of a package started by Jeff in 2006 and joined by me a few weeks later.
littler is the first command-line interface for R, as it predates Rscript. It allows for piping as well as for shebang scripting via #!, uses command-line arguments more consistently, and still starts faster. It also always loaded the methods package, which Rscript only started to do in recent years.
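Both modes look something like the following sketch (assuming littler's r binary is installed and on your PATH; hello.r is an invented example name):

```shell
# Create a littler script using the #! mechanism:
printf '#!/usr/bin/env r\ncat("Hello from littler\\n")\n' > hello.r
chmod +x hello.r
# With littler installed you would then run it directly:   ./hello.r
# Piping expressions straight into r also works:           echo 'cat(2 + 2, "\n")' | r
```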
littler lives on Linux and Unix, has its difficulties on macOS due to yet-another-braindeadedness there (whoever thought case-insensitive filesystems as a default were a good idea?) and simply does not exist on Windows (yet the build system could be extended; see RInside for an existence proof, and volunteers are welcome!). See the FAQ vignette on how to add it to your PATH.
A few examples are highlighted at the Github repo, as well as in the examples vignette.
This release updates the helper scripts that download nightlies of RStudio Server and Desktop to their new naming scheme, adds a downloader for Quarto, extends the roxy.r wrapper with a new option, updates the configure settings as requested by CRAN, and more. See the NEWS file entry below for details.
Changes in littler version 0.3.14 (2021-10-05)
Changes in examples
Updated RStudio download helper to changed file names
Added a new option to roxy.r wrapper
Added a downloader for Quarto command-line tool
Changes in package
The configure files were updated to the standard of autoconf version 2.69 following a CRAN request
Evolution of the situation
In August we released 30 DLAs.
This is the first month of Jeremiah coordinating LTS contributors. We would like to thank Holger Levsen for his work on this role up to now.
Also, we would like to remark once again that we are constantly looking for new contributors. Please contact Jeremiah if you are interested!
The security tracker currently lists 73 packages with a known CVE and the dla-needed.txt file has 29 packages needing an update.
Thanks to our sponsors
Sponsors that joined recently are in bold.
My home automation setup has been fairly static recently; it does what we need and generally works fine. One area I think could be better is controlling it: we have access to Home Assistant on our phones, and the Alexa downstairs can control things, but there are no smart assistants upstairs, and sometimes it would be nice to just push a button to turn on the light rather than having to get my phone out. The fact that the UK generally doesn't have a neutral wire in wall switches means looking at something battery powered, which means wifi based devices are a poor choice, and it's necessary to look at something lower power like Zigbee or Z-Wave.
Zigbee seems like the better choice; it's a more open standard and there are generally more devices easily available from what I've seen (e.g. Philips Hue and IKEA TRÅDFRI). So I bought a couple of Xiaomi Mi Smart Home Wireless Switches and a CC2530 module, and then ignored it all for the best part of a year. Finally I got around to flashing the Z-Stack firmware that Koen Kanters kindly provides. (Insert rant about hardware manufacturers that require pay-for toolchains. The CC2530 is even worse because it's 8051 based, so SDCC should be able to compile for it, but the TI Zigbee libraries are only available in a format suitable for IAR's Embedded Workbench.)
Flashing the CC2530 is a bit of a faff. I ended up using the CCLib fork by Stephan Hadinger, which supports the ESP8266. The nice thing about the CC2530 module is that it has 2.54mm pitch pins, so it is nice and easy to jumper up. It then needs a USB/serial dongle to connect it to a suitable machine, where I ran Zigbee2MQTT. This scares me a bit, because it's a bunch of Node.js pulling in a chunk of stuff off npm. On the flip side, it Just Works and I was able to pair the Xiaomi button with the device and see MQTT messages that I could then use with Home Assistant. So of course I tore down that setup and ordered a CC2531 (the variant with USB as part of the chip). The idea here was that my test setup was upstairs with my laptop, and I wanted something hooked up in a more permanent fashion.
Once the CC2531 arrived I got distracted writing support for the Desk Viking to support CCLib (and modified it a bit for Python 3 and some speed-ups). I flashed the dongle with the Z-Stack Home 1.2 (default) firmware and plugged it into the house server. At this point I more closely investigated what Home Assistant had to offer in terms of Zigbee integration. It turns out the ZHA integration has support for the ZNP protocol that the TI devices speak (I'm reasonably sure it didn't when I first looked some time ago), so that seemed like a better option than adding the MQTT layer in the middle.
I hit some complexity passing the dongle (which turns up as /dev/ttyACM0) through to the Home Assistant container. First I needed an override file in /etc/systemd/nspawn/hass.nspawn:
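The override file itself isn't reproduced here, but based on the description (binding the device into the container, plus the VirtualEthernet setting) a plausible sketch would be:

```
# /etc/systemd/nspawn/hass.nspawn (sketch; exact contents are an assumption)
[Files]
Bind=/dev/ttyACM0

[Network]
VirtualEthernet=yes
```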
(I'm not clear why the VirtualEthernet needed to exist; without it networking broke entirely, but I couldn't see why it worked with no override file.)
A udev rule on the host to change the ownership of the device file, so that the root user and dialout group in the container could see it, was also necessary; so into /etc/udev/rules.d/70-persistent-serial.rules went:
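The rule itself isn't preserved here; for a CC2531-class dongle it might look like the following (the vendor/product IDs are assumptions for the TI dongle; check lsusb for your device):

```
# /etc/udev/rules.d/70-persistent-serial.rules (sketch)
SUBSYSTEM=="tty", ATTRS{idVendor}=="0451", ATTRS{idProduct}=="16a8", OWNER="root", GROUP="dialout"
```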
In the container itself I had to switch PrivateDevices=true to PrivateDevices=false in the home-assistant.service file (which took me a while to figure out; yay for locking things down and then needing to use those locked-down things).
Finally, I added the hass user to the dialout group. At that point I was able to go and add the integration with Home Assistant, and add the button as a new device. Excellent. I did find I needed a newer version of Home Assistant to get support for the button, however. I was still on 2021.1.5 due to upstream dropping support for Python 3.7 and my not being prepared to upgrade to Debian 11 until it was actually released, so the version of zha-quirks didn't have the correct info. Upgrading to Home Assistant 2021.8.7 sorted that out.
There was another slight problem: range. Really I want to use the button upstairs. The server is downstairs, and most of my internal walls are brick. The solution turned out to be a TRÅDFRI socket, which replaced the existing ESP8266 wifi socket controlling the stair lights. That was close enough to the server to have a decent signal, and it acts as a Zigbee router, so it provides a strong enough signal for devices upstairs. The normal approach seems to be to have a lot of Zigbee light bulbs, but I have mostly kept overhead lights uncontrolled; we don't use them day to day, and they provide a nice fallback if the home automation has issues.
Of course installing Zigbee for a single button would seem to be a bit pointless. So I ordered up a Sonoff door sensor to put on the front door (much smaller than expected - those white boxes on the door are it in the picture above). And I have a 4 gang wireless switch ordered to go on the landing wall upstairs.
Now I've got a Zigbee setup, there are a few more things I'm thinking of adding where wifi isn't an option due to the need for battery operation (monitoring the external gas meter springs to mind). The CC2530 probably isn't suitable for my needs, as I'll need to write some custom code to handle the bits I want, but there do seem to be some ARM based devices which might well prove suitable…
One of the assumptions baked deeply into US society (and many others) is
that people are largely defined by the work they do, and that work is the
primary focus of life. Even in Marxist analysis, which is otherwise
critical of how work is economically organized, work itself reigns
supreme. This has been part of the feminist critique of both capitalism
and Marxism, namely that both devalue domestic labor that has
traditionally been unpaid, but even that criticism is normally framed as
expanding the definition of work to include more of human activity. A
few exceptions aside, we shy away from
fundamentally rethinking the centrality of work to human experience.
The Problem with Work begins as a critical analysis of that
centrality of work and a history of some less-well-known movements against
it. But, more valuably for me, it becomes a discussion of the types and
merits of utopian thinking, including why convincing other people is not
the only purpose for making a political demand.
The largest problem with this book will be obvious early on: the writing
style ranges from unnecessarily complex to nearly unreadable. Here's an
excerpt from the first chapter:
The lack of interest in representing the daily grind of work routines
in various forms of popular culture is perhaps understandable, as is
the tendency among cultural critics to focus on the animation and
meaningfulness of commodities rather than the eclipse of laboring
activity that Marx identifies as the source of their fetishization
(Marx 1976, 164-65). The preference for a level of abstraction that
tends not to register either the qualitative dimensions or the
hierarchical relations of work can also account for its relative
neglect in the field of mainstream economics. But the lack of
attention to the lived experiences and political textures of work
within political theory would seem to be another matter. Indeed,
political theorists tend to be more interested in our lives as
citizens and noncitizens, legal subjects and bearers of rights,
consumers and spectators, religious devotees and family members, than
in our daily lives as workers.
This is only a quarter of a paragraph, and the entire book is written like this.
I don't mind the occasional use of longer words for their precise meanings
("qualitative," "hierarchical") and can tolerate the academic habit of
inserting mostly unnecessary citations. I have less patience with the
meandering and complex sentences, excessive hedge words ("perhaps," "seem
to be," "tend to be"), unnecessarily indirect phrasing ("can also account
for" instead of "explains"), or obscure terms that are unnecessary to the
sentence (what is "animation of commodities"?). And please have mercy and
throw a reader some paragraph breaks.
The writing style means substantial unnecessary effort for the reader,
which is why it took me six months to read this book. It stalled all of
my non-work non-fiction reading and I'm not sure it was worth the effort.
That's unfortunate, because there were several important ideas in here
that were new to me.
The first was the overview of the "wages for housework" movement, which I
had not previously heard of. It started from the common feminist position
that traditional "women's work" is undervalued and advocated taking the
next logical step of giving it equality with paid work by making it
paid work. This was not successful, obviously, although the increasing
prevalence of day care and cleaning services has made it partly true
within certain economic classes in an odd and more capitalist way. While
I, like Weeks, am dubious this was the right remedy, the observation
that household work is essential to support capitalist activity but is
unmeasured by GDP and often uncompensated both economically and socially
has only become more accurate since the 1970s.
Weeks argues that the usefulness of this movement should not be judged by
its lack of success in achieving its demands, which leads to the second
interesting point: the role of utopian demands in reframing and expanding
a discussion. I normally judge a political demand on its effectiveness at
convincing others to grant that demand, by which standard many activist
campaigns (such as wages for housework) are unsuccessful. Weeks points
out that making a utopian demand changes the way the person making the
demand perceives the world, and this can have value even if the demand
will never be granted. For example, to demand wages for housework
requires rethinking how work is defined, what activities are compensated
by the economic system, how such wages would be paid, and the implications
for domestic social structures, among other things. That, in turn, helps
in questioning assumptions and understanding more about how existing
society sustains itself.
Similarly, even if a utopian demand is never granted by society at large,
forcing it to be rebutted can produce the same movement in thinking in
others. In order to rebut a demand, one has to take it seriously and
mount a defense of the premises that would allow one to rebut it. That
can open a path to discussing and questioning those premises, which can
have long-term persuasive power apart from the specific utopian demand.
It's a similar concept to the Overton Window, but with more nuance: the
idea isn't solely to move the perceived range of accepted discussion, but
to force society to examine its assumptions and premises well enough to
defend them, or possibly discover they're harder to defend than one might expect.
Weeks applies this principle to universal basic income, as a utopian
demand that questions the premise that work should be central to personal
identity. I kept thinking of the Black Lives Matter movement and the
demand to abolish the police, which (at least in popular discussion) is a
more recent example than this book but follows many of the same
principles. The demand itself is unlikely to be met, but to rebut it
requires defending the existence and nature of the police. That in turn
leads to questions about the effectiveness of policing, such as clearance
rates (which are far lower than one might have assumed). Many more
examples came to mind. I've had that experience of discovering problems
with my assumptions I'd never considered when debating others, but had not
previously linked it with the merits of making demands that may never be granted.
The book closes with an interesting discussion of the types of utopias,
starting from the closed utopia in the style of Thomas More in which the
author sets up an ideal society. Weeks points out that this sort of
utopia tends to collapse with the first impossibility or inconsistency the
reader notices. The next step is utopias that acknowledge their own
limitations and problems, which are more engaging (she cites Le Guin's
The Dispossessed). More conditional
than that is the utopian manifesto, which only addresses part of society.
The least comprehensive and the most open is the utopian demand, such as
wages for housework or universal basic income, which asks for a specific
piece of utopia while intentionally leaving unspecified the rest of the
society that could achieve it. The demand leaves room to maneuver; one
can discuss possible improvements to society that would approach that
utopian goal without committing to a single approach.
I wish this book were better-written and easier to read, since as it
stands I can't recommend it. There were large sections that I read but
didn't have the mental energy to fully decipher or retain, such as the
extended discussion of Ernst Bloch and Friedrich Nietzsche in the context
of utopias. But that way of thinking about utopian demands and their
merits for both the people making them and for those rebutting them, even
if they're not politically feasible, will stick with me.
Rating: 5 out of 10
Three years ago, I organized a fun and most interesting colloquium at
Facultad de Ingeniería, UNAM, about privacy and anonymity online.
I would have loved to share this earlier with the world, but the
university's processes are quite slow (and, to be fair, I also took
quite a bit of time to push things through). But today, I'm finally
happy to share the result of that work with all of you. We managed to
get 11 of the talks in the colloquium as articles. The back-cover text
reads (in Spanish):
We live in an era where human-to-human interactions are
more and more often mediated by technology. This, of course, means
everything leaves a digital trail, a trail that can follow and identify us.
Privacy is recognized, however, as a human right, although one that
is under growing threat. Anonymity is the best tool to secure
it. Throughout history, clear steps have been taken legally,
technically and technologically to defend it. Various studies point
out that this is not only a known issue for the network's users, but that a
large majority have searched for alternatives to protect their privacy.
This book stems from a colloquium held by *Laboratorio de
Investigación y Desarrollo de Software Libre* (LIDSOL) of Facultad de
Ingeniería, UNAM, towards the end of 2018, where we invited experts
from disciplines as far apart as law and systems development,
psychology and economics, to contribute their experiences to a common reflection.
If this interests you, you can get the book at our institutional repository.
Oh, and what about the birds?
In Spanish (Mexican Spanish only?), we have a saying, "hay pájaros en el
alambre" ("there are birds on the wire"), meaning watch your words, as
uninvited people might be listening, like the birds resting on the wires
over which phone calls used to be made (back in the day when wiretapping
was that easy). I found the design proposed by our editor ingenious and
very fitting for the topic.
Another small release of the tidyCpp package arrived on CRAN overnight. The package offers a clean C++ layer (as well as one small C++ helper class) on top of the C API for R, which aims to make use of this robust (if awkward) C API a little easier and more consistent. See the vignette for motivating examples.
The Protect class now uses the default methods for copy and move constructors and assignment, allowing for wider use of the class. The small NumVec class now uses it for its data member.
The NEWS entry (which I failed to update for the releases) follows.
Changes in tidyCpp version 0.0.5 (2021-09-16)
The Protect class uses default copy and move assignments and constructors
Colson Whitehead's latest novel, Harlem Shuffle, was always going to be widely reviewed, if only because his last two books won Pulitzer prizes. Still, after enjoying both The Underground Railroad and The Nickel Boys, I was certainly going to read his next book regardless of what the critics were saying. Indeed, it was actually quite agreeable to float above the manufactured energy of the book's launch.
Saying that, I was encouraged to listen to an interview with the author by Ezra Klein. Now I had heard Whitehead speak once before when he accepted the Orwell Prize in 2020, and once again he came across as a pretty down-to-earth guy. Or if I were to emulate the detached and cynical tone Whitehead embodied in The Nickel Boys, after winning so many literary prizes in the past few years, he has clearly rehearsed how to respond to the cliched questions authors must be asked in every interview. With the obligatory throat-clearing of 'so, how did you get into writing?', for instance, Whitehead replies with his part of the catechism that 'It seemed like being a writer could be a cool job. You could work from home and not talk to people.' The response is the right combination of cute and self-effacing... and with its slight tone-deafness towards enforced isolation, it was no doubt honed before Covid-19.
Harlem Shuffle tells three separate stories about Ray Carney, a furniture salesman and 'fence' for stolen goods in New York in the 1960s. Carney doesn't consider himself a genuine criminal though, and there's a certain logic to his relativistic morality. After all, everyone in New York City is on the take in some way, and if some 'lightly used items' in Carney's shop happened to have had 'previous owners', well, that's not quite his problem. 'Nothing solid in the city but the bedrock,' as one character dryly observes. Yet as Ezra pounces on in his NYT interview mentioned above, the focus on the Harlem underworld means there are very few women in the book, and Whitehead's circular response ('ah well, it's a book about the criminals at that time!') was a little unsatisfying. Not only did it feel uncharacteristically slippery of someone justly lauded for his unflinching power of observation (after all, it was the author who decided what to write about in the first place), it foreclosed on the opportunity to delve into why the heist and caper genres (from The Killing, The Feather Thief, Ocean's 11, etc.) have historically been a 'male' mode of storytelling.
Perhaps knowing this to be the case, the conversation quickly steered towards Ray Carney's wife, Elizabeth, the only woman in the book who could be said to possess some plausible interiority. The following off-hand remark from Whitehead caught my attention:
My wife is convinced that [Elizabeth] knows everything about Carney's criminal life, and is sort of giving him a pass. And I'm not sure if that's true. I have to figure out exactly what she knows and when she knows it and how she feels about it.
I was quite taken by this, although not simply due to its effect on the story itself. As in, it immediately conjured up a charming picture of Whitehead's domestic arrangements: not only does Whitehead's wife feel free to disagree with what one of Whitehead's 'own' characters knows or believes, but Colson has no problem whatsoever sharing that disagreement with the public at large. (It feels somehow natural that Whitehead's wife believes her counterpart knows more than she lets on, whilst Whitehead himself imbues the protagonist's wife with a kind of neo-Victorian innocence.) I'm minded to agree with Whitehead's partner myself, if only due to the passages where Elizabeth is studiously ignoring Carney's otherwise unexplained freak-outs.
But all of these meta-thoughts simply underline just how emancipatory the Death of the Author can be. This product of academic literary criticism (the term was coined by Roland Barthes' 1967 essay of the same name) holds that the original author's intentions, ideas or biographical background carry no especial weight in determining how others should interpret their work. It is usually understood as meaning that a writer's own views are no more valid or 'correct' than the views held by someone else. (As an aside, I've found that most readers who encounter this concept for the first time have been reading books in this way since they were young. But the opposite is invariably true with cinephiles, who often have a bizarre obsession with researching or deciphering the 'true' interpretation of a film.) And with all that in mind, can you think of a more wry example of the freeing (and fun) nature of the Death of the Author than an author's own partner dissenting from their (Pulitzer Prize-winning) husband on the position of a lynchpin character?
As it turns out, the reviews for Harlem Shuffle have been almost universally positive, and after reading it in the two days after its release, I would certainly agree it is an above-average book. But it didn't quite take hold of me in the way that The Underground Railroad or The Nickel Boys did, especially the later chapters of The Nickel Boys that were set in contemporary New York and could thus make some (admittedly fairly explicit) connections from the 1960s to the present day. That kind of connection is not there in Harlem Shuffle, or at least I did not pick up on it during my reading.
I can see why one might take exception to that, though. For instance, it is certainly true that the week-long Harlem Riot forms a significant part of the plot, and some events in particular are entirely contingent on the ramifications of this momentous event. But it's difficult to argue that the riot's impact is truly integral to the story, so not only is this uprising against police brutality almost regarded as a background event, but any contemporary allusion to the murder of George Floyd is subsequently watered down. It's nowhere near the historical rubbernecking of Forrest Gump (1994), of course, but that's not a battle you should ever be fighting.
Indeed, whilst a certain smoothness of affect is to be priced into the Whitehead reading experience, my initial overall reaction to Harlem Shuffle was fairly flat, despite all the action and intrigue on the page. The book perhaps betrays its origins as a work conceived during quarantine: after all, it is essentially composed of three loosely connected novellas, almost as if the unreality and mental turbulence of lockdown prevented the author from performing the psychological 'deep work' of producing a novel-length text with his usual depth of craft. A few other elements chimed with this being a 'lockdown novel' as well, particularly the book's preoccupation with the sheer physicality of the city compared to the usual complex interplay between its architecture and its inhabitants. This felt like it had been directly absorbed into the book from the author walking around his deserted city, and thus being able to take in details for the first time:
The doorways were entrances into different cities; no, different entrances into one vast, secret city. Ever close, adjacent to all you know, just underneath. If you know where to look.
And I can't fail to mention that you can almost touch Whitehead's sublimated hunger to eat out again as well:
Stickups were chops: they cook fast and hot, you're in and out. A stakeout was ribs: fire down low, slow, taking your time.
Sometimes when Carney jumped into the Hudson when he was a kid, some of that stuff got into his mouth. The Big Apple Diner served it up and called it coffee.
More seriously, however, the relatively thin personalities of minor characters then reminded me of the simulacrum of Zoom-based relationships, and the essentially unsatisfactory endings to the novellas felt reminiscent of lockdown pseudo-events that simply fizzle out without a bang. One of the stories ties up loose ends with: 'These things were usually enough to terminate a mob war, and they appeared to end the hostilities in this case as well.' They did? Well, okay, I guess.
Still, it would be unfair to characterise myself as 'disappointed' with the novel, and none of this piece should be taken as really deep criticism. The book certainly was entertaining enough, and pretty funny in places as well:
Carney didn't have an etiquette book in front of him, but he was sure it was bad manners to sit on a man's safe.
The manager of the laundromat was a scrawny man in a saggy undershirt painted with sweat stains. Launderer, heal thyself.
Yet I can't shake the feeling that every book you write is a book that you don't, and so we might need to hold out a little longer for Whitehead's 'George Floyd novel'. (Although it is for others to say how much of this sentiment is the expectations of a White Reader for The Black Author to ventriloquise the pain of 'their' community.)
Some room for personal critique is surely permitted. I dearly missed the junk food energy of the dry and acerbic observations that run through Whitehead's previous work. At one point he had a good line on the model tokenisation that lurks behind 'The First Negro to...' labels, but the callbacks to this idea ceased without any payoff. Similar things happened with the not-so-subtle critiques of the American Dream:
'Entrepreneur?' Pepper said the last part like manure. 'That's just a hustler who pays taxes.'
One thing I've learned in my job is that life is cheap, and when things start getting expensive, it gets cheaper still.
Ultimately, though, I think I just wanted more. I wanted a deeper exploration of how the real power in New York is not wielded by individual street hoodlums or even the cops but in the form of real estate, essentially serving as a synecdoche for Capital as a whole. (A recent take on this can be felt in Jed Rothstein's 2021 documentary, WeWork: Or the Making and Breaking of a $47 Billion Unicorn, and it is perhaps pertinent to remember that the US President at the time this novel was written was affecting to be a real estate tycoon.) Indeed, just like the concluding scenes of J. J. Connolly's Layer Cake, although you can certainly pull off a cool heist against the Man, power ultimately resides in those who control the means of production... and a homespun furniture salesman on the corner of 125 & Morningside just ain't that. There are some nods to this kind of analysis in the conclusion of the final story ('Their heist unwound as if it had never happened, and Van Wyck kept throwing up buildings.'), but, again, I would have simply liked more.
And when I attempted to file this book away into the broader media landscape, given the current cultural visibility of 1960s pop culture (e.g. One Night in Miami (2020), Judas and the Black Messiah (2021), Summer of Soul (2021), etc.), Harlem Shuffle also seemed like a missed opportunity to critically analyse our (highly qualified) longing for the civil rights era. I can certainly understand why we might look fondly on the cultural products from a period when politics was less alienated, when society was less atomised, and when it was still possible to imagine meaningful change, but in this dimension at least, Harlem Shuffle seems to merely contribute to this nostalgic escapism.
A week after the 0.2.5 release bringing the recent Google Summer of Code for RcppSMC to CRAN, we have a minor bug-fix release consisting, essentially, of one line. Everybody's favourite OS and toolchain did not know what to make of pow(), and I seemingly failed to test there, so shame on me. But now all is good thanks to proper use of std::pow().
RcppSMC provides Rcpp-based bindings to R for the Sequential Monte Carlo Template Classes (SMCTC) by Adam Johansen described in his JSS article. Sequential Monte Carlo is also referred to as Particle Filter in some contexts. The package now features the Google Summer of Code work by Leah South in 2017, and by Ilya Zarubin in 2021.
This release is summarized below.
Changes in RcppSMC version 0.2.5 (2021-09-09)
Compilation under Solaris is aided via std::pow use (Dirk in #65 fixing #64)
Yeah, Bullseye is released, thanks a lot to everybody involved!
This month I accepted 242 and rejected 18 packages. The overall number of packages that got accepted was 253.
This was my eighty-sixth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.
This month my overall workload was 23.75h. During that time I did LTS and normal security uploads of:
[DLA 2738-1] c-ares security update for one CVE
[DLA 2746-1] scrollz security update for one CVE
[DLA 2747-1] ircii security update for one CVE
[DLA 2748-1] tnef security update for one CVE
[DLA 2749-1] gthumb security update for one CVE
[DLA 2752-1] squashfs-tools security update for one CVE
prepared debdiffs for squashfs-tools in Buster and Bullseye, which will result in DSA 4967
prepared debdiffs for btrbk in Buster and Bullseye
I also started to work on openssl, grilo and had to process packages from NEW on security-master.
As the CVE of btrbk was later marked as no-dsa, an upload to stable and oldstable is needed now.
Last but not least I did some days of frontdesk duties.
This month was the thirty-eighth ELTS month.
During my allocated time I uploaded:
ELA-474-1 for c-ares
ELA-480-1 for squashfs-tools
I also started to work on openssl.
Last but not least I did some days of frontdesk duties.
This month I uploaded new upstream versions of:
Debian 11 (codename Bullseye) was recently released. This was the smoothest upgrade I've experienced in some 20 years as a Debian user. In my haste, I completely forgot to first upgrade dpkg and apt, doing a straight dist-upgrade. Nonetheless, everything worked out of the box. No unresolved dependency cycles. Via my last-mile Gigabit connection, it took about 5 minutes to upgrade and reboot. Congratulations to everyone who made this possible!
Since the upgrade, only a handful of bugs were found. I filed bug reports. Over these past few days, maintainers have started responding. In one particular case, my report exposed a CVE caused by copy-pasted code between two similar packages. The source package fixed their code to something more secure a few years ago, while the destination package missed it. The situation has been brought to Debian's security team's attention and should be fixed over the next few days.
Having recently experienced hard-disk problems on my main desktop, upgrading to Bullseye made me revisit a few issues. One of these was the possibility of transitioning to BTRFS. Last time I investigated the possibility was back when Ubuntu briefly switched their default filesystem to BTRFS. Back then, my feeling was that BTRFS wasn't ready for mainstream. For instance, the utility to convert an EXT2/3/4 partition to BTRFS corrupted the end of the partition. No thanks. However, in recent years, many large-scale online services have migrated to BTRFS and seem to be extremely happy with the result. Additionally, Linux kernel 5 added useful features such as background defragmentation. This got me pondering whether now would be a good time to migrate to BTRFS. Sadly it seems that the stock kernel shipping with Bullseye doesn't have any of these advanced features enabled in its configuration. Oh well.
The only point that has become problematic is my Geode hosts. For one thing, upstream Rust maintainers have decided to ignore the fact that i686 is a specification and arbitrarily added compiler flags for more recent x86-32 CPUs to their i686 target. While Debian Rust maintainers have purposely downgraded the target, rustc still produces binaries that the Geode LX (essentially an i686 without PAE) cannot process. This affects fairly basic packages such as librsvg, which breaks SVG image support for a number of dependencies. Additionally, there have been persistent problems with systemd crashing on my Geode hosts whenever daemon-reload is issued. Then, a few days ago, problems started occurring with C++ binaries, because GCC-11 upstream enabled flags for more recent CPUs in their default i686 target. While I realize that SSE and similar recent CPU features produce better binaries, I cannot help but feel that treating CPU targets as anything else than a specification is a mistake. i686 is a specification. It is not a generic equivalent to x86-32.
I have been using the awesome window manager for 10 years. It is a
tiling window manager, configurable and extendable with the
Lua language. Using a general-purpose programming language to
configure every aspect is a double-edged sword. Due to laziness and
the apparent difficulty of adapting my configuration (about 3000
lines) to newer releases, I was stuck with the 3.4
version, whose last release is from 2013.
It was time for a rewrite. Instead, I have switched to the i3 window
manager, lured by the possibility to migrate to Wayland and
Sway later with minimal pain. Using an embedded interpreter for
configuration is not as important to me as it was in the past: it
brings both complexity and brittleness.
The window manager is only one part of a desktop environment. There
are several options for the other components. I am also introducing
them in this post.
i3: the window manager
i3 aims to be a minimal tiling window manager. Its documentation
can be read from top to bottom in less than an hour. i3 organizes
windows in a tree. Each non-leaf node contains one or several
windows and has an orientation and a layout. This information
arbitrates the window positions. i3 features three layouts: split,
stacking, and tabbed. They are demonstrated in the below screenshot:
Most of the other tiling window managers, including the awesome
window manager, use predefined layouts. They usually feature a large
area for the main window and another area divided among the remaining
windows. These layouts can be tuned a bit, but you mostly stick to a
couple of them. When a new window is added, the behavior is quite
predictable. Moreover, you can cycle through the various windows
without thinking too much as they are ordered.
i3 is more flexible with its ability to build any layout on the fly, but
it can feel quite overwhelming as you need to visualize the tree in
your head. At first, it is not unusual to find yourself with a complex
tree with many useless nested containers. Moreover, you have to
navigate windows using directions. It takes some time to get used to.
I set up a split layout for Emacs and a few terminals, but most of the
other workspaces are using a tabbed layout. I don't use the stacking
layout. You can find many scripts that try to emulate other tiling
window managers, but I kept my setup free of such emulation to give
myself a chance to become familiar with i3. i3 can also save
and restore layouts, which is quite a powerful feature.
My configuration is quite similar to the default one and has
less than 200 lines.
i3 companion: the missing bits
i3's philosophy is to keep a minimal core and let users implement
missing features using the IPC protocol:
Do not add further complexity when it can be avoided. We are
generally happy with the feature set of i3 and instead focus on
fixing bugs and maintaining it for stability. New features will
therefore only be considered if the benefit outweighs the additional
complexity, and we encourage users to implement features using the
IPC whenever possible.
Introduction to the i3 window manager
While this is not as powerful as an embedded language, it is enough
for many cases. Moreover, as high-level features may be opinionated,
delegating them to small, loosely coupled pieces of code keeps them
more maintainable. Libraries exist for this purpose in several
languages. Users have published many scripts to extend i3:
automatic layout and window promotion to mimic the behavior
of other tiling window managers, window swallowing to put a new
app on top of the terminal launching it, and cycling between
windows with Alt+Tab.
Instead of maintaining a script for each feature, I have centralized
everything into a single Python process,
i3-companion using asyncio and the
i3ipc-python library. Each feature is self-contained in a
function. It implements the following components:
make a workspace exclusive to an application
When a workspace contains Emacs or Firefox, I would like other
applications to move to another workspace, except for the terminal
which is allowed to intrude into any workspace. The
workspace_exclusive() function monitors new windows and moves them
if needed to an empty workspace or to one with the same application.
implement a Quake console
The quake_console() function implements a drop-down console
available from any workspace. It can be toggled with
Mod+. This is implemented as a scratchpad
back and forth workspace switching on the same output
With the workspace back_and_forth command, we can ask i3 to
switch to the previous workspace. However, this feature is not
restricted to the current output. I prefer to have one keybinding to
switch to the workspace on the next output and one keybinding to
switch to the previous workspace on the same output. This behavior
is implemented in the previous_workspace() function by keeping a
per-output history of the focused workspaces.
create a new empty workspace or move a window to an empty workspace
To create a new empty workspace or move a window to an empty
workspace, you have to locate a free slot and use workspace number
4 or move container to workspace number 4. The new_workspace()
function finds a free number and uses it as the target workspace.
restart some services on output change
When adding or removing an output, some actions need to be executed:
refresh the wallpaper, restart some components unable to adapt their
configuration on their own, etc. i3 triggers an event for this
purpose. The output_update() function also takes an extra step to
coalesce multiple consecutive events and to check if there is a real
change with the low-level library xcffib.
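To make the free-slot search concrete, here is a minimal sketch of the kind of logic new_workspace() needs; the function name and the workspace representation are simplified assumptions, not the companion's actual code:

```python
def first_free_number(workspaces):
    """Return the lowest positive workspace number not currently in use."""
    used = {w["num"] for w in workspaces}
    n = 1
    while n in used:
        n += 1
    return n

# With workspaces 1, 2 and 4 occupied, the next empty slot is 3.
print(first_free_number([{"num": 1}, {"num": 2}, {"num": 4}]))  # 3
```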
I will detail the other features as this post goes on. On the
technical side, each function is decorated with the events it should
react to:

@on(CommandEvent("previous-workspace"), I3Event.WORKSPACE_FOCUS)
async def previous_workspace(i3, event):
    """Go to previous workspace on the same output."""
The CommandEvent() event class is my way to send a command to the
companion, using either i3-msg -t send_tick or binding a key to a
nop command. The latter is used to avoid spawning a shell and an
i3-msg process just to send a message. The companion listens to
binding events and checks if this is a nop command.
bindsym $mod+Tab nop "previous-workspace"
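A stripped-down version of that dispatch logic could look like this; the handler registry and the exact event payload shape are simplified assumptions rather than the companion's real implementation:

```python
handlers = {}

def on(command):
    """Register a handler for a nop pseudo-command."""
    def register(fn):
        handlers[command] = fn
        return fn
    return register

@on("previous-workspace")
def previous_workspace(event):
    return "switched"

def handle_binding_event(event):
    # i3 reports the bound command verbatim, e.g. 'nop "previous-workspace"'.
    cmd = event["binding"]["command"]
    if cmd.startswith("nop "):
        name = cmd[len("nop "):].strip('"')
        if name in handlers:
            return handlers[name](event)
    return None

print(handle_binding_event({"binding": {"command": 'nop "previous-workspace"'}}))
# prints "switched"
```

Commands that are not nop pseudo-commands simply fall through, so regular i3 keybindings are unaffected.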
There are other decorators to avoid code duplication: @debounce() to
coalesce multiple consecutive calls, @static() to define a static
variable, and @retry() to retry a function on failure. The whole
script is a bit more than 1000 lines. I think this is
worth a read as I am quite happy with the result.
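To illustrate the coalescing idea behind @debounce(), here is a minimal asyncio sketch; it is an assumption about the general shape, not the companion's exact code:

```python
import asyncio
import functools

def debounce(delay):
    """Coalesce bursts of calls: run the wrapped coroutine only once
    the calls have been quiet for `delay` seconds."""
    def decorator(fn):
        task = None
        @functools.wraps(fn)
        async def wrapper(*args, **kwargs):
            nonlocal task
            if task is not None:
                task.cancel()  # drop the previously scheduled run
            async def delayed():
                await asyncio.sleep(delay)
                await fn(*args, **kwargs)
            task = asyncio.ensure_future(delayed())
        return wrapper
    return decorator

calls = []

@debounce(0.05)
async def handler(value):
    calls.append(value)

async def main():
    for i in range(5):        # a burst of five calls...
        await handler(i)
    await asyncio.sleep(0.1)  # ...settles into a single execution

asyncio.run(main())
print(calls)  # [4]
```

Only the last call of the burst survives, which is exactly what you want when several consecutive i3 events describe one logical change.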
dunst: the notification daemon
Unlike the awesome window manager, i3 does not come with a built-in
notification system. Dunst is a lightweight notification daemon. I
am running a modified version with HiDPI support for X11 and
recursive icon lookup. The i3 companion has a helper function,
notify(), to send notifications using DBus. container_info() and
workspace_info() use it to display information about the container
or the tree for a workspace.
polybar: the status bar
i3 bundles i3bar, a versatile status bar, but I have opted for
Polybar. A wrapper script runs one instance per monitor.
The first module is the built-in support for i3 workspaces. To not
have to remember which application is running in a workspace, the i3
companion renames workspaces to include an icon for each application.
This is done in the workspace_rename() function. The icons are from
the Font Awesome project. I maintain a mapping between applications
and icons. This is a bit cumbersome but it looks great.
For CPU, memory, brightness, battery, disk, and audio volume, I am
relying on the built-in modules. Polybar's wrapper script generates the
list of filesystems to monitor, and they are only displayed when
available space is low. The battery widget turns red
and blinks slowly when running out of power. Check my Polybar
configuration for more details.
For Bluetooth, network, and notification statuses, I am using Polybar's
ipc module: the next version of Polybar can receive
arbitrary text on an IPC socket. The module is defined with a
single hook to be executed at the start to restore the latest status.
It can be updated with polybar-msg action "#network.send.XXXX". In
the i3 companion, the @polybar() decorator takes the string
returned by a function and pushes the update through the IPC socket.
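A bare-bones sketch of such a decorator might look as follows; the real companion pushes updates over Polybar's IPC, which is stubbed out here with a list so the example stays self-contained:

```python
import functools

updates = []  # stand-in for pushes over Polybar's IPC socket

def polybar(module):
    """Push the wrapped function's return value to a Polybar ipc module."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            text = fn(*args, **kwargs)
            if text is not None:
                # The real code would run something like:
                #   polybar-msg action "#<module>.send.<text>"
                updates.append((module, text))
            return text
        return wrapper
    return decorator

@polybar("network")
def network_status():
    return "wlan0 up"

network_status()
print(updates)  # [('network', 'wlan0 up')]
```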
The i3 companion reacts to DBus signals to update the Bluetooth and
network icons. The @on() decorator accepts a DBusSignal() object:
@on(
    StartEvent,
    DBusSignal(
        path="/org/bluez",
        interface="org.freedesktop.DBus.Properties",
        member="PropertiesChanged",
        signature="sa{sv}as",
        onlyif=lambda args: (
            args[0] == "org.bluez.Device1" and "Connected" in args[1]
            or args[0] == "org.bluez.Adapter1" and "Powered" in args[1]
        ),
    ),
)
@retry(2)
@debounce(0.2)
@polybar("bluetooth")
async def bluetooth_status(i3, event, *args):
    """Update bluetooth status for Polybar."""
picom: the compositor
I like having slightly transparent backgrounds for terminals and to
reduce the opacity of unfocused windows. This requires a
compositor.1 picom is a lightweight compositor. It works
well for me, but it may need some tweaking depending on your graphics
card.2 Unlike the awesome window manager, i3 does not handle
transparency, so the compositor needs to decide by itself the opacity
of each window. Check my configuration for details.
systemd: the service manager
I use systemd to start i3 and the various services around it. My
xsession script only sets some environment variables and lets
systemd handle everything else. Have a look at this article from
Michał Góral for the rationale. Notably, each component can be
easily restarted and its logs are not mangled.
I am using a two-stage setup: i3.service depends on
xsession.target to start services before i3 itself.
rofi: the application launcher
Rofi is an application launcher. Its appearance can be customized
through a CSS-like language and it comes with several themes. Have a
look at my configuration for mine.
It can also act as a generic menu application. I have a script
to control a media player and another one to
select the wifi network. It is quite a flexible tool.
xss-lock and i3lock: the screen locker
i3lock is a simple screen locker. xss-lock invokes it reliably
on inactivity or before a system suspend. For inactivity, it uses the
XScreenSaver events. The delay is configured using the xset s
command. The locker can be invoked immediately with xset s activate.
X11 applications know how to prevent the screen saver from running. I
have also developed a small dimmer application that is executed 20
seconds before the locker to give me a chance to move the mouse if I
am not away.4 Have a look at my configuration for details.
The remaining components
autorandr is a tool to detect the connected displays, match them
against a set of profiles, and configure them with xrandr.
Redshift adjusts the color temperature of the screen according
to the time of day.
maim is a utility to take screenshots. I use Prt Scn
to trigger a screenshot of a window or a specific area and
Mod+Prt Scn to capture the whole desktop to a
file. Check the helper script for details.
I have a collection of wallpapers I rotate every hour. A
script selects them using advanced machine learning
algorithms and stitches them together on multi-screen setups. The
selected wallpaper is reused by i3lock.
Apart from the eye candy, a compositor also helps to get
tear-free video playbacks.
My configuration works with both Haswell (2014) and Whiskey
Lake (2018) Intel GPUs. It also works with an AMD GPU based on
the Polaris chipset (2017).
You cannot manage two different displays this way, e.g.
:0 and :1. In the first implementation, I did try to
parametrize each service with the associated display, but this is
useless: there is only one DBus user session and many services
rely on it. For example, you cannot run two notification daemons.
I have only discovered later that XSecureLock ships
such a dimmer with a similar implementation. But mine has a cool
Another release of the tidyCpp package arrived on CRAN earlier today. The package offers a clean C++ layer on top of the C API for R which aims to make its use a little easier and more consistent.
The vignette has been extended once more with a new example, and a table of contents has been added. The package now supports a (truly minimal) C++ class for a numeric vector, which is the most likely use case.
The NEWS entry follows and includes the 0.0.3 release earlier in the year which did not get the usual attention of post-release blog post.
Changes in tidyCpp version 0.0.4 (2021-09-05)
Minor updates to DESCRIPTION
New snippet rollminmaxExample with simple vector use
New class NumVec motivated from rolling min/max example
Expand the vignette with C++ example based on NumVec
A central idea behind Candid is that services evolve over time, and so also their interfaces evolve. As they do, it is desirable to keep the interface usable by clients who have not been updated. In particular on a blockchainy platform like the Internet Computer, where some programs are immutable and cannot be changed to accommodate changes in the interface of the services they use, this is of importance.
Therefore, Candid defines which changes to an interface are guaranteed to be backward compatible. Of course it's compatible to add new methods to a service, but some changes to a method signature can also be ok. For example, changing
service A1 : {
  get_value : (variant { current }) -> (record { value : int })
}

to

service A2 : {
  get_value : (variant { current; previous : nat })
    -> (record { value : int; last_change : nat })
}
is fine: It doesn't matter that clients that still use the old interface don't know about the new constructor of the argument variant. And the extra field in the result record will silently be ignored by old clients.
In the Candid spec, this relation is written as A2 <: A1 (note the order!), and the formal footing this builds on is subtyping. We thus say that it is 'safe to upgrade a service to a subtype', and that A2's interface is a subtype of A1's.
In small examples, I often use nat and int, because every nat is also an int, but some int values are not nat values, namely the negative ones. We say nat is a subtype of int, written nat <: int. So a function that in the first version returns an int can in the new version return a nat without breaking old clients. But the other direction doesn't work: if the old version's method had a return type of nat, then changing that to int would be a breaking change, as old clients would not be prepared to handle negative numbers.
Note that arguments of function types follow different rules. In fact, the rules are simply turned around, and in argument position (also called 'negative position'), we can evolve the interface to accept supertypes. Concretely, a function that originally expected a nat can be changed to expect an int, but the other direction doesn't work, because there might still be old clients around that send negative numbers. This reversing-of-the-rules is called contravariance.
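The variance rules fit in a few lines of code. The following toy checker for a tiny fragment of Candid (just nat, int, and single-argument function types; an illustration of the rules, not the spec's actual algorithm) shows the argument direction flipping:

```python
def subtype(a, b):
    """Is a <: b in a toy fragment with nat, int and func types?"""
    if a == b:
        return True
    if a == "nat" and b == "int":   # every nat is an int
        return True
    if isinstance(a, tuple) and isinstance(b, tuple) and a[0] == b[0] == "func":
        _, a_arg, a_res = a
        _, b_arg, b_res = b
        # arguments flip direction (contravariance), results do not
        return subtype(b_arg, a_arg) and subtype(a_res, b_res)
    return False

# Upgrading func (nat) -> int to func (int) -> nat is safe: new <: old.
print(subtype(("func", "int", "nat"), ("func", "nat", "int")))  # True
# The reverse upgrade would break old clients.
print(subtype(("func", "nat", "int"), ("func", "int", "nat")))  # False
```

The recursive call with swapped arguments for the argument position is exactly where the double flip in the add_listener example below comes from.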
The vision is that the developer's tooling will warn the developer before they upgrade the code of a running service to a type that is not a subtype of the old interface.
Let me stress, though, that all this is about not breaking existing clients that use the old interface. It does not mean that a client developer who fetches the new interface for your service won't have to change their code to make their programs compile again.
Higher order composition
So far, we considered the simple case of one service with a bunch of clients, i.e. the first order situation. Things get much more interesting if we have multiple services that are composed, and that pass around references to each other, and we still want everything to be nicely typed and never fail even as we upgrade services.
Therefore, Candid's type system can express that a service's method expects or returns a reference to another service or method, and the type that this service or method should have. For example
service B : { add_listener : (text, func (int) -> ()) -> () }
says that the service with interface B has a method called add_listener which takes two arguments, a plain value of type text and a reference to a function that itself expects an int-typed argument.
The contravariance of the subtyping rules explained above also applies to the types of these function references. And because the int in the above type is on the left of two function arrows, the subtyping rule direction flips twice, and it is therefore safe to upgrade the service to accept the following interface:
service B : { add_listener : (text, func (nat) -> ()) -> () }
Soundness theorem and Coq proof
The combination of these higher order features and the safe upgrade mechanism is maybe the unique selling point of Candid, but also what made its design quite tricky sometimes. And although the conventional subtyping rules work well, we wanted to do some less conventional things (more on that below), and more than once thought we had a good solution, only to notice that it did not hold water in complicated higher-order cases.
But what does it mean to hold water? I felt the need to precisely define a soundness criterion for higher order interface description languages, which you can find in the document An IDL Soundness Proposition. This is a general framework which you can instantiate with your concrete IDL language and rules, and then it tells you what you have to prove to consider your language to be sound. Intuitively, it simulates all possible ways in which services can be defined and upgraded, and in which they can pass around references to each other, and the soundness property is then that all messages sent between services can always be understood.
The document also shows, in general, that any system that builds on canonical subtyping is sound, as expected. That is still a generic theorem that you can instantiate with a concrete system, but now there is less to do.
Because these proofs can get tricky in corner cases, it is valuable to mechanize them in an interactive theorem prover and have the computer check the proofs. So I have created a Coq formalization that defines the soundness criteria, including the reduction to canonical subtyping. It also defines MiniCandid, a greatly simplified variant of Candid, proves various properties about it (transitivity etc.) and finally instantiates the soundness theorem.
I am particularly fond of the use of coinduction to model the equirecursive types, and the use of named cases, as we know them from Isabelle, to make the proofs a bit more readable and maintainable.
I am less fond of how much work it seems to be to extend MiniCandid with more of Candid's types. Already adding vec was more work than it was worth, and I defensively blame Coq's not-so-great support for nested recursion.
The soundness relies on subtyping, and that is all fine and well as long as we stick to canonical rules. Unfortunately, practical considerations force us to deviate from these a bit, as we see in the next post of this series.
The Debian Janitor is an automated
system that commits fixes for (minor) issues in Debian packages that can be
fixed by software. It gradually started proposing merges in early
December. The first set of changes sent out ran lintian-brush on sid packages maintained in
Git. This post is part of a series about the progress of the Janitor.
Linux distributions like Debian fulfill an important function in the FOSS ecosystem - they are system integrators that take existing free and open source software projects and adapt them where necessary to work well together. They also make it possible for users to install more software in an easy and consistent way and with some degree of quality control and review.
One of the consequences of this model is that the distribution package often lags behind upstream releases. This is especially true for distributions that have tighter integration and standardization (such as Debian), and often new upstream code is only imported irregularly because it is a manual process - both updating the package, but also making sure that it still works together well with the rest of the system.
The process of importing a new upstream used to be (well, back when I started working on
Debian packages) fairly manual and something like this:
Go to the upstream's homepage, find the tarball and signature, and verify the tarball
Make modifications so the tarball matches Debian's format
Diff the original and new upstream tarballs and figure out whether the changes
are reasonable and which require packaging changes
Update the packaging, changelog, build and manually test the package
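A rough command-level sketch of that manual workflow, using the usual Debian tools; the package name, version, and URL are hypothetical:

```
# "foo", its version, and its URL are made up for illustration
wget https://example.org/releases/foo-1.2.tar.gz
wget https://example.org/releases/foo-1.2.tar.gz.asc
gpg --verify foo-1.2.tar.gz.asc foo-1.2.tar.gz      # verify the tarball
mk-origtargz foo-1.2.tar.gz                         # repack to Debian's orig tarball format
diffoscope foo_1.1.orig.tar.gz foo_1.2.orig.tar.gz  # inspect the upstream changes
uupdate ../foo_1.2.orig.tar.gz                      # apply the packaging to the new version
dch -v 1.2-1 "New upstream release."                # update the changelog
dpkg-buildpackage -us -uc                           # build, then test manually
```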
However, there have been developments over the last decade that make it easier to import new upstream releases into Debian packages.
Uscan and debian QA watch
uscan and debian/watch files have been around for a
while and make it possible to find upstream tarballs.
A debian/watch file usually looks something like this:
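The example itself did not survive extraction; a typical version 4 watch file (with a hypothetical project URL and package name) looks something like:

```
version=4
https://example.org/releases/ foo-([\d.]+)\.tar\.gz
```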
The QA watch service regularly polls
all watch locations in the archive and makes the information available, so it's
possible to know which packages have changed without downloading each one of them.
Git is fairly ubiquitous nowadays, and most upstream projects and packages in Debian use it. There are still exceptions that do not use any version control system or that use a different one, but they are becoming increasingly rare.
DEP-12 specifies a file format with metadata about the upstream project that a package is based on. Particularly relevant for our case is that it has fields for the location of the upstream version control repository.
debian/upstream/metadata files look something like this:
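The example is missing here; a minimal, hypothetical debian/upstream/metadata file using the DEP-12 fields mentioned above would be:

```
---
Repository: https://example.org/foo.git
Repository-Browse: https://example.org/foo
Bug-Database: https://example.org/foo/issues
```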
While DEP-12 is still a draft, it has already been widely adopted - there are about 10000 packages in Debian that ship a debian/upstream/metadata file with Repository information.
The autopkgtest (DEP-8) standard and associated tooling provide a way to run a defined set of tests
against an installed package. This makes it possible to verify that a package
is working correctly as part of the system as a whole. ci.debian.net regularly runs these tests against Debian packages to detect regressions.
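For reference, a minimal debian/tests/control declaring one such test might look like this; the test name is hypothetical, and the corresponding script would live in debian/tests/:

```
Tests: smoke
Depends: @
```

Here Depends: @ means the test runs against the package's own binary packages.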
The Vcs-Git headers in debian/control are the equivalent of the Repository field in debian/upstream/metadata, but for the packaging repositories (as opposed to the upstream ones).
They've been around for a while and are widely adopted, as can be seen from zack's stats:
The vcswatch service that regularly
polls packaging repositories to see whether they have changed makes it a lot
easier to consume this information in a usable way.
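For illustration, the relevant fields in debian/control look like this (the repository URL is hypothetical):

```
Vcs-Git: https://salsa.debian.org/debian/foo.git
Vcs-Browser: https://salsa.debian.org/debian/foo
```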
Over the last couple of years, Debian has slowly been converging on a single
build tool - debhelper's dh interface.
Being able to rely on a single build tool makes it easier to write code to
update packaging when upstream changes require it.
Debhelper (and its helpers) increasingly can figure out how to do the Right
Thing in many cases without being explicitly configured. This makes packaging
less effort, but also means that it's less likely that importing a new upstream
version will require updates to the packaging.
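With the dh interface, a complete debian/rules can be as short as this canonical minimal form:

```
#!/usr/bin/make -f
%:
	dh $@
```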
With all of these improvements in place, it actually becomes feasible in a lot
of situations to update a Debian package to a new upstream version
automatically. Of course, this requires that all of this information is
available, so it won't work for all packages. In some cases, the packaging for
the older upstream version might not apply to the newer upstream version.
The Janitor has attempted to import a new upstream Git snapshot and a new
upstream release for every package in the archive where a debian/watch file or
debian/upstream/metadata file is present.
These are the steps it uses:
Find new upstream version
If release, use debian/watch - or maybe a tag in the upstream repository
If snapshot, use debian/upstream/metadata's Repository field
If neither is available, use guess-upstream-metadata from upstream-ontologist to guess the upstream Repository
Merge upstream version into packaging repository, possibly importing tarballs using pristine-tar
Update the changelog file to mention the new upstream version
Run some checks to ensure there are no unintentional changes, e.g.:
Scan diff between old and new for surprising license changes
Today, abort if there are any - in the future, maybe update debian/copyright
Check for obvious compatibility breaks - e.g. sonames changing
Attempt to update the packaging to reflect upstream changes
Attempt to build the package with deb-fix-build, to deal with any missing dependencies
Run the autopkgtests with deb-fix-build to deal with missing dependencies, and abort if any tests fail
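A rough command-level equivalent of the steps above, for a hypothetical package maintained with git-buildpackage:

```
# Package name "foo" and version are made up for illustration
uscan --download --verbose            # find and fetch the new upstream release
gbp import-orig --uscan               # merge it into the packaging repository
dch -v 1.2-1 "New upstream release."  # update debian/changelog
dpkg-buildpackage -us -uc             # build the package
autopkgtest ../foo_1.2-1_amd64.changes -- null   # run the tests on the current system
```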
When run over all packages in unstable (sid), this process works for a surprising number of them.
For fresh-releases (aka imports of upstream releases), processing all packages maintained in Git for which QA watch reports new releases (about 11,000):
That means about 2300 packages updated, and about 4000 unchanged.
For fresh-snapshots (aka imports of latest Git commit from upstream), processing all packages maintained in Git (about 26,000):
Or 5100 packages updated and 2100 for which there was nothing to do, i.e. no upstream commits since the last Debian upload.
As can be seen, this works for a surprising fraction of packages. It's possible to push the numbers even higher by improving the tooling, the autopkgtests, and the metadata provided by packages.
Using these packages
All the packages that have been built can be accessed from the Janitor APT repository. More information can be found at https://janitor.debian.net/fresh, but in short - run:
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/debian-janitor-archive-keyring.gpg] https://janitor.debian.net/ fresh-snapshots main" \
  | sudo tee /etc/apt/sources.list.d/fresh-snapshots.list
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/debian-janitor-archive-keyring.gpg] https://janitor.debian.net/ fresh-releases main" \
  | sudo tee /etc/apt/sources.list.d/fresh-releases.list
sudo curl -o /usr/share/keyrings/debian-janitor-archive-keyring.gpg https://janitor.debian.net/pgp_keys
sudo apt update
And then you can install packages from the fresh-snapshots (upstream git snapshots) or fresh-releases suites on a case-by-case basis by running something like:
apt install -t fresh-snapshots r-cran-roxygen2
Most packages are updated based on information provided by vcswatch and QA watch, but it's also possible for upstream repositories to call a web hook to trigger a refresh of a package.
These packages were built against unstable, but should in almost all cases also work for testing.
Of course, since these packages are built automatically without human supervision, it's likely that some of them will have bugs that would otherwise have been caught by the maintainer.
Like each month, have a look at the work funded by Freexian's Debian LTS offering.
Debian project funding
In July, we put aside 2400 EUR to fund Debian projects. We haven't received proposals of projects to fund in recent months, so we have scheduled a discussion during DebConf to try to figure out why that is and how we can fix that. Join us on August 26th at 16:00 UTC on this link.
We are pleased to announce that Jeremiah Foster will help out to make this initiative a success: he can help Debian members come up with solid proposals, he can look for people willing to do the work once a project has been formalized and approved, and he will make sure that the project implementation stays on track once the actual work has begun.
We're looking forward to receiving more projects from various Debian teams! Learn more about the rationale behind this initiative in this article.
Debian LTS contributors
In July, 12 contributors were paid to work on Debian LTS; their reports are available:
Abhijith PA did 5.0h (out of 7h assigned and 3h remaining), thus carrying over 5h to August.
Evolution of the situation
In July we released 30 DLAs. Also we were glad to welcome Neil Williams and Lee Garrett who became active contributors.
The security tracker currently lists 63 packages with a known CVE and the dla-needed.txt file has 17 packages needing an update.
We would like to thank Holger Levsen for the years of work where he managed/coordinated the paid LTS contributors. Jeremiah Foster will take over his duties.
Thanks to our sponsors
Sponsors that joined recently are in bold.
With the release of Debian/Bullseye last week, we can now support the stable Debian release (Bullseye) as well as Testing and Unstable releases with our build of KDE/Plasma (frameworks, plasma, gears, related programs) on the OBS builds. We used this switch to also consistently support three architectures: amd64, i386, and aarch64.
To give full details, I repeat (and update) the instructions for all here: first, add my OBS key, say in /etc/apt/trusted.gpg.d/obs-npreining.asc, and add a file /etc/apt/sources.list.d/obs-npreining-kde.list containing the following lines, replacing the DISTRIBUTION part with one of Debian_11 (for Bullseye), Debian_Testing, or Debian_Unstable:
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other-deps/DISTRIBUTION/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/frameworks/DISTRIBUTION/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/plasma522/DISTRIBUTION/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/apps2108/DISTRIBUTION/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other/DISTRIBUTION/ ./
Bullseye and pinning
These packages are shipped with Distribution: unstable and thus might have too low a pin priority to be upgraded automatically. One can fix that by manually selecting the packages, or by adjusting the pin priority. One example is given in the comment section below, copied here:
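The copied example did not survive extraction; a sketch of an /etc/apt/preferences.d entry that raises the priority of the OBS repository might look like this (the origin value is an assumption - check yours with apt-cache policy):

```
Package: *
Pin: origin download.opensuse.org
Pin-Priority: 700
```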