
13 June 2024

Russ Allbery: Security review of tag2upload

For some time now, Debian has been discussing a possible enhancement to the way that Debian packages are uploaded to the archive. The basic idea is to allow a package upload to be triggered by pushing a signed tag, with some structured metadata, to Salsa, the instance of GitLab that Debian provides for packaging repositories. This would allow Debian package maintainers to use a more typical Git-first workflow, where releases are triggered by Git tags and the release artifacts are built in a clean CI environment, while still enforcing the existing Debian rules about who is allowed to upload packages. As part of that effort, I recently completed a detailed security review of the tag2upload design. I sent it to debian-vote as part of the ongoing discussion, but have also posted it at the link above to give it a more permanent home. This security review may be revised based on the discussion if people point out things that I missed.

8 June 2024

Reproducible Builds: Reproducible Builds in May 2024

Welcome to the May 2024 report from the Reproducible Builds project! In these reports, we try to outline what we have been up to over the past month and highlight news items in software supply-chain security more broadly. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website. Table of contents:
  1. A peek into build provenance for Homebrew
  2. Distribution news
  3. Mailing list news
  4. Miscellaneous news
  5. Two new academic papers
  6. diffoscope
  7. Website updates
  8. Upstream patches
  9. Reproducibility testing framework


A peek into build provenance for Homebrew Joe Sweeney and William Woodruff on the Trail of Bits blog wrote an extensive post about build provenance for Homebrew, the third-party package manager for macOS. Their post details how each bottle (i.e. each release):
[…] built by Homebrew will come with a cryptographically verifiable statement binding the bottle's content to the specific workflow and other build-time metadata that produced it. […] In effect, this injects greater transparency into the Homebrew build process, and diminishes the threat posed by a compromised or malicious insider by making it impossible to trick ordinary users into installing non-CI-built bottles.
The post also briefly touches on future work, including work on source provenance:
Homebrew's formulae already hash-pin their source artifacts, but we can go a step further and additionally assert that source artifacts are produced by the repository (or other signing identity) that's latent in their URL or otherwise embedded into the formula specification.
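For background, hash-pinning means the formula records a cryptographic digest for each source artifact, and the fetched bytes are accepted only if they match. A toy sketch of that check in Python (the URL and digest are made-up placeholders, not from any real formula):

import hashlib
import urllib.request

PINNED_URL = "https://example.org/foo-1.0.tar.gz"  # hypothetical
PINNED_SHA256 = "aa" * 32                          # hypothetical digest

def fetch_and_verify(url: str, expected_sha256: str) -> bytes:
    # Download the artifact and refuse it unless the digest matches the pin.
    data = urllib.request.urlopen(url).read()
    digest = hashlib.sha256(data).hexdigest()
    if digest != expected_sha256:
        raise ValueError(f"hash mismatch: expected {expected_sha256}, got {digest}")
    return data

# source = fetch_and_verify(PINNED_URL, PINNED_SHA256)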

Distribution news In Debian this month, Johannes Schauer Marin Rodrigues (aka josch) noticed that the Debian binary package bash version 5.2.15-2+b3 was uploaded to the archive twice: once to bookworm and once to sid, but with differing content. This is a problem for reproducible builds in Debian, due to its assumption that the triplet of package name, version and architecture is unique. However, josch highlighted that:
This example with bash is especially problematic since bash is Essential:yes, so there will now be a large portion of .buildinfo files where it is not possible to figure out with which of the two differing bash packages the sources were compiled.
In response to this, Holger Levsen performed an analysis of all .buildinfo files and found that almost 1,500 binNMUs are needed to fix the fallout from this bug. Elsewhere in Debian, Vagrant Cascadian posted about a Non-Maintainer Upload (NMU) sprint to take place during early June, and it was announced that there is now a #debian-snapshot IRC channel on OFTC to discuss the creation of a new source code archiving service to, perhaps, replace snapshot.debian.org. Lastly, 11 reviews of Debian packages were added, 15 were updated and 48 were removed this month, adding to our extensive knowledge about identified issues. A number of issue types have been updated by Chris Lamb as well. […][…]
Elsewhere in the world of distributions, deep within a larger announcement from Colin Percival about the release of version 14.1-BETA2, it was mentioned that the FreeBSD kernels are now built reproducibly.
In Fedora, however, the change proposal mentioned in our report for April 2024 was approved, so, per the ReproduciblePackageBuilds wiki page, the add-determinism tool is now running in new builds for Fedora 41 ("rawhide"). The add-determinism tool is a Rust program which, as its name suggests, adds determinism to the files given as input "by attempting to standardize metadata contained in binary or source files to ensure consistency and clamping to $SOURCE_DATE_EPOCH in all instances". It is essentially the Fedora version of Debian's strip-nondeterminism. However, strip-nondeterminism is written in Perl, and Fedora did not want to pull Perl into the buildroot for every package. The add-determinism tool eliminates many causes of non-determinism, and work is ongoing to extend the set of packages it can operate on.
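The clamping idea itself is simple enough to sketch. The following is our own illustration in Python, not the actual Rust tool: it walks a build tree and pulls any timestamp newer than $SOURCE_DATE_EPOCH back to that value, so rebuilds at different times produce identical file metadata.

import os
import sys

def clamp_mtimes(root: str, epoch: int) -> None:
    """Walk a build tree and clamp modification times to `epoch`."""
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            st = os.lstat(path)
            if st.st_mtime > epoch:
                # follow_symlinks=False so symlink targets stay untouched
                os.utime(path, (epoch, epoch), follow_symlinks=False)

if __name__ == "__main__":
    epoch = int(os.environ.get("SOURCE_DATE_EPOCH", "0"))
    clamp_mtimes(sys.argv[1] if len(sys.argv) > 1 else ".", epoch)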

Mailing list news On our mailing list this month, regular contributor kpcyrd wrote to the list with an update on their source code indexing project, whatsrc.org. The whatsrc.org project, which was launched last month in response to the XZ Utils backdoor, now contains and indexes almost 250,000 unique source code archives. In their post, kpcyrd gives an example of its intended purpose, noting that it has shown that "whilst there seems to be consensus about [the] source code for zsh 5.9 in various Linux distributions, it does not align with the contents of the zsh Git repository". Holger Levsen also posted to the list with a pre-announcement of sorts for the 2024 Reproducible Builds summit. In particular:
[Whilst] the dates and location are not fixed yet, however, if you don't help us with finding a suitable location soon, it is very likely that we'll meet again in Hamburg in the 2nd half of September 2024 […].
Lastly, Frederic-Emmanuel Picca wrote to the list asking for help understanding the non-reproducible status of the Debian silx package and received replies from both Vagrant Cascadian and Chris Lamb.

Miscellaneous news strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build. This month, strip-nondeterminism version 1.14.0-1 was uploaded to Debian unstable by Chris Lamb, chiefly to incorporate a change from Alex Muntada to avoid a dependency on Sub::Override to perform monkey-patching and break circular dependencies related to debhelper […]. Elsewhere in our tooling, Jelle van der Waa modified reprotest because the pipes module will be removed in Python version 3.13 […].
It was also noticed that a new blog post by Daniel Stenberg detailing how to verify a curl release mentions the SOURCE_DATE_EPOCH environment variable. This is because:
The [curl] release tools document also contains another key component: the exact time stamp at which the release was done, using integer second resolution. In order to generate a correct tarball clone, you need to also generate the new version using the old version's timestamp, because the modification date of all files in the produced tarball will be set to this timestamp.
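In other words, a rebuilt tarball only matches the original if every file's modification time is forced to that recorded timestamp. Here is a hedged sketch of the idea using Python's tarfile module (our own illustration; curl's actual release tooling is not this code, and the paths and default epoch below are hypothetical):

import os
import tarfile

# A real rebuild would export the timestamp recorded in the release
# tools document as SOURCE_DATE_EPOCH; this fallback value is made up.
epoch = int(os.environ.get("SOURCE_DATE_EPOCH", "1715817600"))

def clamp(info: tarfile.TarInfo) -> tarfile.TarInfo:
    info.mtime = epoch          # force the recorded release timestamp
    info.uid = info.gid = 0     # normalize ownership metadata too
    info.uname = info.gname = "root"
    return info

# Plain tar only: a gzip layer would embed its own timestamp, which
# would also have to be pinned for a byte-identical result.
with tarfile.open("curl-rebuild.tar", "w") as tar:
    tar.add("curl-src", filter=clamp)  # recurses in sorted order on current Pythons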

Furthermore, Fay Stegerman filed a bug against the Signal messenger app for Android to report that their reproducible builds cannot, in fact, be reproduced. However, Fay is quick to note that she has:
found zero evidence of any kind of compromise. Some differences are yet unexplained but everything I found seems to be benign. I am disappointed that Reproducible Builds have been broken for months but I have zero reason to doubt Signal's security in any way.

Lastly, it was observed that there was a concise and diagrammatic overview of supply chain threats on the SLSA website.

Two new academic papers Two new scholarly papers were published this month. Firstly, Mathieu Acher, Benoît Combemale, Georges Aaron Randrianaina and Jean-Marc Jézéquel of the University of Rennes published Embracing Deep Variability For Reproducibility & Replicability. The authors describe their approach as follows:
In this short [vision] paper we delve into the application of software engineering techniques, specifically variability management, to systematically identify and make explicit points of variability that may give rise to reproducibility issues (e.g., language, libraries, compiler, virtual machine, OS, environment variables, etc.). The primary objectives are: i) gaining insights into the variability layers and their possible interactions, ii) capturing and documenting configurations for the sake of reproducibility, and iii) exploring diverse configurations to replicate, and hence validate and ensure the robustness of results. By adopting these methodologies, we aim to address the complexities associated with reproducibility and replicability in modern software systems and environments, facilitating a more comprehensive and nuanced perspective on these critical aspects.
(A PDF of this article is available.)
Secondly, Ludovic Courtès, Timothy Sample, Simon Tournier and Stefano Zacchiroli have collaborated to publish a paper on Source Code Archiving to the Rescue of Reproducible Deployment. Their paper was motivated because:
The ability to verify research results and to experiment with methodologies are core tenets of science. As research results are increasingly the outcome of computational processes, software plays a central role. GNU Guix is a software deployment tool that supports reproducible software deployment, making it a foundation for computational research workflows. To achieve reproducibility, we must first ensure the source code of software packages Guix deploys remains available.
(A PDF of this article is also available.)

diffoscope diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made a number of changes, including uploading versions 266, 267, 268 and 269 to Debian, with the following changes:
  • New features:
    • Use xz --list to supplement output when comparing .xz archives; essential when metadata differs. (#1069329)
    • Include xz --verbose --verbose (i.e. double) output. (#1069329)
    • Strip the first line from the xz --list output. […]
    • Only include xz --list --verbose output if the xz archive has no other differences. […]
    • Actually append the xz --list output after the container differences, as it simplifies a lot. […]
  • Testing improvements:
    • Allow Debian testing to fail right now. […]
    • Drop apktool from Build-Depends; we can still test APK functionality via autopkgtests. (#1071410)
    • Add a versioned dependency on at least version 5.4.5 for the xz tests, as they fail under (at least) version 5.2.8. (#374)
    • Fix tests for 7zip 24.05. […][…]
    • Fix all tests after the addition of xz --list. […][…]
  • Misc:
    • Update copyright years. […]
In addition, James Addison fixed an issue where the HTML output showed only the first difference in a file while the text output showed all differences […][…][…], Sergei Trofimovich amended the 7zip version test for older 7z versions that include the string [64] […][…], and Vagrant Cascadian relaxed the versioned dependency to allow version 5.4.1 for the xz tests […], proposed updates to guix for versions 267 and 268, and pushed version 269 to Guix. Furthermore, Eli Schwartz updated the diffoscope.org website to explain how to install diffoscope on Gentoo […].

Website updates There were a number of improvements made to our website this month, including Chris Lamb making the print CSS stylesheet nicer […]. Fay Stegerman made a number of updates to the page about the SOURCE_DATE_EPOCH environment variable […][…][…] and Holger Levsen added some of their presentations to the Resources page. Furthermore, IOhannes zmölnig stipulated support for SOURCE_DATE_EPOCH in clang version 16.0.0+ […], Jan Zerebecki expanded the Formal definition page and fixed a number of typos on the Buy-in page […] and Simon Josefsson fixed the link to Trisquel GNU/Linux on the Projects page […].

Upstream patches This month, we wrote a number of patches to fix specific reproducibility issues, including:

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In May, a number of changes were made by Holger Levsen:
  • Debian-related changes:
    • Enable the rebuilder-snapshot API on osuosl4. […]
    • Schedule the i386 architecture a bit more often. […]
    • Adapt cleanup_nodes.sh to the new way of running our build services. […]
    • Add 8 more workers for the i386 architecture. […]
    • Update configuration now that the infom07 and infom08 nodes have been reinstalled as real i386 systems. […]
    • Make diffoscope timeouts more visible on the #debian-reproducible-changes IRC channel. […]
    • Mark the cbxi4a-armhf node as down. […][…]
    • Only install the hdmi2usb-mode-switch package on Debian bookworm and earlier […] and only install the haskell-platform package on Debian bullseye […].
  • Misc:
    • Install the ntpdate utility as we need it later. […]
    • Document the progress on the i386 architecture nodes at Infomaniak. […]
    • Drop an outdated and unnoticed notice. […]
    • Add live_setup_schroot to the list of so-called zombie jobs. […]
In addition, Mattia Rizzolo reinstalled the infom07 and infom08 nodes […] and Vagrant Cascadian marked the cbxi4a node as online […].

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:

7 June 2024

Freexian Collaborators: Debian Contributions: DebConf Bursaries, /usr-move, sbuild, and more! (by Stefano Rivera)

Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

DebConf Bursary updates, by Utkarsh Gupta Utkarsh is the bursaries team lead for DebConf 24. Bursary requests are dispatched to a team of volunteers to review. The results are collated, adjusted and merged to produce priority lists of requests to fund. Utkarsh assembled the team, coordinated the review, and issued bursaries to attendees.

/usr-move, by Helmut Grohne The /usr-move transition is increasingly being carried out by multiple contributors, who together performed around a hundred of the requested uploads. Of these, Helmut contributed five patches and two uploads. As a result, there are fewer than 350 packages left to be converted, and all of the non-trivial cases have patches. We started with three times that number. Thanks to everyone involved for supporting this effort. For people interested in background information on this transition, Helmut gave a presentation at MiniDebConf Berlin 2024 (slides).

sbuild, by Helmut Grohne While unshare mode of sbuild has existed for quite a while, it is now getting significant use in Debian, and new problems are popping up. Helmut looked into an apparmor-related failure and provided a diagnosis. While relevant code would detect the chroot nature of a schroot backend and skip apparmor tests, the unshare environment would be just good enough to run and fail the test. As sbuild exposes fewer special kernel filesystems, the tests will be skipped again. Another problem popped up when gobject-introspection added a dependency on the host architecture Python interpreter in a cross build environment. sbuild would prefer installing (and failing) a host architecture Python to installing the qemu alternative. Attempts to fix this would result in systemd killing sbuild. ischroot as used by libc6.postinst would not classify the unshare environment as a chroot. Therefore libc6.postinst would run telinit which would kill the build process. This is a complex interaction problem that shall eventually be solved by providing triggers from libc6 to be implemented by affected init systems.

Salsa CI updates, by Santiago Ruano Rincón Several issues arose around Salsa CI last month, and it is probably worth mentioning some of the challenges of defining its framework in YAML. With the upcoming end-of-support of Debian 10 buster as LTS, armel was removed from deb.debian.org, causing the jobs that build images for buster/armel to fail. While the removal of buster/armel from the repositories is a natural change, it shed some light on flaws in the Salsa CI design regarding support for the different Debian releases. Currently, the images are defined like this (from .images-debian.yml):
.all-supported-releases: &all-supported-releases
  - stretch
  - stretch-backports
  - buster
  - bullseye
  - bullseye-backports
  - bookworm
  - bookworm-backports
  - trixie
  - sid
  - experimental
And from them, different images are built according to the different jobs and how they are supported, for example:
images-prod-arm:
  stage: build
  extends: .build_template
  tags:
    - $SALSA_CI_ARM_RUNNER_TAG
  parallel:
    matrix:
      # Base image, all releases, all arches
      - IMAGE_NAME: base
        ARCH:
          - arm32v5
          - arm32v7
          - arm64v8
        RELEASE: *all-supported-releases
The removal of buster/armel could be easily reflected as:
images-prod-arm:
  stage: build
  extends: .build_template
  tags:
    - $SALSA_CI_ARM_RUNNER_TAG
  parallel:
    matrix:
      # Base image, fully supported releases, all arches
      - IMAGE_NAME: base
        ARCH:
          - arm32v5
          - arm32v7
          - arm64v8
        RELEASE:
          - stretch
          - buster
          - bullseye
          - bullseye-backports
          - bookworm
          - bookworm-backports
          - trixie
          - sid
          - experimental
      # buster only supports armhf and arm64
      - IMAGE_NAME: base
        ARCH:
          - arm32v7
          - arm64v8
        RELEASE: buster
Evidently, this increases duplication of the release-support data, which is not optimal and is error-prone when the data about supported releases changes. A better approach would be to have two different YAML lists, such as:
# releases that have partial support. E.g.: buster is transitioning to
# Debian LTS, and buster armel is no longer found in deb.debian.org
.old-releases: &old-releases
  - stretch
  - buster

.currently-supported-releases: &currently-supported-releases
  - bullseye
  - bullseye-backports
  - bookworm
  - bookworm-backports
  - trixie
  - sid
  - experimental
and then a unified list:
.all-supported-releases: &all-supported-releases
  - *old-releases
  - *currently-supported-releases
that could be used in the matrix of the jobs that build all the images available in the pipeline container registry. However, due to limitations in GitLab, it is not possible to expand variables or mapping values in a parallel:matrix context, at least not in an elegant fashion; a short demonstration of the underlying YAML behaviour follows after the list below. This is the kind of issue that recently arose and that Santiago is currently working to solve in the simplest possible way. Astute readers will notice that stretch is listed among the fully supported releases. There is no problem with stretch, because it is built from archive.debian.org. Otto has actually tried to fix the broken image build job by doing the same, but it is still incorrect, because the security repository is not (yet) available in archive.debian.org. Additionally, Santiago has also worked on other merge requests, such as:
  1. support branch/tags as target head in the test projects,
  2. build autopkgtest image on top of stable,
  3. add .yamllint and make it happy in the autopkgtest-lxc project,
  4. enable FF_SCRIPT_SECTIONS to log multiline commands, among others.
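To illustrate the limitation mentioned above: YAML aliases do not splice lists into their parent list, so composing the unified list from the two partial ones yields nested lists, which GitLab's parallel:matrix cannot consume. A minimal check with PyYAML (assumed available) makes this visible:

import yaml

doc = """
.old-releases: &old-releases [stretch, buster]
.current-releases: &current-releases [bullseye, bookworm]
.all-releases: &all-releases
  - *old-releases
  - *current-releases
all: *all-releases
"""

# The aliases are not flattened: the result is a list of two lists,
# not the flat list of release names that parallel:matrix expects.
print(yaml.safe_load(doc)["all"])
# [['stretch', 'buster'], ['bullseye', 'bookworm']]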

Archiving DebConf Websites, by Stefano Rivera DebConf, the annual Debian conference, has its own new website every year. These are typically complex dynamic web applications (featuring registration, call for papers, scheduling, etc.). Once the conference is over, there is no need to keep maintaining these applications, so we archive the sites off as static HTML, and serve them from Debian's static CDN. Stefano archived the websites for the last two DebConfs. The schedule system behind DebConf 14 and 15's websites was a derivative of Canonical's summit system. This was only used for a couple of years before migrating to wafer, the current system. Archiving summit content had been on the "nice to have" list for years, but nobody had ever tackled it. The machine that served the sites went away a couple of years ago. After much digging, a backup of the database was found, and Stefano got this code running on an ancient Python 2.7. Recently, Stefano put this all together and hooked in an archive export to finally get this content preserved.

Python 3.x and pypy3 security bug triage, by Stefano Rivera Stefano Rivera triaged all the open security bugs against the Python 3.x and PyPy3 packages for Debian's stable and LTS releases. Several had been fixed, but this wasn't recorded in the security tracker.

Linux livepatching support for Debian, by Santiago Ruano Rincón In collaboration with Emmanuel Arias, Santiago filed ITP bug #1070494. As stated in the bug, more than an Intent to Package, it is an Intent to Design and Implement live patching support for the Linux kernel in Debian. For now, Emmanuel and Santiago have done exploratory work, and they are working to understand the different possibilities to implement livepatching. One possible direction is to rely on kpatch; the other is to package the modules using regular packaging tools. It also needs to be evaluated whether the modules can be distributed via packages, or instead as a service, as some commercial distributions do.

Miscellaneous contributions
  • Thorsten Alteholz uploaded cups-bjnp to improve packaging.
  • Colin Watson tracked down a baffling CI issue in openssh to unblock several merge requests, removed the user_readenv=1 option from its PAM configuration, and started on the first stage of his plan to split out GSS-API key exchange support to separate packages.
  • Colin did his usual routine work on the Python team, upgrading 26 packages to new upstream versions, and cherry-picking an upstream PR to fix a pytest 8 incompatibility in ipywidgets.
  • Colin NMUed a couple of packages to reduce the need for explicit overrides in Packages-arch-specific, and removed some other obsolete entries from there.
  • Emilio managed various library transitions, and helped finish a few of the remaining t64 transitions.
  • Helmut sent a patch enabling piuparts to work as a regular user, building on earlier work.
  • Helmut sent patches for 7 cross build failures and 6 other Debian bugs, and fixed an infrastructure problem in crossqa.debian.net.
  • Nicholas worked on a sponsored package upload, and discovered the blhc tool for diagnosing build hardening.
  • Stefano Rivera started and completed the re2 transition. The release team suggested moving to a virtual package scheme that includes the absl ABI (as re2 now depends on it), and Stefano adopted this.
  • Stefano continued to work on DebConf 24 planning.
  • Santiago continued to work on DebConf24 Content tasks as well as DebConf25 organisation.

1 June 2024

Ian Jackson: What your vote is worth - a back of the envelope calculation

tl;dr: Your vote really counts! Each vote in a UK General Election is worth maybe £100,000 - to you and all your fellow citizens taken together. If you really care about the welfare of everyone affected by actions of the UK government, then it's worth that to you too.

Introduction: It seems a common perception that one vote, in amongst all those millions, doesn't really matter. So maybe it's not worth voting. But voting is (largely) what determines what the government does - and the government is big. It's as big as all the people. If you are the kind of person who cares about what happens to everyone in your polity, and indeed everyone its actions affect, then even your one vote is very important indeed.

A method for back of the envelope calculation: It would be nice to give a quantitative estimate. Many things in our society are measured in money, so let's try taking a stab at calculating the money value of your vote. The argument I'm going to make is this: the government (by which I include the legislature), which is selected by our votes, decides how to spend the national budget. So, basically, I'm going to divide the budget by the electorate.

UK Parliament: UK Parliamentary elections decide not only the House of Commons but, through that, the government. The upper house, the House of Lords, has very limited influence. So I think it's fair to regard the Parliamentary election as, simply, controlling that budget. Being lazy, I'm going to use Wikipedia data. We have the size of the electorate for 2019: 47.6 million. But your influence isn't shared with the whole electorate, only with the other people who also vote. Turnout in 2019 was 67.3%. The 2019 budget isn't listed, so I'll just average the 2018 and March 2020 figures, £842bn and £873bn, to get £857 billion. (Strictly speaking I should add up the budgets for the period of the Parliament, but that seems like a lot of effort.) There's a discrepancy in the timescale we need to account for: your vote influences the budgets for several years, depending how long it is until the next election. Taking Wikipedia's list of elections this century, there've been 7 in 24 years, an average of about 3.4y. So, multiplying it through, we have (£857b * (24 / 7)) / (47.6M * 67.3%), giving a guess at the value of your UK General Election vote: £92,000.

European Parliament: The 2022 budget for the European Union (Wikipedia again) was €170.6bn. The last election, in 2019, had a turnout of 198,352,638. Each EU Parliament lasts 5 years. The Parliament, however, shares responsibility for the budget with the European Council, which is controlled, ultimately, by national governments. We have to pick a numerical value for the Parliament's share of the influence. Over the past years the Parliament has gradually been more willing to exercise its powers in this area. I'm going to arbitrarily call its share 50%. The calculation, then, is €170.6bn * 5 * 50% / 198M, giving a guess at the value of your EU Parliamentary Election vote: €2150. This much smaller figure reflects simply that the EU doesn't spend very much money for a polity of its size. (Those stories in the British press giving the impression that the EU is massively wasteful are, simply, lies.) The interaction of this calculation with the Council's share of the influence, and with national budgets, is a bit of a question, but given the much smaller amounts involved, it doesn't seem worth thinking about that too hard.
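The arithmetic above is easy to check; a few lines of Python reproduce both figures (all inputs are the ones quoted in the post, including the post's own arbitrary 50% Parliament share):

uk_budget = 857e9               # £, average of 2018 and March 2020 budgets
uk_electorate = 47.6e6
uk_turnout = 0.673
years_per_parliament = 24 / 7   # 7 elections in 24 years

uk_vote = uk_budget * years_per_parliament / (uk_electorate * uk_turnout)
print(f"UK vote: £{uk_vote:,.0f}")   # ~£92,000

eu_budget = 170.6e9             # €, 2022 EU budget
eu_voters = 198_352_638         # 2019 turnout
eu_term = 5
parliament_share = 0.5          # arbitrary, per the post

eu_vote = eu_budget * eu_term * parliament_share / eu_voters
print(f"EU vote: €{eu_vote:,.0f}")   # ~€2,150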
Only if you care about other people as much as yourself! All of this is only true for you if you value and want to help everyone in your society. That includes immigrants, women, unemployed people, disabled people, people who are much poorer or richer than you, etc. If you think about it in purely personal terms, your vote is hardly worth anything: while the effect of your vote, overall, is very large, that effect is shared by everyone in your polity. So if you only care about yourself, voting is a total waste of time. The more selfish and xenophobic and racist and so on you are - caring only about people like yourself - the less your vote is worth. This is why voting is rightly seen as a civic duty. I just spent £30 to courier my EP vote to Den Haag. That only makes sense because I'm very willing to spend that £30 to try to improve the spending of the €2000 or so that's my share of the EU budget.

This is a very rough analysis: These calculations neglect a lot of very important things: politics isn't just about the allocation of resources. It's also about values, and bad politics can seriously harm people. Arguably many of those effects of your vote are much more important than just how the budget is set and spent. It would be interesting to see an attempt at a similar analysis taking into account life and death questions like hate crime, traffic violence, healthcare, refugees' welfare, and so on. I'm not sure how to approach that. Maybe some real social scientists have done so? References welcome. Also, even on its own terms, this analysis is very rough and ready. We haven't modelled the ability of the government to change its tax rates; perhaps we should be multiplying GDP (or some other better measure) by the "90th percentile total tax rate amongst countries like this one". The amount of influence that can be wielded by one vote is probably nonlinear in the size of the political faction, but IDK in which direction. In unfair voting systems like the UK's, some people's votes are worth much more than others'. In a very marginal constituency which is a target seat, your vote might be worth tens of millions. In a safe seat, it might only be worth a few thousand. And in practical terms you don't get to choose precisely the policies you want; you have to pick a party, which is sometimes very much a question of the lesser evil. So, there is much I haven't modelled. But the key point stands:

Conclusion: Although your vote is diluted by everyone else's votes, together we control the government, which affects us all. So if you care about the whole of society, the big numbers in the divisor and the numerator cancel out. You can think of your vote as controlling one citizen's worth of government activity.
edited 2024-06-01 09:40 Z to fix a grammar botch



31 May 2024

Dirk Eddelbuettel: RcppArmadillo 0.12.8.4.0 on CRAN: Upstream Bugfix

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language and is widely used by (currently) 1151 other packages on CRAN, downloaded 34.6 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 584 times according to Google Scholar. Conrad released a new upstream bugfix yesterday (to improve views of sparse matrices). We uploaded it yesterday too, but it once again took a day for the hard-working CRAN maintainers to concur that the two NOTEs from reverse-dependency checking over 1100 packages were in fact false positives. And so it appeared on CRAN earlier today. We also increased the versioned dependency on Rcpp to match the use of the optional entry-point headers Rcpp/Light, Rcpp/Lighter and Rcpp/Lightest. No other changes were made. The set of changes since the last CRAN release follows.

Changes in RcppArmadillo version 0.12.8.4.0 (2024-05-30)
  • Upgraded to Armadillo release 12.8.4 (Cortisol Injector)
    • Faster handling of sparse submatrix views
  • Update versioned Depends on Rcpp to 1.0.8 or later to match use of Light/Lighter/Lightest headers.

Courtesy of my CRANberries, there is a diffstat report relative to the previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc. should go to the rcpp-devel mailing list off the Rcpp R-Forge page. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

29 May 2024

Antoine Beaupré: Playing with fonts again

I am getting increasingly frustrated by Fira Mono's lack of italic support so I am looking at alternative fonts again.

Commit Mono This time I seem to be settling on either Commit Mono or Space Mono. For now I'm using Commit Mono because it's a little more compressed than Fira and does have an italic version. I don't like how Space Mono's parentheses (()) are "squarish": they feel visually ambiguous with the square brackets ([]), a big no-no for my primary use case (code). So here I am using a new font, again. It required changing a bunch of configuration files in my home directory (which is in a private repository, sorry) and my Emacs configuration (thankfully that's public!). One gotcha is that I realized I didn't actually have a global font configuration in Emacs, as some faces define their own font family, which overrides the frame defaults. This is what it looks like, before:
A dark terminal showing the test sheet in Fira Mono
After:
A dark terminal showing the test sheet in Commit Mono
(Notice how those screenshots are not sharp? I'm surprised too. The originals look sharp on my display; I suspect this is something to do with the Wayland transition. I've tried with both grim and flameshot, for what it's worth.) They are pretty similar! Commit Mono feels a bit more vertically compressed, maybe too much so, actually -- the line height feels too low. But it's heavily customizable so that's something that's relatively easy to fix, if it's really a problem. Its weight is also a little heavier and wider than Fira, which I find a little distracting right now, but maybe I'll get used to it. All characters seem properly distinguishable, although, if I really wanted to nitpick, I'd say the © and ® are too different, with the latter (REGISTERED SIGN) being way too small, basically unreadable here. Since I see this sign approximately never, it probably doesn't matter at all. I like how the ampersand (&) is more traditional, although I'll miss the exotic one Fira produced... I like how the back quotes (`, GRAVE ACCENT) drop down low, nicely aligned with the apostrophe. As I mentioned before, I like how the bar on the "f" aligns with the tops of the other letters, something in Fira Mono that really annoys me now that I've noticed it (it's not aligned!).

A UTF-8 test file Here's the test sheet I've made up to test various characters. I could have sworn I had a good one like this lying around somewhere but couldn't find it so here it is, I guess.
US keyboard coverage:
abcdefghijklmnopqrstuvwxyz`1234567890-=[]\;',./
ABCDEFGHIJKLMNOPQRSTUVWXYZ~!@#$%^&*()_+{}|:"<>?
latin1 coverage: [accented latin-1 characters lost in extraction]
EURO SIGN, TRADE MARK SIGN: € ™
ambiguity test (non-ASCII lookalikes lost in extraction):
e coC0ODQ iI71lL!|
b6G&0B83 [](){}/\.
zs$S52Z% '"
all characters in a sentence, uppercase:
the quick brown fox jumps over the lazy dog
THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG
same, in french:
Portez ce vieux whisky au juge blond qui fume.
dès noël, où un zéphyr haï me vêt de glaçons würmiens, je dîne
d'exquis rôtis de bœuf au kir, à l'aÿ d'âge mûr, &cætera.
DÈS NOËL, OÙ UN ZÉPHYR HAÏ ME VÊT DE GLAÇONS WÜRMIENS, JE DÎNE
D'EXQUIS RÔTIS DE BŒUF AU KIR, À L'AŸ D'ÂGE MÛR, &CÆTERA.
Ligatures test:
-<< -< -<- <-- <--- <<- <- -> ->> --> ---> ->- >- >>-
=<< =< =<= <== <=== <<= <= => =>> ==> ===> =>= >= >>=
<-> <--> <---> <----> <=> <==> <===> <====> :: ::: __
<~~ </ </> /> ~~> == != /= ~= <> === !== !=== =/= =!=
<: := *= *+ <* <*> *> <| <|| |> ||> <. <.> .> +* =* =: :>
(* *) /* */ [| |] {| |} ++ +++ \/ /\ |- -| <!-- <!---
Box drawing alignment tests:
[box drawing characters lost in extraction]
Dashes alignment test:
HYPHEN-MINUS, MINUS SIGN, EN, EM DASH, HORIZONTAL BAR, LOW LINE
--------------------------------------------------
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
––––––––––––––––––––––––––––––––––––––––––––––––––
——————————————————————————————————————————————————
――――――――――――――――――――――――――――――――――――――――――――――――――
__________________________________________________
So there you have it, got completely nerd-sniped by typography again. Now I can go back to writing a too-long proposal again. Sources and inspiration for the above:
  • the unicode(1) command, to look up individual characters to disambiguate, for example, - (U+002D HYPHEN-MINUS, the minus sign next to zero on US keyboards) and − (U+2212 MINUS SIGN, a math symbol)
  • searchable list of characters and their names - roughly equivalent to the unicode(1) command, but in one page, amazingly the /usr/share/unicode database doesn't have any one file like this
  • bits/UTF-8-Unicode-Test-Documents - full list of UTF-8 characters
  • UTF-8 encoded plain text file - nice examples of edge cases, curly quotes example and box drawing alignment test which, incidentally, showed me I needed specific faces customisation in Emacs to get the Markdown code areas to display properly, also the idea of comparing various dashes
  • sample sentences in many languages - unused, "Sentences that contain all letters commonly used in a language"
  • UTF-8 sampler - unused, similar

Other fonts In my previous blog post about fonts, I had a list of alternative fonts, but it seems people are not digging through this, so I figured I would redo the list here to preempt "but have you tried Jetbrains mono" kind of comments. My requirements are:
  • no ligatures: yes, in the previous post, I wanted ligatures but I have changed my mind. After testing this, I find them distracting, confusing, and they often break the monospace nature of the display
  • monospace: this is to display code
  • italics: often used when writing Markdown, where I do make use of italics... Emacs falls back to underlining text when lacking italics, which is hard to read
  • free-ish, ultimately should be packaged in Debian
Here is the list of alternatives I have considered in the past and why I'm not using them:
  • agave: recommended by tarzeau, not sure I like the lowercase a, a bit too exotic, packaged as fonts-agave
  • Cascadia code: optional ligatures, multilingual, not liking the alignment, ambiguous parenthesis (look too much like square brackets), new default for Windows Terminal and Visual Studio, packaged as fonts-cascadia-code
  • Fira Code: ligatures, was using Fira Mono from which it is derived, lacking italics except for forks; interestingly, Fira Code passes the alignment test but Fira Mono fails to show the X signs properly! packaged as fonts-firacode
  • Hack: no ligatures, very similar to Fira, italics, good alternative, fails the X test in box alignment, packaged as fonts-hack
  • Hermit: no ligatures, smaller, alignment issues in box drawing and dashes, packaged as fonts-hermit, somehow part of cool-retro-term
  • IBM Plex: irritating website, replaces Helvetica as the IBM corporate font, no ligatures by default, italics, proportional alternatives, serifs and sans, multiple languages, partial failure in box alignment test (X signs), fancy curly braces contrast perhaps too much with the rest of the font, packaged in Debian as fonts-ibm-plex
  • Inconsolata: no ligatures, maybe italics? more compressed than others, feels a little out of balance because of that, packaged in Debian as fonts-inconsolata
  • Intel One Mono: nice legibility, no ligatures, alignment issues in box drawing, not packaged in Debian
  • Iosevka: optional ligatures, italics, multilingual, good legibility, has a proportional option, serifs and sans, line height issue in box drawing, fails dash test, not in Debian
  • Jetbrains Mono: (mandatory?) ligatures, good coverage, originally rumored to be not DFSG-free (Debian Free Software Guidelines) but ultimately packaged in Debian as fonts-jetbrains-mono
  • Monoid: optional ligatures, feels much "thinner" than Jetbrains, not liking alignment or spacing on that one, ambiguous 2Z, problems rendering box drawing, packaged as fonts-monoid
  • Mononoki: no ligatures, looks good, good alternative, suggested by the Debian fonts team as part of fonts-recommended, problems rendering box drawing, em dash bigger than en dash, packaged as fonts-mononoki
  • Source Code Pro: italics, looks good, but dash metrics look whacky, not in Debian
  • spleen: bitmap font, old school, spacing issue in box drawing test, packaged as fonts-spleen
  • sudo: personal project, no ligatures, zero originally not dotted, relied on metrics for legibility, spacing issue in box drawing, not in Debian
So, if I get tired of Commit Mono, I will probably try, in order:
  1. Hack
  2. Jetbrains Mono
  3. IBM Plex Mono
Iosevka, Mononoki and Intel One Mono are also good options, but have alignment problems. Iosevka is particularly disappointing, as the EM DASH metrics are just completely wrong (much too wide). This was tested using the Programming fonts site, which has all the above fonts, which cannot be said of Font Squirrel or Google Fonts, amazingly. Other such tools:

27 May 2024

Thomas Koch: Minimal overhead VMs with Nix and MicroVM

Posted on March 17, 2024
Joachim Breitner wrote about a Convenient sandboxed development environment and thus reminded me to blog about MicroVM. I've toyed around with it a little but not yet seriously used it, as I'm currently not coding. MicroVM is a Nix-based project to configure and run minimal VMs. It can mount and thus reuse the host's nix store inside the VM, and thus has a very small disk footprint. I use MicroVM on a Debian system using the nix package manager. The MicroVM author uses the project to host production services. Otherwise I consider it also a nice way to learn about NixOS, after having started with the nix package manager and before making the big step to NixOS as my main system. The guest's root filesystem is a tmpdir, so one must explicitly define folders that should be mounted from the host and thus be persistent across VM reboots. I defined the VM as a nix flake, since this is how I started from the MicroVM project's example:
{
  description = "Haskell dev MicroVM";
  inputs.impermanence.url = "github:nix-community/impermanence";
  inputs.microvm.url = "github:astro/microvm.nix";
  inputs.microvm.inputs.nixpkgs.follows = "nixpkgs";
  outputs = { self, impermanence, microvm, nixpkgs }:
    let
      persistencePath = "/persistent";
      system = "x86_64-linux";
      user = "thk";
      vmname = "haskell";
      nixosConfiguration = nixpkgs.lib.nixosSystem {
          inherit system;
          modules = [
            microvm.nixosModules.microvm
            impermanence.nixosModules.impermanence
            ({ pkgs, ... }: {
              environment.persistence.${persistencePath} = {
                hideMounts = true;
                users.${user} = {
                  directories = [
                    "git" ".stack"
                  ];
                };
              };
              environment.sessionVariables = {
                TERM = "screen-256color";
              };
              environment.systemPackages = with pkgs; [
                ghc
                git
                (haskell-language-server.override { supportedGhcVersions = [ "94" ]; })
                htop
                stack
                tmux
                tree
                vcsh
                zsh
              ];
              fileSystems.${persistencePath}.neededForBoot = nixpkgs.lib.mkForce true;
              microvm = {
                forwardPorts = [
                  { from = "host"; host.port = 2222; guest.port = 22; }
                  { from = "guest"; host.port = 5432; guest.port = 5432; } # postgresql
                ];
                hypervisor = "qemu";
                interfaces = [
                  { type = "user"; id = "usernet"; mac = "00:00:00:00:00:02"; }
                ];
                mem = 4096;
                shares = [ {
                  # use "virtiofs" for MicroVMs that are started by systemd
                  proto = "9p";
                  tag = "ro-store";
                  # a host's /nix/store will be picked up so that no
                  # squashfs/erofs will be built for it.
                  source = "/nix/store";
                  mountPoint = "/nix/.ro-store";
                } {
                  proto = "virtiofs";
                  tag = "persistent";
                  source = "~/.local/share/microvm/vms/${vmname}/persistent";
                  mountPoint = persistencePath;
                  socket = "/run/user/1000/microvm-${vmname}-persistent";
                } ];
                socket = "/run/user/1000/microvm-control.socket";
                vcpu = 3;
                volumes = [];
                writableStoreOverlay = "/nix/.rwstore";
              };
              networking.hostName = vmname;
              nix.enable = true;
              nix.nixPath = ["nixpkgs=${builtins.storePath <nixpkgs>}"];
              nix.settings = {
                extra-experimental-features = ["nix-command" "flakes"];
                trusted-users = [user];
              };
              security.sudo = {
                enable = true;
                wheelNeedsPassword = false;
              };
              services.getty.autologinUser = user;
              services.openssh = {
                enable = true;
              };
              system.stateVersion = "24.11";
              systemd.services.loadnixdb = {
                description = "import hosts nix database";
                path = [pkgs.nix];
                wantedBy = ["multi-user.target"];
                requires = ["nix-daemon.service"];
                script = "cat ${persistencePath}/nix-store-db-dump | nix-store --load-db";
              };
              time.timeZone = nixpkgs.lib.mkDefault "Europe/Berlin";
              users.users.${user} = {
                extraGroups = [ "wheel" "video" ];
                group = "user";
                isNormalUser = true;
                openssh.authorizedKeys.keys = [
                  "ssh-rsa REDACTED"
                ];
                password = "";
              };
              users.users.root.password = "";
              users.groups.user = {};
            })
          ];
        };
    in {
      packages.${system}.default = nixosConfiguration.config.microvm.declaredRunner;
    };
}
I start the MicroVM with a templated systemd user service:
[Unit]
Description=MicroVM for Haskell development
Requires=microvm-virtiofsd-persistent@%i.service
After=microvm-virtiofsd-persistent@%i.service
AssertFileNotEmpty=%h/.local/share/microvm/vms/%i/flake/flake.nix
[Service]
Type=forking
ExecStartPre=/usr/bin/sh -c "[ /nix/var/nix/db/db.sqlite -ot %h/.local/share/microvm/nix-store-db-dump ] || nix-store --dump-db >%h/.local/share/microvm/nix-store-db-dump"
ExecStartPre=ln -f -t %h/.local/share/microvm/vms/%i/persistent/ %h/.local/share/microvm/nix-store-db-dump
ExecStartPre=-%h/.local/state/nix/profile/bin/tmux new -s microvm -d
ExecStart=%h/.local/state/nix/profile/bin/tmux new-window -t microvm: -n "%i" "exec %h/.local/state/nix/profile/bin/nix run --impure %h/.local/share/microvm/vms/%i/flake"
The above service definition creates a dump of the host's nix store db so that it can be imported in the guest. This is necessary so that the guest can actually use what is available in /nix/store. There is an effort towards an overlayed nix store that would be preferable to this hack. Finally, the MicroVM is started inside a tmux session named "microvm". This way I can use the VM with SSH or through the console, and also access the qemu console. And for completeness, the virtiofsd service:
[Unit]
Description=serve host persistent folder for dev VM
AssertPathIsDirectory=%h/.local/share/microvm/vms/%i/persistent
[Service]
ExecStart=%h/.local/state/nix/profile/bin/virtiofsd \
 --socket-path=${XDG_RUNTIME_DIR}/microvm-%i-persistent \
 --shared-dir=%h/.local/share/microvm/vms/%i/persistent \
 --gid-map :995:%G:1: \
 --uid-map :1000:%U:1:

Thomas Koch: Using nix package manager in Debian

Posted on January 16, 2024
The nix package manager is available in Debian since May 2020. Why would one use it in Debian? Especially the last point nagged me every time I set up a new Debian installation: my emacs configuration and my desktop setup expect certain software to be installed. Please be aware that I'm a beginner with nix and that my config might not follow best practice. Additionally, many nix users are already using the new flakes feature of nix, which I'm still learning about. So I've got this file at .config/nixpkgs/config.nix [1]:
with (import <nixpkgs> {});
{
  packageOverrides = pkgs: with pkgs; {
    thk-emacsWithPackages = (pkgs.emacsPackagesFor emacs-gtk).emacsWithPackages (
      epkgs:
      (with epkgs.elpaPackages; [
        ace-window
        company
        org
        use-package
      ]) ++ (with epkgs.melpaPackages; [
        editorconfig
        flycheck
        haskell-mode
        magit
        nix-mode
        paredit
        rainbow-delimiters
        treemacs
        visual-fill-column
        yasnippet-snippets
      ]) ++ [    # From main packages set
      ]
    );

    userPackages = buildEnv {
      extraOutputsToInstall = [ "doc" "info" "man" ];
      name = "user-packages";
      paths = [
        ghc
        git
        (pkgs.haskell-language-server.override { supportedGhcVersions = [ "94" ]; })
        nix
        stack
        thk-emacsWithPackages
        tmux
        vcsh
        virtiofsd
      ];
    };
  };
}
Every time I change the file or want to receive updates, I do:
nix-env --install --attr nixpkgs.userPackages --remove-all
You can see that I install nix with nix. This gives me a newer version than the one available in Debian stable. However, the nix-daemon still runs as the older binary from Debian. My dirty hack is to put this override in /etc/systemd/system/nix-daemon.service.d/override.conf:
[Service]
ExecStart=
ExecStart=@/home/thk/.local/state/nix/profile/bin/nix-daemon nix-daemon --daemon
I'm not too interested in a cleaner way, since I hope to fully migrate to Nix anyway.

  1. Note the nixpkgs in the path. This is not a config file for nix the package manager but for the nix package collection. See the nixpkgs manual.

Sahil Dhiman: A Late, Late Debconf23 Post

After much procrastination, I have gotten around to completing my DebConf23 (DC23), Kochi blog post. I lost the original etherpad which was started before DebConf23 for jotting things down, so I have started afresh with whatever I can remember, months after the actual conference ended; things might only be as accurate as my memory. DebConf23, the 24th annual Debian Conference, happened in Infopark, Kochi, India from 10th September to 17th September 2023. It was preceded by DebCamp from 3rd September to 9th September 2023. The first formal bid to host DebConf in India was made during DebConf18 in Hsinchu, Taiwan by Raju Dev, which didn't come our way. At the next DebConf, DebConf19 in Curitiba, Brazil, another bid was made by him with help and support from Sruthi, Utkarsh and the whole team. This time, India got the opportunity to host DebConf22, which eventually became DebConf23 for the reasons you all know. I initially met the local team on the sidelines of DebConf20, which was also my first DebConf. Having recently switched to Debian, DC20 introduced me to how things work in Debian. The video team's call for volunteers email pulled me in. Things stuck, and I kept hanging out and helping the local Indian DC team with various stuff. We did manage to organize multiple events leading to DebConf23, including MiniDebConf India 2021 Online, MiniDebConf Palakkad 2022, MiniDebConf Tamil Nadu 2023 and DebUtsav Kochi 2023, which gave us quite a bit of experience and workout. Many local organizers from these conferences later joined various DebConf teams during the conference to help out. For DebConf23, I was originally part of the publicity team, because that was my usual thing. After a team redistribution exercise, Sruthi and Praveen moved me to the sponsorship team, as we didn't have to do much publicity anyhow, and sponsorship was one of those things I could get involved in remotely. The sponsorship team had to take care of raising funds by reaching out to sponsors, managing invoices and fulfillment. Praveen joined the sponsorship team as well. We also had an international sponsorship team - Anisa, Daniel and various Debian Trusted Organizations (TOs) - which took care of reaching out to international organizations, while we took care of reaching out to Indian organizations for sponsorship. It was a really proud moment when my present employer, Unmukti (makers of hopbox), came aboard as a Bronze sponsor. Fundraising seemed to be hit hard by the tech industry slowdown and layoffs, though, and many of our yesteryear sponsors couldn't sponsor. We had biweekly local team meetings, which turned weekly as we neared the event, in addition to the biweekly global team meeting.
Pathu, DebConf23 mascot
To describe the conference venue: it happened in Infopark, Kochi, with the main conference hall being Athulya Hall, and food, accommodation and two smaller halls in the Four Points hotel, right outside Infopark. We got Athulya Hall as part of venue sponsorship from Infopark. The distance between the two was around 300 meters. Halls were named Anamudi, Kuthiran and Ponmudi, after hills and mountain areas in the host state of Kerala. Other than Anamudi, the main hall, I couldn't remember the names of the halls; I still can't. Four Points was big and expensive, and we had, as expected, cost overruns. Due to how DebConf functions, an Indian university wasn't suitable to host a conference of this scale.
Four Point's Infinity Pool at Night
I landed in Kochi on the first day of DebCamp, 3rd September. As usual, I met Abraham first, and the better part of the next hour was spent on meet and greet. It was my first in-person DebConf, so I met many old friends and new folks. I got a room to myself. Abraham lived nearby and hadn't taken the accommodation, so I asked him to join; he finally joined from the second day onwards. All through the conference, room 928 became infamous for various reasons, and I had various roommates for company. In the DebCamp days, we would get up for breakfast, go back to sleep, and only get active past lunch, hacking and helping in the hack lab for the day, followed by fun late-night discussions and parties.
Nilesh, Chirag and Apple at DC23
The team even managed to get a press conference arranged, and we got an opportunity to go to the Press Club, Ernakulam. Sruthi and Jonathan gave the speech and answered questions from journalists, and the event got media coverage as a result.
Ernakulam Press Club
During the conference, the team used to have 9 PM meetings every night for retrospection and planning for the next day, which were always dotted with new problems. Every day, we used to hijack the Silent Hacklab for the meeting and gently ask the only people there at the time to give us space. DebConf itself is a well-oiled machine. The network was brought up from scratch. The video team built the recording, audio mixing, live-streaming, editing and transcoding infrastructure on site. A gaming rig served as router and gateway. We got internet uplinks: a 1 Gbps sponsored leased line from Kerala Vision and a paid backup 100 Mbps connection from a different provider. IPv6 was added through HE's Tunnelbroker. Overall the network worked fine, as we additionally had hotel Wi-Fi, so the conference network wasn't stretched much. I must highlight that DebConf is my only conference where almost everything, and every piece of software, is developed in-house for the conference and modified according to need on the fly. Even event recording cameras, audio checks, direction, recording and editing are all done with in-house software by volunteer-attendees (in some cases remote ones as well), all trained on the sidelines of the conference. The core recording and mixing equipment is owned by Debian and travels to each venue. The rest is sourced locally.
Gaming Rig which served as DC23 gateway router
It was fun seeing how almost all the things were coordinated over text on Internet Relay Chat (IRC). If a talk or event was missing a talkmeister, a director or a camera person, a quick text on the #debconf channel would be enough for someone to volunteer. The video team had a dedicated support channel for each conference venue for any issues, and were quick to respond and fix stuff.
Network information. Screengrab from closing ceremony
It rained for the initial days, which gave us cool weather. The swag team had decided to hand out umbrellas in the swag kit, which turned out to be quite useful. The swag kit was praised for quality and selection - many thanks to Anupa, Sruthi and others. It was fun wearing different color T-shirts, all designed by Abraham: red for volunteers, light green for the video team, green for the core team i.e. staff, and yellow for conference attendees.
With highvoltage
We were already acclimatized by the time DebConf proper started, as we had been talking, hacking and hanging out for the previous seven days. The rush really started with the start of DebConf, and more people joined on the first and second day of the conference. As has been the tradition, an opening talk was prepared by Sruthi and the local team (which I highly recommend watching to get more insight into the process). DebConf day 1 also saw the job fair, where Canonical and FOSSEE, IIT Bombay had stalls for community interaction; judging by the crowd, it turned out to be quite a hit. My association with DebConf (and Debian) started through volunteering with the video team, so I was going to continue doing that at this conference as well. I usually volunteer for talks/events I'm interested in anyway. Handling the camera, talkmeister-ing and direction are fun activities, though I didn't do sound this time around: sound seemed difficult, and I didn't want to spoil someone's stream and recording. Talk attendance varied a lot; for the Bits from the DPL talk the hall was full, while for some there were barely enough people to handle the volunteering tasks, but that's what usually happens. DebConf is more of a place to come together and collaborate, so talk attendance is sometimes an afterthought.
Audience in highvoltage's Bits from DPL talk
I didn't submit any talk proposals this time around: just being on the orga team was too much work already, and I knew talk preparation would get delayed to the last moment and I would have to rush through it.
Enrico's talk
From day 2 onward, more sponsor stalls were introduced in the hallway area: Hopbox by Unmukti, MostlyHarmless and Deeproot (a joint stall), and FOSSEE. The MostlyHarmless stall had nice mechanical keyboards and other fun gadgets; whenever I got the time, I would go and do some typeracing to enjoy the nice, clicky keyboards. As DebConf tradition dictates, we had a Cheese and Wine party, where everyone brought cheese and other delicacies from their region. Then there was yummy Sadya, a traditional vegetarian Malayali lunch served on banana leaves. Loads of different dishes were served, the names of most of which I couldn't pronounce or recollect properly, but everything was super delicious. Day 4 was the day trip, and I chose to go to Athirappilly Waterfalls and the jungle safari. Pictures describe the beauty better than words, though the journey was a bit long.
Athirappilly Falls

Pathu
Tea Gardens
Late that day, we heard the news that Abraham had gone missing. We lost Abraham. He had worked really hard through the years for Debian and to make this conference possible. Talks were cancelled for the next day and Jonathan addressed everyone. We went to Abraham's home the next day to meet his family; the team had arranged buses to his place. It is unfortunate that I only got the opportunity to visit his home after he was gone. Days went by slowly after that. The last day was marked by a small conference dinner. Some people had already left, and all through that day and the next we kept saying goodbye to friends with whom we had spent almost a fortnight.
Group photo with all DebConf T-shirts chronologically
This was my second trip to Kochi, and Vistara Airways' UK886 has become my default flight. I have almost learned how to travel in and around Kochi by metro, water metro, airport shuttle and auto rickshaw. Things are quite accessible in Kochi, but the metro is a bit expensive compared to Delhi. I left Kochi on the 19th. My flight was due to leave around 8 PM, so I had the whole day to myself. A direct route would have taken less than an hour, but as I had time, I chose to take the long way to the airport. I first took an auto rickshaw to Kakkanad Water Metro station, then sailed on the water metro to Vyttila Water Metro station. Vyttila serves as an intermobility hub which connects the water metro, metro and buses in one place. There I switched to the metro, riding from Vyttila Metro station to Aluva Metro station, where I had lunch and then boarded the airport feeder bus to reach Kochi Airport. All in all, I did auto rickshaw > water metro > metro > feeder bus to reach the airport. It was fun and scenic. I must say, public transport and intermodal integration in Kochi are quite good, and one can transition seamlessly from one mode to the next.
Kochi Water Metro

Scenes from Kochi Water Metro
DebConf23 served its purpose of bringing existing Debian people together, as well as getting new people interested in and contributing to Debian. People who came are still contributing to Debian, and that's amazing.
Streaming video stats. Screengrab from closing ceremony
The conference wasn't without its fair share of troubles. There were multiple money transfer woes, and being in India didn't help. Many thanks to the multiple organizations who were proactive in helping out. On top of this, there was conference visa uncertainty and other issues which troubled the visa team a lot. Kudos to everyone who made this possible; I'm surely going to miss some names, so a collective thank you: you know how much you have done to make this event possible. Now DebConf24 is scheduled for Busan, South Korea, and work is already in full swing. As usual, I'm helping with the fundraising part and plan to attend too. Let's see if I can make it or not.
DebConf23 Group Photo. Click to enlarge.
Credit - Aigars Mahinovs
In the end, we kept saying that no DebConf at this scale would come back to India for the next 10 or 20 years; it's too much trouble, to be frank. It was probably a peak that we might not reach again. I would be happy to be proven wrong though :)

25 May 2024

Gunnar Wolf: How computers make books: from graphics rendering, search algorithms, and functional programming to indexing and typesetting

This post is a review for Computing Reviews of "How computers make books: from graphics rendering, search algorithms, and functional programming to indexing and typesetting", a book published by Manning.
If we look at the age-old process of creating books, how many different areas can a computer help us with? And how can each of them be used to teach computer science (CS) fundamentals to a nontechnical audience? This is the premise of John Whitington's enticing book, and the result is quite amazing. The book immediately drew my attention when looking at the titles available for review; after all, my initiation into computing as a kid was learning the LaTeX typesetting system while my father worked on his first book on scientific language and typography [1].

Whitington picks 11 different technical aspects of book production, from how dots of ink are transferred to a white page and how they are made into controllable, recognizable shapes, all the way to forming beautiful typefaces and the nuances of properly addressing white-space to present aesthetically pleasing paragraphs, building it all into specific formats aimed at different ends. But if we dig beyond just the chapter titles, we will find a very interesting book on CS that, without ever using technical language or notation, presents aspects as varied as anti-aliasing, vector and raster images, character sets such as ASCII and Unicode, an introduction to programming, input methods for different writing systems, efficient encoding (compression) methods for both text and images (lossless and lossy), recursion, and dithering methods. To my absolute surprise, while the author thankfully spared the reader the syntax usually associated with LISP-related languages, the programming examples clearly stem from the LISP school, presenting solutions based on tail recursion. Of course, it is no match for Donald Knuth's classic book on this same topic [2], but it could very well be a primer for readers to approach it.

The book is light and easy to read, and keeps a very informal, nontechnical tone throughout. My only complaint relates to reading it in PDF format: the topic of this book, and the care with which the images were provided by the author, warrant high resolution. The included images are not only decorative but an integral part of the book. Maybe this is specific to my review copy, but all of the raster images were in very low resolution.

This book is quite different from what readers may usually expect, as it introduces several significant topics in the field. CS professors will enjoy it, of course, but so will readers with a humanities background, students new to the field, or even those who are just interested in learning a bit more.

References
  1. Sánchez y Gándara, A.; Magariños Lamas, F.; Wolf, K. B. Manual de lenguaje y tipografía científica en castellano. Trillas, Mexico City, Mexico, 1986, https://www.fis.unam.mx/~bwolf/manual.html
  2. Knuth, D. E. Digital typography. CSLI Lecture Notes. CSLI Publications, Stanford, CA, 1999, https://www-cs-faculty.stanford.edu/~knuth/dt.html

24 May 2024

Julian Andres Klode: Observations in Debian dependency solving

In my previous blog post, I explored The New APT 3.0 solver. Since then I have been at work in the test suite, making tests pass and fixing some bugs. You see, for all intents and purposes, the new solver is a very stupid, naive DPLL SAT solver (it just so happens we don't actually have any pure literals in there). We can control it in a bunch of ways:
  1. We can mark packages as install or reject
  2. We can order actions/clauses. When backtracking, the action that came later will be the first we try to backtrack on
  3. We can order the choices of a dependency - we try them left to right.
This is about all that we really want to do. We can't, if we reach a conflict, go "oh, but this conflict was introduced by that upgrade, and it seems more important, so let's not backtrack on the upgrade request but on this dependency instead". This forces us to think about lowering the dependency problem into this form, such that not only do we get formally correct solutions, but also semantically correct ones. This is nice because we can apply a systematic way to approach the issue, rather than introducing ad-hoc rules in the old solver, which had a "which of these packages should I flip the opposite way to break the conflict" kind of thinking. Now our test suite has a whole bunch of these semantics encoded in it, and I'm going to share some problems and ideas for how to solve them. I can't wait to fix these and the error reporting and then turn it on in Ubuntu and later Debian (the defaults change is a post-trixie change, let's be honest).
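To make the lowering concrete, here is a minimal sketch of such a clause encoding in Python (my own illustration with invented helpers, not APT's internal representation): literals are (package-version, polarity) pairs, and alternatives keep their left-to-right order because the solver tries literals in order.

# Sketch of lowering Debian relations into clauses for a DPLL-style
# solver. Clause order and literal order within a clause carry the
# semantic preferences described above.

def depends(pkg, alternatives):
    # "X Depends: A (= 1) | B"  ->  clause: not X, or A=1, or B
    return [(pkg, False)] + [(alt, True) for alt in alternatives]

def conflicts(a, b):
    # at most one of two versions of a package may be installed
    return [(a, False), (b, False)]

clauses = [
    depends("X", ["A=1", "B"]),  # literal order = left-to-right preference
    conflicts("A=1", "A=2"),
]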

apt upgrade is hard

The apt upgrade command implements a safe version of dist-upgrade that essentially calculates the dist-upgrade and then undoes anything that would cause a package to be removed, but it (unlike its apt-get counterpart) allows the solver to install new packages. Now, consider that the following package is installed:
X Depends: A (= 1) | B
An upgrade from A=1 to A=2 is available. What should happen? The classic solver would choose to remove X in a dist-upgrade and then upgrade A, so its answer is quite clear: keep back the upgrade of A. The new solver however sees two possible solutions:
  1. Install B to satisfy X Depends: A (= 1) | B.
  2. Keep back the upgrade of A
Which one does it pick? This depends on the order in which it sees the upgrade action for A and the dependency, as it will backjump chronologically. So:
  1. If it gets to the dependency first, it marks A=1 for install to satisfy A (= 1). Then it gets to the upgrade request, which is just A Depends: A (= 2) | A (= 1), sees it is satisfied already, and is content.
  2. If it gets to the upgrade request first, it marks A=2 for install to satisfy A (= 2). Then later it gets to X Depends: A (= 1) | B, sees that A (= 1) is not satisfiable, and picks B.
We have two ways to approach this issue:
  1. We always order upgrade requests last, so they will be kept back in case of conflicting dependencies
  2. We require that, for apt upgrade, a currently satisfied dependency must be satisfied by currently installed packages, hence eliminating B as a choice (see the sketch below).
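To see the ordering effect end to end, here is a toy DPLL-style solver (again my own sketch, not APT's code): it tries clauses in order and literals left to right, backtracking chronologically, with the upgrade request modelled as the clause A (= 2) | A (= 1) described above.

def solve(clauses, assignment=None):
    # Naive DPLL: clauses in order, literals left to right,
    # chronological backtracking (the latest choice is undone first).
    assignment = dict(assignment or {})
    for clause in clauses:
        if any(assignment.get(var) == want for var, want in clause):
            continue  # clause already satisfied by earlier decisions
        for var, want in clause:
            if var in assignment:
                continue  # already decided the other way
            assignment[var] = want
            result = solve(clauses, assignment)
            if result is not None:
                return result
            del assignment[var]  # backtrack, try the next alternative
        else:
            return None  # no literal satisfied the clause: conflict
    return assignment

dep = [("A=1", True), ("B", True)]         # X Depends: A (= 1) | B
upgrade = [("A=2", True), ("A=1", True)]   # upgrade request: A (= 2) | A (= 1)
one_of = [("A=1", False), ("A=2", False)]  # at most one version of A

print(solve([upgrade, one_of, dep]))  # upgrade first: {'A=2': True, 'A=1': False, 'B': True}
print(solve([dep, one_of, upgrade]))  # dependency first: {'A=1': True, 'A=2': False}

Swapping the clause order flips the outcome, which is exactly why ordering upgrade requests last implements the keep-back behaviour.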

Recommends are hard too

If you have an X Recommends: A (= 1) and a new version of A, A (= 2), the solver currently will silently break the Recommends in some cases. But let's explore what the behaviour of an X Recommends: A (= 1) in combination with an available upgrade A (= 2) should be. We could say the rule should be:
  • An upgrade should keep back A instead of breaking the Recommends
  • A dist-upgrade should either keep back A or remove X (if it is obsolete)
This essentially leaves us the same choices as for the previous problem, but with an interesting twist. We can change the ordering (and we already did), but we could also introduce a new rule, "promotions":
A Recommends in an installed package, or an upgrade to that installed package, where the Recommends existed in the installed version, that is currently satisfied, must continue to be satisfied, that is, it effectively is promoted to a Depends.
This neatly solves the problem for us. We will never break Recommends that are satisfied. Likewise, we already have a Recommends demotion rule:
A Recommends in an installed package, or an upgrade to that installed package, where the Recommends existed in the installed version, that is currently unsatisfied, will not be further evaluated (it is treated the way a Suggests is treated in the default configuration).
Whether we should be allowed to break Suggests with our decisions or not (the old autoremover did not, for instance) is a different decision. Should we promote currently satisfied Suggests to Depends as well? Should we follow currently satisfied Suggests so the solver sees them and doesn't autoremove them, but treat them as optional?
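Restated compactly (a sketch with invented names, not APT code), the promotion and demotion rules boil down to a small decision function:

def effective_strength(existed_in_installed_version, currently_satisfied):
    # Promotion/demotion for a Recommends, per the rules quoted above.
    if not existed_in_installed_version:
        return "Recommends"  # brand-new Recommends: normal handling
    if currently_satisfied:
        return "Depends"     # promotion: must continue to be satisfied
    return "ignored"         # demotion: treated like a Suggests

assert effective_strength(True, True) == "Depends"
assert effective_strength(True, False) == "ignored"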

tightening of versioned dependencies

Another case of versioned dependencies with alternatives that has complex behavior is something like
X Depends: A (>= 2) | B
X Recommends: A (>= 2) | B
In both cases, installing X should upgrade an A < 2 in favour of installing B. But a naive SAT solver might not. If your request to keep A installed is encoded as A (= 1) | A (= 2), then it first picks A (= 1). When it sees the Depends/Recommends, it will switch to B. We can solve this, as in the previous example, by ordering the "keep A installed" requests after any dependencies. Notably, we will enqueue the common dependencies of all A versions first, before selecting a version of A, so something may select a version for us.

version narrowing instead of version choosing

A different approach to dealing with the issue of version selection is to not select a version until the very last moment. So instead of selecting a version to satisfy A (>= 2), we translate
Depends: A (>= 2)
into two rules:
  1. The package selection rule:
     Depends: A
    
    This ensures that any version of A is installed (i.e. it adds a version choice clause, A (= 1) | A (= 2), in an example with two versions of A).
  2. The version narrowing rule:
     Conflicts: A (<< 2)
    
    This would outright reject a choice of A (= 1).
So now we have 3 kinds of clauses:
  1. package selection
  2. version narrowing
  3. version selection
If we process them in that order, we should surely be able to find the solution that best matches the semantics of our Debian dependency model, i.e. selecting earlier choices in a dependency before later choices in the face of version restrictions. This still leaves one issue: what if our maintainer did not use Depends: A (>= 2) | B but e.g. Depends: A (= 3) | B | A (= 2)? He'd expect us to fall back to B if A (= 3) is not installable, and not to A (= 2). But we'd like to enqueue A and reject all choices other than 3 and 2. I think it's fair to say: "Don't do that, then" here.
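A minimal sketch of that lowering (assuming a toy versions table and plain string version comparison; real code would use Debian version ordering):

versions = {"A": ["1", "2"]}  # assumed available versions, oldest first

def lower_depends_min(pkg, minimum):
    # Package selection rule: any version of the package will do.
    selection = [("%s=%s" % (pkg, v), True) for v in versions[pkg]]
    # Version narrowing rule, "Conflicts: A (<< minimum)": unit clauses
    # rejecting each version below the bound.
    narrowing = [[("%s=%s" % (pkg, v), False)]
                 for v in versions[pkg] if v < minimum]
    return [selection] + narrowing

print(lower_depends_min("A", "2"))
# [[('A=1', True), ('A=2', True)], [('A=1', False)]]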

Implementing strict pinning correctly

APT knows a single candidate version per package; this makes the solver relatively deterministic: it will only ever pick the candidate or an installed version. This also happens to significantly reduce the search space, which is good: less backtracking. An up-to-date system will only ever have one version per package that can be installed, so we never actually have to choose versions. But of course, APT allows you to specify a non-candidate version of a package to install, for example:
apt install foo/oracular-proposed
The way this works is that the core component of the previous solver, the pkgDepCache, maintains what essentially amounts to an overlay of the policy that you could see with apt-cache policy. The solver currently however validates allowed version choices against the policy directly, and hence finds these versions are not allowed and craps out. This is an interesting problem, because the solver should not be dependent on the pkgDepCache: the pkgDepCache initialization ("Building dependency tree...") accounts for about half of the runtime of APT (until the Y/n prompt) and I'd really like to get rid of it. But currently the frontend does go via the pkgDepCache. It marks the packages in there, building up what you could call a transaction, and then we translate it to the new solver, and once it is done, it translates the result back into the pkgDepCache. The current implementation of "allowed version" works by reducing the search space, i.e. for every dependency, we outright ignore any non-allowed versions. So if you have a version 3 of A that is ignored, a Depends: A would be translated into A (= 2) | A (= 1). However this has two disadvantages: (1) it means if we show you why A could not be installed, you don't even see A (= 3) in the list of choices, and (2) you would need to keep the pkgDepCache around for the temporary overrides. So instead of actually enforcing the allowed version rule by filtering, a more reasonable model is to apply the allowed version rule by just marking every other version as not allowed when discovering the package in the "from depcache" translation layer. This doesn't really increase the search space either, but it solves both our problem of making overrides work and gives you a reasonable error message that lists all versions of A.
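The difference between the two models, as a sketch with invented structures (not APT's API): filtering removes disallowed versions from the clause entirely, while marking keeps them visible but asserts them false up front.

versions = ["A=1", "A=2", "A=3"]
allowed = {"A=1", "A=2"}

# 1. Filtering: disallowed versions vanish from the clause, so error
#    messages can never mention A (= 3).
filtered_clause = [(v, True) for v in versions if v in allowed]

# 2. Marking: the clause keeps all versions, and each disallowed one is
#    rejected by a unit clause, so it still appears in diagnostics.
full_clause = [(v, True) for v in versions]
unit_clauses = [[(v, False)] for v in versions if v not in allowed]

print(filtered_clause)            # [('A=1', True), ('A=2', True)]
print(full_clause, unit_clauses)  # A=3 stays visible, but is rejected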

pulling up common dependencies to minimize backtracking cost

One of the common issues we have is that when we have a dependency group
A | B | C | D
we try them in order, and if one fails, we undo everything it did and move on to the next one. However, this isn't perhaps the best choice of operation. I explained before that one thing we do is queue the common dependencies of a package (i.e. dependencies shared by all its versions) when marking a package for install, but we don't do this here: we have already lowered the representation of the dependency group into a list of versions, so we'd need to extract the package back out of it. This can of course be done, but there may be a more interesting solution to the problem: we simply enqueue all the common dependencies. That is, we add n backtracking levels for n possible solutions:
  1. We enqueue the common dependencies of all possible solutions deps(A)&deps(B)&deps(C)&deps(D)
  2. We decide (adding a decision level) not to install D right now and enqueue deps(A)&deps(B)&deps(C)
  3. We decide (adding a decision level) not to install C right now and enqueue deps(A)&deps(B)
  4. We decide (adding a decision level) not to install B right now and enqueue A
Now if we need to backtrack from our choice of A, we hopefully still have a lot of common dependencies queued that we do not need to redo. While we have more backtracking levels, each backtracking level would be significantly cheaper, especially if you have cheap backtracking (which admittedly we do not have, yet anyway). The caveat though is: it may be pretty expensive to find the common dependencies. We need to iterate over all dependency groups of A and see if they are in B, C, and D, so we have a complexity of roughly #A * (#B+#C+#D). Each dependency group we need to check, i.e. "is X | Y in B", meanwhile has linear cost: we need to compare the memory content of two pointer arrays containing the list of possible versions that solve the dependency group. This means that X | Y and Y | X are different dependencies of course, but that is to be expected - they are. But any dependency of the same order will have the same memory layout. So really the cost is roughly N^4. This isn't nice. You can apply various heuristics here on how to improve that, or you can even apply binary logic:
  1. Enqueue common dependencies of A | B | C | D
  2. Move into the left half, enqueue the common dependencies of A | B
  3. Again divide and conquer and select A.
This has a significant advantage in long lists of choices, and also in the common case, where the first solution should be the right one. Or again, if you enqueue the package and a version restriction instead, you already get the common dependencies enqueued for the chosen package at least.
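For illustration, a small sketch of the divide-and-conquer enqueueing (invented deps table, not APT code): shared clauses are enqueued before any choice, and each decision level halves the choice list to the left.

deps = {  # assumed per-package dependency sets
    "A": {"libc", "libfoo"},
    "B": {"libc", "libbar"},
    "C": {"libc"},
    "D": {"libc", "libfoo"},
}

def common(choices):
    return set.intersection(*(deps[c] for c in choices))

def divide_and_conquer(choices):
    # Yield (remaining choices, clauses to enqueue at this decision level).
    while len(choices) > 1:
        yield choices, common(choices)
        choices = choices[: (len(choices) + 1) // 2]  # keep the left half
    yield choices, deps[choices[0]]

for choices, enqueue in divide_and_conquer(["A", "B", "C", "D"]):
    print(choices, sorted(enqueue))
# ['A', 'B', 'C', 'D'] ['libc']
# ['A', 'B'] ['libc']
# ['A'] ['libc', 'libfoo']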

Freexian Collaborators: Discover release 0.3.0 of the debusine software factory (by Colin Watson)

Debusine is a Free Software project developed by Freexian to manage scheduling and distribution of Debian-related tasks to a network of worker machines. It was started some time back, but its development pace has recently increased significantly thanks to funding from the Sovereign Tech Fund. You can read more about it in its documentation. For more background, Enrico Zini and Carles Pina i Estany gave a talk on debusine in November 2023 at the mini-DebConf in Cambridge. We described the work from our first funded milestone in a post to debian-devel-announce in March. We've recently finished work on our second funded milestone, culminating in releasing version 0.3.0 to unstable. Our focus in this milestone was on new building blocks to allow us to automatically orchestrate QA tasks in bulk. Full details are in our release history document. As usual, debusine.debian.net is up to date with our latest work.

Collections

In the previous milestone, debusine could store artifacts and run tasks against those artifacts. However, on its own this required the user to do a lot of manual work, because the only way to refer to an artifact was by its ID. We now have the concept of a collection, which can store references to other artifacts (or indeed to other collections) with some attached metadata. These are structured by category, so for example a debian:suite collection contains references to source and binary package artifacts with their names, versions, and architectures as metadata. This allows us to look up artifacts using a simple query language instead of just by ID. At the moment, the main visible effect of this is that our "Getting started with debusine" tutorial no longer needs users of debusine.debian.net to create their own build environments before being able to submit other work requests: they can refer to existing environments using something like debian/match:codename=trixie:variant=sbuild instead. We also have a basic user interface allowing you to browse existing collections, accessible via the relevant workspace (such as the default System workspace).

Workflows

We've always known that individual tasks were just a starting point: real-world requirements often involve chaining many tasks together, as many Debian developers already do using the Salsa CI pipeline. debusine intends to approach a similar problem from a different angle, defining common workflows that can be applied at the scale of a whole distribution without being tightly coupled to where each package's code is hosted. In time we intend to define a way for users to specify their own workflows, but rather than getting too bogged down in this we started by building a couple of predefined workflows into debusine. The update_environments workflow is used to create multiple build environments in bulk, while the sbuild workflow builds a source package for all the architectures that it supports and for which debusine has workers. (debusine.debian.net currently has amd64 and arm64 workers, supporting the amd64, arm64, armel, armhf, and i386 architectures between them.) Upcoming work will build on this by adding more workflows that chain tasks together in various ways, such as workflows that build a package and run QA tasks on the results, or a workflow that builds a package and uploads the result to an upload queue.

Next steps

Our next planned milestone involves expanding debusine's capability as a build daemon. For that, we already know that there are a number of specific extra workflow steps we need to add, and we've reached out to some members of Debian's buildd team to ask for feedback on what they consider necessary. We hope to be able to replace some of Freexian's own build infrastructure with debusine in the near future.

22 May 2024

Evgeni Golov: Upgrading CentOS Stream 8 to CentOS Stream 9 using Leapp

Warning to the Planet Debian readers: the following post might shock you, if you're used to Debian's smooth upgrades using only the package manager.

Leapp?!

Contrary to distributions like Debian and Fedora, RHEL can't be upgraded using the package manager alone. Instead there is a tool called Leapp that takes care of orchestrating the update and also includes a set of checks whether a system can be upgraded at all. Have a look at the RHEL documentation about upgrading if you want more details on the process itself. You might have noticed that the title of this post says "CentOS Stream" but here I am talking about RHEL. This is mostly because Leapp was originally written with RHEL in mind.

Upgrading CentOS 7 to EL8

When people started pondering upgrading their CentOS 7 installations, AlmaLinux started the ELevate project to allow upgrading CentOS 7 to CentOS Stream 8 but also to AlmaLinux 8, Rocky 8 or Oracle Linux 8. ELevate was essentially Leapp with patches to allow working on CentOS, which has different package signature keys, different OS release versioning, etc. Sadly these patches were never merged back into Leapp.

Making Leapp work with CentOS Stream 8 (and other distributions)

At some point I noticed that things weren't moving and EL8 to EL9 upgrades were coming closer (and I had my own systems that I wanted to be able to upgrade in place). Annoyed-Evgeni-Development is best development? Not sure, but it produced a set of patches that allowed some movement. However, this is not yet the end of the story. At least "convert dot-less CentOS versions to X.999" is open, and another followup would be needed if we go that route. But I don't expect this to be merged soon, as the patch is technically wrong - yet it makes things mostly work. The big problem here is that CentOS Stream doesn't have X.Y versioning, just X, as it's a constant stream with no point releases. Leapp however relies on X.Y versioning to know which package changes it needs to perform. Pretending CentOS Stream 8 is "RHEL" 8.999 works if you assume that Stream is always ahead of RHEL. This is however a CentOS-only problem. I still need to properly test that, but I'd expect things to work fine with upstream Leapp on AlmaLinux/Rocky if you feed it the right signature and repository data.

Actually upgrading CentOS Stream 8 to CentOS Stream 9 using Leapp

Like I've already teased in my HPE rant, I've actually used that code to upgrade virt01.conova.theforeman.org to CentOS Stream 9. I've also used it to upgrade a server at home that's responsible for running important containers like Home Assistant and UniFi. So it's absolutely battle tested and production grade! It's also hungry for kittens. As mentioned above, you can't just use upstream Leapp, but I have a Copr: evgeni/leapp.
# dnf copr enable evgeni/leapp
# dnf install leapp leapp-upgrade-el8toel9
Apart from the software, we'll also need to tell it which repositories to use for the upgrade.
# vim /etc/leapp/files/leapp_upgrade_repositories.repo
[c9-baseos]
name=CentOS Stream $releasever - BaseOS
metalink=https://mirrors.centos.org/metalink?repo=centos-baseos-9-stream&arch=$basearch&protocol=https,http
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
gpgcheck=1
repo_gpgcheck=0
metadata_expire=6h
countme=1
enabled=1
[c9-appstream]
name=CentOS Stream $releasever - AppStream
metalink=https://mirrors.centos.org/metalink?repo=centos-appstream-9-stream&arch=$basearch&protocol=https,http
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
gpgcheck=1
repo_gpgcheck=0
metadata_expire=6h
countme=1
enabled=1
Depending on the setup and installed packages, more repositories might be needed. Just make sure that the $stream substitution is not used, as Leapp doesn't override that and you'd end up with CentOS Stream 8 repos again. Once all that is in place, we can call leapp preupgrade and let it analyze the system. Ideally, the output will look like this:
# leapp preupgrade
 
============================================================
                      REPORT OVERVIEW                       
============================================================
Reports summary:
    Errors:                      0
    Inhibitors:                  0
    HIGH severity reports:       0
    MEDIUM severity reports:     0
    LOW severity reports:        3
    INFO severity reports:       3
Before continuing consult the full report:
    A report has been generated at /var/log/leapp/leapp-report.json
    A report has been generated at /var/log/leapp/leapp-report.txt
============================================================
                   END OF REPORT OVERVIEW                   
============================================================
But trust me, it won't ;-) As mentioned above, Leapp analyzes the system before the upgrade. Some checks can completely inhibit the upgrade, while others will just be logged as "you better should have a look".

Firewalld Configuration AllowZoneDrifting Is Unsupported

EL7 and EL8 shipped with AllowZoneDrifting=yes, but since EL9 this is not supported anymore. As this can potentially break the networking of the system, the upgrade gets inhibited.

Newest installed kernel not in use

Admit it, you also don't reboot into every new kernel available! Well, Leapp won't let that pass and inhibits the upgrade.

Cannot perform the VDO check of block devices

In EL8 there are two ways to manage VDO: using the dedicated vdo tool and via LVM. If your system uses LVM (it should!) but not VDO, you probably don't have the vdo package installed. But then Leapp can't check if your LVM devices really aren't VDO without the vdo tooling and will inhibit the upgrade. So you gotta install vdo for it to find out that you don't use VDO.

LUKS encrypted partition detected

Yeah. Sorry. Using LUKS? Straight into the inhibit corner! But hey, if you don't use LUKS for / you can probably get away by deleting the inhibitwhenluks actor. That worked for me, but remember the kittens!

Really upgrading CentOS Stream 8 to CentOS Stream 9 using Leapp

The headings are getting silly, huh? Anyway, once leapp preupgrade is happy and doesn't throw any inhibitors anymore, the actual (real?) upgrade can be done by calling leapp upgrade. This will download all necessary packages, create an intermediate initramfs that contains all the things needed for the upgrade, and ask you to reboot. Once booted, the upgrade itself takes somewhere between 5 and 10 minutes. Then another minute or 5 to relabel your disks with the new SELinux policy. And three reboots (into the upgrade initramfs, into SELinux relabel, into the real OS) of a ProLiant DL325 - 5 minutes each? And then for good measure another one, to flip SELinux from permissive to enforcing. Are we done yet? Nope. There are a few post-upgrade tasks you get to do yourself. Yes, the switching of SELinux back to enforcing is one of them. Please don't forget it.

Using the system after the upgrade

A customer once said "We're not running those systems for the sake of running systems, but for the sake of running some application on top of them". This is very true.

libvirt doesn't support Spice/QXL

In EL9, support for Spice/QXL was dropped, so if you try to boot a VM using it, libvirt will nicely error out with
Error starting domain: unsupported configuration: domain configuration does not support video model 'qxl'
Interestingly, because multiple parts of the VM configuration are invalid, you can't edit it in virt-manager (at least the one in Fedora 39), as removing/fixing one part requires applying the new configuration, which is still invalid. So virsh edit <vm> it is! Look for entries like
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    </channel>
    <graphics type='spice' autoport='yes'>
      <listen type='address'/>
    </graphics>
    <audio id='1' type='spice'/>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </video>
    <redirdev bus='usb' type='spicevmc'> 
      <address type='usb' bus='0' port='2'/> 
    </redirdev> 
    <redirdev bus='usb' type='spicevmc'> 
      <address type='usb' bus='0' port='3'/> 
    </redirdev>
and either just delete them or (better) replace them with VNC/cirrus:
    <graphics type='vnc' port='-1' autoport='yes'>
      <listen type='address'/>
    </graphics>
    <audio id='1' type='none'/>
    <video>
      <model type='cirrus' vram='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </video>
Podman needs re-login to private registries

One of the machines I've updated runs Podman and pulls containers from GitHub which are marked as private. To do so, I have a personal access token that I've used to log in to ghcr.io. After the CentOS Stream 9 upgrade (which included an upgrade to Podman 5), pulls stopped working with authentication/permission errors. No idea what exactly happened, but a simple podman login fixed this issue quickly.
$ echo ghp_token | podman login ghcr.io -u <user> --password-stdin
shim has an el8 tag

One of the documented post-upgrade tasks is to verify that no EL8 packages are installed, and to remove those if there are any. However, when you do this, you'll notice that the shim-x64 package has an EL8 version: shim-x64-15-15.el8_2.x86_64. That's because the same build is used in both CentOS Stream 8 and CentOS Stream 9. Confusing, but it should really not be uninstalled if you want the machine to boot ;-)

Are we done yet?

Yes! That's it. Enjoy your CentOS Stream 9!

20 May 2024

Russell Coker: Respect and Children

I attended the school Yarra Valley Grammar (then Yarra Valley Anglican School, which I will refer to as "YV") and completed year 12 in 1990. The school is currently in the news for a spreadsheet some boys made rating girls, where "unrapeable" was one of the ratings. The school's PR team are now making claims like "Respect for each other is in the DNA of this school". I'd like to know when this DNA change allegedly occurred, because respect definitely wasn't in the school DNA in 1990! Before I go any further I have to note that if the school threatens legal action against me for this post it will be clear evidence that they don't believe in respect. The actions of that school have wronged me, several of my friends, many people who aren't friends but who I wish hadn't had to suffer (and whose suffering I wish I hadn't had to witness), and presumably countless others that I didn't witness. If they have any decency they would not consider legal action, but I have learned that as an institution they have no decency, so I have to note that they should read the Wikipedia page about the Streisand Effect [1] and keep it in mind before deciding on a course of action.

I think it is possible to create a school where most kids enjoy being there and enjoy learning, where hardly any students find it a negative experience and almost no-one finds it traumatic. But it is not possible to do that with the way schools tend to be run. When I was at high school there was a general culture that minor sex crimes committed by boys against boys weren't a problem; this probably applied to all high schools. Things like ripping a boy's pants off (known as "dakking") were considered a big joke. If you accept that ripping the pants off an unwilling boy is a good thing (as was the case when I was at school) then that leads to thinking that describing girls as "unrapeable" is acceptable. The Wikipedia page for Pantsing [2] has a reference for this issue being raised as a serious problem by the British Secretary of State for Education and Skills, Alan Johnson, in 2007. So this has continued to be a widespread problem around the world. Has YV become better than other schools in dealing with it, or are dakking and wedgies as accepted now as they were when I attended? There is talk about schools preparing kids for the workforce, but grabbing someone's underpants without consent will result in instant dismissal from almost all employment. There should be more tolerance for making mistakes at school than at work, but schools shouldn't tolerate what would be serious crimes in other contexts. For work environments there have been significant changes to what is accepted, so it doesn't seem unreasonable to expect that schools can have a similar change in culture. One would hope that spending 6 years wondering who's going to grab your underpants next would teach boys the importance of consent and some sympathy for victims of other forms of sexual assault. But that doesn't seem to happen; apparently it's often the opposite.

When I was young, Autism wasn't diagnosed for anyone who was capable of having a normal life. Teachers noticed that I wasn't like other kids; some were nice, but some encouraged other boys to attack me as a form of corporal punishment by proxy - not a punishment for doing anything wrong (detentions were adequate for that) but for being different. The lesson kids will take from that sort of thing is that if you are in a position of power you can mistreat other people and get away with it.
There was a girl in my year level at YV who would probably be diagnosed as Autistic by today's standards; the way I witnessed her being treated was considerably worse than what was described in the recent news reports, but it is quite likely that worse things have been done recently which haven't made the news yet. If this issue is declared to be over after 4 boys were expelled, then I'll count that as evidence of a cover-up. These things don't happen in a vacuum; there's a culture that permits and encourages it.

The word "respect" has different meanings: it can mean "treat a superior as the master" or "treat someone as a human being". The phrase "if you treat me with respect I'll treat you with respect" usually means "if you treat me as the boss then I'll treat you as a human being". The distinction is very important when discussing respect in schools. If teachers are considered the ultimate bosses whose behaviour can never be questioned, then many boys won't need much help from Andrew Tate in developing the belief that they should be the boss of girls in the same way. Do any schools have a process for having students review teachers? Does YV have an ombudsman to take reports of misbehaving teachers, in the way that corporations typically have an ombudsman to take reports about bad managers? Any time you have people whose behaviour is beyond scrutiny or oversight you will inevitably have bad people apply for jobs, then bad things will happen and it will create a culture of bad behaviour. If teachers can treat kids badly then kids will treat other kids badly, and this generally ends with girls being treated badly by boys.

My experience at YV was that kids barely had the status of people. It seemed that the school operated more as a caretaker of the property of parents than as an organisation that cares for people. The current YV website has a Whistleblower policy [3] that has only one occurrence of the word "student", and that is about issues that endanger the health or safety of students. Students are the people most vulnerable to reprisal for complaining, and not being listed as an eligible whistleblower shows their status. The web site also has a flowchart for complaints and grievances [4] which doesn't describe any policy for a complaint to be initiated by a student.

One would hope that parents would advocate for their children, but that often isn't the case. When discussing the possibility of boys being bullied at school with parents, I've had them say things like "my son wouldn't be so weak that he would be bullied"; no boy will tell his parents about being bullied if that's their attitude! I imagine that there are similar but different issues of parents victim-blaming when their daughter is bullied (presumably substituting "immoral" for "weak"), but I don't have direct knowledge of the topic. The experience of many kids is being disrespected by their parents, the school system, and often siblings too. A school can't solve all the world's problems, but it can ideally be a refuge for kids who have problems at home.

When I was at school, the culture in the country and the school was homophobic. One teacher, when discussing issues such as how students could tell him if they had psychological problems and no-one else to talk to, said things like "the Village People make really good music", which was the only time any teacher said anything like "It's OK to be gay" (the Village People were the gayest pop group at the time). A lot of the bullying at school had a sexual component to it.
In addition to the wedgies and dakking (which, while not happening often, were things you had to constantly be aware of), I routinely avoided PE classes where a shower was necessary because of a thug who hung around by the showers and looked hungrily at my penis; I don't know if he had a particular liking for mine or if he stared at everyone that way. Flashing and perving were quite common in change rooms. Presumably such accepted boy-boy sexual misbehaviour led to boys mistreating girls.

I currently work for a company that is active in telling its employees about the possibility of free psychological assistance. Any employee can phone a psychologist to discuss problems (whether or not they are work related) free of charge and without their manager or colleagues knowing. The company is billed and is only given a breakdown of the number of people who used the service and roughly what the issue was (work stress, family, friends, grief, etc). When something noteworthy happens, employees are given reminders about this, such as "if you need help after seeing a homeless man try to steal a laptop from the office then feel free to call the assistance program". Do schools offer something similar? With the school fees paid to a school like YV, they should be able to afford plenty of psychologist time. Every day I was at YV I saw something considerably worse than laptop theft; most days something was done to me.

The problems with schools are part of larger problems with society. About half of the adults in Australia still support the Liberal party in spite of their support of Christian Porter, Cardinal Pell, and Bruce Lehrmann. It's not logical to expect such parents to discourage their sons from mistreating girls or to encourage their daughters to complain when they are mistreated. The Anglican church has recently changed its policy to suggesting that victims of sexual abuse can contact the police instead of or in addition to the church; previously they had encouraged victims to only contact the church, which facilitated cover-ups. One would hope that schools associated with the Anglican church have also changed their practices towards such things.

I approve of the "respect is in our DNA" concept; it's like Google's former slogan of "Don't be evil", which is something that they can be bound to. Here's a list of questions that could be asked of schools (not just YV but all schools) by journalists when reporting on such things:
  1. Do you have a policy of not trying to silence past students who have been treated badly?
  2. Do you take all sexual assaults seriously including wedgies and dakking?
  3. Do you take all violence at school seriously? Even if there s no blood? Even if the victim says they don t want to make an issue of it?
  4. What are your procedures to deal with misbehaviour from teachers? Do the students all know how to file complaints? Do they know that they can file a complaint if they aren t the victim?
  5. Does the school have policies against homophobia and transphobia and are they enforced?
  6. Does the school offer free psychological assistance to students and staff who need it? NB This only applies to private schools like YV that have huge amounts of money, public schools can t afford that.
  7. Are serious incidents investigated by people who are independent of the school and who don t have a vested interest in keeping things quiet?
  8. Do you encourage students to seek external help from organisations like the ones on the resources list of the Grace Tame Foundation [5]? Having your own list of recommended external organisations would be good too.
Counter Arguments

I've had practice debating such things; here are some responses to common counter arguments.

Conclusion

I don't think that YV is necessarily worse than other schools, although I'm sure that representatives of other private schools are now working to assure parents of students and prospective students that it is. I don't think that all the people who were employed as teachers there when I attended were bad people; some of them were nice people who were competent teachers. But a few good people can't turn around a bad system. I will note that when I attended, all the sports teachers were decent people; it was the only department I could say such things about. But sports involves situations that can lead to a bad result: issues started at other times and places can lead to violence or harassment in PE classes regardless of how good the teachers are. Teachers who know that there are problems need to be able to raise issues with the administration. When a teacher quits teaching to join the clergy and another teacher describes it as "a loss for the clergy but a gain for YV", it raises the question of why the bad teacher in question couldn't have been encouraged to leave earlier. A significant portion of the population will do whatever is permitted. If you say "no teacher would ever bully a student, so we don't need to look out for that", then some teacher will do exactly that. I hope that this will lead to changes both in YV and in other schools. But if they declare this issue resolved after expelling 4 students, then something similar or worse will happen again. At least now students know that when this sort of thing happens they can send evidence to journalists to get some action.

18 May 2024

Russell Coker: Kogan 5120*2160 40″ Monitor

I've just got a new Kogan 5120*2160 40″ curved monitor. It cost $599 including shipping etc, which is much cheaper than the Dell monitor with similar specs selling for about $2500. For monitors with better than 4K resolution (by which I don't mean 5K*1440) this is the cheapest option. The nearest competitors are the 27″ monitors that do 5120*2880 from Apple and some companies copying Apple's specs. While 5120*2880 is a significantly better resolution than what I got, it's probably not going to help me at 27″ size.

I've had a Dell 32″ 4K monitor since the 1st of July 2022 [1]. It is a really good monitor and I had no complaints at all about it. It was clearer than the Samsung 27″ 4K monitor I used before it, and I'm not sure how much of that is due to better display technology (the Samsung was from 2017) and how much was due to larger size. But larger size was definitely a significant factor. I briefly owned a Philips 43″ 4K monitor [2] and determined that a 43″ flat screen was definitely too big. At the time I thought that about 35″ would have been ideal, but after a couple of years using a flat 32″ screen I think that 32″ is about the upper limit for a flat screen. This is the first curved monitor I've used, but I'm already thinking that maybe 40″ is too big for a 21:9 aspect ratio even with a curved screen. Maybe if it was 4:3 or even 16:9 that would be OK. Otherwise the ideal for a curved screen for me would be something between about 36″ and 38″. Also 43″ is awkward to move around my desk. But this is still quite close to ideal.

The first system I tested this on was a work laptop, a Dell Latitude 7400 2in1. On the Dell dock it did 4K resolution, and on a HDMI cable it did 1440p, which was a disappointment as that laptop has talked to many 4K monitors at native resolution on the HDMI port with the same cable. This isn't an impossible problem; as I work in the IT department I can just go through all the laptops in the store room until I find one that supports it. But the 2in1 is a very nice laptop, so I might even just keep using it in 4K resolution when WFH. The laptop in question is deemed an "executive laptop", so I have to wait another 2 years for the executives to get new laptops before I can get a newer 2in1.

On my regular desktop I had the problem of the display going off for a few seconds every minute or so, and also occasionally giving a white flicker. That was using 5120*2160 with a DisplayPort switch as described in the blog post about the Dell 32″ monitor. When I ran it in 4K resolution with the DisplayPort switch from my desktop it was fine. I then used the DisplayPort cable that came with the monitor, directly connecting the video card to the display, and it was fine at 5120*2160 with 75Hz.

The monitor has the joystick thing that seems to have become some sort of standard for controlling modern monitors. It's annoying that pressing it in powers it off; I think there should be a separate button for that. Also the UI in general made me wonder if one of the vendors of expensive monitors had paid whoever designed it to make the UI suck.

The monitor had a single dead pixel in the center of the screen, about 1/4 of the way down from the top, when I started writing this post. Now it's gone away, which is a concern as I don't know which pixels might have problems next or if the number of stuck pixels will increase. Also it would be good if there was a dark mode for the WordPress editor. I use dark mode wherever possible, so I didn't notice the dead pixel for several hours until I started writing this blog post.
I watched a movie on Netflix and it took the entire screen area. I don't know if they are storing movies in 64:27 ratio or if they clipped the top and bottom; it was probably clipped, but it still looked OK. The monitor has different screen modes which make it look different; I can't see much benefit to the different modes. The standard mode is what I usually use and it's brighter, and the movie mode seems OK for the one movie I've watched so far.

In other news, BenQ has just announced a 3840*2560 28″ monitor specifically designed for programming [3]. This is the first time I've heard of a monitor with 3:2 ratio and modern resolution. We still aren't at the 4:3 type ratio that we were used to when 640*480 was high resolution, but it's a definite step in the right direction. It's also the only time I recall ever seeing a monitor advertised as being designed for programming. In the 80s there were home computers advertised as being computers for kids to program, but at that time it was either TV sets for monitors or monitors sold with computers. It was only after the IBM PC compatible market took off that having a choice of different monitors for one computer was a thing. In recent years, monitors advertised as being for office use (meaning bright and expensive) have become common, as have monitors designed for gamer use (meaning high refresh rate). But BenQ seems to be the first to advertise a monitor for the purpose of programming. They have a desktop partition feature (which could be software or hardware; the article doesn't make it clear) to give some of the benefits of a tiled window manager to people who use OSs that don't support such things.

The BenQ monitor is a bit small for my taste; I don't know if my vision is good enough to take advantage of 3840*2560 in a 28″ monitor nowadays. I think at least 32″ would be better. Google seems to be really into buying good monitors for their programmers; if every Google programmer got one of those BenQ monitors, then that would be enough sales to make it worthwhile for them.

I had hoped that we would have 6K monitors become affordable this year and 8K become less expensive than most cars. Maybe that won't happen, and we will instead have a wider range of products like the ultra-wide monitor I just bought and the BenQ programmer's monitor. If so, I don't think that will be a bad result. Now the question is whether I can use this monitor for 2 years before finding something else that makes me want to upgrade. I can afford to spend the equivalent of a bit under $1/day on monitor upgrades.

14 May 2024

Evgeni Golov: Using Packit to build RPMs for projects that depend on or vendor your code

I am a huge fan of Packit as it allows us to provide RPMs to our users and testers directly from a pull-request, thus massively tightening the feedback loop and involving people who otherwise might not be able to apply the changes (for whatever reason) and "quickly test" something out. It's also a great way to validate that a change actually builds in a production environment, where no unnecessary development and test dependencies are installed. You can also run tests of the built packages on Testing Farm and automate pushing releases into Fedora/CentOS Stream, but this is neither a (plain) Packit advertisement post, nor is that functionality something I can talk about with a certain level of experience. Adam recently asked why we don't have Packit builds for our Puppet modules, and my first answer was: "well, puppet-* doesn't produce a thing we ship directly, so nobody dared to do it". My second answer was that I had blogged how to test a Puppet module PR with Packit, but I totally agree that the process was a tad cumbersome and could be improved. Now some madman did it and we all get to hear his story! ;-)

What is the problem anyway?

The Foreman Installer is a bit of Ruby code1 that provides a CLI to puppet apply, based on a set of Puppet modules. As the Puppet modules can also be used outside the installer and have their own lifecycle, they live in separate git repositories and their releases get uploaded to the Puppet Forge. Users however do not want to (and should not have to) install the modules themselves, so we have to ship the modules inside the foreman-installer package. Packaging 25 modules for two packaging systems (we support Enterprise Linux and Debian/Ubuntu) seems like a lot of work. Especially if you consider that the main foreman-installer package would need to be rebuilt after each module change, as it contains generated files based on the modules which are too expensive to generate at runtime. So we ship the modules inside the foreman-installer source release, thus vendoring those modules into the installer release. To do so we use librarian-puppet with a Puppetfile, and either a Puppetfile.lock for stable releases or by letting librarian-puppet fetch latest for nightly snapshots. This works beautifully for changes that land in the development and release branches of our repositories - regardless of whether it's foreman-installer.git or any of the puppet-*.git ones. It also works nicely for pull-requests against foreman-installer.git. But because the puppet-* repositories do not map to packages, we assumed it wouldn't work well for pull-requests against those.

How can we solve this?

Well, the "obvious" solution is to build the foreman-installer package via Packit also for pull-requests against the puppet-* repositories. However, as usual, the devil is in the details. Packit by default clones the repository of the pull-request and tries to create a source tarball from that using git archive. As this might be too simple for many projects, one can define a custom create-archive action that runs after the pull-request has been cloned and produces the tarball instead. We already use that in the Packit configuration for foreman-installer to run the pkg:generate_source rake target, which executes librarian-puppet for us. But now the pull-request is against one of the Puppet modules, so Packit will clone that, not the installer. We gotta clone foreman-installer on our own. And then point librarian-puppet at the pull-request. Fun.
Cloning is relatively simple: just call git clone (sorry, Packit/Copr infrastructure). But the Puppet module pull-request? One can use :git => 'https://git.example.com/repo.git' in the Puppetfile to fetch a git repository. In fact, that's what we already do for our nightly snapshots. It also supports :ref => 'some_branch_or_tag_name', if the remote HEAD is not what you want. My brain first went: "I know this! GitHub has these magic refs/pull/1/head and refs/pull/1/merge refs you can check out to get the contents of the pull-request without bothering to add a remote for the source of the pull-request." Well, this requires knowing the ID of the pull-request, and Packit does not expose that in the environment variables available during create-archive. Wait, but we already have a checkout. Can we just say :git => '../.git'? Cloning a .git folder is totally possible after all.
[Librarian]     --> fatal: repository '../.git' does not exist
Could not checkout ../.git: fatal: repository '../.git' does not exist
Seems librarian disagrees. Damn. (Yes, I checked, the path exists.) Does it maybe just not like relative paths?! Yepp, using an absolute path absolutely works! For some reason it then ends up checking out the default HEAD of the "real" (GitHub) remote, not of ../, but luckily this can be fixed by explicitly passing :ref => 'origin/HEAD', which resolves to the branch Packit created for the pull-request. Now we just need to put all of that together and remember to execute all commands from inside the foreman-installer checkout, as that is where all our vendoring recipes etc. live.

Putting it all together: let's look at the diff between the packit.yaml for foreman-installer and the one I've proposed for puppet-pulpcore:
--- a/foreman-installer/.packit.yaml    2024-05-14 21:45:26.545260798 +0200
+++ b/puppet-pulpcore/.packit.yaml  2024-05-14 21:44:47.834162418 +0200
@@ -18,13 +18,15 @@
 actions:
   post-upstream-clone:
     - "wget https://raw.githubusercontent.com/theforeman/foreman-packaging/rpm/develop/packages/foreman/foreman-installer/foreman-installer.spec -O foreman-installer.spec"
+    - "git clone https://github.com/theforeman/foreman-installer"
+    - "sed -i '/theforeman.pulpcore/ s@:git.*@:git => \"# __dir__ /../.git\", :ref => \"origin/HEAD\"@' foreman-installer/Puppetfile"
   get-current-version:
-    - "sed 's/-develop//' VERSION"
+    - "sed 's/-develop//' foreman-installer/VERSION"
   create-archive:
-    - bundle config set --local path vendor/bundle
-    - bundle config set --local without development:test
-    - bundle install
-    - bundle exec rake pkg:generate_source
+    - bash -c "cd foreman-installer && bundle config set --local path vendor/bundle"
+    - bash -c "cd foreman-installer && bundle config set --local without development:test"
+    - bash -c "cd foreman-installer && bundle install"
+    - bash -c "cd foreman-installer && bundle exec rake pkg:generate_source"
  1. It clones foreman-installer (in post-upstream-clone, as that felt more natural after some thinking)
  2. It adjusts the Puppetfile to use #{__dir__}/../.git as the Git repository, abusing the fact that a Puppetfile is really just a Ruby script (sorry Ben!) and thus knows the __dir__ it lives in (see the sketch after this list)
  3. It fetches the version from the foreman-installer checkout, so it's sort-of reasonable
  4. It performs all building inside the foreman-installer checkout
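For illustration, the rewritten theforeman.pulpcore entry in the foreman-installer Puppetfile then behaves as if it read roughly like this (the mod name spelling is my guess; only the :git/:ref pair is exactly what the sed above injects):

# Hypothetical reconstruction of the rewritten Puppetfile entry: point
# librarian-puppet at the Packit checkout one directory up, on the branch
# Packit created for the pull-request. Since a Puppetfile is plain Ruby,
# #{__dir__} interpolates to the absolute directory the Puppetfile lives in.
mod 'theforeman/pulpcore',
  :git => "#{__dir__}/../.git",
  :ref => 'origin/HEAD'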
Can this be used in other scenarios? I hope so! Vendoring is not unheard of. And testing your "consumers" (dependents? naming is hard) is good style anyway!

  1. three Ruby modules in a trench coat, so to speak

Julian Andres Klode: The new APT 3.0 solver

APT 2.9.3 introduces the first iteration of the new solver codenamed solver3, now available via the --solver 3.0 option. The new solver works fundamentally differently from the old one.

How does it work? Solver3 is a fully backtracking dependency solving algorithm that defers choices to as late as possible. It starts with an empty set of packages, then adds the manually installed packages, and then installs packages automatically as necessary to satisfy the dependencies. Deferring the choices is implemented in multiple ways: First, all install requests recursively mark dependencies with a single solution for install, and any packages that are being rejected due to conflicts or user requests will cause their reverse dependencies to be transitively marked as rejected, provided their or-group cannot be solved by a different package. Second, any dependency with more than one choice is pushed to a priority queue that is ordered by the number of possible solutions, such that we resolve a|b before a|b|c. The queue is not ordered just by the number of solutions, though: optional dependencies, that is, Recommends, always sort after mandatory dependencies. Do note that Recommended packages do not nest in backtracking; dependencies of a Recommended package are themselves not optional, so they will have to be resolved before the next Recommended package is seen in the queue. Another important step in deferring choices is extracting the common dependencies of a package across its versions and installing them before we even decide which of its versions we want to install; one of the dependencies might cycle back to a specific version after all. Decisions are recorded at a certain decision level: if we reach a conflict, we backtrack to the previous decision level, mark the decision we made there in the inverse (install X becomes DO NOT INSTALL X), reset all decisions made at higher levels, and restore any dependencies that are no longer resolved to the work queue.
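To make the deferred-choice idea concrete, here is a minimal Ruby sketch of the core loop (my own toy model, not APT's actual C++ implementation; the package names, the deps/conflicts data model, and the solve function are all invented for illustration, and unit propagation and Recommends handling are omitted):

# Toy backtracking solver: deps maps a package to its dependencies, each an
# or-group (array of alternatives); conflicts lists package pairs that
# cannot be co-installed.
def solve(deps, conflicts, installed, pending)
  return installed if pending.empty?
  # Defer wide choices: decide the narrowest or-group first,
  # so "a|b" is resolved before "a|b|c".
  group, *rest = pending.sort_by(&:size)
  group.each do |pkg|
    enemies = conflicts.select { |pair| pair.include?(pkg) }.flatten - [pkg]
    next if enemies.any? { |e| installed.include?(e) } # conflicts, try next alternative
    result =
      if installed.include?(pkg)
        solve(deps, conflicts, installed, rest)
      else
        # New decision level: extend the state. Falling through to the next
        # alternative when the recursion fails is chronological backtracking.
        solve(deps, conflicts, installed + [pkg], rest + deps.fetch(pkg, []))
      end
    return result if result
  end
  nil # every alternative failed: report the conflict to the caller
end

# Example: "webserver" needs a logger and one of two TLS libraries.
deps      = { "webserver" => [["openssl", "libressl"], ["logger"]] }
conflicts = [["openssl", "libressl"]]
p solve(deps, conflicts, [], [["webserver"]])
# => ["webserver", "logger", "openssl"] ("logger" was the narrower group)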

Comparison to SAT solver design. If you have studied SAT solver design, you'll find that this is essentially a DPLL solver without pure literal elimination. A pure literal elimination phase would not work for a package manager: negative pure literals (packages that everything conflicts with) do not exist, and positive pure literals (packages nothing conflicts with) we do not want to mark for install; we want to install as little as possible (well, subject to policy). As part of the solving phase, we also construct an implication graph, albeit a partial one: the first package installing another package is marked as the reason (A -> B), and the same goes for conflicts (not A -> not B). Once we have added the ability to have multiple parents in the implication graph, it stands to reason that we can also implement the much more advanced method of conflict-driven clause learning, where we do not jump back to the previous decision level but exactly to the decision level that caused the conflict. This would massively speed up backtracking.
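The partial implication graph can be thought of as a map from each package to the one package that caused it to be installed; walking the map backwards produces exactly the kind of chain the apt why command mentioned below prints. A toy Ruby sketch with invented names:

# Toy implication graph: reason[p] records which package pulled p in; a
# manually requested package has no parent. Walking the chain backwards
# answers an "apt why"-style question.
reason = {
  "libssl3"   => "openssl",
  "openssl"   => "webserver",
  "webserver" => nil, # manually requested
}

def why(reason, pkg)
  chain = [pkg]
  chain.unshift(reason[chain.first]) while reason[chain.first]
  chain.join(" -> ")
end

puts why(reason, "libssl3") # => "webserver -> openssl -> libssl3"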

What changes can you expect in behavior? The most striking difference to the classic APT solver is that solver3 always keeps manually installed packages around; it never offers to remove them. We will relax that in a future iteration so that it can replace packages with new ones: that is, if your package is no longer available in the repository (obsolete), but there is one that Conflicts+Replaces+Provides it, solver3 will be allowed to install that and remove the other. Implementing that policy is rather trivial: we just need to queue the obsolete package's replacement as a dependency to solve, rather than mark the obsolete package for install. Another critical difference is the change in autoremove behavior: the new solver currently only knows the strongest dependency chain to each package, and hence it will not keep around any packages that are only reachable via weaker chains. A common example is when gcc-<version> packages accumulate on your system over the years. They all have Provides: c-compiler, and the libtool Depends: gcc | c-compiler is enough to keep them around.
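A rough Ruby sketch of that autoremove difference, using the gcc example (names invented; real APT derives this from actual dependency metadata, and the "strongest chain" is modelled here simply as the first recorded satisfier):

# libtool depends on "gcc | c-compiler"; every installed gcc-<version>
# Provides c-compiler.
installed = ["libtool", "gcc-13", "gcc-12", "gcc-10"]
provides  = { "c-compiler" => ["gcc-13", "gcc-12", "gcc-10"] }
or_group  = ["gcc", "c-compiler"]

satisfiers = ->(alt) { [alt] + provides.fetch(alt, []) }

# Classic behaviour: every installed package satisfying any alternative of
# the or-group counts as reachable, so all of them are kept.
old_keep = installed & or_group.flat_map(&satisfiers)
p old_keep # => ["gcc-13", "gcc-12", "gcc-10"]

# solver3 behaviour: only the one package on the recorded chain is kept.
new_keep = installed.select { |pkg| or_group.any? { |alt| satisfiers.(alt).include?(pkg) } }.first(1)
p new_keep # => ["gcc-13"]; gcc-12 and gcc-10 become autoremovable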

New features. The new option --no-strict-pinning instructs the solver to consider all versions of a package and not just the candidate version. For example, you could use apt install foo=2.0 --no-strict-pinning to install version 2.0 of foo and upgrade - or downgrade - packages as needed to satisfy foo=2.0's dependencies. This mostly comes in handy in use cases involving Debian experimental or the Ubuntu proposed pockets, where you want to install a package from there but satisfy as much as possible from the normal release. The implication graph building also allows us to implement an apt why command which, while not as nicely detailed as aptitude's, at least tells you the exact reason why a package is installed. It will only show the strongest dependency chain at first, of course, since that is what we record.

What is left to do? At the moment, error information is not stored across backtracking in any way, but we generally will want to show you the first conflict we reach, as it is the most natural one, or possibly all conflicts; currently you get the last conflict, which may not be particularly useful. Likewise, errors are currently just rendered as implication graphs of the form [not] A -> [not] B -> ..., and we need to put in some work to present those nicely. The test suite is not passing yet; I haven't really started working on it. A challenge is that most packages in the test suite are manually installed as they are mocked, and the solver now doesn't remove those. We plan to implement the replacement logic such that foo can be replaced by foo2 Conflicts/Replaces/Provides foo without needing to be automatically installed. Improving the backtracking to non-chronological conflict-driven clause learning would vastly enhance our backtracking performance. Not that it seems to be an issue right now in my limited testing (mostly noble 64-bit-time_t upgrades): a lot of the complexity you would normally have is not there, because the manually installed packages and the resulting unit propagation (single-solution Depends/Reverse-Depends for Conflicts) already constrain fairly tightly what changes we can actually make. Once all the stuff has landed, we need to start rolling it out and gather feedback. On Ubuntu I'd like automated feedback on regressions (running solver3 in parallel, checking if the result is worse and then submitting an error to the error tracker); on Debian this could just be a role email address to send solver dumps to. At the same time, we can also roll this out incrementally: like phased updates in Ubuntu, we can roll out the new solver as the default to 10%, 20%, 50% of users before going to the full 100%. This will allow us to capture regressions early and fix them.

Taavi Väänänen: Wikimedia Hackathon Tallinn 2024

This year's Wikimedia Hackathon was held in early May in Tallinn, Estonia. Like last year, it was a great opportunity to both see people I work with regularly, including people in my own team that I had not seen in person before, and to work with and help people that I have had very limited interactions with before.

Me talking with Addshore at the Wikimedia Hackathon 2024 hacking room.
Image by Olari Pilnik is licensed under CC BY-SA 4.0.
I presented a session about Puppet (slides), the configuration management tool used on Wikimedia infrastructure (and some other projects I've been involved in), which I think went quite well. I also organized (read: picked a spot for it in the schedule) the cuteness meetup. In addition to the sessions, the main focus of the event was, of course, hacking. As usual, I didn't make any major plans beforehand, and instead ended up working on several smaller projects as they popped up. Here is a list of things I can remember working on: Finally, a conversation I had at the hackathon resulted in me nominating Novem Linguae for mediawiki/* +2 access a few days after the hackathon. I had a great time, and the ferry trip to Tallinn was much nicer than the very early flight I had last year. I can't wait to see you all again :-) Disclosure: I am currently a Wikimedia Foundation contractor, and the Foundation did pay for my travel to Tallinn. This is my personal blog and these are my own opinions.

  1. Since backporting this change felt too risky to do on the weekend, and also I have a feeling I'd get in trouble if I ran an unapproved bot that edited on random wikis on our production wiki farm.
  2. Anyone who got 5 or more patches to core.git merged during the Hackathon got a cool MediaWiki T-shirt.

Matthew Palmer: "Is This Project Still Maintained?"

If you wander around a lot of open source repositories on the likes of GitHub, you'll invariably stumble over repos that have an issue (or more than one!) with a title like the above. Sometimes sitting open and unloved, often with a comment or two from the maintainer and a bunch of "I'll help out!" followups that never seemed to pan out. Very rarely, you'll find one that has been closed, with a happy ending. These issues always fascinate me, because they say a lot about what it means to maintain an open source project, the nature of succession (particularly in a post-Jia Tan world), and the expectations of users and the impedance mismatch between maintainers, contributors, and users. I've also recently been thinking about pre-empting this sort of issue, and opening my own issue that answers the question before it's even asked.

Why These Issues Are Created

As both a producer and consumer of open source software, I completely understand the reasons someone might want to know whether a project is abandoned. It's comforting to be able to believe that there's "someone on the other end of the line", and that if you have a problem, you can ask for help with a non-zero chance of someone answering you. There's also a better chance that, if the maintainer is still interested in the software, compatibility issues and at least show-stopper bugs might get fixed for you. But often there's more at play. There is a delusion that "maintained" open source software comes with entitlements: an expectation that your questions, bug reports, and feature requests will be attended to in some fashion. This comes about, I think, in part because there are a lot of open source projects that are energetically supported, where generous volunteers do answer questions, fix reported bugs, and implement things that they don't personally need, but which random Internet strangers ask for. If you've had that kind of user experience, it's not surprising that you might start to expect it from all open source projects. Of course, these wonders of cooperative collaboration are the exception rather than the rule. In many (most?) cases, there is little practical difference between most projects that are "maintained" and those that are formally declared "unmaintained". The contributors (or, most often, contributor, singular) are unlikely to have the time or inclination to respond to your questions in a timely and effective manner. If you find a problem with the software, you're going to be paddling your own canoe, even if the maintainer swears that they're still maintaining it.

A Thought Appears

With this in mind, I've been considering how to get ahead of the problem and answer the question for the software projects I've put out in the world. Nothing I've built has anything like what you'd call a "community"; most have never seen an external PR, or even an issue. The last commit date on them might be years ago. By most measures, almost all of my repos look "unmaintained". Yet, they don't feel unmaintained to me. I'm still using the code, sometimes as often as every day, and if something broke for me, I'd fix it. Anyone who needs the functionality I've developed can use the code, and be pretty confident that it'll do what it says in the README. I'm considering creating an issue in all my repos, titled "Is This Project Still Maintained?", pinning it to the issues list, and pasting in something I'm starting to think of as "The Open Source Maintainer's Manifesto". It goes something like this:

Is This Project Still Maintained?

Yes. Maybe. Actually, perhaps no. Well, really, it depends on what you mean by "maintained". I wrote the software in this repo for my own benefit, to solve the problems I had, when I had them. While I could have kept the software to myself, I instead released it publicly, under the terms of an open licence, with the hope that it might be useful to others, but with no guarantees of any kind. Thanks to the generosity of others, it costs me literally nothing for you to use, modify, and redistribute this project, so have at it!

OK, Whatever. What About Maintenance?

In one sense, this software is "maintained", and always will be. I fix the bugs that annoy me, I upgrade dependencies when not doing so causes me problems, and I add features that I need. To the degree that any on-going development is happening, it's because I want that development to happen. However, if "maintained" to you means responses to questions, bug fixes, upgrades, or new features, you may be somewhat disappointed. That's not "maintenance", that's "support", and if you expect support, you'll probably want to have a "support contract", where we come to an agreement where you pay me money, and I help you with the things you need help with.

That Doesn't Sound Fair!

If it makes you feel better, there are several things you are entitled to:
  1. The ability to use, study, modify, and redistribute the contents of this repository, under the terms stated in the applicable licence(s).
  2. That any interactions you may have with myself, other contributors, and anyone else in this project's spaces will be in line with the published Code of Conduct, and any transgressions of the Code of Conduct will be dealt with appropriately.
  3. actually, that's it.
Things that you are not entitled to include an answer to your question, a fix for your bug, an implementation of your feature request, or a merge (or even review) of your pull request. Sometimes I may respond, either immediately or at some time long afterwards. You may luck out, and I'll think "hmm, yeah, that's an interesting thing" and I'll work on it, but if I do that in any particular instance, it does not create an entitlement that I will continue to do so, or that I will ever do so again in the future.

But I've Found a Huge and Terrible Bug!

You have my full and complete sympathy. It's reasonable to assume that I haven't come across the same bug, or at least that it doesn't bother me, otherwise I'd have fixed it for myself. Feel free to report it, if only to warn other people that there is a huge bug they might need to avoid (possibly by not using the software at all). Well-written bug reports are great contributions, and I appreciate the effort you've put in, but the work that you've done on your bug report still doesn't create any entitlement on me to fix it. If you really want that bug fixed, the source is available, and the licence gives you the right to modify it as you see fit. I encourage you to dig in and fix the bug. If you don't have the necessary skills to do so yourself, you can get someone else to fix it; everyone has the same entitlements to use, study, modify, and redistribute as you do. You may also decide to pay me for a support contract, and get the bug fixed that way. That gets the bug fixed for everyone, and gives you the bonus warm fuzzies of contributing to the digital commons, which is always nice.

But My PR is a Gift!

If you take the time and effort to make a PR, you're doing good work and I commend you for it. However, that doesn't mean I'll necessarily merge it into this repository, or even work with you to get it into a state suitable for merging. A PR is what is often called a "gift of work". I'll have to make sure that, at the very least, it doesn't make anything actively worse. That includes introducing bugs, or causing maintenance headaches in the future (which includes my getting irrationally angry at indenting, because I'm like that). Properly reviewing a PR takes me at least as much time as it would take me to write it from scratch, in almost all cases. So, if your PR languishes, it might not be that it's bad, or that the project is (dum dum dummmm!) "unmaintained", but just that I don't accept this particular gift of work at this particular time. Don't forget that the terms of licence include permission to redistribute modified versions of the code I've released. If you think your PR is all that and a bag of potato chips, fork away! I won't be offended if you decide to release a permanent fork of this software, as long as you comply with the terms of the licence(s) involved. (Note that I do not undertake support contracts solely to review and merge PRs; that reeks a little too much of "pay to play" for my liking.)

Gee, You Sound Like an Asshole

I prefer to think of myself as "forthright" and "plain-speaking", but that brings to mind that third thing you're entitled to: your opinion. I've written this out because I feel like clarifying the reality we're living in, in the hope that it prevents misunderstandings. If what I've written makes you not want to use the software I've written, that's fine; you've probably avoided future disappointment.

Opinions Sought

What do you think? Too harsh? Too wishy-washy? Comment away!

12 May 2024

Elana Hashman: I am very sick

I have not been able to walk since February 18, 2023. When people ask me how I'm doing, this is the first thing that comes to mind. "Well, you know, the usual, but also I still can't walk," I think to myself. If I dream at night, I often see myself walking or running. In conversation, if I talk about going somewhere, I'll imagine walking there. Even though it's been over a year, I remember walking to the bus, riding to see my friends, going out for brunch, cooking community dinners. But these days, I can't manage going anywhere except by car, and I can't do the driving, and I can't dis/assemble and load my chair. When I'm resting in bed and follow a guided meditation, I might be asked to imagine walking up a staircase, step by step. Sometimes, I do. Other times, I imagine taking a little elevator in my chair, or wheeling up ramps. I feel like there is little I can say that can express the extent of what this illness has taken from me, but it's worth trying. To an able-bodied person, seeing me in a power wheelchair is usually "enough." One of my acquaintances cried when they last saw me in person. But frankly, I love my wheelchair. I am not "wheelchair-bound"; I am bed-bound, and the wheelchair gets me out of bed. My chair hasn't taken anything from me. *** In October of 2022, I was diagnosed with myalgic encephalomyelitis. Scientists and doctors don't really know what myalgic encephalomyelitis (ME) is. Diseases like it have been described for over 200 years.1 It primarily affects women between the ages of 10-39, and the primary symptom is "post-exertional malaise" or PEM: debilitating, disproportionate fatigue following activity, often delayed by 24-72 hours and not relieved by sleep. That fatigue has earned the illness the misleading name of "Chronic Fatigue Syndrome" or CFS, as though we're all just very tired all the time. But tired people respond to exercise positively. People with ME/CFS do not.2 Given the dearth of research and complete lack of on-label treatments, you may think this illness is at least rare, but it is actually quite common: in the United States, an estimated 836k-2.5m people3 have ME/CFS. It is frequently misdiagnosed, and it is estimated that as many as 90% of cases are missed,4 due to mild or moderate symptoms that mimic other diseases. Furthermore, over half of Long COVID cases likely meet the diagnostic criteria for ME,5 so these numbers have increased greatly in recent years. That is, ME is at least as common as rheumatoid arthritis,6 another delightful illness I have. But while any doctor knows what rheumatoid arthritis is, not enough7 have heard of "myalgic encephalomyelitis." Despite a high frequency and disease burden, post-viral associated conditions (PASCs) such as ME have been neglected for medical funding for decades.8 Indeed, many people, including medical care workers, find it hard to believe that after the acute phase of illness, severe symptoms can persist. PASCs such as ME and Long COVID defy the typical narrative around common illnesses. I was always told that if I got sick, I should expect to rest for a bit, maybe take some medications, and a week or two later, I'd get better, right? But I never got better. These are complex, multi-system diseases that do not neatly fit into the Western medical system's specializations. I have seen nearly every specialty because ME/CFS affects nearly every system of the body: cardiology, nephrology, pulmonology, neurology, ophthalmology, and many, many more.
You'd think they'd hand out frequent flyer cards, or a medical passport with fun stamps, but nope. Just hundreds of pages of medical records. And when I don't fit neatly into one particular specialist's box, then I'm sent back to my primary care doctor to regroup while we try to troubleshoot my latest concerning symptoms. "Sorry, can't help you. Not my department." With little available medical expertise, a lot of my disease management has been self-directed in partnership with primary care. I've read hundreds of articles, papers, publications, CME material normally reserved for doctors. It's truly out of necessity, and I'm certain I would be much worse off if I lacked the skills and connections to do this; there are so few ME/CFS experts in the US that there isn't one in my state or any adjacent state.9 So I've done a lot of my own work, much of it while barely being able to read. (A text-to-speech service is a real lifesaver.) To facilitate managing my illness, I've built a mental model of how my particular flavour of ME/CFS works based on the available research I've been able to read and how I respond to treatments. Here is my best attempt to explain it: The best way I have learned to manage this is to prevent myself from doing activities where I will exceed that aerobic threshold by wearing a heartrate monitor,12 but the amount of activity that permits in my current state of health is laughably restrictive. Most days I'm unable to spend more than one to two hours out of bed. Over time, this has meant worsening from a persistent feeling of tiredness all the time and difficulty commuting into an office or sitting at a desk, to being unable to sit at a desk for an entire workday even while working from home and avoiding physically intense chores or exercise without really understanding why, to being unable to leave my apartment for days at a time, and finally, being unable to stand for more than a minute or two or walk. But it's not merely that I can't walk. Many folks in wheelchairs are able to live excellent lives with adaptive technology. The problem is that I am so fatigued, any activity can destroy my remaining quality of life. In my worst moments, I've been unable to read, move my arms or legs, or speak aloud. Every single one of my limbs burned, as though I had caught fire. Food sat in my stomach for hours, undigested, while my stomach seemingly lacked the energy to do its job. I currently rely on family and friends for full-time caretaking, plus a paid home health aide, as I am unable to prep meals, shower, or leave the house independently. This assistance has helped me slowly improve from my poorest levels of function. While I am doing better than I was at my worst, I've had to give up essentially all of my hobbies with physical components. These include singing, cooking, baking, taking care of my houseplants, cross-stitching, painting, and so on. Doing any of these results in post-exertional malaise, so I've had to stop; this reduction of activity to prevent worsening the illness is referred to as "pacing." I've also had to cut back essentially all of my volunteering and work in open source; I am only cleared by my doctor to work 15h/wk (from bed) as of writing. *** CW: severe illness, death, and suicide (skip this section) The difficulty of living with a chronic illness is that there's no light at the end of the tunnel.
Some diseases have a clear treatment path: you take the medications, you complete the procedures, you hit all the milestones, and then you're done, perhaps with some long-term maintenance work. But with ME, there isn't really an end in sight. The median duration of illness reported in one 1997 study was over 6 years, with some patients reporting 20 years of symptoms.13 While a small number of patients spontaneously recover, and many improve, the vast majority of patients are unable to regain their baseline function.14 My greatest fear since losing the ability to walk is getting worse still. Because, while I already require assistance with nearly every activity of daily living, there is still room for decline. The prognosis for extremely ill patients is dismal, and many require feeding tubes and daily nursing care. This may lead to life-threatening malnutrition;15 a number of these extremely severe patients have died, either due to medical neglect or suicide.16 Extremely severe patients cannot tolerate light, sound, touch, or cognitive exertion,17 and often spend most of their time lying flat in a darkened room with ear muffs or an eye mask.18 This is all to say, my prognosis is not great. But while I recognize that the odds aren't exactly in my favour, I am also damn stubborn. (A friend once cheerfully described me as "stubbornly optimistic!") I only get one shot at life, and I do not want to spend the entirety of it barely able to perceive what's going on around me. So while my prognosis is uncertain, there's lots of evidence that I can improve somewhat,19 and there's also lots of evidence that I can live 20+ years with this disease. It's a bitter pill to swallow, but it also means I might have the gift of time, something that not all my friends with severe complex illnesses have had. I feel like I owe it to myself to do the best I can to improve; to try to help others in a similar situation; and to enjoy the time that I have. I already feel like my life has been moving in slow motion for the past 4 years; there's no need to add more suffering. Finding joy, as much as I can, every day, is essential to keep up my strength for this marathon. Even if it takes 20 years to find a cure, I am convinced that the standard of care is going to improve. All the research and advocacy that's been happening over the past decade is plenty to feel hopeful about.20 Hope is a discipline,21 and I try to remind myself of this on the hardest days. *** I'm not entirely sure why I decided to write this. Certainly, today is International ME/CFS Awareness Day, and I'm hoping this post will raise awareness in spaces that aren't often thinking about chronic illnesses. But I think there is also a part of me that wants to share, reach out in some way to the people I've lost contact with while I've been treading water, managing the day to day of my illness. I experience this profound sense of loss, especially when I think back to the life I had before. Everyone hits limitations in what they can do and accomplish, but there is so little I can do with the time and energy that I have. And yet, I understand even this precious little could still be less. So I pace myself. Perhaps I can inspire you to take action on behalf of those of us too fatigued to do the advocacy we need and deserve. Should you donate to a charity or advocacy organization supporting ME/CFS research?
In the US, there are many excellent organizations, such as ME Action, the Open Medicine Foundation, SolveME, the Bateman Horne Center, and the Workwell Foundation. I am also happy to match any donations through the end of May 2024 if you send me your receipts. But charitable giving only goes so far, and I think this problem deserves the backing of more powerful organizations. Proportionate government funding and support is desperately needed. It's critical for us to push governments22 to provide the funding required for research that will make an impact on patients' lives now. Many organizers are running campaigns around the world, advocating for this investment. There is a natural partnership between ME advocacy and Long COVID advocacy, for example, and we have an opportunity to make a great difference to many people by pushing for research and resources inclusive of all PASCs. Some examples I'm aware of include: But outside of collective organizing, there are a lot of sick individuals out there that need help, too. Please, don't forget about us. We need you to visit us, care for us, be our confidantes, show up as friends. There are a lot of people who are very sick out here and need your care. I'm one of them.
