Search Results: "will"

21 June 2024

Bits from Debian: Looking for the artwork for Trixie, the next Debian release

Each release of Debian has a shiny new theme, which is visible on the boot screen, the login screen and, most prominently, on the desktop wallpaper. Debian plans to release Trixie, the next release, next year. As ever, we need your help in creating its theme! You have the opportunity to design a theme that will inspire thousands of people while working in their Debian systems. For the most up-to-date details, please refer to the wiki. We would also like to take this opportunity to thank Juliette Taka Belin for doing the Emerald theme for bookworm. The deadline for submissions is 2024-09-19. The artwork is usually picked based on which themes look the most: If you'd like more information or details, please post to the Debian Desktop mailing list.

20 June 2024

C.J. Collier: Signed NVIDIA drivers on Google Cloud Dataproc 2.2

Hello folks, I've been working this year on better integrating NVIDIA hardware with the Google Cloud Dataproc product (Hadoop on Google Cloud) running the default cluster node image. We have an open bug[1] in the initialization-actions repo regarding creation failures upon enabling secure boot. This is because with secure boot, kernel driver code has its signature verified before insmod places the symbols into kernel memory. The verification process involves reading trust root certificates from EFI variables, and validating that the signatures on the kernel driver either a) were made directly by one of the certificates in the boot sector or b) were made by certificates which chain up to one of them. This means that Dataproc disk images must have a certificate installed into them. My work on the internals will likely start producing images which have certificates from Google in them. In the meantime, however, our users are left without a mechanism to have both secure boot enabled and install out-of-tree kernel modules such as the NVIDIA GPU drivers. To that end, I've got PR #83[2] open with the GoogleCloudDataproc/custom-images GitHub repository. This PR introduces a new argument to the custom image creation script, --trusted-cert, the argument of which is the path to a DER-encoded certificate to be included in the certificate database in the EFI variables of the disk's boot sector. I've written up the instructions on creating a custom image with a trusted certificate here: https://github.com/cjac/custom-images/blob/secure-boot-custom-image/examples/secure-boot/README.md Here is a set of commands that can be used to create a Dataproc custom image with the certificate installed to the EFI's db variable. You can run these commands from the root directory of a checkout such as this:

git clone https://github.com/cjac/custom-images.git --branch secure-boot-custom-image --single-branch
pushd custom-images
PROJECT_ID=your-project-here
PROJECT_NUMBER=your-project-nnnn-here
my_bucket=your-bucket-here
custom_image_zone=your-zone-here
gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
        --member=serviceAccount:${PROJECT_NUMBER}-compute@developer.gserviceaccount.com \
        --role=roles/secretmanager.secretAccessor
gcloud config set project "${PROJECT_ID}"
gcloud auth login
eval $(bash examples/secure-boot/create-key-pair.sh)
metadata="public_secret_name=$ public_secret_name "
metadata="$ metadata ,private_secret_name=$ private_secret_name "
metadata="$ metadata ,secret_project=$ secret_project "
metadata="$ metadata ,secret_version=$ secret_version "
#dataproc_version=2.1-debian11
dataproc_version=2.2-debian12
#customization_script=examples/secure-boot/install-nvidia-driver-debian11.sh
customization_script=examples/secure-boot/install-nvidia-driver-debian12.sh
#image_name="nvidia-open-kernel-bullseye-$(date +%F)"
image_name="nvidia-open-kernel-bookworm-$(date +%F)"
disk_size_gb="50"
python generate_custom_image.py \
    --image-name "${image_name}" \
    --dataproc-version "${dataproc_version}" \
    --trusted-cert "tls/db.der" \
    --customization-script "${customization_script}" \
    --metadata "${metadata}" \
    --zone "${custom_image_zone}" \
    --disk-size "${disk_size_gb}" \
    --no-smoke-test \
    --gcs-bucket "${my_bucket}"
popd
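For reference, the tls/db.der file passed to --trusted-cert is a DER-encoded certificate; the repository's create-key-pair.sh script takes care of generating it. Purely as an illustrative sketch of the format involved (the file names and subject below are assumptions, not taken from that script), a roughly equivalent openssl invocation would be:
# Hypothetical sketch: create a self-signed certificate and convert it to the
# DER form expected by --trusted-cert. Paths and subject are placeholders.
openssl req -new -x509 -newkey rsa:3072 -nodes -sha256 -days 3650 \
    -subj "/CN=Dataproc Secure Boot signing key" \
    -keyout tls/db.rsa -out tls/db.pem
openssl x509 -in tls/db.pem -outform DER -out tls/db.der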
I'd love to hear your feedback! [1] https://github.com/GoogleCloudDataproc/initialization-actions/issues/1058
[2] https://github.com/GoogleCloudDataproc/custom-images/pull/83

Daniel Lange: Fixing esptool read_flash above 2MB on some cheap ESP32 boards

esptool, the Espressif SoC serial bootloader utility, tends to dislike cheap Flash chips attached to the various incarnations of the ESP32 chip family. And it seems to dislike them even more when running esptool on Linux than on other OSs. The common error mode is seeing it break at the 2MB barrier when trying to dump (esptool read_flash) a 4MB flash configuration.
esptool -p /dev/ttyUSB0 -b 921600 read_flash 0 0x400000 flash_dump.bin
will fail with
esptool.py v4.7.0
Serial port /dev/ttyUSB0
Connecting....
Detecting chip type... ESP32
Chip is ESP32-D0WD-V3 (revision v3.1)
Features: WiFi, BT, Dual Core, 240MHz, VRef calibration in efuse, Coding Scheme None
Crystal is 40MHz
[..]
Detected flash size: 4MB
[..]
2097152 (50 %)
A fatal error occurred: Failed to read flash block (result was 01090000: CRC or checksum was invalid)
typically at the 2MB barrier. I found the solution in a rather unrelated esptool Github issue: Create an esptool.cfg file in the project directory (from where you will run esptool):
[esptool]
timeout = 30
max_timeout = 240
erase_write_timeout_per_mb = 40
mem_end_rom_timeout = 0.2
serial_write_timeout = 10
The timeout = 30 is the setting that fixed reading flash memory via esptool read_flash for me. When your esptool.cfg is read, esptool will tell you so in its second line of output:
$ esptool flash_id
esptool.py v4.7.0
Loaded custom configuration from /home/dl/[..]/Embedded_dev/ESP-32_Wemos/esptool.cfg
Found 1 serial ports
Serial port /dev/ttyUSB0
Connecting......
[..]
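With the configuration file in place, the read_flash invocation from the top of this post can simply be re-run unchanged and should now make it past the 2MB mark:
esptool -p /dev/ttyUSB0 -b 921600 read_flash 0 0x400000 flash_dump.bin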
Thank you Radim Karnis and wibbit from the Github issue linked above.

14 June 2024

Matthew Palmer: Information Security: "We Can Do It, We Just Choose Not To"

Whenever a large corporation disgorges the personal information of millions of people onto the Internet, there is a standard playbook that is followed. "Security is our top priority." "Passwords were hashed." "No credit card numbers were disclosed." (Record scratch.) Let's talk about that last one a bit.

A Case Study This post could have been written any time in the past, well, decade or so, really. But the trigger for my sitting down and writing this post is the recent breach of wallet-finding and criminal-harassment-enablement platform Tile. As reported by Engadget, a statement attributed to Life360 CEO Chris Hulls says:
The potentially impacted data consists of information such as names, addresses, email addresses, phone numbers, and Tile device identification numbers.
But don't worry though; even though your home address is now public information
It does not include more sensitive information, such as credit card numbers
Aaaaaand here is where I get salty.

Why Credit Card Numbers Don't Matter Describing credit card numbers as "more sensitive information" is somewhere between disingenuous and a flat-out lie. It was probably included in the statement because it's part of the standard playbook. Why is it part of the playbook, though? Not being a disaster comms specialist, I can't say for sure, but my hunch is that the post-breach playbook includes this line because (a) credit cards are less commonly breached these days (more on that later), and (b) it's a way to insinuate that "all your financial data is safe, no need to worry" without having to say that (because that statement would absolutely be a lie). The thing that not nearly enough people realise about credit card numbers is:
  1. The credit card holder is not usually liable for most fraud done via credit card numbers; and
  2. In terms of actual, long-term damage to individuals, credit card fraud barely rates a mention. Identity fraud, Business Email Compromise, extortion, and all manner of other unpleasantness are far more damaging to individuals.

Why Credit Card Numbers Do Matter Losing credit card numbers in a data breach is a huge deal but not for the users of the breached platform. Instead, it's a problem for the company that got breached. See, going back some years now, there was a wave of huge credit card data breaches. If you've been around a while, names like Target and Heartland will bring back some memories. Because these breaches cost issuing banks and card brands a lot of money, the Payment Card Industry Security Standards Council (PCI-SSC) and the rest of the ecosystem went full goblin mode. Now, if you lose credit card numbers in bulk, it will cost you big. Massive fines for breaches (typically levied by the card brands via the acquiring bank), increased transaction fees, and even the Credit Card Death Penalty (being banned from charging credit cards), are all very big sticks.

Now Comes the Finding Out In news that should not be surprising, when there are actual consequences for failing to do something, companies take the problem seriously. Which is why "no credit card numbers were disclosed" is such an interesting statement. Consider why no credit card numbers were disclosed. It's not that credit card numbers aren't valuable to criminals, because they are. Instead, it's because the company took steps to properly secure the credit card data. Next, you'll start to consider: if the credit card numbers were secured, why wasn't the personal information that did get disclosed similarly secured? Information that is far more damaging to the individuals to whom that information relates than credit card numbers. The only logical answer is that it wasn't deemed financially beneficial to the company to secure that data. The consequences of disclosure for that information aren't felt by the company which was breached. Instead, they're felt by the individuals who have to spend weeks of their life cleaning up from identity fraud committed against them. It's felt by the victim of intimate partner violence whose new address is found in a data dump, letting their ex find them again. Until there are real, actual consequences for the companies which hemorrhage our personal data (preferably ones that have "percentage of global revenue" at the end), data breaches will continue to happen. Not because they're inevitable (because, as credit card numbers show, data can be secured) but because there's no incentive for companies to prevent our personal data from being handed over to whoever comes along.

Support my Salt My salty takes are powered by refreshing beverages. If you'd like to see more of the same, buy me one.

12 June 2024

Matthew Garrett: SSH agent extensions as an arbitrary RPC mechanism

A while back, I wrote about using the SSH agent protocol to satisfy WebAuthn requests. The main problem with this approach is that it required starting the SSH agent with a special argument and also involved being a little too friendly with the implementation - things worked because I could provide an arbitrary public key and the implementation never validated that, but it would be legitimate for it to start doing so and then break everything. And it also only worked for keys stored on tokens that ssh supports - there was no way to extend this to other keystores on the client (such as the Secure Enclave on Macs, or TPM-backed keys on PCs). I wanted a better solution.

It turns out that it was far easier than I expected. The ssh agent protocol is documented here, and the interesting part is the extension support mechanism. Basically, you can declare an extension and then just tunnel whatever you want over it. As before, my go-to was the Go ssh agent package, which conveniently implements both the client and server side of this. Implementing the local agent is trivial - look up SSH_AUTH_SOCK, connect to it, create a new agent client that can communicate with that by calling NewClient, and then implement the ExtendedAgent interface, create a new socket, and call ServeAgent against that. Most of the ExtendedAgent functions should simply call through to the original agent, with the exception of Extension(). Just add a case statement against extensionType, define some reasonably namespaced extension, and you're done.

Now you need to use this agent. You probably don't want to use this for arbitrary hosts (agent forwarding should only be enabled for remote systems you trust, not arbitrary machines you connect to - if you enabled agent forwarding for github and github got compromised, github would be able to use any private keys loaded into your agent, and you probably don't want that). So the right approach is to add a Host entry to the ssh config with a ForwardAgent stanza pointing at the socket you created in your new agent. This way the configured subset of remote hosts will automatically talk to this new custom agent, while forwarding for anything else will still be at the user's discretion.
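As an illustration (not from the original post; the host alias and socket path are placeholders, and pointing ForwardAgent at an explicit socket path needs a reasonably recent OpenSSH), such a Host entry might look like:
# Hypothetical ~/.ssh/config entry: forward the custom agent only to one trusted host.
Host internal-dev
    ForwardAgent /run/user/1000/webauthn-agent.sock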

For the remote end things are even easier. Look up SSH_AUTH_SOCK and call NewClient as before, and then simply call client.Extension(). Whatever you stick in the contents argument will simply end up being received at the client end. You now have a communication channel between the remote system and the local client, and what you do with that is up to you. I'm using it to allow a remote system to obtain auth tokens from Okta and forward WebAuthn challenges that can either be satisfied via a local WebAuthn token or by passing the query off to Mac TouchID, but there's fundamentally no constraints whatsoever on what can be done here.

(If you want to do this on Windows and still have everything work with existing clients you'll need to take this into account - Windows didn't really do Unix sockets until recently so everything there is awful)


Freexian Collaborators: Monthly report about Debian Long Term Support, May 2024 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian's Debian LTS offering.

Debian LTS contributors In May, 17 contributors have been paid to work on Debian LTS; their reports are available:
  • Adrian Bunk did 34.25h (out of 24.0h assigned and 22.0h from previous period), thus carrying over 11.75h to the next month.
  • Bastien Roucariès did 20.0h (out of 20.0h assigned).
  • Ben Hutchings did 16.0h (out of 24.0h assigned), thus carrying over 8.0h to the next month.
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Daniel Leidert did 8.0h (out of 10.0h assigned), thus carrying over 2.0h to the next month.
  • Emilio Pozuelo Monfort did 35.5h (out of 46.0h assigned), thus carrying over 10.5h to the next month.
  • Guilhem Moulin did 13.0h (out of 14.75h assigned and 5.25h from previous period), thus carrying over 7.0h to the next month.
  • Lee Garrett did 11.0h (out of 37.25h assigned and 8.75h from previous period), thus carrying over 35.0h to the next month.
  • Lucas Kanashiro did 10.0h (out of 20.0h assigned), thus carrying over 10.0h to the next month.
  • Markus Koschany did 40.0h (out of 40.0h assigned).
  • Ola Lundqvist did 6.5h (out of 22.5h assigned and 1.5h from previous period), thus carrying over 17.5h to the next month.
  • Roberto C. Sánchez did 7.75h (out of 11.0h assigned and 1.0h from previous period), thus carrying over 4.25h to the next month.
  • Santiago Ruano Rincón did 8.0h (out of 16.0h assigned), thus carrying over 8.0h to the next month.
  • Sean Whitton did 5.5h (out of 5.5h assigned and 0.5h from previous period), thus carrying over 0.5h to the next month.
  • Sylvain Beucler did 10.5h (out of 0.75h assigned and 45.25h from previous period), thus carrying over 35.5h to the next month.
  • Thorsten Alteholz did 14.0h (out of 14.0h assigned).
  • Tobias Frost did 7.75h (out of 10.0h assigned and 2.0h from previous period), thus carrying over 4.25h to the next month.

Evolution of the situation In May, we have released 20 DLAs. Notable security updates in May included:
  • apache2: multiple vulnerabilities which may result in HTTP response splitting, denial of service, or authorization bypass (by Bastien Roucariès, in collaboration with apache2 maintainer Yadd)
  • bind9: two vulnerabilities, called KeyTrap and NSEC3, which may result in denial of service (by Santiago Ruano Rincón)
  • python-pymysql: potential SQL injection attack (by Chris Lamb)
The aforementioned apache2 update was prepared by its Debian maintainer Yadd. This update also involved work on the package test suite in buster, which contributor Bastien Roucariès then forwarded to the apache2 package in unstable. More importantly, a regression in fossil was reported, and Bastien prepared a fix for it. Bastien coordinated the upload of both packages to minimize the introduction of regressions. Contributor Daniel Leidert also prepared an upload of runc to Debian 11 in order to fix a number of CVEs still affecting that package. Finally, contributor Thorsten Alteholz prepared uploads for qtbase-opensource-src, libjwt, and libmicrohttpd in Debian 11. Note that Debian 11 will pass into the LTS phase of support in August and these updates will improve the state and long-term supportability of Debian 11. Debian 10 is presently in its final month of LTS support (as announced on the debian-lts-announce mailing list, support will end on June 30th), after which no new security updates will be made available on security.debian.org. However, Freexian and its team of paid Debian contributors will continue to maintain Debian 10 going forward for the customers of the Extended LTS offer. Subscribe right away if you still have Debian 10 systems which must be kept secure (and which cannot yet be upgraded).

Thanks to our sponsors Sponsors that joined recently are in bold.

8 June 2024

Reproducible Builds: Reproducible Builds in May 2024

Welcome to the May 2024 report from the Reproducible Builds project! In these reports, we try to outline what we have been up to over the past month and highlight news items in software supply-chain security more broadly. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website. Table of contents:
  1. A peek into build provenance for Homebrew
  2. Distribution news
  3. Mailing list news
  4. Miscellaneous news
  5. Two new academic papers
  6. diffoscope
  7. Website updates
  8. Upstream patches
  9. Reproducibility testing framework


A peek into build provenance for Homebrew Joe Sweeney and William Woodruff on the Trail of Bits blog wrote an extensive post about build provenance for Homebrew, the third-party package manager for MacOS. Their post details how each bottle (i.e. each release):
[…] built by Homebrew will come with a cryptographically verifiable statement binding the bottle's content to the specific workflow and other build-time metadata that produced it. […] In effect, this injects greater transparency into the Homebrew build process, and diminishes the threat posed by a compromised or malicious insider by making it impossible to trick ordinary users into installing non-CI-built bottles.
The post also briefly touches on future work, including work on source provenance:
Homebrew's formulae already hash-pin their source artifacts, but we can go a step further and additionally assert that source artifacts are produced by the repository (or other signing identity) that's latent in their URL or otherwise embedded into the formula specification.

Distribution news In Debian this month, Johannes Schauer Marin Rodrigues (aka josch) noticed that the Debian binary package bash version 5.2.15-2+b3 was uploaded to the archive twice: once to bookworm and once to sid, but with differing content. This is a problem for reproducible builds in Debian due to its assumption that the package name, version and architecture triplet is unique. However, josch highlighted that
This example with bash is especially problematic since bash is Essential:yes, so there will now be a large portion of .buildinfo files where it is not possible to figure out with which of the two differing bash packages the sources were compiled.
In response to this, Holger Levsen performed an analysis of all .buildinfo files and found that it will take almost 1,500 binNMUs to fix the fallout from this bug. Elsewhere in Debian, Vagrant Cascadian posted about a Non-Maintainer Upload (NMU) sprint to take place during early June, and it was announced that there is now a #debian-snapshot IRC channel on OFTC to discuss the creation of a new source code archiving service to, perhaps, replace snapshot.debian.org. Lastly, 11 reviews of Debian packages were added, 15 were updated and 48 were removed this month, adding to our extensive knowledge about identified issues. A number of issue types have been updated by Chris Lamb as well. [ ][ ]
Elsewhere in the world of distributions, deep within a larger announcement from Colin Percival about the release of version 14.1-BETA2, it was mentioned that the FreeBSD kernels are now built reproducibly.
In Fedora, however, the change proposal mentioned in our report for April 2024 was approved, so, per the ReproduciblePackageBuilds wiki page, the add-determinism tool is now running in new builds for Fedora 41 ("rawhide"). The add-determinism tool is a Rust program which, as its name suggests, adds determinism to the files it is given as input by attempting to standardize metadata contained in binary or source files to ensure consistency and clamping to $SOURCE_DATE_EPOCH in all instances. This is essentially the Fedora version of Debian's strip-nondeterminism. However, strip-nondeterminism is written in Perl, and Fedora did not want to pull Perl into the buildroot for every package. The add-determinism tool eliminates many causes of non-determinism, and work is ongoing to extend the scope of packages it can operate on.
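As a rough illustration of what such a normalisation pass does, Debian's strip-nondeterminism can be invoked by hand on a single artifact along these lines (the file path is a placeholder; within Debian builds the tool is normally driven by dh_strip_nondeterminism rather than called directly):
# Clamp embedded timestamps in the archive to a fixed epoch so that
# repeated builds of the same input produce byte-identical output.
strip-nondeterminism --timestamp "${SOURCE_DATE_EPOCH:-0}" build/libexample.jar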

Mailing list news On our mailing list this month, regular contributor kpcyrd wrote to the list with an update on their source code indexing project, whatsrc.org. The whatsrc.org project, which was launched last month in response to the XZ Utils backdoor, now contains and indexes almost 250,000 unique source code archives. In their post, kpcyrd gives an example of its intended purpose, noting that it has shown that whilst there seems to be consensus about [the] source code for zsh 5.9 in various Linux distributions, it does not align with the contents of the zsh Git repository. Holger Levsen also posted to the list with a pre-announcement of sorts for the 2024 Reproducible Builds summit. In particular:
[Whilst] the dates and location are not fixed yet, however, if you don't help us with finding a suitable location soon, it is very likely that we'll meet again in Hamburg in the 2nd half of September 2024 […].
Lastly, Frederic-Emmanuel Picca wrote to the list asking for help understanding the non-reproducible status of the Debian silx package and received replies from both Vagrant Cascadian and Chris Lamb.

Miscellaneous news strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build. This month strip-nondeterminism version 1.14.0-1 was uploaded to Debian unstable by Chris Lamb chiefly to incorporate a change from Alex Muntada to avoid a dependency on Sub::Override to perform monkey-patching and break circular dependencies related to debhelper [ ]. Elsewhere in our tooling, Jelle van der Waa modified reprotest because the pipes module will be removed in Python version 3.13 [ ].
It was also noticed that a new blog post by Daniel Stenberg detailing "How to verify a Curl release" mentions the SOURCE_DATE_EPOCH environment variable. This is because:
The [curl] release tools document also contains another key component: the exact time stamp at which the release was done, using integer second resolution. In order to generate a correct tarball clone, you need to also generate the new version using the old version's timestamp, because the modification date of all files in the produced tarball will be set to this timestamp.
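To make the idea concrete, here is one way a reproducible tarball can be produced with GNU tar and gzip; this is a generic sketch, not curl's actual release tooling, and the timestamp and directory name are placeholders:
# Pin file order, ownership and mtimes, and suppress gzip's own timestamp,
# so rebuilding from the same tree yields a byte-identical archive.
export SOURCE_DATE_EPOCH=1716800000   # placeholder release timestamp
tar --sort=name --mtime="@${SOURCE_DATE_EPOCH}" \
    --owner=0 --group=0 --numeric-owner \
    -cf - curl-src/ | gzip -n > curl-release.tar.gz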

Furthermore, Fay Stegerman filed a bug against the Signal messenger app for Android to report that their reproducible builds cannot, in fact, be reproduced. However, Fay is quick to note that she has:
found zero evidence of any kind of compromise. Some differences are yet unexplained but everything I found seems to be benign. I am disappointed that Reproducible Builds have been broken for months but I have zero reason to doubt Signal's security in any way.

Lastly, it was observed that there was a concise and diagrammatic overview of supply chain threats on the SLSA website.

Two new academic papers Two new scholarly papers were published this month. Firstly, Mathieu Acher, Benoît Combemale, Georges Aaron Randrianaina and Jean-Marc Jézéquel of the University of Rennes published Embracing Deep Variability For Reproducibility & Replicability. The authors describe their approach as follows:
In this short [vision] paper we delve into the application of software engineering techniques, specifically variability management, to systematically identify and explicit points of variability that may give rise to reproducibility issues (e.g., language, libraries, compiler, virtual machine, OS, environment variables, etc.). The primary objectives are: i) gaining insights into the variability layers and their possible interactions, ii) capturing and documenting configurations for the sake of reproducibility, and iii) exploring diverse configurations to replicate, and hence validate and ensure the robustness of results. By adopting these methodologies, we aim to address the complexities associated with reproducibility and replicability in modern software systems and environments, facilitating a more comprehensive and nuanced perspective on these critical aspects.
(A PDF of this article is available.)
Secondly, Ludovic Courtès, Timothy Sample, Simon Tournier and Stefano Zacchiroli have collaborated to publish a paper on Source Code Archiving to the Rescue of Reproducible Deployment. Their paper was motivated because:
The ability to verify research results and to experiment with methodologies are core tenets of science. As research results are increasingly the outcome of computational processes, software plays a central role. GNU Guix is a software deployment tool that supports reproducible software deployment, making it a foundation for computational research workflows. To achieve reproducibility, we must first ensure the source code of software packages Guix deploys remains available.
(A PDF of this article is also available.)

diffoscope diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made a number of changes such as uploading versions 266, 267, 268 and 269 to Debian, making the following changes:
  • New features:
    • Use xz --list to supplement output when comparing .xz archives; essential when metadata differs. (#1069329)
    • Include xz --verbose --verbose (i.e. double) output. (#1069329)
    • Strip the first line from the xz --list output. [ ]
    • Only include xz --list --verbose output if the xz has no other differences. [ ]
    • Actually append the xz --list after the container differences, as it simplifies a lot. [ ]
  • Testing improvements:
    • Allow Debian testing to fail right now. [ ]
    • Drop apktool from Build-Depends; we can still test APK functionality via autopkgtests. (#1071410)
    • Add a versioned dependency for at least version 5.4.5 for the xz tests as they fail under (at least) version 5.2.8. (#374)
    • Fix tests for 7zip 24.05. [ ][ ]
    • Fix all tests after addition of xz --list. [ ][ ]
  • Misc:
    • Update copyright years. [ ]
In addition, James Addison fixed an issue where the HTML output showed only the first difference in a file, while the text output shows all differences [ ][ ][ ], Sergei Trofimovich amended the 7zip version test for older 7z versions that include the string [64] [ ][ ] and Vagrant Cascadian relaxed the versioned dependency to allow version 5.4.1 for the xz tests [ ] and proposed updates to guix for versions 267, 268 and pushed version 269 to Guix. Furthermore, Eli Schwartz updated the diffoscope.org website in order to explain how to install diffoscope on Gentoo [ ].
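For readers unfamiliar with the tool itself, a minimal invocation looks something like the following (the file names are placeholders):
# Compare two builds of the same package and write an HTML report of any differences.
diffoscope --html report.html example_1.0-1_amd64.deb example_1.0-1.rebuild_amd64.deb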

Website updates There were a number of improvements made to our website this month, including Chris Lamb making the print CSS stylesheet nicer [ ]. Fay Stegerman made a number of updates to the page about the SOURCE_DATE_EPOCH environment variable [ ][ ][ ] and Holger Levsen added some of their presentations to the Resources page. Furthermore, IOhannes zmölnig stipulated support for SOURCE_DATE_EPOCH in clang version 16.0.0+ [ ], Jan Zerebecki expanded the Formal definition page and fixed a number of typos on the Buy-in page [ ] and Simon Josefsson fixed the link to Trisquel GNU/Linux on the Projects page [ ].

Upstream patches This month, we wrote a number of patches to fix specific reproducibility issues, including:

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In May, a number of changes were made by Holger Levsen:
  • Debian-related changes:
    • Enable the rebuilder-snapshot API on osuosl4. [ ]
    • Schedule the i386 architecture a bit more often. [ ]
    • Adapt cleanup_nodes.sh to the new way of running our build services. [ ]
    • Add 8 more workers for the i386 architecture. [ ]
    • Update configuration now that the infom07 and infom08 nodes have been reinstalled as real i386 systems. [ ]
    • Make diffoscope timeouts more visible on the #debian-reproducible-changes IRC channel. [ ]
    • Mark the cbxi4a-armhf node as down. [ ][ ]
    • Install the hdmi2usb-mode-switch package only on Debian bookworm and earlier [ ] and only install the haskell-platform package on Debian bullseye [ ].
  • Misc:
    • Install the ntpdate utility as we need it later. [ ]
    • Document the progress on the i386 architecture nodes at Infomaniak. [ ]
    • Drop an outdated and unnoticed notice. [ ]
    • Add live_setup_schroot to the list of so-called zombie jobs. [ ]
In addition, Mattia Rizzolo reinstalled the infom07 and infom08 nodes [ ] and Vagrant Cascadian marked the cbxi4a node as online [ ].

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

7 June 2024

Freexian Collaborators: Debian Contributions: DebConf Bursaries, /usr-move, sbuild, and more! (by Stefano Rivera)

Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

DebConf Bursary updates, by Utkarsh Gupta Utkarsh is the bursaries team lead for DebConf 24. Bursary requests are dispatched to a team of volunteers to review. The results are collated, adjusted and merged to produce priority lists of requests to fund. Utkarsh raised the team, coordinated the review, and issued bursaries to attendees.

/usr-move, by Helmut Grohne More and more, the /usr-move transition is being carried out by multiple contributors, who together performed around a hundred of the requested uploads. Of these, Helmut contributed five patches and two uploads. As a result, there are fewer than 350 packages left to be converted, and all of the non-trivial cases have patches. We started with three times that number. Thanks to everyone involved for supporting this effort. For people interested in background information on this transition, Helmut gave a presentation at MiniDebConf Berlin 2024 (slides).

sbuild, by Helmut Grohne While the unshare mode of sbuild has existed for quite a while, it is now getting significant use in Debian, and new problems are popping up. Helmut looked into an apparmor-related failure and provided a diagnosis. While the relevant code would detect the chroot nature of a schroot backend and skip apparmor tests, the unshare environment would be just good enough to run and fail the test. As sbuild exposes fewer special kernel filesystems, the tests will be skipped again. Another problem popped up when gobject-introspection added a dependency on the host architecture Python interpreter in a cross build environment. sbuild would prefer installing (and failing) a host architecture Python to installing the qemu alternative. Attempts to fix this would result in systemd killing sbuild. ischroot, as used by libc6.postinst, would not classify the unshare environment as a chroot. Therefore libc6.postinst would run telinit, which would kill the build process. This is a complex interaction problem that shall eventually be solved by providing triggers from libc6 to be implemented by affected init systems.

Salsa CI updates, by Santiago Ruano Rincón Several issues arose about Salsa CI last month, and it is probably worth mentioning some of the challenges of defining its framework in YAML. With the upcoming end-of-support of Debian 10 buster as LTS, armel was removed from deb.debian.org, making the jobs that build images for buster/armel fail. While the removal of buster/armel from the repositories is a natural change, it shed some light on flaws in the Salsa CI design regarding support for the different Debian releases. Currently, the images are defined like this (from .images-debian.yml):
.all-supported-releases: &all-supported-releases
  - stretch
  - stretch-backports
  - buster
  - bullseye
  - bullseye-backports
  - bookworm
  - bookworm-backports
  - trixie
  - sid
  - experimental
And from them, different images are built according to the different jobs and how they are supported, for example:
images-prod-arm:
  stage: build
  extends: .build_template
  tags:
    - $SALSA_CI_ARM_RUNNER_TAG
  parallel:
    matrix:
      # Base image, all releases, all arches
      - IMAGE_NAME: base
        ARCH:
          - arm32v5
          - arm32v7
          - arm64v8
        RELEASE: *all-supported-releases
The removal of buster/armel could be easily reflected as:
images-prod-arm:
  stage: build
  extends: .build_template
  tags:
    - $SALSA_CI_ARM_RUNNER_TAG
  parallel:
    matrix:
      # Base image, fully supported releases, all arches
      - IMAGE_NAME: base
        ARCH:
          - arm32v5
          - arm32v7
          - arm64v8
        RELEASE:
          - stretch
          - buster
          - bullseye
          - bullseye-backports
          - bookworm
          - bookworm-backports
          - trixie
          - sid
          - experimental
      # buster only supports armhf and arm64
      - IMAGE_NAME: base
        ARCH:
          - arm32v7
          - arm64v8
        RELEASE: buster
Evidently, this increases duplication of the release support data, which is of course not optimal and is error-prone when changing the data about supported releases. A better approach would be to have two different YAML lists, such as:
# releases that have partial support. E.g.: buster is transitioning to
# Debian LTS, and buster armel is no longer found in deb.debian.org
.old-releases: &old-releases
  - stretch
  - buster

.currently-supported-releases: &currently-supported-releases
  - bullseye
  - bullseye-backports
  - bookworm
  - bookworm-backports
  - trixie
  - sid
  - experimental
and then a unified list:
.all-supported-releases: &all-supported-releases
  - *old-releases
  - *currently-supported-releases
that could be used in the matrix of the jobs that build all the images available in the pipeline container registry. However, due to limitations in GitLab, it is not possible to expand the variables or mapping values in a parallel:matrix context. At least not in an elegant fashion. This is the kind of issue that recently arose and that Santiago is currently working to solve, in the simplest possible way. Astute readers would notice that stretch is listed in the fully supported releases. And there is no problem with stretch, because it is built from archive.debian.org. Otto actually has tried to fix the broken image build job doing the same, but it is still incorrect, because the security repository is not (yet) available in archive.debian.org. Additionally, Santiago has also worked on other merge requests, such as:
  1. support branch/tags as target head in the test projects,
  2. build autopkgtest image on top of stable
  3. Add .yamllint and make it happy in the autopkgtest-lxc project
  4. enable FF_SCRIPT_SECTIONS to log multiline commands, among others.

Archiving DebConf Websites, by Stefano Rivera DebConf, the annual Debian conference, has its own new website every year. These are typically complex dynamic web applications (featuring registration, call for papers, scheduling, etc.). Once the conference is over, there is no need to keep maintaining these applications, so we archive the sites off as static HTML, and serve them from Debian's static CDN. Stefano archived the websites for the last two DebConfs. The schedule system behind DebConf 14 and 15's websites was a derivative of Canonical's summit system. This was only used for a couple of years before migrating to wafer, the current system. Archiving summit content has been on the nice-to-have list for years, but nobody had ever tackled it. The machine that served the sites went away a couple of years ago. After much digging, a backup of the database was found, and Stefano got this code running on an ancient Python 2.7. Recently Stefano put this all together and hooked in an archive export to finally get this content preserved.

Python 3.x and pypy3 security bug triage, by Stefano Rivera Stefano Rivera triaged all the open security bugs against the Python 3.x and PyPy3 packages for Debian's stable and LTS releases. Several had been fixed but this wasn't recorded in the security tracker.

Linux livepatching support for Debian, by Santiago Ruano Rincón In collaboration with Emmanuel Arias, Santiago filed ITP bug #1070494. As stated in the bug, more than an Intent to Package, it is an Intent to Design and Implement live patching support for the Linux kernel in Debian. For now, Emmanuel and Santiago have done exploratory work and they are working to understand the different possibilities to implement livepatching. One possible direction is to rely on kpatch, and the other is to package the modules using regular packaging tools. It is also necessary to evaluate whether it is possible to distribute the modules via packages, or instead as a service, as is done by some commercial distributions.

Miscellaneous contributions
  • Thorsten Alteholz uploaded cups-bjnp to improve packaging.
  • Colin Watson tracked down a baffling CI issue in openssh to unblock several merge requests, removed the user_readenv=1 option from its PAM configuration, and started on the first stage of his plan to split out GSS-API key exchange support to separate packages.
  • Colin did his usual routine work on the Python team, upgrading 26 packages to new upstream versions, and cherry-picking an upstream PR to fix a pytest 8 incompatibility in ipywidgets.
  • Colin NMUed a couple of packages to reduce the need for explicit overrides in Packages-arch-specific, and removed some other obsolete entries from there.
  • Emilio managed various library transitions, and helped finish a few of the remaining t64 transitions.
  • Helmut sent a patch for enabling piuparts to work as a regular user building on earlier work.
  • Helmut sent patches for 7 cross build failures and 6 other Debian bugs, and fixed an infrastructure problem in crossqa.debian.net.
  • Nicholas worked on a sponsored package upload, and discovered the blhc tool for diagnosing build hardening.
  • Stefano Rivera started and completed the re2 transition. The release team suggested moving to a virtual package scheme that includes the absl ABI (as re2 now depends on it), which Stefano adopted.
  • Stefano continued to work on DebConf 24 planning.
  • Santiago continued to work on DebConf24 Content tasks as well as Debconf25 organisation.

6 June 2024

Debian Brasil: MiniDebConf Belo Horizonte 2024 - a brief report

From April 27th to 30th, 2024, MiniDebConf Belo Horizonte 2024 was held at the Pampulha Campus of UFMG - Federal University of Minas Gerais, in Belo Horizonte city. This was the fifth time that a MiniDebConf (as an exclusive in-person event about Debian) took place in Brazil. Previous editions were in Curitiba (2016, 2017, and 2018), and in Brasília in 2023. We had other MiniDebConf editions held within Free Software events such as FISL and Latinoware, and other online events. See our event history. Parallel to MiniDebConf, on the 27th (Saturday) FLISOL - Latin American Free Software Installation Festival took place. It's the largest event in Latin America to promote Free Software, and it has been held since 2005 simultaneously in several cities. MiniDebConf Belo Horizonte 2024 was a success (as were previous editions) thanks to the participation of everyone, regardless of their level of knowledge about Debian. We value the presence of both beginner users who are familiarizing themselves with the system and the official project developers. The spirit of welcome and collaboration was present throughout the event. 2024 edition numbers During the four days of the event, several activities took place for all levels of users and collaborators of the Debian project. The official schedule was composed of: The final numbers for MiniDebConf Belo Horizonte 2024 show that we had a record number of participants. Of the 224 participants, 15 were official Brazilian contributors, 10 of them DDs (Debian Developers) and 5 DMs (Debian Maintainers), in addition to several unofficial contributors. The organization was carried out by 14 people who started working at the end of 2023, including Prof. Loïc Cerf from the Computing Department who made the event possible at UFMG, and 37 volunteers who helped during the event. As MiniDebConf was held at UFMG facilities, we had the help of more than 10 University employees. See the list with the names of people who helped in some way in organizing MiniDebConf Belo Horizonte 2024. The difference between the number of people registered and the number of attendees at the event is probably explained by the fact that there is no registration fee, so if a person decides not to go to the event, they will not suffer financial losses. The 2024 edition of MiniDebConf Belo Horizonte was truly grand and shows the result of the constant efforts made over the last few years to attract more contributors to the Debian community in Brazil. With each edition the numbers only increase, with more attendees, more activities, more rooms, and more sponsors/supporters.

Activities The MiniDebConf schedule was intense and diverse. On the 27th, 29th and 30th (Saturday, Monday and Tuesday) we had talks, discussions, workshops and many practical activities. On the 28th (Sunday), the Day Trip took place, a day dedicated to sightseeing around the city. In the morning we left the hotel and went, on a chartered bus, to the Belo Horizonte Central Market. People took the opportunity to buy various things such as cheeses, sweets, cachaças and souvenirs, as well as tasting some local foods. After a 2-hour tour of the Market, we got back on the bus and hit the road for lunch at a typical Minas Gerais food restaurant. With everyone well fed, we returned to Belo Horizonte to visit the city's main tourist attraction: Lagoa da Pampulha and Capela São Francisco de Assis, better known as Igrejinha da Pampulha. We went back to the hotel and the day ended in the hacker space that we set up in the events room for people to chat, do packaging, and eat pizzas. Crowdfunding For the third time we ran a crowdfunding campaign and it was incredible how people contributed! The initial goal was to raise the amount equivalent to a gold tier of R$ 3,000.00. When we reached this goal, we defined a new one, equivalent to one gold tier + one silver tier (R$ 5,000.00). And again we achieved this goal. So we proposed as a final goal the value of gold + silver + bronze tiers, which would be equivalent to R$ 6,000.00. The result was that we raised R$ 7,239.65 (~ USD 1,400) with the help of more than 100 people! Thank you very much to the people who contributed any amount. As a thank you, we list the names of the people who donated. Food, accommodation and/or travel grants for participants Each edition of MiniDebConf brought some innovation, or some different benefit for the attendees. In this year's edition in Belo Horizonte, as with DebConfs, we offered bursaries for food, accommodation and/or travel to help those people who would like to come to the event but who would need some kind of help. In the registration form, we included the option for the person to request a food, accommodation and/or travel bursary, but to do so, they would have to identify themselves as a contributor (official or unofficial) to Debian and write a justification for the request. Number of people benefited: The food bursary provided lunch and dinner every day. The lunches included attendees who live in Belo Horizonte and the region. Dinners were paid for attendees who also received accommodation and/or travel. The accommodation was at the BH Jaraguá Hotel. And the travel grants covered airplane or bus tickets, or fuel (for those who came by car or motorbike). Much of the money to fund the bursaries came from the Debian Project, mainly for travel. We sent a budget request to the former Debian leader Jonathan Carter, and he promptly approved our request. In addition to this event budget, the leader also approved individual requests sent by some DDs who preferred to request directly from him. The experience of offering the bursaries was really good because it allowed several people to come from other cities.
Photos and videos You can watch recordings of the talks at the links below: And see the photos taken by several collaborators in the links below: Thanks We would like to thank all the attendees, organizers, volunteers, sponsors and supporters who contributed to the success of MiniDebConf Belo Horizonte 2024. Sponsors Gold: Silver: Bronze: Supporters Organizers

5 June 2024

Scarlett Gately Moore: Kubuntu, KDE, Debian: I am still here, in loving memory of my brother.

I am still here, busy as ever, I just haven't found the inspiration to blog. So soon after the loss of my son, I lost my only brother a couple of weeks ago. It has been a tough year for our family. Thank you everyone for your love and support during this difficult time. I will do my best in re-capping my work; there has been quite a bit, as I am keeping busy with work so I don't dwell too much on the sadness. KDE Snaps: Trying to debug the "unable to save files" breakage in the latest Krita builds without luck:
KisOpenGLCanvasRenderer::reportFailedShaderCompilation: Shader Compilation Failure: "Failed to add vertex shader source from file: matrix_transform.vert - Cause: "
I have implemented everything from https://snapcraft.io/docs/gpu-support , it has worked for years and now suddenly it just stopped. I have had to put it on hold for now; it is unpaid work and I simply don't have time. With the help of my GSOC student we are improving the Qt6 snap MR: https://invent.kde.org/neon/snap-packaging/kde-qt6-core-sdk/-/merge_requests/3 and many improvements on top of that. This exposed many issues with the kf6 snap and the linking to static libs. Those are being worked on now. Updated qt to 6.7.1. Qt6 apps in the works: okular, ark, gwenview, kwrited, elisa. Kubuntu: So many SRUs for the Noble release, I will probably miss a few. https://bugs.launchpad.net/ubuntu/+source/ark/+bug/2068491 Ark cannot open 7-zip files. Sadly the patches were for qt6, waiting for a qt5 port upstream. https://bugs.launchpad.net/ubuntu/noble/+source/merkuro/+bug/2065063 Crash due to missing qml. Fix is in git, no upload rights. Requested sponsor. https://bugs.launchpad.net/ubuntu/+source/tellico/+bug/2065915 Several applications no longer work on architectures that are not amd64 due to hard-coded paths. All fixed in git. Several uploaded to oracular; sponsorship has been requested for several others. Noble updates rejected despite SRU, going to retry. https://bugs.launchpad.net/ubuntu/+source/sddm/+bug/2066275 The dreaded black screen on second boot bug is fixed in git and oracular. Noble was rejected despite the SRU. Will retry. https://bugs.launchpad.net/ubuntu/+source/kubuntu-meta/+bug/2066028 Broken systray submenus. Fixed in git and oracular. Noble rejected despite SRU. Will retry. https://bugs.launchpad.net/ubuntu/+source/plasma-workspace/+bug/2067747 Long standing bug with plasma not loading with lightdm. Fixed in git and oracular. Noble rejected, will retry. https://bugs.launchpad.net/ubuntu/+source/plasma-workspace/+bug/2067742 CVE-2024-36041: Fixed in git and oracular; noble rejected, will retry. And many more. I am applying for MOTU in hopes it will reduce all of my uploading issues. https://wiki.ubuntu.com/scarlettmoore/MOTUApplication Debian: kf6-knotifications and kapidox. Will jump into Plasma 6 next week! Misc: Went to LinuxFest Northwest with Valorie! We had a great time and it was a huge success; we had many people stop by our booth.
As usual, if you like my work and want to see Plasma 6 in Kubuntu it all depends on you! Kubuntu will be out of funds soon and needs donations! Thank you for your consideration. https://kubuntu.org/donate/ Personal: Support for my grandson: https://www.gofundme.com/f/in-loving-memory-of-william-billy-dean-scalf

Alberto García: More ways to install software in SteamOS: Distrobox and Nix

Introduction In my previous post I talked about how to use systemd-sysext to add software to the Steam Deck without modifying the root filesystem. In this post I will give a brief overview of two additional methods. Distrobox distrobox is a tool that uses containers to create a mutable environment on top of your OS. With distrobox you can open a terminal with your favorite Linux distro inside, with full access to the package manager and the ability to install additional software. Containers created by distrobox are integrated with the system so apps running inside have normal access to the user's home directory and the Wayland/X11 session. Since these containers are not stored in the root filesystem they can survive an OS update and continue to work fine. For this reason they are particularly suited to systems with an immutable root filesystem such as Silverblue, Endless OS or SteamOS. Starting from SteamOS 3.5 the system comes with distrobox (and podman) preinstalled and it can be used right out of the box without having to do any previous setup. For example, in order to create a Debian bookworm container simply open a terminal and run this:
$ distrobox create -i debian:bookworm debbox
Here debian:bookworm is the image that this container is created from (debian is the name and bookworm is the tag, see the list of supported tags here) and debbox is the name that is given to this new container. Once the container is created you can enter it:
$ distrobox enter debbox
Or from the Debian entry in the desktop menu -> Lost & Found. Once inside the container you can run your Debian commands normally:
$ sudo apt update
$ sudo apt install vim-gtk3
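If you also want an application installed this way to show up in the SteamOS desktop menu, distrobox ships an export helper; as a minimal sketch, run from inside the container (the application name here is just an example, not something covered in this post):
# Export the containerized application's desktop entry to the host menu.
distrobox-export --app gvim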
Nix Nix is a package manager for Linux and other Unix-like systems. It has the property that it can be installed alongside the official package manager of any distribution, allowing the user to add software without affecting the rest of the system. Nix installs everything under the /nix directory, and packages are made available to the user through a new entry in the PATH and a ~/.nix-profile symlink stored in the home directory. Nix is many more things, including the basis of the NixOS operating system. Explaining Nix in more detail is beyond the scope of this blog post, but for SteamOS users these are perhaps its most interesting properties: The only thing that Nix needs from SteamOS is help to set up the /nix directory so its contents are not stored in the root filesystem. This is already happening starting from SteamOS 3.5 so you can install Nix right away in single-user mode:
$ sudo chown deck:deck /nix
$ wget https://nixos.org/nix/install
$ sh ./install --no-daemon
This installs Nix and adds a line to ~/.bash_profile to set up the necessary environment variables. After that you can log in again and start using it. Here's a very simple example (refer to the official documentation for more details):
# Install and run Midnight Commander
$ nix-env -iA nixpkgs.mc
$ mc
# List installed packages
$ nix-env -q
mc-4.8.31
nix-2.21.1
# Uninstall Midnight Commander
$ nix-env -e mc-4.8.31
What we have seen so far is how to install Nix in single-user mode, which is the simplest one and probably good enough for a single-user machine like the Steam Deck. The Nix project however recommends a multi-user installation, see here for the reasons. Unfortunately the official multi-user installer does not work out of the box on the Steam Deck yet, but if you want to go the multi-user way you can use the Determinate Systems installer: https://github.com/DeterminateSystems/nix-installer Conclusion Distrobox and Nix are useful tools and they give SteamOS users the ability to add additional software to the system without having to modify the base operating system. While for graphical applications the recommended way to install third-party software is still Flatpak, Distrobox and Nix give the user additional flexibility and are particularly useful for installing command-line utilities and other system tools.

2 June 2024

Colin Watson: Free software activity in May 2024

My Debian contributions this month were all sponsored by Freexian. The bulk of my Debian time this month went towards trying to haul more Python packages up to current versions, but I got a few other bits and pieces done as well. You can support my work directly via Liberapay.

Jacob Adams: What to Do When You Forget Your Root Password

Forgetting your root password would initially seem like a problem requiring a full re-install, one that you can't easily recover from without wiping everything away. Forgetting your user password can of course be solved by changing it as root, as in the following, which changes the password for user jacob:
# passwd jacob
but the root password can only be changed by the root user, so you need to somehow get root access in order to do so.

Changing Root's Password with Sudo This one is probably obvious, but if you have a user with the ability to use sudo, then you can change root's password without access to the root account by running:
$ sudo passwd
which will reset the password for the root account without requiring the existing password.

Boot Directly to a Shell Getting root access to any Linux machine you have physical access to is surprisingly simple. You can just boot the machine directly into a root shell without any access control, i.e. passwords.

Why You Should Always Encrypt Your Storage1 To boot directly to a shell you need to append the following text to the kernel command line:
init=/bin/sh
(You could use pretty much any program here, but you're putting your system into a weird state doing this, and so I'd recommend the simplest approach.)

GRUB GRUB will allow you to edit boot parameters on startup using the e key. You'll then be presented with an editor2 that you can use to change the kernel command line by appending to the linux line. E.g. if your editor looks like this:
        load_video
        insmod gzio
        if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
        insmod part_gpt
        insmod ext2
        search --no-floppy --fs-uuid --set=root abcd1234-5678-0910-1112-abcd12345678
        echo    'Loading Linux 6.1.0-21-amd64 ...'
        linux   /boot/vmlinuz-6.1.0-21-amd64 root=UUID=abcd1234-5678-0910-1112-abcd12345678 ro  quiet
        echo    'Loading initial ramdisk ...'
        initrd  /boot/initrd.img-6.1.0-21-amd64
Then you would add init=/bin/sh like so:
        load_video
        insmod gzio
        if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
        insmod part_gpt
        insmod ext2
        search --no-floppy --fs-uuid --set=root abcd1234-5678-0910-1112-abcd12345678
        echo    'Loading Linux 6.1.0-21-amd64 ...'
        linux   /boot/vmlinuz-6.1.0-21-amd64 root=UUID=abcd1234-5678-0910-1112-abcd12345678 ro  quiet init=/bin/sh
        echo    'Loading initial ramdisk ...'
        initrd  /boot/initrd.img-6.1.0-21-amd64
Once you've edited it you can start your machine with Ctrl+x, as you can see from the prompt text under the editor.

Raspberry Pi cmdline.txt On a Raspberry Pi you'll want to append the above to the only line of the cmdline.txt file on the boot partition of the SD card. This is the first partition of the disk, the one that is FAT32. You'll need to do this on another machine (a sketch of doing so follows the example below), since if you had root access to edit cmdline.txt you could also just change your password. As it is a FAT32 partition on an SD card, it should be editable on any other machine that supports SD cards. E.g. if your cmdline.txt looks like this
console=serial0,115200 console=tty1 root=PARTUUID=fb33757d-02 rootfstype=ext4 fsck.repair=yes rootwait quiet
Then you would add init=/bin/sh like so:
console=serial0,115200 console=tty1 root=PARTUUID=fb33757d-02 rootfstype=ext4 fsck.repair=yes rootwait quiet init=/bin/sh
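A minimal sketch of editing the file from another Linux machine; /dev/sdX1 and the mount point are placeholders for your card reader's boot partition:
$ sudo mount /dev/sdX1 /mnt
$ sudo sed -i 's|$| init=/bin/sh|' /mnt/cmdline.txt
$ sudo umount /mnt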

Mount Read / Write Since you're replacing the init process of the machine with a shell, no other processes will be running. Also, your root filesystem will be mounted read-only, as init is expected to remount it read-write as needed.
# mount -o remount,rw /

Change Root Password Once you've remounted the root filesystem read-write, all that's needed is to run the passwd command.
# passwd
Since you're running the command as root you won't need to provide your existing password, and will only need to type a new password twice. Now of course you simply need to remember that password in order to ensure you don't need to do this again.

Reboot Safely You cannot follow the standard reboot process here, as you're only running one process. Therefore it's important to put your root filesystem back into read-only mode before powering off your machine:
# mount -o remount,ro /
Once you've done that you just need to hold down the power button until the machine completely powers off, or pull the plug. And then you're done! Boot the computer again and you'll have everything working as normal, with a root password you remember.
  1. Not that this is the only reason: anyone with physical access to your machine could also boot it into another operating system they control, or just remove your storage device and put it into another computer, or probably other things I'm not thinking of now. You should always encrypt your devices.
  2. The editor uses emacs-like keybindings. The manual includes a list of all the options available.

1 June 2024

Ian Jackson: What your vote is worth - a back of the envelope calculation

tl;dr: Your vote really counts! Each vote in a UK General Election is worth maybe £100,000 - to you and all your fellow citizens taken together. If you really care about the welfare of everyone affected by actions of the UK government, then it's worth that to you too.
Introduction It seems a common perception that one vote, in amongst all those millions, doesn't really matter. So maybe it's not worth voting. But, voting is (largely) what determines what the government does - and the government is big. It's as big as all the people. If you are the kind of person who cares about what happens to everyone in your polity and indeed everyone its actions affect, then even your one vote is very important indeed.
A method for back of the envelope calculation It would be nice to give a quantitative estimate. Many things in our society are measured in money, so let's try taking a stab at calculating the money value of your vote. The argument I'm going to make is this: the government (by which I include the legislature), which is selected by our votes, decides how to spend the national budget. So, basically, I'm going to divide the budget by the electorate.
UK Parliament UK Parliamentary elections decide not only the House of Commons, but, through that, the government. The upper house, the House of Lords, has very limited influence. So I think it's fair to regard the Parliamentary election as, simply, controlling that budget. Being lazy, I'm going to use Wikipedia data. We have the size of the electorate, for 2019: 47.6 million. But your influence isn't shared with the whole electorate, only with the other people who also vote. Turnout in 2019 was 67.3%. The 2019 budget isn't listed but I'll just average the 2018 and March 2020 figures, £842bn and £873bn, so £857 billion. (Strictly speaking I should add up the budgets for the period of the Parliament, but that seems like a lot of effort.) There's a discrepancy in the timescale we need to account for. Your vote influences the budgets for several years, depending how long it is until the next election. Taking Wikipedia's list of elections this century there've been 7 in 24 years. So that's an average of about 3.4y. So, multiplying it through, we have (£857b * (24 / 7)) / (47.6M * 67.3%), giving a guess at the value of your UK General Election vote: £92,000.
European Parliament The 2022 budget for the European Union (Wikipedia again) was €170.6bn. The last election, in 2019, had a turnout of 198,352,638. Each EU Parliament lasts 5 years. The Parliament, however, shares responsibility for the budget with the European Council, which is controlled, ultimately, by national governments. We have to pick a numerical value for the Parliament's share of the influence. Over the past years the Parliament has gradually been more willing to exercise its powers in this area. I'm going to arbitrarily call its share 50%. The calculation, then, is €170.6bn * 5 * 50% / 198M, giving a guess at the value of your EU Parliamentary Election vote: €2150. This much smaller figure reflects simply that the EU doesn't spend very much money, for a polity of its size. (Those stories in the British press giving the impression that the EU is massively wasteful are, simply, lies.) The interaction of this calculation with the Council's share of the influence, and with national budgets, is a bit of a question, but given the much smaller amounts involved, it doesn't seem worth thinking about that too hard.
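A quick check of the arithmetic above, using python3 as a calculator from the shell (not part of the original post; figures as quoted):
# UK: (£857b * (24 / 7)) / (47.6M * 67.3%) - roughly 92,000
$ python3 -c 'print(857e9 * (24 / 7) / (47.6e6 * 0.673))'
# EU: €170.6bn * 5 * 50% / 198,352,638 - roughly 2,150
$ python3 -c 'print(170.6e9 * 5 * 0.5 / 198352638)'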
Only if you care about other people as much as yourself! All of this is only true for you if you value and want to help everyone in your society. That includes immigrants, women, unemployed people, disabled people, people who are much poorer or richer than you, etc. If you think about it in purely personal terms, your vote is hardly worth anything - because while the effect of your vote, overall, is very large, that effect is shared by everyone in your polity. So if you only care about yourself, voting is a total waste of time. The more selfish and xenophobic and racist and so on you are - caring only about people like yourself - the less your vote is worth. This is why voting is rightly seen as a civic duty. I just spent €30 to courier my EP vote to Den Haag. That only makes sense because I'm very willing to spend that €30 to try to improve the spending of the €2000 or so that's my share of the EU budget.
This is a very rough analysis These calculations neglect a lot of very important things: politics isn't just about the allocation of resources. It's also about values, and bad politics can seriously harm people. Arguably many of those effects of your vote are much more important than just how the budget is set and spent. It would be interesting to see an attempt at a similar analysis taking into account life and death questions like hate crime, traffic violence, healthcare, refugees' welfare, and so on. I'm not sure how to approach that. Maybe some real social scientists have done so? References welcome. Also, even on its own terms, this analysis is very rough and ready. We haven't modelled the ability of the government to change its tax rates; perhaps we should be multiplying GDP (or some other better measure) by the 90th percentile total tax rate amongst countries like this one. The amount of influence that can be wielded by one vote is probably nonlinear in the size of the political faction, but IDK in which direction. In unfair voting systems like the UK's, some people's votes are worth much more than others. In a very marginal constituency, which is a target seat, your vote might be worth tens of millions. In a safe seat, it might only be worth a few thousand. And in practical terms you don't get to choose precisely the policies you want; you have to pick a party, which is sometimes very much a question of the lesser evil. So, there is much I haven't modelled. But the key point stands:
Conclusion Although your vote is diluted by everyone else's votes, together, we control the government, which affects us all. So if you care about the whole of society, the big numbers in the divisor, and the numerator, cancel out. You can think of your vote as controlling one citizen's worth of government activity.
edited 2024-06-01 09:40 Z to fix a grammar botch



Russ Allbery: Review: I Shall Wear Midnight

Review: I Shall Wear Midnight, by Terry Pratchett
Series: Discworld #38
Publisher: Harper
Copyright: 2010
Printing: 2011
ISBN: 0-06-143306-3
Format: Trade paperback
Pages: 355
I Shall Wear Midnight is the 38th Discworld novel and the 4th Tiffany Aching novel. This is not a good place to start reading. Tiffany has finished her training and has returned to her home on the Chalk, taking up her duties as the local witch. There are a lot of those, because there's a lot that needs doing. In some cases, such as taking away the pain of the old Duke, they involve things that require magic and that only Tiffany can do. In many other cases, other people could pick up some of the work, but they lack Tiffany's sense of duty and willingness to pay attention. The people of the Chalk have always been a bit suspicious of witches, in part because the job was done for so long by Tiffany's grandmother and no one thought she was a witch. (She was a witch.) Of late, however, that suspicion seems to be getting worse. It comes to a head when Tiffany is accused of theft and worse by the old Duke's maid, a woman with very fixed ideas about the evils of witches. Tiffany has to sort out what's going on and clear herself, all while navigating her now-awkward relationship with the Duke's son Roland, his unimpressive fiancee, and his spectacularly annoying aunt. Ah, this is the stuff. This is exactly the Tiffany Aching novel that I have been hoping Pratchett would write. It's pure, snarky competence porn from start to finish.
"I'm a witch. It's what we do. When it's nobody else's business, it's my business."
One of the things that I adore about this series is how well Pratchett shows the different ways in which one can be a witch. Granny Weatherwax out-thinks everyone and nudges (or shoves) people in the right direction, but her natural tendency is to be icy and a bit frightening. Nanny Ogg is that person you can't help but talk to, who may seem happy-go-lucky and hedonistic but who can effortlessly change the mood of a room. And Tiffany is stubborn duty and blunt practicality, which fits the daughter of shepherds. In previous books, we've watched Tiffany as a student, learning the practicalities of being a witch. This is the book where she realizes how much she knows and how much easier the world is to navigate when she's in her own territory. There is a wonderful scene, late in this book, where Pratchett shows Nanny Ogg at her best, doing the kinds of things that only Nanny Ogg can do. Both Tiffany and the reader are in awe.
I should have learned this, she thought. I wanted to learn fire, and pain, but I should have learned people.
And it's true that Nanny Ogg can do things that Tiffany can't. But what makes this book so great is that it shows how Tiffany's personality and her training come together with her knowledge of the Chalk. She may not know people, in general, but she knows her neighbors and how they think. She doesn't manage them the way that Nanny Ogg would; she's better at solving different kinds of problems, in different ways. But they're the right ways, and the right problems, for her home. This is another Discworld novel with a forgettable villain that's more of a malevolent force of nature than a character in its own right. It's also another Discworld novel where Pratchett externalizes a human tendency into a malevolent force that can possess people. I have mixed feelings about this narrative approach. That externalization of evil into (in essence) demons has been repeatedly used to squirm out of responsibility and excuse atrocities, and it neatly avoids having to wrestle with the hard questions of prejudice and injustice and why apparently good people do awful things. I think some of those weaknesses persist even in Pratchett's hands, but I think what he was attempting with that approach in this book is to show how almost no one is immune to nastier ideas that spread through society. Rather than using the externalization of evil as an excuse, he's using it as a warning. With enough exposure to those ideas, they start sounding tempting and partly credible even to people who would never have embraced them earlier. Pratchett also does a good job capturing the way prejudice can start from thoughtless actions that have more to do with the specific circumstances of someone's life than any coherent strategy. Still, the one major complaint I have about this book is that the externalization of evil is an inaccurate portrayal of the world, and this catches up with Pratchett at the ending. Postulating an external malevolent force reduces evil to something that can be puzzled out and decisively defeated, thus resolving the problem. Sadly, this is not how humans actually work. I'll forgive that structural flaw, though, because the rest of this book is so good. It's rare that a plot twist in a Discworld novel surprises me (twisty plots are not Pratchett's strength), but this one did. I will not spoil the surprise, but one of the characters is not quite who they seem to be, and Tiffany's reactions once she figures that out are one of my favorite parts of this book. Pratchett is making a point about assumptions, observation, and the importance of being willing to change one's mind about someone when you know more, and I thought it was very well done. But, most of all, I enjoyed reading about Tiffany being calm, competent, determined, and capable. There's also a bit of an unexpected romance plot that's one of my favorite types: the person who notices that you're doing a lot of work and quietly steps in and starts helping while paying attention to what's needed and not taking over. And it's full of the sort of pithy moral wisdom that makes Discworld such a delight to read.
"There have been times, lately, when I dearly wished that I could change the past. Well, I can't, but I can change the present, so that when it becomes the past it will turn out to be a past worth having."
This was just what I wanted. Highly recommended. Followed by Snuff in publication order. The next (and last, sadly) Tiffany Aching book is The Shepherd's Crown. Rating: 9 out of 10

28 May 2024

Russell Coker: Creating a Micro Users Group

Fosdem had a great lecture Building an Open Source Community One Friend at a Time [1]. I recommend that everyone who is involved in the FOSS community watches this lecture to get some ideas. For some time I've been periodically inviting a few friends to visit for lunch, chat about Linux, maybe do some coding, and watch some anime between coding. It seems that I have accidentally created a micro users group. LUGs were really big in the mid to late 90s and still quite vibrant in the early 2000s. But they seem to have decreased in popularity even before Covid19, and since Covid19 a lot of people have stopped attending large meetings to avoid health risks. I think that a large part of the decline of users groups has been due to the success of YouTube. Being able to choose from thousands of hours of lectures about computers on YouTube is a disincentive to spending the time and effort needed to attend a meeting with content that's probably not your first choice of topic. Attending a formal meeting where someone you don't know has arranged a lecture might not have a topic that's really interesting to you. Having lunch with a couple of friends and watching a YouTube video that one of your friends assures you is really good is something more people will find interesting. In recent times homeschooling [2] has become more widely known. The same factors that allow learning about computers at home also make homeschooling easier. The difference between the traditional LUG model of having everyone meet at a fixed time for a lecture and a micro LUG of a small group of people having an informal meeting is similar to the difference between traditional schools and homeschooling. I encourage everyone to create their own micro LUG. All you have to do is choose a suitable time and place and invite some people who are interested. Have a BBQ in a park if the weather is good, meet at a cafe or restaurant, or invite people to visit you for lunch on a weekend.

27 May 2024

Thomas Koch: Minimal overhead VMs with Nix and MicroVM

Posted on March 17, 2024
Joachim Breitner wrote about a Convenient sandboxed development environment and thus reminded me to blog about MicroVM. I've toyed around with it a little but not yet seriously used it as I'm currently not coding. MicroVM is a nix based project to configure and run minimal VMs. It can mount and thus reuse the host's nix store inside the VM and thus has a very small disk footprint. I use MicroVM on a Debian system using the nix package manager. The MicroVM author uses the project to host production services. Otherwise I consider it also a nice way to learn about NixOS after having started with the nix package manager and before making the big step to NixOS as my main system. The guest's root filesystem is a tmpdir, so one must explicitly define folders that should be mounted from the host and thus be persistent across VM reboots. I defined the VM as a nix flake since this is how I started from the MicroVM project's example:
{
  description = "Haskell dev MicroVM";
  inputs.impermanence.url = "github:nix-community/impermanence";
  inputs.microvm.url = "github:astro/microvm.nix";
  inputs.microvm.inputs.nixpkgs.follows = "nixpkgs";
  outputs = { self, impermanence, microvm, nixpkgs }:
    let
      persistencePath = "/persistent";
      system = "x86_64-linux";
      user = "thk";
      vmname = "haskell";
      nixosConfiguration = nixpkgs.lib.nixosSystem {
          inherit system;
          modules = [
            microvm.nixosModules.microvm
            impermanence.nixosModules.impermanence
            ({ pkgs, ... }: {
              environment.persistence.${persistencePath} = {
                hideMounts = true;
                users.${user} = {
                  directories = [
                    "git" ".stack"
                  ];
                };
              };
              environment.sessionVariables = {
                TERM = "screen-256color";
              };
              environment.systemPackages = with pkgs; [
                ghc
                git
                (haskell-language-server.override { supportedGhcVersions = [ "94" ]; })
                htop
                stack
                tmux
                tree
                vcsh
                zsh
              ];
              fileSystems.${persistencePath}.neededForBoot = nixpkgs.lib.mkForce true;
              microvm = {
                forwardPorts = [
                  { from = "host"; host.port = 2222; guest.port = 22; }
                  { from = "guest"; host.port = 5432; guest.port = 5432; } # postgresql
                ];
                hypervisor = "qemu";
                interfaces = [
                  { type = "user"; id = "usernet"; mac = "00:00:00:00:00:02"; }
                ];
                mem = 4096;
                shares = [ {
                  # use "virtiofs" for MicroVMs that are started by systemd
                  proto = "9p";
                  tag = "ro-store";
                  # a host's /nix/store will be picked up so that no
                  # squashfs/erofs will be built for it.
                  source = "/nix/store";
                  mountPoint = "/nix/.ro-store";
                } {
                  proto = "virtiofs";
                  tag = "persistent";
                  source = "~/.local/share/microvm/vms/${vmname}/persistent";
                  mountPoint = persistencePath;
                  socket = "/run/user/1000/microvm-${vmname}-persistent";
                }
                ];
                socket = "/run/user/1000/microvm-control.socket";
                vcpu = 3;
                volumes = [];
                writableStoreOverlay = "/nix/.rwstore";
              };
              networking.hostName = vmname;
              nix.enable = true;
              nix.nixPath = ["nixpkgs=${builtins.storePath <nixpkgs>}"];
              nix.settings = {
                extra-experimental-features = ["nix-command" "flakes"];
                trusted-users = [user];
              };
              security.sudo = {
                enable = true;
                wheelNeedsPassword = false;
              };
              services.getty.autologinUser = user;
              services.openssh = {
                enable = true;
              };
              system.stateVersion = "24.11";
              systemd.services.loadnixdb = {
                description = "import hosts nix database";
                path = [pkgs.nix];
                wantedBy = ["multi-user.target"];
                requires = ["nix-daemon.service"];
                script = "cat ${persistencePath}/nix-store-db-dump | nix-store --load-db";
              };
              time.timeZone = nixpkgs.lib.mkDefault "Europe/Berlin";
              users.users.${user} = {
                extraGroups = [ "wheel" "video" ];
                group = "user";
                isNormalUser = true;
                openssh.authorizedKeys.keys = [
                  "ssh-rsa REDACTED"
                ];
                password = "";
              };
              users.users.root.password = "";
              users.groups.user = { };
            })
          ];
        };
    in {
      packages.${system}.default = nixosConfiguration.config.microvm.declaredRunner;
    };
}
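Before wiring it up with systemd, the flake can be tried by hand (a quick sketch; run from the flake's directory with the nix-command and flakes features enabled; --impure matches how the systemd unit below invokes it):
$ nix run --impure .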
I start the microVM with a templated systemd user service:
[Unit]
Description=MicroVM for Haskell development
Requires=microvm-virtiofsd-persistent@.service
After=microvm-virtiofsd-persistent@.service
AssertFileNotEmpty=%h/.local/share/microvm/vms/%i/flake/flake.nix
[Service]
Type=forking
ExecStartPre=/usr/bin/sh -c "[ /nix/var/nix/db/db.sqlite -ot %h/.local/share/microvm/nix-store-db-dump ] || nix-store --dump-db >%h/.local/share/microvm/nix-store-db-dump"
ExecStartPre=ln -f -t %h/.local/share/microvm/vms/%i/persistent/ %h/.local/share/microvm/nix-store-db-dump
ExecStartPre=-%h/.local/state/nix/profile/bin/tmux new -s microvm -d
ExecStart=%h/.local/state/nix/profile/bin/tmux new-window -t microvm: -n "%i" "exec %h/.local/state/nix/profile/bin/nix run --impure %h/.local/share/microvm/vms/%i/flake"
The above service definition creates a dump of the host's nix store db so that it can be imported in the guest. This is necessary so that the guest can actually use what is available in /nix/store. There is an effort for an overlaid nix store that would be preferable to this hack. Finally the microvm is started inside a tmux session named microvm. This way I can use the VM with SSH or through the console and also access the qemu console. And for completeness the virtiofsd service:
[Unit]
Description=serve host persistent folder for dev VM
AssertPathIsDirectory=%h/.local/share/microvm/vms/%i/persistent
[Service]
ExecStart=%h/.local/state/nix/profile/bin/virtiofsd \
 --socket-path=${XDG_RUNTIME_DIR}/microvm-%i-persistent \
 --shared-dir=%h/.local/share/microvm/vms/%i/persistent \
 --gid-map :995:%G:1: \
 --uid-map :1000:%U:1:
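Once the VM is up, you can attach to the tmux session created by the service, or log in over the SSH port forward declared in the flake (host port 2222 to guest port 22). A short sketch, using the user defined in the configuration above:
$ tmux attach -t microvm
$ ssh -p 2222 thk@localhost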

26 May 2024

Russell Coker: USB-A vs USB-C

USB-A is the original socket for USB at the PC end. There are 2 variants of it: the first is for USB 1.1 to USB 2 and the second is for USB 3, which adds extra pins in a plug and socket compatible manner, so you can plug a USB-A device into a USB-A socket without worrying about the speeds of each end as long as you don't need USB 3 speeds. The differences between USB-A and USB-C are:
  1. USB-C has the same form factor as Thunderbolt and the Thunderbolt protocol can run over it if both ends support it.
  2. USB-C generally supports higher power modes for charging (like 130W for Dell laptops, monitors, and plugpacks) but there's no technical reason why USB-A couldn't do it. You can buy chargers that do 60W over USB-A which could power one of our laptops via a USB-A to USB-C cable. So high power USB-A is theoretically possible but generally you won't see it.
  3. USB-C has DisplayPort alternate mode which means using some of the wires for DisplayPort.
  4. USB-C sockets are more likely than USB-A sockets to support the highest speeds (SuperSpeed etc.). This is not a difference in the standards, just a choice made by manufacturers.
While USB-C tends to support higher power delivery modes in actual implementations, for connecting to a PC the PC end seems to only support lower power modes regardless of port. I think it would be really good if workstations could connect to monitors via USB-C and provide power, DisplayPort, and keyboard, mouse, etc over the same connection. But unfortunately the PC and monitor ends don't appear to support such things. If you don't need any of those benefits in the list above (IE you are using USB for almost anything we do other than connecting a laptop to a dock/monitor/charger) then USB-A will do the job just as well as USB-C. The choice of which type to use should be based on price and which ports are available, EG my laptop has 2*USB-C ports and 2*USB-A, so given that one USB-C port is almost always used for the monitor or for charging I don't really want to use USB-C for anything else to avoid running out of ports. When buying USB devices you can't always predict which systems you will need to connect them to. Currently there are a lot of systems without USB-C that are working well and have no need to be replaced. I haven't yet seen a system where the majority of ports are USB-C but that will probably happen in the next few years. Maybe in 2027 there will be PCs on sale with only two USB-A sockets, forcing people who don't want to use a USB hub to save both of them for keyboard and mouse. Currently USB-C keyboards and mice are available on AliExpress but they are expensive and I haven't seen them in Australian stores. Most computer users don't wear out keyboards or mice so a lot of USB-A keyboards and mice will be in service for a long time. As an aside there are still many PCs with PS/2 keyboard and mouse ports in service so these things don't go away for a long time. There is one corner case where USB-C is convenient, which is when you want to connect a mass storage device for system recovery or emergency backup, want a high speed, and don't want to spend time figuring out which of the ports are super speed (which can be difficult at the back of a PC with poor lighting). With USB-C you can expect a speed of at least 5Gbit/s and don't have to worry about accidentally connecting to a USB 2 port as is the situation with USB-A. For my own use the only times that I prefer USB-C over USB-A are for devices to connect to phones. Eventually I'll get a laptop that only has USB-C ports and this will change, but even then adaptors are possible. For someone who doesn't know the details of how things work it's not unreasonable to just buy the newest stuff and assume it's better, as it usually is. But hopefully blog posts like this can help people make more informed decisions.

25 May 2024

Gunnar Wolf: How computers make books from graphics rendering, search algorithms, and functional programming to indexing and typesetting

This post is a review for Computing Reviews of "How computers make books from graphics rendering, search algorithms, and functional programming to indexing and typesetting", a book published by Manning
If we look at the age-old process of creating books, how many different areas can a computer help us with? And how can each of them be used to teach computer science (CS) fundamentals to a nontechnical audience? This is the premise of John Whitington's enticing book, and the result is quite amazing. The book immediately drew my attention when looking at the titles available for review. After all, my initiation into computing as a kid was learning the LaTeX typesetting system while my father worked on his first book on scientific language and typography [1]. Whitington picks 11 different technical aspects of book production, from how dots of ink are transferred to a white page and how they are made into controllable, recognizable shapes, all the way to forming beautiful typefaces and the nuances of properly addressing white-space to present aesthetically pleasing paragraphs, building it all into specific formats aimed at different ends. But if we dig beyond just the chapter titles, we will find a very interesting book on CS that, without ever using technical language or notation, presents aspects as varied as anti-aliasing, vector and raster images, character sets such as ASCII and Unicode, an introduction to programming, input methods for different writing systems, efficient encoding (compression) methods, both for text and images, lossless and lossy, and recursion and dithering methods. To my absolute surprise, while the author thankfully spared the reader the syntax usually associated with LISP-related languages, the programming examples clearly stem from the LISP school, presenting solutions based on tail recursion. Of course, it is no match for Donald Knuth's classic book on this same topic [2], but could very well be a primer for readers to approach it. The book is light and easy to read, and keeps a very informal, nontechnical tone throughout. My only complaint relates to reading it in PDF format; the topic of this book, and the care with which the images were provided by the author, warrant high resolution. The included images are not only decorative but an integral part of the book. Maybe this is specific to my review copy, but all of the raster images were in very low resolution. This book is quite different from what readers may usually expect, as it introduces several significant topics in the field. CS professors will enjoy it, of course, but also readers with a humanities background, students new to the field, or even those who are just interested in learning a bit more.

References
  1. Sánchez y Gándara, A.; Magariños Lamas, F.; Wolf, K. B., Manual de lenguaje y tipografía científica en castellano. Trillas, Mexico City, Mexico, 1986, https://www.fis.unam.mx/~bwolf/manual.html
  2. Knuth, D. E. Digital Typography. CSLI Lecture Notes. CSLI Publications, Stanford, CA, 1999, https://www-cs-faculty.stanford.edu/~knuth/dt.html

24 May 2024

Julian Andres Klode: Observations in Debian dependency solving

In my previous blog, I explored The New APT 3.0 solver. Since then I have been at work in the test suite making tests pass and fixing some bugs. You see, for all intents and purposes, the new solver is a very stupid naive DPLL SAT solver (it just so happens we don't actually have any pure literals in there). We can control it in a bunch of ways:
  1. We can mark packages as install or reject
  2. We can order actions/clauses. When backtracking the action that came later will be the first we try to backtrack on
  3. We can order the choices of a dependency - we try them left to right.
This is about all that we really want to do; we can't, if we reach a conflict, say "oh but this conflict was introduced by that upgrade, and it seems more important, so let's not backtrack on the upgrade request but on this dependency instead". This forces us to think about lowering the dependency problem into this form, such that not only do we get formally correct solutions, but also semantically correct ones. This is nice because we can apply a systematic way to approach the issue rather than introducing ad-hoc rules in the old solver, which had a "which of these packages should I flip the opposite way to break the conflict" kind of thinking. Now our test suite has a whole bunch of these semantics encoded in it, and I'm going to share some problems and ideas for how to solve them. I can't wait to fix these and the error reporting and then turn it on in Ubuntu and later Debian (the defaults change is a post-trixie change, let's be honest).

apt upgrade is hard The apt upgrade command implements a safe version of dist-upgrade that essentially calculates the dist-upgrade, and then undoes anything that would cause a package to be removed, but it (unlike its apt-get counterpart) allows the solver to install new packages. Now, consider the following package is installed:
X Depends: A (= 1) | B
An upgrade from A=1 to A=2 is available. What should happen? The classic solver would choose to remove X in a dist-upgrade, and then upgrade A, so its answer is quite clear: Keep back the upgrade of A. The new solver however sees two possible solutions:
  1. Install B to satisfy X Depends: A (= 1) | B.
  2. Keep back the upgrade of A
Which one does it pick? This depends on the order in which it sees the upgrade action for A and the dependency, as it will backjump chronologically. So
  1. If it gets to the dependency first, it marks A=1 for install to satisfy A (= 1). Then it gets to the upgrade request, which is just A Depends A (= 2) | A (= 1), and sees it is satisfied already and is content.
  2. If it gets to the upgrade request first, it marks A=2 for install to satisfy A (= 2). Then later it gets to X Depends: A (= 1) | B, sees that A (= 1) is not satisfiable, and picks B.
We have two ways to approach this issue:
  1. We always order upgrade requests last, so they will be kept back in case of conflicting dependencies
  2. We require that, for apt upgrade, a currently satisfied dependency must be satisfied by currently installed packages, hence eliminating B as a choice.
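For concreteness, a hypothetical way to observe which behaviour you get on a real system (A, B, and X are the placeholder package names from the example above; the exact outcome depends on the APT version and solver in use):
# Simulate the upgrade and see what is kept back or newly installed
$ apt-get -s upgrade
# Inspect the candidate versions involved
$ apt-cache policy A B X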

Recommends are hard too See, if you have an X Recommends: A (= 1) and a new version of A, A (= 2), the solver currently will silently break the Recommends in some cases. But let's explore what the behavior of an X Recommends: A (= 1) in combination with an available upgrade of A (= 2) should be. We could say the rule should be:
  • An upgrade should keep back A instead of breaking the Recommends
  • A dist-upgrade should either keep back A or remove X (if it is obsolete)
This essentially leaves us the same choices as for the previous problem, but with an interesting twist. We can change the ordering (and we already did), but we could also introduce a new rule, "promotions":
A Recommends in an installed package, or an upgrade to that installed package, where the Recommends existed in the installed version, that is currently satisfied, must continue to be satisfied, that is, it effectively is promoted to a Depends.
This neatly solves the problem for us. We will never break Recommends that are satisfied. Likewise, we already have a Recommends demotion rule:
A Recommends in an installed package, or an upgrade to that installed package, where the Recommends existed in the installed version, that is currently unsatisfied, will not be further evaluated (it is treated like a Suggests is in the default configuration).
Whether we should be allowed to break Suggests with our decisions or not (the old autoremover did not, for instance) is a different decision. Should we promote currently satisfied Suggests to Depends as well? Should we follow currently satisfied Suggests so the solver sees them and doesn't autoremove them, but treat them as optional?

tightening of versioned dependencies Another case of versioned dependencies with alternatives that has complex behavior is something like
X Depends: A (>= 2) | B
X Recommends: A (>= 2) | B
In both cases, installing X should upgrade an A < 2 in favour of installing B. But a naive SAT solver might not. If your request to keep A installed is encoded as A (= 1) | A (= 2), then it first picks A (= 1). When it sees the Depends/Recommends it will switch to B. We can solve this again as in the previous example by ordering the "keep A installed" requests after any dependencies. Notably, we will enqueue the common dependencies of all A versions first before selecting a version of A, so something may select a version for us.

version narrowing instead of version choosing A different approach to dealing with the issue of version selection is to not select a version until the very last moment. So instead of selecting a version to satisfy A (>= 2) we instead translate
Depends: A (>= 2)
into two rules:
  1. The package selection rule:
     Depends: A
    
     This ensures that any version of A is installed (i.e. it adds a version choice clause, A (= 1) | A (= 2), in an example with two versions for A).
  2. The version narrowing rule:
     Conflicts: A (<< 2)
    
    This outright would reject a choice of A (= 1).
So now we have 3 kinds of clauses:
  1. package selection
  2. version narrowing
  3. version selection
If we process them in that order, we should surely be able to find the solution that best matches the semantics of our Debian dependency model, i.e. selecting earlier choices in a dependency before later choices in the face of version restrictions. This still leaves one issue: What if our maintainer did not use Depends: A (>= 2) | B but e.g. Depends: A (= 3) | B | A (= 2)? He'd expect us to fall back to B if A (= 3) is not installable, and not to A (= 2). But we'd like to enqueue A and reject all choices other than 3 and 2. I think it's fair to say: "Don't do that, then" here.

Implementing strict pinning correctly APT knows a single candidate version per package; this makes the solver relatively deterministic: It will only ever pick the candidate, or an installed version. This also happens to significantly reduce the search space, which is good - less backtracking. An up-to-date system will only ever have one version per package that can be installed, so we never actually have to choose versions. But of course, APT allows you to specify a non-candidate version of a package to install, for example:
apt install foo/oracular-proposed
The way this works is that the core component of the previous solver, which is the pkgDepCache, maintains what essentially amounts to an overlay of the policy that you could see with apt-cache policy. The solver currently however validates allowed version choices against the policy directly, and hence finds these versions are not allowed and craps out. This is an interesting problem because the solver should not be dependent on the pkgDepCache, as the pkgDepCache initialization (Building dependency tree...) accounts for about half of the runtime of APT (until the Y/n prompt) and I'd really like to get rid of it. But currently the frontend does go via the pkgDepCache. It marks the packages in there, building up what you could call a transaction, and then we translate it to the new solver, and once it is done, it translates the result back into the pkgDepCache. The current implementation of "allowed version" is implemented by reducing the search space, i.e. for every dependency, we outright ignore any non-allowed versions. So if you have a version 3 of A that is ignored, a Depends: A would be translated into A (= 2) | A (= 1). However this has two disadvantages. (1) It means if we show you why A could not be installed, you don't even see A (= 3) in the list of choices, and (2) you would need to keep the pkgDepCache around for the temporary overrides. So instead of actually enforcing the allowed version rule by filtering, a more reasonable model is that we apply the allowed version rule by just marking every other version as not allowed when discovering the package in the from-depcache translation layer. This doesn't really increase the search space either, but it solves both our problem of making overrides work and giving you a reasonable error message that lists all versions of A.

pulling up common dependencies to minimize backtracking cost One of the common issues we have is that when we have a dependency group
 A | B | C | D
we try them in order, and if one fails, we undo everything it did, and move on to the next one. However, this isn't perhaps the best choice of operation. I explained before that one thing we do is queue the common dependencies of a package (i.e. dependencies shared in all versions) when marking a package for install, but we don't do this here: We have already lowered the representation of the dependency group into a list of versions, so we'd need to extract the package back out of it. This can of course be done, but there may be a more interesting solution to the problem, in that we simply enqueue all the common dependencies. That is, we add n backtracking levels for n possible solutions:
  1. We enqueue the common dependencies of all possible solutions deps(A)&deps(B)&deps(C)&deps(D)
  2. We decide (adding a decision level) not to install D right now and enqueue deps(A)&deps(B)&deps(C)
  3. We decide (adding a decision level) not to install C right now and enqueue deps(A)&deps(B)
  4. We decide (adding a decision level) not to install B right now and enqueue A
Now if we need to backtrack from our choice of A we hopefully still have a lot of common dependencies queued that we do not need to redo. While we have more backtracking levels, each backtracking level would be significantly cheaper, especially if you have cheap backtracking (which admittedly we do not have, yet anyway). The caveat though is: It may be pretty expensive to find the common dependencies. We need to iterate over all dependency groups of A and see if they are in B, C, and D, so we have a complexity of roughly #A * (#B+#C+#D). Each dependency group we need to check, i.e. "is X|Y in B", meanwhile has linear cost: We need to compare the memory content of two pointer arrays containing the list of possible versions that solve the dependency group. This means that X|Y and Y|X are different dependencies of course, but that is to be expected; they are. But any dependency of the same order will have the same memory layout. So really the cost is roughly N^4. This isn't nice. You can apply various heuristics here on how to improve that, or you can even apply binary logic:
  1. Enqueue common dependencies of A | B | C | D
  2. Move into the left half, enqueue common dependencies of A | B
  3. Again divide and conquer and select A.
This has a significant advantage in long lists of choices, and also in the common case, where the first solution should be the right one. Or again, if you enqueue the package and a version restriction instead, you already get the common dependencies enqueued for the chosen package at least.
