16 September 2023

Sergio Talens-Oliag: GitLab CI/CD Tips: Using a Common CI Repository with Assets

This post describes how to handle files that are used as assets by jobs and pipelines defined in a common gitlab-ci repository when those definitions are included from a different project.

Problem description

When a .gitlab-ci.yml file includes files from a different repository, its contents are expanded and the resulting code is the same as the one generated when the included files are local to the repository. In fact, even when the remote files include other files everything works right, as they are also expanded (see the description of how included files are merged for a complete explanation), allowing us to organise the common repository as we want. As an example, suppose that we have the following script in the assets/ folder of the common repository:
dumb.sh
#!/bin/sh
echo "The script arguments are: '$@'"
If we run the following job on the common repository:
job:
  script:
    - $CI_PROJECT_DIR/assets/dumb.sh ARG1 ARG2
the output will be:
The script arguments are: 'ARG1 ARG2'
But if we run the same job from a different project that includes the same job definition the output will be different:
/scripts-23-19051/step_script: eval: line 138: d./assets/dumb.sh: not found
The problem here is that we include and expand the YAML files, but if a script wants to use other files from the common repository as an asset (configuration file, shell script, template, etc.), the execution fails if the files are not available on the project that includes the remote job definition.

Solutions

We can solve the issue using multiple approaches; I'll describe two of them:
  • Create files using scripts
  • Download files from the common repository

Create files using scripts

One way to dodge the issue is to generate the non-YAML files from scripts included in the pipelines using HERE documents. The problem with this approach is that we have to put the content of the files inside a script in a YAML file, and if it uses characters that can be replaced by the shell (remember, we are using HERE documents) we have to escape them (error prone) or encode the whole file into base64 or something similar, making maintenance harder. As an example, imagine that we want to use the dumb.sh script presented in the previous section and we want to call it using the same PATH as in the main project (in the examples we are using the same folder; in practice we can create a hidden folder inside the project directory or use a PATH like /tmp/assets-$CI_JOB_ID to leave things outside the project folder and make sure that there will be no collisions if two jobs are executed in the same place, i.e. when using an ssh runner). To create the file we will use hidden jobs to write our script template and reference tags to add it to the scripts when we want to use them. Here we have a snippet that creates the file with cat:
.file_scripts:
  create_dumb_sh:
    - |
      # Create dumb.sh script
      mkdir -p "${CI_PROJECT_DIR}/assets"
      cat >"${CI_PROJECT_DIR}/assets/dumb.sh" <<EOF
      #!/bin/sh
      echo "The script arguments are: '\$@'"
      EOF
      chmod +x "${CI_PROJECT_DIR}/assets/dumb.sh"
Note that to make things work we've added 6 spaces before the script code and escaped the dollar sign. To do the same using base64 we replace the previous snippet by this:
.file_scripts:
  create_dumb_sh:
    - |
      # Create dumb.sh script
      mkdir -p "${CI_PROJECT_DIR}/assets"
      base64 -d >"${CI_PROJECT_DIR}/assets/dumb.sh" <<EOF
      IyEvYmluL3NoCmVjaG8gIlRoZSBzY3JpcHQgYXJndW1lbnRzIGFyZTogJyRAJyIK
      EOF
      chmod +x "${CI_PROJECT_DIR}/assets/dumb.sh"
Again, we have to indent the base64 version of the file using 6 spaces (all lines of the base64 output have to be indented) and to make changes we have to decode and re-encode the file manually, making it harder to maintain (a one-liner to regenerate the payload is shown at the end of this section). With either version we just need to add a !reference before using the script; if we add the call on the first lines of the before_script we can use the created file in the before_script, script or after_script sections of the job without problems:
job:
  before_script:
    - !reference [.file_scripts, create_dumb_sh]
  script:
    - ${CI_PROJECT_DIR}/assets/dumb.sh ARG1 ARG2
The output of a pipeline that uses this job will be the same as the one shown in the original example:
The script arguments are: 'ARG1 ARG2'
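If we later edit the script, the base64 payload for the snippet above can be regenerated with a one-liner; a minimal sketch, assuming GNU coreutils and a local copy of the file:

# re-encode the edited script as a single unwrapped base64 line
base64 -w0 assets/dumb.sh && echo

The -w0 flag disables line wrapping; if we keep wrapped output instead, every line of it has to be indented as described above.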

Download the files from the common repository

As we've seen, the previous solution works but is not ideal, as it makes the files harder to read, maintain and use. An alternative approach is to keep the assets in a directory of the common repository (in our examples we will name it assets) and prepare a YAML file that declares some variables (i.e. the URL of the templates project and the PATH where we want to download the files) and defines a script fragment to download the complete folder. Once we have the YAML file we just need to include it and add a reference to the script fragment at the beginning of the before_script of the jobs that use files from the assets directory, and they will be available when needed. The following file is an example of the YAML file we just mentioned:
bootstrap.yml
variables:
  CI_TMPL_API_V4_URL: "${CI_API_V4_URL}/projects/common%2Fci-templates"
  CI_TMPL_ARCHIVE_URL: "${CI_TMPL_API_V4_URL}/repository/archive"
  CI_TMPL_ASSETS_DIR: "/tmp/assets-${CI_JOB_ID}"
.scripts_common:
  bootstrap_ci_templates:
    - |
      # Downloading assets
      echo "Downloading assets"
      mkdir -p "$CI_TMPL_ASSETS_DIR"
      wget -q -O - --header="PRIVATE-TOKEN: $CI_TMPL_READ_TOKEN" \
        "$CI_TMPL_ARCHIVE_URL?path=assets&sha=${CI_TMPL_REF:-main}" |
        tar --strip-components 2 -C "$CI_TMPL_ASSETS_DIR" -xzf -
The file defines the following variables:
  • CI_TMPL_API_V4_URL: URL of the common project; in our case we are using the project ci-templates inside the common group (note that the slash between the group and the project is escaped, as that is needed to reference the project by name; if we don't like that approach we can replace the URL-encoded path with the project id, i.e. we could use a value like ${CI_API_V4_URL}/projects/31)
  • CI_TMPL_ARCHIVE_URL: Base URL for using the GitLab API to download files from a repository; we will add the arguments path and sha to select which sub-path to download and from which commit, branch or tag (we will explain later why we use CI_TMPL_REF; for now just keep in mind that if it is not defined we will download the version of the files available on the main branch when the job is executed).
  • CI_TMPL_ASSETS_DIR: Destination of the downloaded files.
And uses variables defined in other places:
  • CI_TMPL_READ_TOKEN: token that includes the read_api scope for the common project; we need it because the tokens created by the CI/CD pipelines of other projects can't be used to access the API of the common one. We define the variable in the GitLab CI/CD variables section to be able to change it if needed (i.e. if it expires).
  • CI_TMPL_REF: branch or tag of the common repo from which to get the files (we need that to make sure we are using the right version of the files; i.e. when testing we will use a branch, and on production pipelines we can use fixed tags to make sure that the assets don't change between executions unless we change the reference). We will set the value in the .gitlab-ci.yml file of the remote projects and will use the same reference when including the files to make sure that everything is coherent.
This is an example YAML file that defines a pipeline with a job that uses the script from the common repository:
pipeline.yml
include:
  - /bootstrap.yml
stages:
  - test
dumb_job:
  stage: test
  before_script:
    - !reference [.scripts_common, bootstrap_ci_templates]
  script:
    - ${CI_TMPL_ASSETS_DIR}/dumb.sh ARG1 ARG2
To use it from an external project we will use the following gitlab ci configuration:
.gitlab-ci.yml
include:
  - project: 'common/ci-templates'
    ref: &ciTmplRef 'main'
    file: '/pipeline.yml'
variables:
  CI_TMPL_REF: *ciTmplRef
Where we use a YAML anchor to ensure that we use the same reference when including and when assigning the value to the CI_TMPL_REF variable (as far as I know we have to pass the ref value explicitly to know which reference was used when including the file; the anchor allows us to make sure that the value is always the same in both places). The reference we use is quite important for the reproducibility of the jobs: if we don't use fixed tags or commit hashes as references, each time a job that downloads the files is executed we can get different versions of them. For that reason it is not a bad idea to create tags on our common repo and use them as references on the projects or branches that we want to behave as if their CI/CD configuration was local (if we point to a fixed version of the common repo, everything is going to work almost the same as having the pipelines directly in our repo). But while developing pipelines, using branches as references is a really useful option; it allows us to re-run the jobs that we want to test and they will download the latest versions of the asset files on the branch, speeding up the testing process. However, keep in mind that the trick only works with the asset files: if we change a job or a pipeline in the YAML files, restarting the job is not enough to test the new version, as the restart uses the same job created with the current pipeline. To try the updated jobs we have to create a new pipeline using a new action against the repository or by executing the pipeline manually.
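For example, a production project can pin both the include and the assets to a fixed tag; a minimal sketch, where the tag name v1.0.0 is only an illustration:

include:
  - project: 'common/ci-templates'
    ref: &ciTmplRef 'v1.0.0'
    file: '/pipeline.yml'
variables:
  CI_TMPL_REF: *ciTmplRef

With this configuration both the YAML definitions and the downloaded assets stay fixed on every run until we move the tag or change the reference.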

Conclusion

For now I'm using the second solution, and as it is working well my guess is that I'll keep using that approach unless GitLab itself provides a better or simpler way of doing the same thing.

13 September 2023

Matthew Garrett: Reconstructing an invalid TPM event log

TPMs contain a set of registers ("Platform Configuration Registers", or PCRs) that are used to track what a system boots. Each time a new event is measured, a cryptographic hash representing that event is passed to the TPM. The TPM appends that hash to the existing value in the PCR, hashes that, and stores the final result in the PCR. This means that while the PCR's value depends on the precise sequence and value of the hashes presented to it, the PCR value alone doesn't tell you what those individual events were. Different PCRs are used to store different event types, but there are still more events than there are PCRs so we can't avoid this problem by simply storing each event separately.

This is solved using the event log. The event log is simply a record of each event, stored in RAM. The algorithm the TPM uses to calculate the PCR values is known, so we can reproduce that by simply taking the events from the event log and replaying the series of events that were passed to the TPM. If the final calculated value is the same as the value in the PCR, we know that the event log is accurate, which means we now know the value of each individual event and can make an appropriate judgement regarding its security.
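As a rough illustration (a sketch, not code from this post), replaying a SHA-256 event log amounts to repeatedly applying the extend operation, which can be done with standard tools:

#!/bin/sh
# Replay event hashes (hex strings passed as arguments) into a PCR value.
# The extend operation is: PCR_new = SHA256(PCR_old || event_hash);
# PCRs in the SHA-256 bank normally start as 32 zero bytes.
pcr="0000000000000000000000000000000000000000000000000000000000000000"
for event in "$@"; do
  pcr=$(printf '%s%s' "$pcr" "$event" | xxd -r -p | sha256sum | cut -d' ' -f1)
done
echo "replayed PCR value: $pcr"

If the printed value matches the value read from the PCR, the log is consistent. (As the footnote at the end of this post explains, a measurement made at a different locality changes a PCR's initial value, which a real verifier also has to account for.)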

If any value in the event log is invalid, we'll calculate a different PCR value and it won't match. This isn't terribly helpful - we know that at least one entry in the event log doesn't match what was passed to the TPM, but we don't know which entry. That means we can't trust any of the events associated with that PCR. If you're trying to make a security determination based on this, that's going to be a problem.

PCR 7 is used to track information about the secure boot policy on the system. It contains measurements of whether or not secure boot is enabled, and which keys are trusted and untrusted on the system in question. This is extremely helpful if you want to verify that a system booted with secure boot enabled before allowing it to do something security or safety critical. Unfortunately, if the device gives you an event log that doesn't replay correctly for PCR 7, you now have no idea what the security state of the system is.

We ran into that this week. Examination of the event log revealed an additional event other than the expected ones - a measurement accompanied by the string "Boot Guard Measured S-CRTM". Boot Guard is an Intel feature where the CPU verifies the firmware is signed with a trusted key before executing it, and measures information about the firmware in the process. Previously I'd only encountered this as a measurement into PCR 0, which is the PCR used to track information about the firmware itself. But it turns out that at least some versions of Boot Guard also measure information about the Boot Guard policy into PCR 7. The argument for this is that this is effectively part of the secure boot policy - having a measurement of the Boot Guard state tells you whether Boot Guard was enabled, which tells you whether or not the CPU verified a signature on your firmware before running it (as I wrote before, I think Boot Guard has user-hostile default behaviour, and that enforcing this on consumer devices is a bad idea).

But there's a problem here. The event log is created by the firmware, and the Boot Guard measurements occur before the firmware is executed. So how do we get a log that represents them? That one's fairly simple - the firmware simply re-calculates the same measurements that Boot Guard did and creates a log entry after the fact[1]. All good.

Except. What if the firmware screws up the calculation and comes up with a different answer? The entry in the event log will now not match what was sent to the TPM, and replaying will fail. And without knowing what the actual value should be, there's no way to fix this, which means there's no way to verify the contents of PCR 7 and determine whether or not secure boot was enabled.

But there's still a fundamental source of truth - the measurement that was sent to the TPM in the first place. Inspired by Henri Nurmi's work on sniffing Bitlocker encryption keys, I asked a coworker if we could sniff the TPM traffic during boot. The TPM on the board in question uses SPI, a simple bus that can have multiple devices connected to it. In this case the system flash and the TPM are on the same SPI bus, which made things easier. The board had a flash header for external reprogramming of the firmware in the event of failure, and all SPI traffic was visible through that header. Attaching a logic analyser to this header made it simple to generate a record of that. The only problem was that the chip select line on the header was attached to the firmware flash chip, not the TPM. This was worked around by simply telling the analysis software that it should invert the sense of the chip select line, ignoring all traffic that was bound for the flash and paying attention to all other traffic. This worked in this case since the only other device on the bus was the TPM, but would cause problems in the event of multiple devices on the bus all communicating.

With the aid of this analyser plugin, I was able to dump all the TPM traffic and could then search for writes that included the "0182" sequence that corresponds to the command code for a measurement event. This gave me a couple of accesses to the locality 3 registers, which was a strong indication that they were coming from the CPU rather than from the firmware. One was for PCR 0, and one was for PCR 7. This corresponded to the two Boot Guard events that we expected from the event log. The hash in the PCR 0 measurement was the same as the hash in the event log, but the hash in the PCR 7 measurement differed from the hash in the event log. Replacing the event log value with the value actually sent to the TPM resulted in the event log now replaying correctly, supporting the hypothesis that the firmware was failing to correctly reconstruct the event.

What now? The simple thing to do is for us to simply hard code this fixup, but longer term we'd like to figure out how to reconstruct the event so we can calculate the expected value ourselves. Unfortunately there doesn't seem to be any public documentation on this. Sigh.

[1] What stops firmware on a system with no Boot Guard faking those measurements? TPMs have a concept of "localities", effectively different privilege levels. When Boot Guard performs its initial measurement into PCR 0, it does so at locality 3, a locality that's only available to the CPU. This causes PCR 0 to be initialised to a different initial value, affecting the final PCR value. The firmware can't access locality 3, so can't perform an equivalent measurement, so can't fake the value.


12 September 2023

John Goerzen: A Maze of Twisty Little Pixels, All Tiny

Two years ago, I wrote Managing an External Display on Linux Shouldn't Be This Hard. Happily, since I wrote that post, most of those issues have been resolved. But then you throw HiDPI into the mix and it all goes wonky.

If you're running X11, basically the story is that you can change the scale factor, but it only takes effect on newly-launched applications (which means a logout/in because some of your applications you can't really re-launch). That is a problem if, like me, you sometimes connect an external display that is HiDPI, sometimes not, or your internal display is HiDPI but others aren't. Wayland is far better, supporting on-the-fly resizes quite nicely.

I've had two devices with HiDPI displays: a Surface Go 2, and a work-issued Thinkpad. The Surface Go 2 is my ultraportable Linux tablet. I use it sparingly at home, and rarely with an external display. I just put Gnome on it, in part because Gnome had better on-screen keyboard support at the time, and left it at that. On the work-issued Thinkpad, I really wanted to run KDE thanks to its tiling support (I wound up using bismuth with it). KDE was buggy with Wayland at the time, so I just stuck with X11 and ran my HiDPI displays at lower resolutions and lived with the fuzziness. But now that I have a Framework laptop with a HiDPI screen, I wanted to get this right. I tried both Gnome and KDE. Here are my observations with both:

Gnome

I used PaperWM with Gnome. PaperWM is a tiling manager with a unique horizontal ribbon approach. It grew on me; I think I would be equally at home with it, or maybe even prefer it, compared to my usual xmonad-style approach. Editing the active window border color required editing ~/.local/share/gnome-shell/extensions/paperwm@hedning:matrix.org/stylesheet.css and inserting background-color and border-color items in the paperwm-selection section.

Gnome continues to have an absolutely terrible picture for configuring things. It has no less than four places to make changes (Settings, Tweaks, Extensions, and dconf-editor). In many cases, configuration for a given thing is split between Settings and Tweaks, and sometimes even with Extensions, and then there are sometimes options that are only visible in dconf. That is, where the Gnome people have even allowed something to be configurable.

Gnome installs a power manager by default. It offers three options: performance, balanced, and saver. There is no explanation of the difference between them. None. What is it setting when I change the pref? A maximum frequency? A scaling governor? A balance between performance and efficiency cores? Not only that, but there's no way to tell it to just use performance when plugged in and balanced or saver when on battery. In an issue about adding that, a Gnome dev wrote "We're not going to add a preference just because you want one". KDE, on the other hand, aside from not mucking with your system's power settings in this way, has a nice panel with "on AC" and "on battery" and you can very easily tweak various settings accordingly. The hostile attitude from the Gnome developers in that thread was a real turnoff.

While Gnome has excellent support for Wayland, it doesn't (directly) support fractional scaling. That is, you can set it to 100%, 200%, and so forth, but no 150%. Well, unless you manage to discover that you can run gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']" first. (Oh wait, does that make a FIFTH settings tool? Why yes it does.) Despite its name, that allows you to select fractional scaling under Wayland. For X11 apps, they will be blurry, a problem that is optional under KDE (more on that below).

Gnome won't show the battery life time remaining on the task bar. Yikes. An extension might work in some cases. Not only that, but the Gnome battery icon frequently failed to indicate AC charging when AC was connected, a problem that didn't exist on KDE.

Both Gnome and KDE support night light (warmer color temperatures at night), but Gnome's often didn't change when it should have, or changed on one display but not the other.

The appindicator extension is pretty much required, as otherwise a number of applications (eg, Nextcloud) don't have their icon display anywhere. It does, however, generate a significant amount of log spam. There may be a fix for this.

Unlike KDE, which has a nice unobtrusive popup asking what to do, Gnome silently automounts USB sticks when inserted. This is often wrong; for instance, if I'm about to dd a Debian installer to it, I definitely don't want it mounted. I learned this the hard way. It is particularly annoying because in a GUI, there is no reason to mount a drive before the user tries to access it anyhow. It looks like there is a dconf setting, but then to actually mount a drive you have to open up Files (because OF COURSE Gnome doesn't have a nice removable-drives icon like KDE does) and it's a bunch of annoying clicks, and I didn't want to use the GUI file manager anyway. Same for unmounting; two clicks in KDE thanks to the task bar icon, but in Gnome you have to open up the file manager, unmount the drive, close the file manager again, etc.

The ssh agent on Gnome doesn't start up for a Wayland session, though this is easily enough worked around.

The reason I completely soured on Gnome is that after using it for a while, I noticed my laptop fans spinning up. One core would be constantly busy. It was busy with a kworker events task, something to do with sound events. Logging out would resolve it. I believe it to be a Gnome shell issue. I could find no resolution to this, and am unwilling to tolerate the decreased battery life this implies.

The Gnome summary: it looks nice out of the box, but you quickly realize that this is something of a paper-thin illusion when you try to actually use it regularly.

KDE

The KDE experience on Wayland was a little bit the opposite of Gnome. While with Gnome, things start out looking great but you realize there are some serious issues (especially battery-eating), with KDE things start out looking a tad rough but you realize you can trivially fix them and wind up with a very solid system.

Compared to Gnome, KDE never had a battery-draining problem. It will show me estimated battery time remaining if I want it to. It will do whatever I want it to when I insert a USB drive. It doesn't muck with my CPU power settings, and lets me easily define "on AC" vs "on battery" settings for things like suspend when idle.

KDE supports fractional scaling, to any arbitrary setting (even with the gsettings thing above, Gnome still only supports it in 25% increments). Then the question is what to do with X11-only applications. KDE offers two choices. The first is "Scaled by the system", which is also the only option for Gnome. With that setting, the X11 apps effectively run natively at 100% and then are scaled up within Wayland, giving them a blurry appearance on HiDPI displays. The advantage is that the scaling happens within Wayland, so the size of the app will always be correct even when the Wayland scaling factor changes. The other option is "Apply scaling themselves", which uses native X11 scaling. This lets most X11 apps display crisp and sharp, but then if the system scaling changes, due to limitations of X11, you'll have to restart the X apps to get them to be the correct size. I appreciate the choice, and use "Apply scaling themselves" because only a few of my apps aren't Wayland-aware.

I did encounter a few bugs in KDE under Wayland: sddm, the display manager, would be slow to stop and cause a long delay on shutdown or reboot. This seems to be a known issue with sddm and Wayland, and is easily worked around by adding a systemd TimeoutStopSec. Konsole, the KDE terminal emulator, has weird display artifacts when using fractional scaling under Wayland. I applied some patches and rebuilt Konsole and then all was fine. The Bismuth tiling extension has some pretty weird behavior under Wayland, but a 1-character patch fixes it. On Debian, KDE mysteriously installed Pulseaudio instead of Debian's new default Pipewire, but that was easily fixed as well (and Pulseaudio also works fine).

Conclusions

I'm sticking with KDE. Given that I couldn't figure out how to stop Gnome from deciding to eat enough battery to make my fan come on, the decision wasn't hard. But even if it weren't for that, I'd have gone with KDE. Once a couple of things were patched, the experience is solid, fast, and flawless. Emacs (my main X11-only application) looks great with the self-scaling in KDE. Gimp, which I use occasionally, was terrible with the blurry scaling in Gnome.

Update: Corrected the gsettings command
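As a footnote to the sddm workaround mentioned above, the TimeoutStopSec fix can be applied with a systemd drop-in; a minimal sketch, where the 10-second value is an assumption rather than the author's exact setting:

# cap how long systemd waits for sddm to stop on shutdown
sudo mkdir -p /etc/systemd/system/sddm.service.d
printf '[Service]\nTimeoutStopSec=10\n' | sudo tee /etc/systemd/system/sddm.service.d/timeout.conf
sudo systemctl daemon-reload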

8 September 2023

Reproducible Builds: Reproducible Builds in August 2023

Welcome to the August 2023 report from the Reproducible Builds project! In these reports we outline the most important things that we have been up to over the past month. As a quick recap, whilst anyone may inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries. The motivation behind the reproducible builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third parties to come to a consensus on whether a build was compromised. If you are interested in contributing to the project, please visit our Contribute page on our website.

Rust serialisation library moving to precompiled binaries

Bleeping Computer reported that Serde, a popular Rust serialization framework, had decided to ship its serde_derive macro as a precompiled binary. As Ax Sharma writes:
The move has generated a fair amount of push back among developers who worry about its future legal and technical implications, along with a potential for supply chain attacks, should the maintainer account publishing these binaries be compromised.
After intensive discussions, use of the precompiled binary was phased out.

Reproducible builds, the first ten years

On August 4th, Holger Levsen gave a talk at BornHack 2023 on the Danish island of Funen titled Reproducible Builds, the first ten years, which promised to contain:
[...] an overview about reproducible builds, the past, the present and the future. How it started with a small [meeting] at DebConf13 (and before), how it grew from being a Debian effort to something many projects work on together, until in 2021 it was mentioned in an executive order of the president of the United States. (HTML slides)
Holger repeated the talk later in the month at Chaos Communication Camp 2023 in Zehdenick, Germany. A video of the talk is available online, as are the HTML slides.

Reproducible Builds Summit

Just another reminder that our upcoming Reproducible Builds Summit is set to take place from October 31st to November 2nd 2023 in Hamburg, Germany. Our summits are a unique gathering that brings together attendees from diverse projects, united by a shared vision of advancing the Reproducible Builds effort. During this enriching event, participants will have the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. If you're interested in joining us this year, please make sure to read the event page, the news item, or the invitation email that Mattia Rizzolo sent out, which have more details about the event and location. We are also still looking for sponsors to support the event, so do reach out to the organizing team if you are able to help. (Also of note: PackagingCon 2023 is taking place in Berlin just before our summit, and their schedule has just been published.)

Vagrant Cascadian on the Sustain podcast

Vagrant Cascadian was interviewed on the SustainOSS podcast on reproducible builds:
Vagrant walks us through his role in the project where the aim is to ensure identical results in software builds across various machines and times, enhancing software security and creating a seamless developer experience. Discover how this mission, supported by the Software Freedom Conservancy and a broad community, is changing the face of Linux distros, Arch Linux, openSUSE, and F-Droid. They also explore the challenges of managing random elements in software, and Vagrant's vision to make reproducible builds a standard best practice that will ideally become automatic for users. Vagrant shares his work in progress and their commitment to the last mile problem.
The episode is available to listen to (or download) from the Sustain podcast website. As it happens, the episode was recorded at FOSSY 2023, and the video of Vagrant's talk from this conference (Breaking the Chains of Trusting Trust) is now available on Archive.org. It was also announced that Vagrant Cascadian will be presenting at the Open Source Firmware Conference in October on the topic of Reproducible Builds All The Way Down.

On our mailing list

Carles Pina i Estany wrote to our mailing list during August with an interesting question concerning the practical steps to reproduce the hello-traditional package from Debian. The entire thread can be viewed from the archive page, as can Vagrant Cascadian's reply.

Website updates

Rahul Bajaj updated our website to add a series of environment variations related to reproducible builds [ ], Russ Cox added the Go programming language to our projects page [ ] and Vagrant Cascadian fixed a number of broken links and typos around the website [ ][ ][ ].

Software development

In diffoscope development this month, versions 247, 248 and 249 were uploaded to Debian unstable by Chris Lamb, who also added documentation for the new specialize_as method and expanded the documentation of the existing specialize method as well [ ]. In addition, Fay Stegerman added specialize_as and used it to optimise .smali comparisons when decompiling Android .apk files [ ], Felix Yan and Mattia Rizzolo corrected some typos in code comments [ , ], and Greg Chabala merged the RUN commands into a single layer in the package's Dockerfile [ ], thus greatly reducing the final image size. Lastly, Roland Clobus updated tool descriptions to mark that the xb-tool has moved to a different package within Debian [ ].
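For readers unfamiliar with the tool, a typical diffoscope invocation just takes two artifacts that are supposed to be identical; the file names here are purely illustrative:

# prints a structured, recursive diff of the two packages' contents
# and exits with a non-zero status if they differ
diffoscope build1/hello_2.10_amd64.deb build2/hello_2.10_amd64.deb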
reprotest is our tool for building the same source code twice in different environments and then checking the binaries produced by each build for any differences. This month, Vagrant Cascadian updated the packaging to be compatible with Tox version 4. This was originally filed as Debian bug #1042918 and Holger Levsen uploaded this change to Debian unstable as version 0.7.26 [ ].
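By way of illustration, reprotest is typically given a build command and a pattern matching the artifacts to compare; a hedged sketch, with a made-up command and pattern:

# build twice while varying the environment (time, locale, umask, ...)
# and report whether the matched artifacts are bit-for-bit identical
reprotest 'make all' 'output/*.bin'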

Distribution work

In Debian, 28 reviews of Debian packages were added, 14 were updated and 13 were removed this month, adding to our knowledge about identified issues. A number of issue types were added, including a new timestamp_in_documentation_using_sphinx_zzzeeksphinx_theme toolchain issue added by Chris Lamb.
In August, F-Droid added 25 new reproducible apps and saw 2 existing apps switch to reproducible builds, making 191 apps in total that are published with Reproducible Builds and using the upstream developer's signature. [ ]
Bernhard M. Wiedemann published another monthly report about reproducibility within openSUSE.

Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Testing framework

The Reproducible Builds project operates a comprehensive testing framework (available at tests.reproducible-builds.org) in order to check packages and other artifacts for reproducibility. In August, a number of changes were made by Holger Levsen:
  • Debian-related changes:
    • Disable Debian live image creation jobs until an OpenQA credential problem has been fixed. [ ]
    • Run our maintenance scripts every 3 hours instead of every 2. [ ]
    • Export data for unstable to the reproducible-tracker.json data file. [ ]
    • Stop varying the build path, we want reproducible builds. [ ]
    • Temporarily stop updating the pbuilder.tgz for Debian unstable due to #1050784. [ ][ ]
    • Correctly document that we are not varying usrmerge. [ ][ ]
    • Mark two armhf nodes (wbq0 and jtx1a) as down; investigation is needed. [ ]
  • Misc:
    • Force reconfiguration of all Jenkins jobs, due to the recent rise of zombie processes. [ ]
    • In the node health checks, also try to restart failed ntpsec, postfix and vnstat services. [ ][ ][ ]
  • System health checks:
    • Detect Debian live build failures due to missing credentials. [ ][ ]
    • Ignore specific types of known zombie processes. [ ][ ]
In addition, Vagrant Cascadian updated the scripts to use a predictable build path that is consistent with the one used on buildd.debian.org. [ ][ ]

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

5 September 2023

Russ Allbery: Review: Before We Go Live

Review: Before We Go Live, by Stephen Flavall
Publisher: Spender Books
Copyright: 2023
ISBN: 1-7392859-1-3
Format: Kindle
Pages: 271
Stephen Flavall, better known as jorbs, is a Twitch streamer specializing in strategy games and most well-known as one of the best Slay the Spire players in the world. Before We Go Live, subtitled Navigating the Abusive World of Online Entertainment, is a memoir of some of his experiences as a streamer. It is his first book.

I watch a lot of Twitch. For a long time, it was my primary form of background entertainment. (Twitch's baffling choices to cripple their app have subsequently made YouTube somewhat more attractive.) There are a few things one learns after a few years of watching a lot of streamers. One is that it's a precarious, unforgiving living for all but the most popular streamers. Another is that the level of behind-the-scenes drama is very high. And a third is that the prevailing streaming style has converged on fast-talking, manic, stream-of-consciousness joking apparently designed to satisfy people with very short attention spans. As someone for whom that manic style is like nails on a chalkboard, I am therefore very picky about who I'm willing to watch and rarely can tolerate the top streamers for more than an hour.

jorbs is one of the handful of streamers I've found who seems pitched towards adults who don't need instant bursts of dopamine. He's calm, analytical, and projects a relaxed, comfortable feeling most of the time (although like the other streamers I prefer, he doesn't put up with nonsense from his chat). If you watch him for a while, he's also one of those people who makes you think "oh, this is an interestingly unusual person." It's a bit hard to put a finger on, but he thinks about things from intriguing angles.

Going in, I thought this would be a general non-fiction book about the behind-the-scenes experience of the streaming industry. Before We Go Live isn't really that. It is primarily a memoir focused on Flavall's personal experience (as well as the experience of his business manager Hannah) with the streaming team and company F2K, supplemented by a brief history of Flavall's streaming career and occasional deeply personal thoughts on his own mental state and past experiences. Along the way, the reader learns a lot more about his thought processes and approach to life. He is indeed a fascinatingly unusual person.

This is to some extent an exposé, but that's not the most interesting part of this book. It quickly becomes clear that F2K is the sort of parasitic, chaotic, half-assed organization that crops up around any new business model. (Yes, there's crypto.) People who are good at talking other people out of money and making a lot of big promises try to follow a startup fast-growth model with unclear plans for future revenue and hope that it all works out and turns into a valuable company. Most of the time it doesn't, because most of the people running these sorts of opportunistic companies are better at talking people out of money than at running a business. When the new business model is in gaming, you might expect a high risk of sexism and frat culture; in this case, you would not be disappointed.

This is moderately interesting but not very revealing if one is already familiar with startup culture and the kind of people who start businesses without doing any of the work the business is about. The F2K principals are at best opportunistic grifters, if not actual con artists. It's not long into this story before this is obvious. At that point, the main narrative of this book becomes frustrating; Flavall recognizes the dysfunction to some extent, but continues to associate with these people. There are good reasons related to his (and Hannah's) psychological state, but it doesn't make it easier to read. Expect to spend most of the book yelling "just break up with these people already" as if you were reading Captain Awkward letters.

The real merit of this book is that people are endlessly fascinating, Flavall is charmingly quirky, and he has the rare mix of the introspection that allows him to describe himself without the tendency to make his self-story align with social expectations. I think every person is intriguingly weird in at least some ways, but usually the oddities are smoothed away and hidden under a desire to present as "normal" to the rest of society. Flavall has the right mix of writing skill and a willingness to write with direct honesty that lets the reader appreciate and explore the complex oddities of a real person, including the bits that at first don't make much sense.

Parts of this book are uncomfortable reading. Both Flavall and his manager Hannah are abuse survivors, which has a lot to do with their reactions to their treatment by F2K, and those reactions are both tragic and maddening to read about. It's a good way to build empathy for why people will put up with people who don't have their best interests at heart, but at times that empathy can require work because some of the people on the F2K side are so transparently sleazy.

This is not the sort of book I'm likely to re-read, but I'm glad I read it simply for that time spent inside the mind of someone who thinks very differently than I do and is both honest and introspective enough to give me a picture of his thought processes that I think was largely accurate. This is something memoir is uniquely capable of doing if the author doesn't polish all of the oddities out of their story. It takes a lot of work to be this forthright about one's internal thought processes, and Flavall does an excellent job.

Rating: 7 out of 10

1 September 2023

Simon Josefsson: Trisquel on ppc64el: Talos II

The release notes for Trisquel 11.0 Aramo mention support for POWER and ARM architectures, however the download area only contains links for x86, and forum posts suggest there is a lack of instructions for how to run Trisquel on non-x86. Since the release of Trisquel 11 I have been busy migrating x86 machines from Debian to Trisquel. One would think that I would be finished after this time period, but re-installing and migrating machines is really time consuming, especially if you allow yourself to be distracted every time you notice something that Really Ought to be improved. Rabbit holes all the way down. One of my production machines is running Debian 11 bullseye on a Talos II Lite machine from Raptor Computing Systems, and migrating the virtual machines running on that host (including the VM that serves this blog) to an x86 machine running Trisquel felt unsatisfying to me. I want to migrate my computing towards hardware that harmonizes with FSF's Respects Your Freedom and not away from it. Here I had to choose between using the non-free software present in newer Debian or the non-free software implied by most x86 systems: not an easy choice. So I have ignored the dilemma for some time. After all, the machine was running Debian 11 bullseye, which was released before Debian started to require use of non-free software. With the end-of-life date for bullseye approaching, it seems that this isn't a sustainable choice. There is a report open about providing ppc64el ISOs that was created by Jason Self shortly after the release, but for many months nothing happened. About a month ago, Luis Guzmán mentioned an initial ISO build and I started testing it. The setup has worked well for a month, and with this post I want to contribute instructions for how to get it up and running since this is still missing. The setup of my soon-to-be new production machine: According to the notes in issue 14, the ISO image is available at https://builds.trisquel.org/debian-installer-images/ and the following commands download, integrity check and write it to a USB stick:
wget -q https://builds.trisquel.org/debian-installer-images/debian-installer-images_20210731+deb11u8+11.0trisquel14_ppc64el.tar.gz
tar xfa debian-installer-images_20210731+deb11u8+11.0trisquel14_ppc64el.tar.gz ./installer-ppc64el/20210731+deb11u8+11/images/netboot/mini.iso
echo '6df8f45fbc0e7a5fadf039e9de7fa2dc57a4d466e95d65f2eabeec80577631b7  ./installer-ppc64el/20210731+deb11u8+11/images/netboot/mini.iso' | sha256sum -c
sudo wipefs -a /dev/sdX
sudo dd if=./installer-ppc64el/20210731+deb11u8+11/images/netboot/mini.iso of=/dev/sdX conv=sync status=progress
Sadly, no hash checksums or OpenPGP signatures are published. Power off your device, insert the USB stick, and power it up, and you see a Petitboot menu offering to boot from the USB stick. For some reason, the "Expert Install" was the default in the menu, and instead I selected "Default Install" for the regular experience. For this post, I will ignore BMC/IPMI, as interacting with it is not necessary. Make sure to not connect the BMC/IPMI ethernet port unless you are willing to enter that dungeon. The VGA console works fine with a normal USB keyboard, and you can choose to use only the second enP4p1s0f1 network card in the network card selection menu. If you are familiar with Debian netinst ISOs, the installation is straight-forward. I complicate the setup by partitioning two RAID1 partitions on the two NVMe sticks, one RAID1 for a 75GB ext4 root filesystem (discard,noatime) and one RAID1 for a 900GB LVM volume group for virtual machines, and two 20GB swap partitions, one on each of the NVMe sticks (to silence a warning about lack of swap; I'm not sure swap is still a good idea?). The 3x18TB disks use DM-integrity with RAID1, however the installer does not support DM-integrity so I had to create it after the installation (a sketch of that procedure follows at the end of this post). There are two additional matters worth mentioning: I have re-installed the machine a couple of times, and have now finished installing the production setup. I haven't run into any serious issues, and the system has been stable. Time to wrap up, and celebrate that I now run an operating system aligned with the Free System Distribution Guidelines on hardware that aligns with Respects Your Freedom. Happy Hacking indeed!
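Since the installer lacks DM-integrity support, the array can be created after installation; a rough sketch with assumed device names, shown for two of the disks (integritysetup ships with cryptsetup):

# add a dm-integrity layer on each member partition and open it
sudo integritysetup format /dev/sdb1
sudo integritysetup open /dev/sdb1 int-sdb1
sudo integritysetup format /dev/sdc1
sudo integritysetup open /dev/sdc1 int-sdc1
# assemble a RAID1 array on top of the integrity-protected devices
sudo mdadm --create /dev/md/data --level=1 --raid-devices=2 \
  /dev/mapper/int-sdb1 /dev/mapper/int-sdc1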

31 August 2023

Ian Jackson: Conferences take note: the pandemic is not over

Many people seem to be pretending that the pandemic is over. It isn't. People are still getting Covid, becoming sick, and even in some cases becoming disabled. People's plans are still being disrupted. Vulnerable people are still hiding. Conference organisers: please make robust Covid policies, publish them early, and enforce them. And, clearly set expectations for your attendees. Attendees: please don't be the superspreader.

Two conferences

This year I have attended a number of in-person events. For Eastercon I chose to participate online, remotely. This turns out to have been a very good decision. At least a quarter of attendees got Covid. At BiCon we had about 300 attendees. I'm not aware of any Covid cases. Part of the difference between the two may have been in the policies. BiCon's policy was rather more robust. Unlike Eastercon's it had a much better refund policy for people who got Covid and therefore shouldn't come; also BiCon asked attendees to actually show evidence of a negative test. Another part of the difference will have been the venue. The NTU buildings we used at BiCon were modern and well ventilated. But, I think the biggest difference was attendees' attitudes. BiCon attendees are disproportionately likely (compared to society at large) to have long term medical conditions. And the cultural norms are to value and protect those people. Conversely, in my experience, a larger proportion of Eastercon attendees don't always have the same level of consideration. I don't want to give details, but I have reliable reports of quite reprehensible behaviour by some attendees - even members of the convention volunteer staff.

Policies

Your conference should IMO at the very least: The rules should be published very early, so that people can see them, and decide if they want to go, before they have to book anything.

Don't recommend that people don't spread disease

Most of the things that attendees can do about Covid primarily protect others, rather than themselves. Making those things recommendations or advice is grossly unfair. You're setting up an arsehole filter: nice people will want to protect others, but less public spirited people will tell themselves it's only a recommendation. Make the rules mandatory.

But won't we be driving people away?

If you don't have a robust Covid policy, you are already driving people away. And the people who won't come because of reasonable measures like I've asked for above, are dickheads. You don't want them putting your other attendees at risk. And probably they're annoying in other ways too.

Example of something that is not OK

Yesterday (2023-08-30 13:44 UTC), less than two weeks before the conference, Debconf 23's Covid policy still looked like you see below. Today there is a policy, but it is still weak.
Edited 2023-09-01 00:58 +01:00 to fix the link to the Debconf policy.



27 August 2023

Shirish Agarwal: FSCKing /home

There is a bit of context that needs to be shared before I get to this, and it would be a long one. For reasons known and unknown, I have a lot of sudden electricity outages. Not just me; all those who are on my line. A discussion with a lineman revealed that around 200+ families and businesses are on the same line, and when the electricity goes, for whatever reason, it goes for all. Even some of the traffic lights don't work. This affects software more than hardware, or in some cases, both. And more specifically, HDDs are vulnerable. I had bought an APC unit several years ago for precisely this, but over a period of time it just couldn't function and also trips when the electricity goes out. It's been 6-7 years, so I can't even ask customer service to fix the issue, and from whatever discussions I have had with APC personnel, the only meaningful option is to buy a new unit, but even then I am not sure this is an issue that can be resolved. That brings me to the issue that happens once in a while, where the system fsck is unable to repair /home and you need to use an external pen drive for the same. This is how my HDD stacks up:
/ is on /dev/sda7, /boot is on /dev/sda6, /boot/efi is on /dev/sda2 and /home is on /dev/sda8, so theoretically, if /home for some reason doesn't work I should be able to drop down to /dev/sda7, unmount /dev/sda8, run fsck and carry on with my work. I tried it a number of times but it didn't work. I was dropping down to tty1 and attempting the same; no dice, as root/superuser I was getting the barest x-term.

So first I tried asking a couple of friends who live near me. Unfortunately, both are MS-Windows users and both use what are called "company-owned laptops". Surfing on those systems was a nightmare. Especially the number of pop-ups of ads that the web has become. And to think about how much harassment uBlock Origin has saved me over the years. One of the more interesting bits from both their devices was that all and any downloads from fosshub were showing up as malware. I dunno how much of that is true or not as I haven't had to use it, as most software we get through Debian archives or, if needed, download from github or wherever and run/install it and you are in business. Some of them even get compiled into a good .deb package, but that's outside the conversation atm. My only experience with fosshub was a few years before the pandemic, and that was good. I dunno if fosshub really has malware or malwarebytes was giving false positives. It also isn't easy to upload a 600 MB+ ISO file somewhere to see whether it really has malware or not. I used to know of a site or two where you could upload a suspicious file and almost 20-30 famous and known antivirus and anti-malware engines would check it and tell you the result. Unfortunately, I have forgotten the URL, and seeing things from an MS-Windows perspective, things have gotten way worse than before.

So, left with no choice, I turned to the local LUG for help. Fortunately, my mobile does have e-mail and I could use gmail to solicit help. While there could have been any number of live CDs that could have helped, one of my first experiences with GNU/Linux was that of Knoppix, which I had got from Linux For You (now known as OSFY) sometime in 2003. IIRC, I had read an interview of Mr. Klaus Knopper as well and was impressed by it. In those days, Debian wasn't accessible to non-technical users and Knoppix was a good way to see it. In fact, I think he was the first to come up with the idea of a Live CD and run with it, while Canonical/Ubuntu took another 2 years to do it. I think both the CD and the interview by distrowatch were shared by LFY in those early days. Of course, later the story changes after he got married, but I think that is more about Adriane rather than Knoppix.

So Vishal Rao helped me out. I got an HP USB 3.2 32GB Type C OTG Flash Drive x5600c (Grey & Black) from a local hardware dealer at around a similar price point. The dealer is a big one and has almost 200+ people scattered around the city doing channel sales, who in turn sell to end users. Asking one of the representatives for their opinion on stopping electronic imports (apparently more things were added later to the list, including all sorts of sundry items from digital cameras to shavers and whatnot), the gentleman replied that he hopes it would not happen, otherwise more than 90% would have to leave their jobs. They already have started into lighting fixtures (LED bulbs, tubelights etc.) but even those would come under the same ban.

The main argument, as I have shared before, is that the Indian Govt. thinks we need our home-grown CPU, and while I have no issues with that, as shared before, except for RISC-V there is no other space where India could look into doing that. Especially after the Chip Act, Biden has made it so that any new fabs or any new thing in chip fabrication will only be shared with the Five Eyes. Also, while India is looking to generate about 2000 GW by 2030 via solar, China has an ambitious 20,000 GW generation capacity by the end of this year, and the Chinese are the ones who are actually driving down module prices. The Chinese are also automating their factories as if there's no tomorrow. The end result of both is that China will continue to be the world's factory floor for the foreseeable future, and whoever may try whatever policies, it probably is gonna be difficult to compete with them on prices of electronic products. That's the reason the U.S. has been trying to make sure that China doesn't get the latest technology, but that perhaps is a story for another day.

HP USB 3.2 Type C OTG Flash Drive x5600c

People who have read this blog know that most of the flash drives today are MLC drives and do not have the longevity of SLC drives. For those who are new, this short brochure/explainer from Kingston should enhance your understanding. SLC drives are rare and expensive. There are also a huge number of counterfeit flash drives available in the market, and all the companies' efforts, whether it's Kingston, HP or any other manufacturer, have been like a drop in the bucket. Coming back to the topic at hand: while there are some tools and products that can help you figure out whether a pen drive is genuine or not (basically by probing the memory controller and the info you get from that), that probably is a discussion left for another day. It took me a couple of days before I was finally able to find time to go to Vishal's place. The journey back and forth lasted almost 6 hours, with crazy traffic jams. Tells you why Pune, or specifically the Swargate-Hadapsar patch, really needs a Metro. While an in-principle nod has been given, it probably is more than 5-7 years or more before we actually have a functioning metro. Even the current route the Metro has was supposed to be done almost 5 years to the date, and even the modified plan was of 3 years ago. And even now, most of the stations still need a lot of work to be done, PMC and Deccan as examples. Even PMT (Pune Municipal Transport), which is supposed to do the last mile connections via its buses, has been putting in half-hearted attempts.

Vishal Rao

While Vishal had apparently seen me and perhaps we had also interacted, this was my first memory of him, although we have been on a few boards now and then, including stackexchange. He was genuine and warm, and shared 4-5 distros with me, including Knoppix and System Rescue, as shared by Arun Khan. This was the first time I had heard about Ventoy, though apparently Vishal has been using it for a couple of years now. It's a simple shell script that you need to download and run on your pen drive, and then you just dump all the .iso images onto the drive. The easiest way to explain Ventoy is that it looks and feels like Grub. Which also reminds me of an interaction I had with Vishal on mobile. While troubleshooting the issue, I was unsure whether it was the filesystem that was the issue or whether systemd was also corrupted. Vishal reminded me of adding fastboot to the kernel parameters to see if I'm able to boot without fscking and get into userspace, i.e. /home. Although journalctl and systemctl were responding even on tty1, I was still a bit apprehensive. Using fastboot I was able to mount the whole thing and get into userspace, and that told me that it's only some of the inodes that need clearing and there probably are some orphaned inodes. Vishal has a mini-pc that he uses as a server: he downloads stuff to it and then downloads stuff from it. From both privacy and backup perspectives it is a better way to do things, but then you need a laptop to access it. I am sure he probably uses it for virtualization and in other ways as well, but we just didn't have time for that discussion. Also, a mini-pc can set you back anywhere from 25 to 40k depending on the mini-pc and the RAM and the SSD. And you need either a lappy or a Raspberry Pi with some kind of visual display to interact with the mini-pc. While he did share some of the things, there probably could have been a far longer interaction just on that, but that is probably best left for another day. Now at my end, the system I had bought is about 5-6 years old. At that time it only had 6 USB 2.0 ports and 2 USB 3.0 (A) ports.
The image above shows the various form factors. One of the other things is that I found the pendrive and its connectors to be extremely fiddly. It took me a number of attempts fiddling around with it before I was finally able to put it in and access the pen drive partitions. Unfortunately, I was unable to see/use System Rescue, but Knoppix booted up fine. I mounted the partitions briefly to see where is what, and sure enough /dev/sda8 showed my /home files and folders. I unmounted it, then ran fsck -y /dev/sda8, and was back in business. This concludes what happened.

Updates

Quite a bit was left out of the original post, part of which I didn't know and part of which is interesting and perhaps needs a blog post of its own. It's sad I won't be part of debconf, otherwise who knows what else I would have come to know.
  1. One of the interesting bits that I came to know about last week is the Alibaba T-Head TH1520 RISC-V CPU; I saw it first being demoed on a laptop and then on a standalone tablet. The laptop is an interesting proposition considering Alibaba opened up its chip division only a couple of years ago. To have an SoC within 18 months and then in production for laptops and tablets is practically unheard of, especially for a newbie/startup; even AMD took 3-4 years for its first chip. It seems they (Alibaba) would be parceling them out by quarter-end 2023 and another 1000 pieces/units in the first quarter of next year. While the scale is nothing compared to the behemoths, I think this would be more a matter of getting feedback on both the hardware and software. The value proposition is much better than what most of us get, at least in India. For example, they are doing a warranty for 5 years and also giving spare parts. RISC-V has been having a lot of resurgence in China, in part as it's an open standard and partly because development will be far cheaper and faster than trying x86 or x86-64. If you look at both of those manufacturers, due to their monopoly both of them now give 5-8% performance increments per year, and if you look back in history, you would find that when more chips were in competition, they used to give 15-20% performance increments per year.
2. While Vishal did share with me what he used and the various ways he uses the mini-pc, I did have fun speculating on what else he could use it for. As Romane has shared about his own case, the first thing that came to my mind was backups. Filesystems are notorious in the sense that they can be corrupted, or can be prone to being corrupted very easily, as can be seen above. Backups certainly make a lot of sense, especially with rsync (a minimal sketch follows right after this list). The other thing that came to my mind was having some sort of A.I. and chat server. IIRC, somebody has put quite a bit of open source public domain data on Debian servers that could be used to run either a chatbot or an A.I. or both, similar to how ChatGPT works but with a much more limited scope than what ChatGPT uses. I was also thinking of a media server, which Vishal did share he does run. I may probably visit him sometime to see what choices he made and what he learned in the process, if anything. Another thing that could be done is just take a dump of any commodity market, or any market, and have some sort of predictive A.I. on it. A whole bunch of people have scammed thousands of Indian users on this, but if you do it on your own and for your own purposes, it could aid you in buying and selling stocks or whatever commodity you may fancy; after all, nowadays markets themselves are virtual. While Vishal's mini-pc doesn't have any graphics, if it were an AMD APU mini-pc, something like this, he could have hosted games in the thick-server, thin-client way, where all graphics processing happens on the server rather than the client. With virtual reality, I think the same case, or an even stronger one, could be made. The only problem with VR/AR is that we don't really have mass-market goggles, eyepieces or headsets. The only notable project that Google has/had in that space is the Google Cardboard VR headset, and the experience is not that great, or at least was not that great a few years back when I could experience the same. Most of the VR headsets, say for example the Meta Quest 2, are around INR 44k/-, while the Quest 3 is INR 50k+ and officially not available. As I have shared before, the holy grail of VR would be when it falls below INR 10k/-, so it becomes just another accessory, not something you really have to save for. There also isn't much content, but then that is the whole chicken-or-egg situation. This again is a non-stop discussion, as so much has been happening in that space that it needs its own blog post/article. Till later.
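As promised above, a minimal sketch of the kind of rsync backup job such a mini-pc could run nightly; the hostname and paths here are hypothetical:
# Pull the laptop's home directory onto the mini-pc, mirroring deletions
rsync -avz --delete laptop:/home/user/ /srv/backups/laptop-home/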

Gunnar Wolf: Interested in adopting the RPi images for Debian?

Back in June 2018, Michael Stapelberg put the Raspberry Pi image building up for adoption. He created the first set of unofficial, experimental Raspberry Pi images for Debian. I promptly answered him, and while it took me some time to actually wrap my head around Michael's work, I managed to eventually do so. By December, I started pushing some updates. Not only that: I didn't think much about it in the beginning, as the needed non-free package was called raspi3-firmware, but by early 2019 I had it running for all of the then-available Raspberry families (so the package was naturally renamed to raspi-firmware). I got my Raspberry Pi 4 at DebConf19 (thanks to Andy, who brought it from Cambridge), and it soon joined the happy Debian family. The images are built daily, and are available at https://raspi.debian.net. In the process, I also adopted Lars' great vmdb2 image building tool, and have kept it decently up to date (yes, I'm currently lagging behind, but I'll get to it soonish); a sketch of a typical vmdb2 invocation follows below. Anyway. This year, I have been seriously neglecting the Raspberry builds. I have simply not had time to regularly test built images, nor to debug why the builder has not picked up building for trixie (testing). And my time availability is not going to improve any time soon. We are close to one month away from moving for six months to Paraná (Argentina), where I'll be focusing on my PhD. And while I do contemplate taking my Raspberries along, I do not foresee being able to put much energy into them. So: this is basically a call for adoption for the Raspberry Debian images building service. I do intend to stick around and try to help. It's not only me (although I'm responsible for the build itself); we have a nice and healthy group of Debian people hanging out in the #debian-raspberrypi channel on OFTC IRC. Don't be afraid, come and ask. I hope giving this project in adoption will breathe new life into it!
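For readers unfamiliar with vmdb2: it builds a disk image from a declarative YAML specification. A minimal sketch of an invocation; the spec file name here is only a placeholder, not the actual file used by the raspi.debian.net builds:
# Build an image from a vmdb2 YAML spec; needs root for loop devices
sudo vmdb2 --verbose --output raspi.img raspi.yaml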

21 August 2023

Melissa Wen: AMD Driver-specific Properties for Color Management on Linux (Part 1)

TL;DR: Color is a visual perception. Human eyes can detect a broader range of colors than any device in the graphics chain. Since each device can generate, capture or reproduce a specific subset of colors and tones, color management controls color conversion and calibration across devices to ensure a more accurate and consistent color representation. We can expose a GPU-accelerated display color management pipeline to support this process and enhance results, and this is what we are doing on Linux to improve color management on Gamescope/SteamDeck. Even with the challenges of being external developers, we have been working on mapping AMD GPU color capabilities to the Linux kernel color management interface, which is a combination of DRM and AMD driver-specific color properties. This more extensive color management pipeline includes pre-defined Transfer Functions, 1-Dimensional LookUp Tables (1D LUTs), and 3D LUTs before and after the plane composition/blending.
The study of color is well-established and has been explored for many years. Color science and research findings have also guided technology innovations. As a result, color in Computer Graphics is a very complex topic that I'm putting a lot of effort into becoming familiar with. I always find myself rereading all the materials I have collected about color space and operations since I started this journey (about one year ago). I also understand how hard it is to find consensus on some color subjects, as exemplified by all the explanations around the 2015 online viral phenomenon of The Black and Blue Dress. Have you heard about it? What is the color of the dress for you? So, taking into account my skills with colors and building consensus, this blog post only focuses on GPU hardware capabilities to support color management :-D If you want to learn more about color concepts and color on Linux, you can find useful links at the end of this blog post.

Linux Kernel, show me the colors ;D The DRM color management interface only exposes a small set of post-blending color properties. Proposals to enhance the DRM color API from different vendors have landed on the subsystem mailing list over the last few years. On one hand, we got some suggestions to extend the DRM post-blending/CRTC color API: DRM CRTC 3D LUT for R-Car (2020 version); DRM CRTC 3D LUT for Intel (draft - 2020); DRM CRTC 3D LUT for AMD by Igalia (v2 - 2023); DRM CRTC 3D LUT for R-Car (v2 - 2023). On the other hand, there were some proposals to extend the DRM pre-blending/plane API: DRM plane colors for Intel (v2 - 2021); DRM plane API for AMD (v3 - 2021); DRM plane 3D LUT for AMD - 2021. Finally, Simon Ser sent the latest proposal in May 2023: Plane color pipeline KMS uAPI, from discussions in the 2023 Display/HDR Hackfest, and it is still under evaluation by the Linux Graphics community. All previous proposals seek a generic solution for expanding the API, but many seem to have stalled due to the uncertainty of matching the hardware capabilities of all vendors well. Meanwhile, the use of AMD color capabilities on Linux remained limited by the DRM interface, as the DCN 3.0 family color caps and mapping diagram below shows for the Linux/DRM color interface without driver-specific color properties [*]. Bearing in mind that we need to know the variety of color pipelines in the subsystem to be clear about a generic solution, we decided to approach the issue from a different perspective and worked on enabling a set of Driver-Specific Color Properties for AMD Display Drivers. As a result, I recently sent another round of the AMD driver-specific color management API. For those who have been following the AMD driver-specific proposal since the beginning (see [RFC][V1]), the main new features of the latest version [v2] are the addition of a pre-blending Color Transformation Matrix (plane CTM) and the differentiation of the pre-defined Transfer Functions (TF) supported by color blocks. For those who just got here, I will recap this work in two blog posts. This one describes the current status of the AMD display driver in the Linux kernel/DRM subsystem and what changes with the driver-specific properties. In the next post, we go deeper, describing the features of each color block and providing a better picture of what is available in terms of color management for Linux.

The Linux kernel color management API and AMD hardware color capabilities Before discussing colors in the Linux kernel with AMD hardware, consider accessing the Linux kernel documentation (version 6.5.0-rc5). In the AMD Display documentation, you will find my previous work documenting AMD hardware color capabilities and the Color Management Properties. It describes how AMD Display Manager (DM) intermediates requests between the AMD Display Core component (DC) and the Linux/DRM kernel interface for color management features. It also describes the relevant function to call the AMD color module in building curves for content space transformations. A subsection also describes hardware color capabilities and how they evolve between versions. This subsection, DC Color Capabilities between DCN generations, is a good starting point to understand what we have been doing on the kernel side to provide a broader color management API with AMD driver-specific properties.

Why do we need more kernel color properties on Linux? Blending is the process of combining multiple planes (framebuffer abstractions) according to their mode settings. Before blending, we can manage the colors of various planes separately; after blending, we have combined those planes into only one output per CRTC. Color conversions after blending would be enough in a single-plane scenario or when dealing with planes in the same color space on the kernel side. Still, it cannot help to handle the blending of multiple planes with different color spaces and luminance levels. With plane color management properties, userspace can get a more accurate representation of colors to deal with the diversity of color profiles of devices in the graphics chain, support a wide color gamut (WCG), and convert High-Dynamic-Range (HDR) content to Standard-Dynamic-Range (SDR) content (and vice versa). With a GPU-accelerated display color management pipeline, we can use hardware blocks for color conversions and color mapping and support advanced color management. The current DRM color management API enables us to perform some color conversions after blending, but there is no interface to calibrate input space by planes. Note that here I'm not considering some workarounds in the AMD display manager mapping of the DRM CRTC de-gamma and DRM CRTC CTM properties to the pre-blending DC de-gamma and gamut remap blocks, respectively. So, in more detail, it only exposes three post-blending features (a quick way to inspect them from userspace is sketched right after the list):
  • DRM CRTC de-gamma: used to convert the framebuffer's colors to linear gamma;
  • DRM CRTC CTM: used for color space conversion;
  • DRM CRTC gamma: used to convert colors to the gamma space of the connected screen.
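As a quick userspace check, the modetest utility shipped with libdrm can list the properties each CRTC and plane exposes. A minimal sketch, assuming an AMD GPU; on a stock kernel the CRTC should show DEGAMMA_LUT, CTM and GAMMA_LUT among its properties:
# List CRTCs and planes together with the properties each one exposes
modetest -M amdgpu -p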

AMD driver-specific color management interface We can compare the Linux color management API with and without the driver-specific color properties. From now on, we denote driver-specific properties with the AMD prefix and generic properties with the DRM prefix. For visual comparison, I bring the DCN 3.0 family color caps and mapping diagram closer and present it here again. Mixing AMD driver-specific color properties with DRM generic color properties, we have a broader Linux color management system with the following features exposed by properties in the plane and CRTC interfaces, as summarized by this updated diagram. The blocks highlighted by red lines are the new properties in the driver-specific interface developed by me (Igalia) and Joshua (Valve). The red dashed lines are new links between the API and AMD driver components implemented by us to connect the Linux/DRM interface to AMD hardware blocks, mapping components accordingly. In short, we have the following color management properties exposed by the DRM/AMD display driver:
  • Pre-blending - AMD Display Pipe and Plane (DPP):
    • AMD plane de-gamma: 1D LUT and pre-defined transfer functions; used to linearize the input space of a plane;
    • AMD plane CTM: 3x4 matrix; used to convert plane color space;
    • AMD plane shaper: 1D LUT and pre-defined transfer functions; used to delinearize and/or normalize colors before applying 3D LUT;
    • AMD plane 3D LUT: 17x17x17 size with 12 bit-depth; three dimensional lookup table used for advanced color mapping;
    • AMD plane blend/out gamma: 1D LUT and pre-defined transfer functions; used to linearize back the color space after 3D LUT for blending.
  • Post-blending - AMD Multiple Pipe/Plane Combined (MPC):
    • DRM CRTC de-gamma: 1D LUT (can't be set together with plane de-gamma);
    • DRM CRTC CTM: 3x3 matrix (remapped to post-blending matrix);
    • DRM CRTC gamma: 1D LUT + AMD CRTC gamma TF; added to take advantage of driver pre-defined transfer functions.
Note: You can find more about AMD display blocks in the Display Core Next (DCN) Linux kernel documentation, provided by Rodrigo Siqueira (Linux/AMD display developer) in a 2021 documentation series. A sketch of how one might check for these driver-specific properties from userspace follows. In the next post, I'll revisit this topic, explaining display and color blocks in detail.
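A minimal sketch, again using modetest; the grep pattern assumes the driver-specific plane properties carry an AMD prefix, as in the patch series, and the exact property names may differ in the final API:
# Look for the AMD driver-specific color properties on the planes
modetest -M amdgpu -p | grep -i 'AMD_PLANE'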

How did we get a large set of color features from AMD display hardware? So, looking at AMD hardware color capabilities in the first diagram, we can see no post-blending (MPC) de-gamma block in any hardware family. We can also see that the AMD display driver maps CRTC/post-blending CTM to the pre-blending (DPP) gamut_remap, but there is a post-blending (MPC) gamut_remap (DRM CTM) in newer hardware versions, which include the SteamDeck hardware. You can find more details about hardware versions in the Linux kernel documentation/AMDGPU Product Information. I needed to rework these two mappings mentioned above to provide pre-blending/plane de-gamma and CTM for the SteamDeck. I changed the DC mapping to detach stream gamut remap matrices from the DPP gamut remap block. That means mapping AMD plane CTM directly to the DPP/pre-blending gamut remap block and DRM CRTC CTM to the MPC/post-blending gamut remap block. In this sense, I also limited plane CTM properties to those hardware versions with MPC/post-blending gamut_remap capabilities, since older versions cannot support this feature without clashing with DRM CRTC CTM. Unfortunately, I couldn't prevent a conflict between AMD plane de-gamma and DRM CRTC de-gamma, since post-blending de-gamma isn't available in any AMD hardware version until now. The fact is that a post-blending de-gamma makes little sense in the AMD color pipeline, where plane blending works better in a linear space and there are enough color blocks to linearize content before blending. To deal with this conflict, the driver now rejects atomic commits if users try to set both AMD plane de-gamma and DRM CRTC de-gamma simultaneously. Finally, we had no other clashes when enabling the other AMD driver-specific color properties for our use case, Gamescope/SteamDeck. Our main work for the remaining properties was understanding the data flow of each property, the hardware capabilities and limitations, and how to shape the data for programming the registers - AMD color block capabilities (and limitations) are the topics of the next blog post. Besides that, we fixed some driver bugs along the way, since this was the first Linux use case for most of the new color properties, and some behaviors are only exposed when exercising the engine. Take a look at the Gamescope/Steam Deck Color Pipeline[**], and see how Gamescope uses the new API to manage color space conversions and calibration (please click on the image for a better view). In the next blog post, I'll describe the implementation and technical details of each pre- and post-blending color block/property in the AMD display driver. * Thanks to Harry Wentland for helping with diagrams, color concepts and AMD capabilities. ** Thanks to Joshua Ashton for providing and explaining the Gamescope/Steam Deck color pipeline. *** Thanks to the Linux Graphics community - explicitly Harry, Joshua, Pekka, Simon, Sebastian, Siqueira, Alex H. and Ville - for all the learning during this Linux DRM/AMD color journey. Also, thanks to Carlos and Tomas for organizing the 2023 Display/HDR Hackfest, where we had a great and immersive opportunity to discuss Color & HDR on Linux.

16 August 2023

Simon Josefsson: Enforcing wrap-and-sort -satb

For Debian package maintainers, the wrap-and-sort tool is one of those nice tools that I use once in a while, and every time I have to re-read the documentation to conclude that I want to use the --wrap-always --short-indent --trailing-comma --sort-binary-package options (or -satb for short). Every time, I also wish that I could automate this and have it always be invoked to keep my debian/ directory tidy, so I don't have to do this manually once in a blue moon. I haven't found a way to achieve this automation in a non-obtrusive way that interacts well with my git-based packaging workflow. Ideally, I would like something like the lintian-hook during gbp buildpackage to check for this. Ideas? Meanwhile, I have come up with a way to make sure I don't forget to run wrap-and-sort for long, and that others who work on the same package won't either: create an autopkgtest which is invoked during the Salsa CI/CD pipeline, using the following as debian/tests/wrap-and-sort:
#!/bin/sh
set -eu
TMPDIR=$(mktemp -d)
trap "rm -rf $TMPDIR" 0 INT QUIT ABRT PIPE TERM
cp -a debian $TMPDIR
cd $TMPDIR
wrap-and-sort -satb
diff -ur $OLDPWD/debian debian
Add the following to debian/tests/control to invoke it (it is intentionally not indented properly, so that the self-test will fail and you will learn how it behaves):
Tests: wrap-and-sort
Depends: devscripts, python3-debian
Restrictions: superficial
Now I will get build failures in the pipeline once I upload the package into Salsa, which I usually do before uploading into Debian. I will get a diff output, and it won't be happy until I push a commit with the output of running wrap-and-sort with the parameters I settled on. While autopkgtest is intended to test the installed package, the tooling around autopkgtest is powerful and easily allows this mild abuse of its purpose for a pleasant QA improvement. Thoughts? Happy hacking!
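For what it's worth, the same check can also be run locally before pushing, e.g. using autopkgtest's null virtualization backend from the unpacked source tree (a sketch; assumes autopkgtest and devscripts are installed):
# Run the package's autopkgtests on the host itself, skipping the build
autopkgtest -B . -- null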

11 August 2023

Birger Schacht: Another round of rust

A couple of weeks ago I had to undergo surgery because one of my kidneys malfunctioned. Everything went well and I'm on my way to recovery. Luckily, the most recent local heat wave was over just shortly after I got home, which made being stuck at home a little easier (not sure yet when I'll be allowed to do sports again; I miss my climbing gym). At first I did not have that much energy to do computer stuff, but after a week or so I was able to sit in front of the screen for short amounts of time and I started to get into writing Rust code again.

carl The first thing I did was update carl. I updated all the dependencies and switched the dependency that does coloring from ansi_term, which is unmaintained, to nu-ansi-term. When I then updated the clap dependency to version 4, I realized that clap now depends on the anstyle crate for text styling - so I updated carl's coloring code once again so it now uses anstyle, which led to fewer dependencies overall. While implementing this change I also did some refactoring of the code. carl now also has its own website as well as a subdomain1. I also added a couple of new date properties to carl, namely all weekdays as well as odd and even - this means it is now possible to choose a separate color for every weekday and have a rainbow calendar:
screenshot carl
This is included in version 0.1.0 of carl, which I published on crates.io.
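Assuming a Rust toolchain with cargo is installed and the crate name matches the project, installing it should be as simple as:
cargo install carl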

typelerate Then I started writing my first game - typelerate. It is a copy of the great typespeed, without the multiplayer support. To describe the idea behind the game, I quote the typespeed website:
Typespeed's idea is ripped from ztspeed (a DOS game made by Zorlim). The idea behind the game is rather easy: type words that are flying by from left to right as fast as you can. If you miss 10 or more words, game is over.
Instead of the multiplayer support, typelerate works with UTF-8 strings, and it also has another game mode: in typespeed you only type what's scrolling across the screen; in typelerate I added the option to have one or more answer strings, one of which has to be typed instead of the word flying across the screen. This lets you implement a kind of question/answer game. To be backwards compatible with the existing wordfiles from typespeed2, the wordfiles for the question/answer games contain comma separated values. The typelerate repository contains wordfiles with Python and Rust keywords, as well as wordfiles where you are shown an Emoji and you have to type the corresponding GitHub shortcode. I'm happy to add additional wordfiles (there could be, for example, math questions).
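As an illustration, a hypothetical question/answer wordfile for the Emoji mode might look like this; the shape is only a guess from the description above, and the real format is defined by typelerate's wordfile parser:
🚀,rocket
🐍,snake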
screenshot typelerate

marsrover Another commandline game I really like, because I am fascinated by the animated ASCII graphics, is the venerable moon-buggy. In this game you have to drive a vehicle across the moon's surface and deal with obstacles like craters or aliens. I reimplemented the game in Rust and called it marsrover:
screenshot marsrover
I published it on crates.io; you can find the repository on GitHub. The game uses a configuration file in $XDG_CONFIG_HOME/marsrover/config.toml - you can configure the colors of the elements as well as the levels. The game comes with four levels predefined, but you can use the configuration file to override that list of levels with levels with properties of your own. The level properties define the probabilities of obstacles occurring on your way across the mars surface, and a points setting that defines how many points the user can get in that level (the game switches to the next level once the user reaches that many points).
[[levels]]
prob_ditch_one = 0.2    # probability of the first ditch type appearing
prob_ditch_two = 0.0    # probability of the second ditch type
prob_ditch_three = 0.0  # probability of the third ditch type
prob_alien = 0.5        # probability of an alien appearing
points = 100            # points needed before switching to the next level
After the last level, the game generates new ones on the fly.

  1. thanks to the service from https://cli.rs.
  2. actually, typelerate is not backwards compatible with the typespeed wordfiles, because those are not UTF-8 encoded

7 August 2023

Dirk Eddelbuettel: RQuantLib 0.4.19 on CRAN: More Maintenance

A new release 0.4.19 of RQuantLib arrived at CRAN earlier today, and has already been uploaded to Debian too. QuantLib is a rather comprehensive free/open-source library for quantitative finance. RQuantLib connects it to the R environment and language, and has been part of CRAN for more than twenty years (!!) This release of RQuantLib brings a small update to three unit tests, as the very recent QuantLib 1.31 release brought a subtle change to some fixed income payment schedules and dates. On a sadder note, as CRAN now checks the ratio of "user time over elapsed time", excessive threading was inferred for five examples. As we seemingly cannot limit std::thread here, I opted to park these examples behind a \dontrun. Not ideal. Lastly, a few version checks in configure were updated.

Changes in RQuantLib version 0.4.19 (2023-08-07)
  • Three calendaring / schedule tests were adjusted for slightly changed values under QuantLib 1.31
  • Several checks in the configure script have been updated to reflect current versions of packages.
  • Five examples no longer run because, even while extremely short, use of (too many default) threads was seen.

Courtesy of my CRANberries, there is also a diffstat report for this release 0.4.19. As always, more detailed information is on the RQuantLib page. Questions, comments etc. should go to the rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo. If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

1 August 2023

Reproducible Builds: Supporter spotlight: Simon Butler on business adoption of Reproducible Builds

The Reproducible Builds project relies on several projects, supporters and sponsors for financial support, but they are also valued as ambassadors who spread the word about our project and the work that we do. This is the seventh instalment in a series featuring the projects, companies and individuals who support the Reproducible Builds project. We started this series by featuring the Civil Infrastructure Platform project, and followed this up with a post about the Ford Foundation as well as recent ones about ARDC, the Google Open Source Security Team (GOSST), Bootstrappable Builds, the F-Droid project and David A. Wheeler. Today, however, we will be talking with Simon Butler, an associate senior lecturer in the School of Informatics at the University of Skövde, where he undertakes research in software engineering that focuses on IoT and open source software, and contributes to the teaching of computer science to undergraduates.

Chris: For those who have not heard of it before, can you tell us more about the School of Informatics at Skövde University? Simon: Certainly, but I may be a little long-winded. Skövde is a city in the area between the two large lakes in southern Sweden. The city is a busy place. Skövde is home to the regional hospital, some of Volvo's manufacturing facilities, two regiments of the Swedish defence force, a lot of businesses in the Swedish computer games industry, other tech companies and more. The University of Skövde is relatively small. Sweden's large land area and low population density mean that regional centres such as Skövde are important and local universities support businesses by training new staff and supporting innovation. The School of Informatics has two divisions. One focuses on teaching and researching computer games. The other division encompasses a wider range of teaching and research, including computer science, web development, computer security, network administration, data science and so on.
Chris: You recently had an open-access paper published in Software Quality Journal. Could you tell us a little bit more about it and perhaps briefly summarise its key findings? Simon: The paper is one output of a collaborative research project with six Swedish businesses that use open source software. There are two parts to the paper. The first consists of an analysis of what the group of businesses in the project know about Reproducible Builds (R-Bs), their experiences with R-Bs and their perception of the value of R-Bs to the businesses. The second part is an interview study with business practitioners and others with experience and expertise in R-Bs. We set out to try to understand the extent to which software-intensive businesses were aware of R-Bs, the technical and business reasons they were or were not using R-Bs, and to document the business and technical use cases for R-Bs. The key findings were that businesses are aware of R-Bs, and some are using R-Bs as part of their day-to-day development process. Some of the uses for R-Bs we found were not previously documented. We also found that businesses understood the value R-Bs have as part of engineering and software quality processes. They are also aware of the costs of implementing R-Bs and that R-Bs are an intangible value proposition - in other words, businesses can add value through process improvement by using R-Bs. But, currently at least, R-Bs are not a selling point for software or products.
Chris: You performed a large number of interviews in order to prepare your paper. What was the most surprising response to you? Simon: Most surprising is a good question. Everybody I spoke to brought something new to my understanding of R-Bs, and many responses surprised me. The interviewees that surprised me most were I01 and I02 (interviews were anonymised and interviewees were assigned numeric identities). I02 described the sceptical perspective that there is a viable, pragmatic alternative to R-Bs - verifiable builds - which I was aware of before undertaking the research. The company had developed a sufficiently robust system for their needs, and it worked well. With a large archive of software used in production, they couldn't justify the cost of retrofitting a different solution that might only offer small advantages over the existing system. That doesn't really sound too surprising, but the interview was one of the first I did on this topic, and I was very focused on the value of, and need for, trust in a system that motivated the R-B. The solution used by the company requires trust, but they seem to have established sufficient trust for their needs by securing their build systems to the extent that they are more or less tamper-proof. The other big surprise for me was I01's use of R-Bs to support the verification of system configuration in a system with multiple embedded components at boot time. It's such an obvious application of R-Bs, and exactly the kind of response I hoped to get from interviewees. However, it is another instance of a solution where trust is only one factor. In the first instance, the developer is using R-Bs to establish trust in the toolchain. There is also the second application, in which the developer can use a set of R-Bs to establish that a deployed system consists of compatible components. While this might not sound too significant, there appear to be some important potential applications. One that came to mind immediately is a problem with firmware updates on nodes in IoT systems, where the node needs to update quickly with limited downtime and without failure. The node also needs to be able to roll back any update proposed by a server if there are conflicts with the current configuration or if any tests on the node fail. Perhaps the chances of failure could be reduced if a node can instead negotiate with a server to determine a safe path to migrate from its current configuration to a working configuration with the upgraded components the central system requires? Another potential application appears to be in the configuration management of AI systems, where decisions need to be explainable. A means of specifying validated configurations of training data, models and deployed systems might, perhaps, be leveraged to prevent invalid or broken configurations from being deployed in production.
Chris: One of your findings was that reproducible builds were perceived to be "good engineering practice". To what extent do you believe cultural forces affect the adoption or rejection of a given technology or practice? Simon: To a large extent. People's decisions are informed by cultural norms, and business decisions are made by people acting collectively. Of course, decision-making, including assessments of risk and usefulness, is mediated by individual positions on the continuum from conformity to non-conformity, as well as individual and in-group norms. Whether a business will consider a given technology for adoption will depend on cultural forces. The decision to adopt may well be made on the grounds of costs and benefits.
Chris: Another conclusion implied by your research is that businesses are often dealing with software deployment lifespans (e.g. 20+ years) that differ widely from those of the typical hobbyist programmer. To what degree do you think this temporal mismatch is a problem for both groups? Simon: This is a fascinating question. Long-term software maintenance is a requirement in some industries because of the working lifespans of the products and legal requirements to maintain the products for a fixed period. For some other industries, it is less of a problem. Consequently, I would tend to divide developers into those who have been exposed to long-term maintenance problems and those who have not, although professional developers are more likely than hobbyists to have been exposed to the problem. Nonetheless, there are areas, such as music software, where there are also long-term maintenance challenges for data formats and software.
Chris: Based on your research, what would you say are the biggest blockers for the adoption of reproducible builds within business? And, based on this, would you have any advice or recommendations for the broader reproducible builds ecosystem? Simon: From the research, the main blocker appears to be cost. Not an absolute cost, but there is an overhead to introducing R-Bs. Businesses (and thus business managers) need to understand the business case for R-Bs. Making decision-makers in businesses aware of R-Bs and that they are valuable will take time. Advocacy at multiple levels appears to be the way forward, and this is being done. I would recommend being persistent while being patient, and to keep talking about reproducible builds. The work done in Linux distributions raises awareness of R-Bs amongst developers. Guix, NixOS and Software Heritage are all providing practical solutions and getting attention - I've been seeing progressively more mentions of all three during the last couple of years. Increased awareness amongst developers should lead to more interest within companies. There is also research money being assigned to supply chain security and R-Bs. The CHAINS project at KTH in Stockholm is one example of a strategic research project. There may be others that I'm not aware of. The policy-level advocacy is slowly getting results in some countries, and where CISA leads, others may follow.
Chris: Was there a particular reason you alighted on the question of the adoption of reproducible builds in business? Do you think there's any truth behind the shopworn stereotype of "hacker types" neglecting the resources that business might be able to offer? Simon: Much of the motivation for the research came from the contrast between the visibility of R-Bs in open source projects and the relative invisibility of R-Bs in industry. Where companies are known to be using R-Bs (e.g. Google, etc.) there is no fuss, no hype. They were not selling R-Bs as a solution; instead, the documentation is very matter-of-fact that R-Bs are part of a customer-facing process in their cloud solutions. An obvious question for me was: if some people use R-Bs in software development, why doesn't everybody? There are limits to the tooling for some programming languages that mean R-Bs are difficult or impossible. But where creating an R-B is practical, why are they not used more widely? So, to your second question. There is another factor, which seems to be more about a lack of communication rather than neglecting opportunities. Businesses may not always be willing to discuss their development processes and innovations. Though I do think the increasing number of conferences (big and small) for software practitioners is helping to facilitate more communication and a greater exchange of ideas.
Chris: Has your personal view of reproducible builds changed since before you embarked on writing this paper? Simon: Absolutely! In the early stages of the research, I was interested in questions of trust and how R-Bs were applied to resolve build and supply chain security problems. As the research developed, however, I started to see there were benefits to the use of R-Bs that were less obvious and that, in some cases, an R-B can have more than a single application.
Chris: Finally, do you have any plans to do future research touching on reproducible builds? Simon: Yes, definitely. There are a set of problems that interest me. One already mentioned is the use of reproducible builds with AI systems. Interpretable or explainable AI (XAI) is a necessity, and I think that R-Bs can be used to support traceability in the configuration and testing of both deployed systems and systems used during model training and evaluation. I would also like to return to a problem discussed briefly in the article, which is to develop a deeper understanding of the elements involved in the application of R-Bs that can be used to support reasoning about existing and potential applications of R-Bs. For example, R-Bs can be used to establish trust for different groups of individuals at different times, say, between remote developers prior to the release of software and by users after release. One question is whether when an R-B is used might be a significant factor. Another group of questions concerns the ways in which trust (of some sort) propagates among users of an R-B. There is an example in the paper of a company that rebuilds Debian reproducibly for security reasons and is then able to collaborate on software projects where software is built reproducibly with other companies that use public distributions of Debian.
Chris: Many thanks for this interview, Simon. If someone wanted to get in touch or learn more about you and your colleagues at the School of Informatics, where might they go? Thank you for the opportunity. It has been a pleasure to reflect a little more widely on the research! Personally, you can find out about my work on my official homepage and on my personal site. The software systems research group (SSRG) has a website, and the University of Skövde's English language pages are also available. Chris: Many thanks for this interview, Simon!


For more information about the Reproducible Builds project, please see our website at reproducible-builds.org. If you are interested in ensuring the ongoing security of the software that underpins our civilisation and wish to sponsor the Reproducible Builds project, please reach out to the project by emailing contact@reproducible-builds.org.

26 July 2023

Shirish Agarwal: Manipur Violence, Drugs, Binging on Northshore, Alaska Daily, Doogie Kamealoha and the EU Digital Resilience Act.

Manipur Videos Warning: The text might be mature and will have references to violence, so if there are kids around or you are sensitive, please excuse. A few days back, I saw the videos, and I cannot share the rage, shame and many conflicting emotions that were going through me. I almost didn't want to share, but couldn't stop myself. The women in the video were being groped, fingered, paraded nude, and later reportedly raped and murdered. And there have been more than a few such cases. The next day I saw another video that showed beheaded heads, and Kukis being killed just next to their houses. I couldn't imagine what those people must be feeling, as the CM has been making partisan statements against them. One of the husbands of the Kuki women who had been paraded and fondled is an officer in the Indian Army. The Meiteis even tried to burn his home, but the Army intervened and didn't let it get burnt. The CM's own statement, as shared before, tells of his inability to bring the situation out of crisis. In fact, his statement was dumb, stating that the Internet shutdown was because there were more than 100 such cases. And it's spreading to the nearby Northeast regions. Now Mizoram, the nearest neighbour, where the Meiteis are not dominant, is going through similar things. The Mizos have told the Meiteis to get out. To date, the PM has chosen not to visit Manipur. He just made a small one-minute statement about it, saying how the women have shamed India (an approximation of what he said), while it's actually not the women but the men who have shamed India. The Wire has been talking to the Meiteis, the Kukis and the Nagas. A Kuki woman sort of bared all. She is right on many counts. The GOI, while wanting to paint the Kukis in a negative light, has forgotten what has been happening in its own state, especially to its own youth, as well as in other states, while also ignoring the larger geopolitics and business around it. The Taliban has been cracking down, as even they couldn't watch young boys and women becoming drug users. I had read somewhere that 1 in 4 or 1 in 5 young people in Afghanistan is now in its grip. So no wonder the Taliban is trying to eradicate and shut down drug use among its own youth. Circling back to Manipur, I was under the wrong impression that the Internet shutdown was over. After those videos became viral, as well as the others I mentioned, again the orders have been given and there is a shutdown. It is not fully shut; now only Govt. offices have it, so nobody can share a video that goes against any State or Central Govt. narrative. A really sad state of affairs. Update: There is conditional reopening, whatever that means. When I saw the videos, the first thing I felt was being powerless, powerless to do anything about it. The second was that if I do not write about it, amplify it and let others know about it, then what's the use of being able to blog?

Mental Health, Binging on various Webseries Both the videos shocked me, and I couldn't sleep that night or the night after. Even after doing work and all, they would come in unobtrusively in my nightmares. While I felt a bit foolish, I felt it would be nice to binge on some webseries. Little was I to know that both Northshore and Alaska Daily would have stories similar to what is happening here. While the story in Alaska Daily is fictional, it resembles very closely a real newspaper called the Anchorage Daily News. Even there, the Inuit women, one of the marginalized communities in Alaska, figure in the story. The only difference I can see between the GOI and the Alaskan Government is that the Alaskan Government was much more subtle in doing the same things. There are some differences though. First, the State is and was responsive to the local press, and apart from one close call for one of its reporters, most reporters do not have to think about their own life being in peril. Here, the press cannot look after either their livelihood or their life. It was a juvenile kid who actually shot the video, uploaded it and made it viral. One needs only to remember the case details of Siddique Kappan. Just for sharing the news and the video he was arrested. Bail was denied to him time and time again, citing that the Police were "investigating". Only after 2 years and 3 months did he get bail, and that too because the Police were unable to show prima facie evidence for any of the charges they had. One of the better interviews, though, was of Vrinda Grover. For those who don't know her, her Wikipedia page does tell a bit about her, although it is woefully incomplete. For example, most recently she relentlessly pursued the unconstitutional Internet shutdown that happened in Kashmir for 5 months. Just like in Manipur, the shutdown was there to bury crimes either committed or being facilitated by the State. For the issues of livelihood, one can take the cases of Bipin Yadav and Rashid Hussain. Both were fired by their employer Dainik Bhaskar because they questioned the BJP MP Smriti Irani on what she has done for the state. The problem for Dainik Bhaskar, or for any other mainstream media, is that most of them rely on Government advertisements. Private investment in India has fallen to record lows, mostly due to the policies made by the Centre. If any entity or sector grows a bit, then either Adani or Ambani will one way or the other take it. So, for most first and second generation entrepreneurs it doesn't make sense to grow and then finally sell to one of these corporates at a loss, with the GOI on the Adani/Ambani side of any deal. The MSME sector, which is and used to be the second highest employer, hasn't been able to recover from the shocks of demonetization, GST and then the pandemic, each resulting in more and more closures and shutdowns. Joblessness has gone up tremendously in North India, which the Government tries to deny. The most interesting point in all the above examples is that within a month or less, whatever the media reports gets scrubbed. Even the firing of the journos that was covered by some of the mainstream media isn't there anymore; I have to use secondary sources instead of primary sources. One can think of the chilling effects on reportage due to the above. The sad fact is that even with all the money in the world, the PM is unable to come to Parliament to face questions.
Why is the PM not answering in Parliament, even Rahul Gandhi is not there - Surya Pratap Singh, prev. IAS Officer.
The above poster/question is by Surya Pratap Singh, a retired IAS officer. He asks why the PM is unable to answer in either of the houses. As shared before, the Govt. wants very limited discussion. Even yesterday, Lok Sabha TV just showed the BJP MPs making statements, but was silent, or the mic was off, during whatever questions or statements were made by the opposition. If this isn't mockery of Indian democracy, then I don't know what is. Even the media landscape has been altered substantially within the last few years. Both Adani and Ambani have distributed the media pie between themselves. One of the last bastions of the free press, NDTV, was bought by Adani in a hostile takeover. Both Ambani and Adani are close to this Government. In fact, there is no sector in which one or the other is not present. Media houses like Newsclick, The Wire etc., which are a fraction of the mainstream press, are where most of the youth have been going to get their news, as they are not partisan. Although even there, the GOI has time and again interfered. The Wire has had too many 504 Gateway timeouts in recent months, and they have been forced to move most of their journalism from online text to video, rather YouTube, in order to escape both the censoring and the timeouts, as shared above. In such a hostile environment, how both organizations are somehow able to survive is a miracle. Most local reportage is also going to YouTube, as that's the best way for them to not run into Govt. censors. Not an ideal situation, but that's the way it is. The difference between the Indian and Israeli media can be seen through this:
The above is a screenshot showing how the Israeli media has reacted to the Israeli Government's push in the Knesset for the judicial overhaul. Here, by contrast, the press erodes its own standing by giving in to the Government day and night.

Binging on Webseries Saw Northshore, Three Pines, Alaska Daily and Doogie Kamealoha M.D., which is based on Doogie Howser M.D. Of the four, I enjoyed Doogie Kamealoha M.D. the most, but then it might be because it's a copy of Doogie Howser, just updated to the new millennium, and there are some good childhood memories associated with that series. The others are also good. I tried not to watch European stuff, as most of it is twisted and I didn't want to be in that headspace.

EU Digital Operational Resilience Act and impact on FOSS A few days ago, the EU apparently shared the above Act. One can read more about it here. This would have more impact on FOSS, as much development of various FOSS distributions happens in the EU; a fair bit of Debian's own development happens in Germany and France. There have been calls to make things clearer, especially for FOSS, given that most developers do FOSS development either on the side or as a hobby, while their day job is and would be different. The part about consumer electronics and FOSS is a tricky one, as updates can screw up your systems. Microsoft has a huge history of devices not working after an update or upgrade, and this is not limited to Windows, as they would like to believe; even Apple seems to have its share of issues time and time again. One would have hoped that these companies, which make billions of dollars from their hardware and software sales, would do more testing and QA and be more aware of security issues. FOSS, on the other hand, while being more responsive, doesn't make as much money vis-a-vis the competitors. Let's take the most concrete example. The most successful FOSS mobile phone maker is Purism, but its phone has priced itself out of the market. A huge part of that is to do with both economies of scale and trying to build infrastructure and skills in the States where none, or minimal, exists. Compare that to, say, the PinePhone Pro, which is manufactured in Hong Kong and is priced at a third of the same. For most people it is simply not affordable in these times. Add to that the complexity of modern cellphones, which makes it harder, not easier, for most people to be vigilant and update the phone at all times. Maybe we need more dumbphones such as the Light Phone and Punkt, but then, can those be remotely hacked or not? There don't seem to be any answers on that one; I haven't even seen anybody ask those questions. They may have their own chicken-and-egg issues. For people like me who have lost hearing, I can navigate smartphones for now, but as I grow old I don't see anything that would help me. For much of the elderly population, both hearing and sight are the first to fade. There don't seem to be any solutions targeted at them, even though they are 5-10% of any population at the very least, probably more so in Europe and the U.S. as well as Japan and China. All of them are clearly under-served markets, but I don't know of a solution for them. At least to me, that's an open question.

19 July 2023

Shirish Agarwal: RISC-V, Chips Act, Burning of Books, Manipur

RISC-V Motherboard, SBC While I didn't want to be, a part of me is hyped about this motherboard. It would probably be launched somewhere in November. There are obvious issues with it, the first being that, unlike regular motherboards, you wouldn't be able to upgrade it as you normally would: you can't upgrade the memory and can't upgrade the CPU (although new versions of instructions could be uploaded, similar to BIOS updates), but as the hardware is integrated (the quad-core SiFive Performance P550 core complex) it would really depend. If the final pricing is around INR 4-5k then it may sell handsomely, provided there are people to push it and provide support around it. A 500 GB or 1 TB SSD coupled with it and a cheap display unit, and you could use it anywhere, although as the name suggests it's more for tinkering. Another board that could perhaps be of more immediate use would be from BeagleBoard. They launched it a couple of days back and called it the BeagleV-Ahead. Again, costs are going to be a concern. Just a year before the pandemic, the BeagleBone Black (BBB) used to cost in the sub-4k range; today it costs 8k+ for the end user, more than twice the price. How much Brexit is to be blamed for this and how much the Indian customs, we will never know. The RS Group that is behind that shop is headquartered in the UK. As said before, we do not know the price of either board, as it probably will take a few months for the V-Ahead to worm its way into the Indian market, maybe another 6 months or so. Even so, with the limited info on both boards, I am tilting more towards the HiFive one. We should come to know about the boards in, say, 3-5 months' time.

CHIPS Act I had shared about the CHIPS Act a few times here as well as on SM. Two articles do tell how the CHIPS Act 2023 is more of a political tool, an industrial defence policy, rather than just business as most people tend to think.

Cancellation of Books, Book Burning etc. Almost 2400 years ago, Plato released the work known as the Republic, and one of the seminal pieces within it, perhaps the most famous, is the Allegory of the Cave. That is used again and again in a myriad of ways, mostly in science fiction though, and mostly to do with utopian and dystopian movies, webseries etc. I did share how books are being canceled in the States, also a bit here. But the most damning thing has happened throughout history: huge quantities of books burned, almost all for politics. But part of it has been neglect as well, as this Time article shares. What we have lost and continue to lose is just priceless. Every book has a grain of truth in it, some more, some less, but all are equally enjoyable. Most harmful is the neglect towards books, and this is more true today than at any other time in history. Kids today have a wide variety of tools to keep themselves happy or occupied, from anime to VR to gaming; the list goes on and on. In that scenario, how can the humble book compete? People think of the Kindle, but most e-readers like the Kindle are nothing but obsolescence by design. I have tried out the Kindle a few times but find it a bit on the flimsy side. Books are much better IMHO, or call me old-school. While there are many advantages, one of the things that I like about books is that you can easily put yourself in the place of either the protagonist or the antagonist, or somewhere in the middle, and think of the possible scenarios wherever you are in a particular book. I could go on, but it would be a blog post or two in itself. Till later. Happy Reading.

Update: Manipur Extremely horrifying visuals, articles and statements continue to emanate from Manipur. Today, 19th July 2023, just a couple of hours back, a video surfaced showing two Kuki women stripped naked, with Meitei men touching their private parts. Later on, we came to know that this was in response to disinformation spread by the Meiteis about a few women being raped, although no documentary evidence of the same surfaced: no names, nothing. While I don't want to share the video, I will however share the statement shared by the Kuki-Zo tribal community on it. The Print gives a bit more context to what has been going on.
Update, a few hours later: The Print also shared more context about six days ago. The reason we saw the video only now is that for the last 2.5 months Manipur has been under an Internet shutdown, so those videos got uploaded only now. There was a huge backlash from the Twitter community, and the GOI ordered the Manipur Police to issue this Press Release yesterday night, or just a few hours before, with yesterday's time-stamp.
IndianExpress shared an article that does state that while an FIR had been registered immediately, there have been no arrests so far, and this when you can see the faces of all the accused. Not one of them tried to hide their face behind a mask or something. So, if the police wanted, they could have easily identified who they are. They know which community the accused belong to; they even know where they came from. If they wanted to, they could have easily used mobile data and triangulation to find the accused and their helpers. So, it does seem to be an attempt to whitewash and protect a certain community while letting it prey on the other. Another piece of news that did come in: because of the furious reaction on Twitter, YouTube has constantly been taking down the video, as some people, more so from the majoritarian community, are getting a sort of high from it and making lewd remarks. Twitter has been somewhat quick when people make lewd remarks against the two women. Quite a bit of the above seems like a cover-up. Lastly, apparently the GOI has agreed to have a conversation about it in the Lok Sabha, but without any voting or passing any resolutions as of right now. I will update as and when things change. Update: Smriti Irani, the Women and Child Development Minister, gave the weakest statement possible:
As can be noticed, she said "sexual assault" rather than rape. The women were under police custody for safety when they were whisked away by the mob; no mention of that. She spoke to the Chief Minister, who has been publicly known as one of the provocateurs or instigators of the whole thing. The CM had publicly called the Kukis and Nagas foreigners, although both of them claim to have been residing there for thousands of years and apparently have documentary evidence of the same. It is also not clear who is doing the condemning here. No word of support for the women, no offer of intervention. Why is she the Minister of Women and Child Development if she can't use harsh words or give support to women who have gone and are going through horrific things? Update: CM Biren Singh's statement after the video surfaced:
This tweet is contradictory to the statements made by Mr. Singh a couple of months ago. At that point in time, Mr. Singh had said that the NIA, State Intelligence Departments etc. were giving him minute-to-minute reports on the ground situation. The Police itself has suo-moto (on its own) powers to investigate and apprehend criminals for any crime. In fact, the Police can call anybody for questioning in relation to any crime and question them for up to 48 hours before charging them. In fact, many cases have been lodged where innocent persons have been framed, or have served far more time in jail than the sentence for the crime they are alleged to have committed. For example, just a few days ago there was a media report of a boy who has been in jail for 3 years; his alleged crime, stealing a mere INR 200/- to feed himself, and the court doesn't have time to hear him yet. And there are millions like him. The Quint eloquently shares the tale, telling how both the State and the Centre have been explicitly complicit in the incidents ravaging Manipur. In fact, what has been shared in the article is very true as far as greed for land is concerned. Just a couple of weeks back there were a ton of floods emanating from Uttarakhand and elsewhere. What the CM was doing just before the flooding began can be seen here. Apart from the newspapers and online resources I have shared, most of the mainstream media has been silent on the above. In fact, they were silent on the Manipur issue until the said video came into the limelight. Just now, in the Lok Sabha, everybody is present except the Prime Minister and the Home Minister. The PM did say that the law will take its own course, but that's about it. Again, no support for the women concerned. Update: The CJI (Chief Justice of India) has taken suo-moto cognizance and has warned both the State and the Centre to move quickly, otherwise the court will take the matter into its own hands.
Update: Within 2 hours of the CJI taking suo-moto cognizance, they have arrested one of the main accused, Heera Das.
The above tells you why the Internet ban was imposed in the first place: they wanted to cover it all up. Of all the celebs, only one could find a bit of spine, a bit of backbone, to speak about it; all the rest stayed mum.
Just imagine: one of the women is around my age, while the younger one could have been a daughter if I had married on time, or a younger sister for sure. If I ever came face to face with them, I just wouldn't be able to look them in the eye. Even their whole whataboutery is built on a sham. In their view, the Kukis are from Burma or of Burmese descent, all of which could easily be settled one way or the other by DNA testing. But let's leave that for a sec and take their own argument that the Kukis are Burmese: their idea of Akhand Bharat stretches all the way to Burma (now called Myanmar). They want all the land, but have no idea what to do with the citizens living on it. Even after the video, the whataboutery isn't stopping, which shows how much hatred there is, and they don't know that they too will be victims of the same venom one day or another. Update: The Opposition was told there would be a debate on Manipur. The whole day went by; no debate. That's the shamelessness of this Govt. Update 20th July 19:25: The Centre may or may not act against the perpetrators, but they will act against Twitter, which showed the crime. Talk about shooting the messenger.
We are now in the last stage. In 2014, we were at 6.

15 July 2023

Freexian Collaborators: Monthly report about Debian Long Term Support, June 2023 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian's Debian LTS offering.

Debian LTS contributors In June, 17 contributors have been paid to work on Debian LTS; their reports are available:
  • Abhijith PA did 12.0h (out of 6.0h assigned and 8.0h from previous period), thus carrying over 2.0h to the next month.
  • Adrian Bunk did 28.0h (out of 0h assigned and 34.5h from previous period), thus carrying over 6.5h to the next month.
  • Anton Gladky did 5.0h (out of 6.0h assigned and 9.0h from previous period), thus carrying over 10.0h to the next month.
  • Bastien Roucariès did 17.0h (out of 17.0h assigned and 3.0h from previous period), thus carrying over 3.0h to the next month.
  • Ben Hutchings did 24.0h (out of 16.5h assigned and 7.0h from previous period).
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Emilio Pozuelo Monfort did 24.0h (out of 21.0h assigned and 2.5h from previous period).
  • Guilhem Moulin did 20.0h (out of 20.0h assigned).
  • Lee Garrett did 25.0h (out of 0h assigned and 40.5h from previous period), thus carrying over 15.5h to the next month.
  • Markus Koschany did 23.5h (out of 23.5h assigned).
  • Ola Lundqvist did 13.0h (out of 0h assigned and 24.0h from previous period), thus carrying over 11.0h to the next month.
  • Roberto C. Sánchez did 13.5h (out of 9.75h assigned and 13.75h from previous period), thus carrying over 10.0h to the next month.
  • Santiago Ruano Rincón did 8.25h (out of 23.5h assigned), thus carrying over 15.25h to the next month.
  • Sylvain Beucler did 20.0h (out of 23.5h assigned), thus carrying over 3.5h to the next month.
  • Thorsten Alteholz did 14.0h (out of 14.0h assigned).
  • Tobias Frost did 16.0h (out of 16.0h assigned).
  • Utkarsh Gupta did 0.0h (out of 0h assigned and 25.5h from previous period), thus carrying over 25.5h to the next month.

Evolution of the situation In June, we released 40 DLAs. Notable security updates in June included mariadb-10.3, openssl, and golang-go.crypto. The mariadb-10.3 package was synchronized with the latest upstream maintenance release, version 10.3.39. The openssl package was patched to correct several flaws in certificate validation and in object identifier parsing. Finally, the golang-go.crypto package was updated to address several vulnerabilities, and several associated Go packages were rebuilt in order to properly incorporate the update.

LTS contributor Sylvain has been hard at work on some behind-the-scenes improvements to internal tooling and documentation. His efforts are helping to improve the efficiency of all LTS contributors and also the quality of their work, making our LTS updates more timely and of higher quality.

LTS contributor Lee Garrett began working on a testing framework specifically for Samba. Given the critical role which Samba plays in many deployments, the tremendous impact which regressions can have in those cases, and the unique testing requirements of Samba, this work will certainly result in increased confidence around our Samba updates for LTS.

LTS contributor Emilio Pozuelo Monfort has begun preparatory work for the upcoming Firefox ESR version 115 release. Firefox ESR (and the related Thunderbird ESR) requires special work to keep up to date in LTS. Mozilla do not release individual patches for CVEs, and our policy is to incorporate new ESR releases from Mozilla into LTS. Most updates are minor ones, but once a year Mozilla releases a major update as they move to a new major version for ESR. The update to a new major ESR version entails many related updates to toolchain and other packages. The preparations that Emilio has begun will ensure that once the 115 ESR release is made, updated packages will be available in LTS with minimal delay.

Another highlight of behind-the-scenes work is our Front Desk personnel. While we often focus on the work which results in published package updates, much work is also involved in reviewing new vulnerabilities and triaging them (i.e., determining whether they affect one or more packages in LTS and then determining the severity of those which are applicable). These intrepid contributors (Emilio Pozuelo Monfort, Markus Koschany, Ola Lundqvist, Sylvain Beucler, and Thorsten Alteholz for the month of June) reviewed dozens of vulnerabilities and made decisions about how those vulnerabilities should be dealt with.

Thanks to our sponsors

14 July 2023

Aurelien Jarno: Goodbye Debian GNU/kFreeBSD

Over the years, the Debian GNU/kFreeBSD port has gone through various phases. After many years of development, it was released as a technology preview with the release of Squeeze and eventually became an official architecture with the release of Wheezy. However, it ceased being an official architecture a couple of years later with the release of Jessie, although a jessie-kfreebsd suite was available in the official archive. Some years later, it was moved to the debian-ports archive, where it slowly regressed over the years. Development has now been completely stopped for over a year, and the port has been removed from the debian-ports archive. It's time to say goodbye! I feel a touch of nostalgia, as I was deeply involved in the Debian GNU/kFreeBSD port for nearly a decade, starting in 2006. There are many different reasons to like GNU/kFreeBSD, ranging from political to technical considerations. Personally, I liked the technical aspect, as the FreeBSD kernel, at that time, was ahead of the Linux kernel in terms of features: jails, ZFS, IPv6 stateful firewalling, and at a later point superpages. That said, it was way behind in hardware support, and to the best of my knowledge this remains unchanged. Meanwhile, Linux kernel development accelerated in the latter stages of the 2.6.x series and eventually closed the feature gap. At some point, I began to lose interest, and also to lack time, and slowly stepped away from its development.

11 July 2023

Simon Josefsson: Coping with non-free software in Debian

A personal reflection on how I moved from my Debian home to find two new homes with Trisquel and Guix for my own ethical computing, and while doing so settled my dilemma about further Debian contributions. Debian's contributions to the free software community have been tremendous. Debian was one of the early distributions in the 1990s that combined the GNU tools (compiler, linker, shell, editor, and a set of Unix tools) with the Linux kernel and published a free software operating system. Back then there was little guidance on how to publish free software binaries, let alone entire operating systems. There was a lack of established community processes and conflict resolution mechanisms, and a lack of guiding principles to motivate the work. The community building efforts that came about in parallel with the technical work have resulted in a steady flow of releases over the years. From the work of Richard Stallman and the Free Software Foundation (FSF) during the 1980s and early 1990s, there was at the time already an established definition of free software. Inspired by the free software definition, and a belief that a social contract helps to build a community and resolve conflicts, Debian's social contract (DSC) with the free software community was published in 1997. The DSC included the Debian Free Software Guidelines (DFSG), which directly led to the Open Source Definition.

Slackware 3.5" disks: one of my earlier Slackware install disk sets, kept for nostalgic reasons.
I was introduced to GNU/Linux through Slackware in the early 1990s (oh boy, those nights calculating XFree86 modelines and debugging sendmail.cf) and primarily used RedHat Linux during ca. 1995-2003. I switched to Debian during the Woody release cycle, when the original RedHat Linux was abandoned and Fedora launched. It was Debian's explicit community processes and infrastructure that attracted me. The slow nature of community processes was also what kept me using RedHat for so long: centralized and dogmatic decision processes often produce quick and effective outcomes, and in my opinion RedHat Linux was technically better than Debian ca. 1995-2003. However, the RedHat model was not sustainable, and it resulted in the RedHat vs Fedora split. Debian caught up, and reached technical stability once its community processes had been grounded.

I started participating in the Debian community around late 2006. My interpretation of Debian's social contract is that Debian should be a distribution of works licensed 100% under a free license. The Debian community has always been inclusive towards non-free software, creating the contrib/non-free sections and permitting use of the bug tracker to help resolve issues with non-free works. This is all explained in the social contract. There has always been a clear boundary between free and non-free work, and there has been a commitment that the Debian system itself would be 100% free. The concern that RedHat Linux was not 100% free software was not critical to me at the time: I primarily (and happily) ran GNU tools on Solaris, IRIX, AIX, OS/2, Windows etc. Running GNU tools on RedHat Linux was an improvement, and I hadn't realized it was possible to get rid of all non-free software on my own primary machine. Debian realized that goal for me. I've been a believer in that model ever since. I can use Solaris, macOS, Android etc. knowing that I have the option of using a 100% free Debian.

While the inclusive approach towards non-free software invites and deserves criticism (some argue that being inclusive to non-inclusive behavior is a bad idea), I believe that Debian's approach was a successful survival technique: by being inclusive towards, and a compromise between, the free and non-free communities, Debian has been able to stay relevant and contribute to both environments. If Debian had not served and contributed to the free community, I believe free software people would have stopped contributing. If Debian had rejected non-free works completely, I don't think the successful Ubuntu distribution would have been based on Debian.

I wrote the majority of the text above back in September 2022, intending to post it as a way to argue for my proposal to maintain the status quo within Debian. I didn't post it because I felt I was saying the obvious, and the obvious does not need to be repeated, and the rest of the post was just me going down memory lane. The Debian project was a sustainable producer of a 100% free OS up until Debian 11 bullseye. In the resolution on non-free firmware, the community decided to leave the model that had resulted in a 100% free Debian for so long. The goal of Debian is no longer to publish a 100% free operating system; instead this was added: "The Debian official media may include firmware". Indeed, the Debian 12 bookworm release has confirmed that this is not merely an optional possibility.
The Debian community could have published a 100% free Debian in parallel with the non-free Debian, and still been consistent with their newly adopted policy, but chose not to. The result is that Debian's policies are not consistent with their actions. It doesn't make sense to claim that Debian is 100% free when the Debian installer contains non-free software. Actions speak louder than words, so I'm left reading the policies as well-intended prose that is no longer used for guidance, but for the peace of mind of people living in ivory towers. And to attract funding, I suppose.

So how to deal with this, on a personal level? I did not have an answer to that back in October 2022 after the vote. It wasn't clear to me that I would ever want to contribute to Debian under the new social contract that promoted non-free software. I went on vacation from any Debian work. Meanwhile, Debian 12 bookworm was released, confirming my fears. I kept coming back to this text, and my only take-away was that it would be unethical for me to use Debian on my machines. Letting actions speak for themselves, I switched to PureOS on my main laptop during October, barely noticing any difference since it is based on Debian 11 bullseye.

Back in December, I bought a new laptop and tried Trisquel and Guix on it, as they promise a migration path towards ppc64el that PureOS does not. While I pondered how to approach my modest Debian contributions, I set out to learn Trisquel and gained trust in it. I migrated one Debian machine after another to Trisquel, and started to use Guix on others. Migration was easy because Trisquel is based on Ubuntu, which is based on Debian. Using Guix has its challenges, but I enjoy its coherent, documented environment. All of my essential self-hosted servers (VM hosts, DNS, e-mail, WWW, Nextcloud, CI/CD builders, backup etc.) use Trisquel or Guix now. I've migrated many GitLab CI/CD rules to use Trisquel instead of Debian, to have a more ethical computing base for software development and deployment. I wish there were official Guix docker images around.

Time has passed, and when I now think about any Debian contributions, I'm a little less muddled by my disappointment over the exclusion of a 100% free Debian. I realize that today I can use Debian in the same way that I use macOS, Android, RHEL or Ubuntu. And what prevents me from contributing to free software on those platforms? So I will make the occasional Debian contribution again, knowing that it will also indirectly improve Trisquel. To avoid having to install Debian, I need a development environment in Trisquel that allows me to build Debian packages. I have found a recipe for doing this:
# System commands:
sudo apt-get install debhelper git-buildpackage debian-archive-keyring
sudo wget -O /usr/share/debootstrap/scripts/debian-common https://sources.debian.org/data/main/d/debootstrap/1.0.128%2Bnmu2/scripts/debian-common
sudo wget -O /usr/share/debootstrap/scripts/sid https://sources.debian.org/data/main/d/debootstrap/1.0.128%2Bnmu2/scripts/sid
# Run once to create build image:
DIST=sid git-pbuilder create --mirror http://deb.debian.org/debian/ --debootstrapopts "--exclude=usr-is-merged" --basepath /var/cache/pbuilder/base-sid.cow
# Run in a directory with debian/ to build a package:
gbp buildpackage --git-pbuilder --git-dist=sid
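
(For the curious: git-pbuilder defaults to cowbuilder, so the create step builds a copy-on-write sid chroot once, the downloaded debootstrap scripts teach the host's debootstrap about Debian suites, and each subsequent build then runs in a disposable copy of that chroot.)
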
How to sustainably deliver a 100% free binary software distribution seems like an open question, and the challenges are not all that different compared to the 1990s or early 2000s. I'm hoping Debian will come back to providing a 100% free platform, but my fear is that Debian will compromise even further on the free software ideals rather than the opposite. With arguments similar to those used to add the non-free firmware, Debian could compromise the free software spirit of the Linux boot process (e.g., non-free boot images signed by Debian) and media handling (e.g., web browsers and DRM), as Debian has already done with appstore-like functionality for non-free software (Python pip). To learn about other freedom issues in Debian packaging, browsing Trisquel's helper scripts may enlighten you.

Debian's setback and the recent setback for RHEL-derived distributions are sad, and it will be a challenge for these communities to find internally consistent coherency going forward. I wish them the best of luck, as Debian and RHEL are important for the wider free software ecosystem. Let's see how the communities around Trisquel, Guix and the other FSDG distributions evolve in the future. The situation for free software today appears better than it was years ago regardless of Debian's and RHEL's setbacks, though, which is important to remember! I don't recall being able to install a 100% free OS on a modern laptop and a modern server as easily as I am able to do today. Happy Hacking!

Addendum 22 July 2023: The original title of this post was Coping with non-free Debian, and there was a thread about it that included feedback on the title. I do agree that my initial title was confrontational, and I've changed it to the more specific Coping with non-free software in Debian. I do appreciate all the fine free software that goes into Debian, and hope that this will continue and improve, although I have doubts given the opinions expressed by the majority of developers.

For the philosophically inclined, it is interesting to think about what it means to say that a compilation of software is freely licensed. At what point does a compilation of software deserve the label free vs non-free? Windows probably contains some software that is published as free software; let's say Windows is 1% free. Apple authors a lot of free software (as a tangent, Apple probably produces more free software than Debian as an organization produces); let's say macOS contains 20% free software. Solaris (or some still-maintained derivative like OpenIndiana) is mostly freely licensed these days, isn't it? Let's say it is 80% free. Ubuntu and RHEL push that closer to, let's say, 95% free software. Debian used to be 100% but is now slightly less, at maybe 99%. Trisquel and Guix are at 100%. At what point is it reasonable to call a compilation free? Does Debian deserve to be called freely licensed? Does macOS? Is it even possible to use these labels for compilations in any meaningful way? All numbers are just taken from thin air; it isn't even clear how this could be measured (binary bytes? lines of code? CPU cycles? etc.). The caveat about license review mistakes applies. I ignore Debian's own claim that Debian is 100% free software, which I believe is inconsistent and no longer true under any reasonable objective analysis. It was not true before the firmware vote either, since Debian ships with non-free blobs in the Linux kernel, for example.

Matthew Garrett: Roots of Trust are difficult

The phrase "Root of Trust" turns up at various points in discussions about verified boot and measured boot, and to a first approximation nobody is able to give you a coherent explanation of what it means[1]. The Trusted Computing Group has a fairly wordy definition, but (a) it's a lot of words and (b) I don't like it, so instead I'm going to start by defining a root of trust as "A thing that has to be trustworthy for anything else on your computer to be trustworthy".

(An aside: when I say "trustworthy", it is very easy to interpret this in a cynical manner and assume that "trust" means "trusted by someone I do not necessarily trust to act in my best interest". I want to be absolutely clear that when I say "trustworthy" I mean "trusted by the owner of the computer", and that as far as I'm concerned selling devices that do not allow the owner to define what's trusted is an extremely bad thing in the general case)

Let's take an example. In verified boot, a cryptographic signature of a component is verified before it's allowed to boot. A straightforward verified boot implementation has the firmware verify the signature on the bootloader or kernel before executing it. In this scenario, the firmware is the root of trust - it's the first thing that makes a determination about whether something should be allowed to run or not[2]. As long as the firmware behaves correctly, and as long as there aren't any vulnerabilities in our boot chain, we know that we booted an OS that was signed with a key we trust.
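
To make that concrete, here's a minimal sketch of the check a verified boot stage performs before handing off control. This is illustrative Python using the third-party cryptography library, not any real firmware interface; verify_next_stage, baked_in_key and the image/signature names are all made up for the example:

# Minimal sketch of a verified-boot hand-off check
# (illustrative only, not a real firmware API).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

def verify_next_stage(trusted_public_key, image: bytes, signature: bytes) -> bool:
    """Return True only if `image` carries a valid signature from the trusted key."""
    try:
        # RSA PKCS#1 v1.5 over SHA-256, as one plausible choice of scheme
        trusted_public_key.verify(signature, image,
                                  padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

# Conceptually, the firmware (the root of trust here) then does:
#   if verify_next_stage(baked_in_key, bootloader_image, bootloader_sig):
#       jump_to(bootloader_image)
#   else:
#       halt()

Everything rests on the trusted key being immutable and on this check actually having run, which is exactly the problem the rest of the post chases.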

But what guarantees that the firmware behaves correctly? What if someone replaces our firmware with firmware that trusts different keys, or hot-patches the OS as it's booting it? We can't just ask the firmware whether it's trustworthy - trustworthy firmware will say yes, but the thing about malicious firmware is that it can just lie to us (either directly, or by modifying the OS components it boots to lie instead). This is probably not sufficiently trustworthy!

Ok, so let's have the firmware be verified before it's executed. On Intel this is "Boot Guard", on AMD this is "Platform Secure Boot", everywhere else it's just "Secure Boot". Code on the CPU (either in ROM or signed with a key controlled by the CPU vendor) verifies the firmware[3] before executing it. Now the CPU itself is the root of trust, and, well, that seems reasonable - we have to place trust in the CPU, otherwise we can't actually do computing. We can now say with a reasonable degree of confidence (again, in the absence of vulnerabilities) that we booted an OS that we trusted. Hurrah!

Except. How do we know that the CPU actually did that verification? CPUs are generally manufactured without verification being enabled - different system vendors use different signing keys, so those keys can't be installed in the CPU at CPU manufacture time, and vendors need to do code development without signing everything so you can't require that keys be installed before a CPU will work. So, out of the box, a new CPU will boot anything without doing verification[4], and development units will frequently have no verification.

As a device owner, how do you tell whether or not your CPU has this verification enabled? Well, you could ask the CPU, but if you're doing that on a device that booted a compromised OS then maybe it's just hotpatching your OS so when you do that you just get RET_TRUST_ME_BRO even if the CPU is desperately waving its arms around trying to warn you it's a trap. This is, unfortunately, a problem that's basically impossible to solve using verified boot alone - if any component in the chain fails to enforce verification, the trust you're placing in the chain is misplaced and you are going to have a bad day.

So how do we solve it? The answer is that we can't simply ask the OS, we need a mechanism to query the root of trust itself. There's a few ways to do that, but fundamentally they depend on the ability of the root of trust to provide proof of what happened. This requires that the root of trust be able to sign (or cause to be signed) an "attestation" of the system state, a cryptographically verifiable representation of the security-critical configuration and code. The most common form of this is called "measured boot" or "trusted boot", and involves generating a "measurement" of each boot component or configuration (generally a cryptographic hash of it), and storing that measurement somewhere. The important thing is that it must not be possible for the running OS (or any pre-OS component) to arbitrarily modify these measurements, since otherwise a compromised environment could simply go back and rewrite history. One frequently used solution to this is to segregate the storage of the measurements (and the attestation of them) into a separate hardware component that can't be directly manipulated by the OS, such as a Trusted Platform Module. Each part of the boot chain measures relevant security configuration and the next component before executing it and sends that measurement to the TPM, and later the TPM can provide a signed attestation of the measurements it was given. So, an SoC that implements verified boot should create a measurement telling us whether verification is enabled - and, critically, should also create a measurement if it isn't. This is important because failing to measure the disabled state leaves us with the same problem as before; someone can replace the mutable firmware code with code that creates a fake measurement asserting that verified boot was enabled, and if we trust that we're going to have a bad time.
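
The append-only property is worth spelling out: a TPM PCR can't be written directly, only "extended", folding each new measurement into a running hash of everything measured before it. A minimal sketch of the mechanism (plain Python with hashlib; the component strings are invented for illustration):

import hashlib

def pcr_extend(pcr_value: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new PCR = SHA-256(old PCR || measurement).

    Software that runs later can only add to the chain, never rewrite it.
    """
    return hashlib.sha256(pcr_value + measurement).digest()

# Each boot stage measures relevant policy and the next component
# before executing it. PCRs start zeroed at reset.
pcr = bytes(32)
for component in (b"verified-boot-policy: enabled",
                  b"bootloader-image-bytes"):
    pcr = pcr_extend(pcr, hashlib.sha256(component).digest())

# The final value is reproducible only if every stage was exactly what
# we expected; the TPM can later sign an attestation containing it.
print(pcr.hex())

Note that the policy itself is extended alongside the code, which is what lets a verifier distinguish "verification was on" from "someone claims verification was on".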

(Of course, simply measuring the fact that verified boot was enabled isn't enough - what if someone replaces the CPU with one that has verified boot enabled, but trusts keys under their control? We also need to measure the keys that were used in order to ensure that the device trusted only the keys we expected, otherwise again we're going to have a bad time)

So, an effective root of trust needs to:

1) Create a measurement of its verified boot policy before running any mutable code
2) Include the trusted signing key in that measurement
3) Actually perform that verification before executing any mutable code

and from then on we're in the hands of the verified code actually being trustworthy, and it's probably written in C so that's almost certainly false, but let's not try to solve every problem today.

Does anything do this today? As far as I can tell, Intel's Boot Guard implementation does. Based on publicly available documentation I can't find any evidence that AMD's Platform Secure Boot does (it does the verification, but it doesn't measure the policy beforehand, so it seems spoofable), but I could be wrong there. I haven't found any general purpose non-x86 parts that do, but this is in the realm of things that SoC vendors seem to believe is some sort of value-add that can only be documented under NDAs, so please do prove me wrong. And then there are add-on solutions like Titan, where we delegate the initial measurement and validation to a separate piece of hardware that measures the firmware as the CPU reads it, rather than requiring that the CPU do it.

But, overall, the situation isn't great. On many platforms there's simply no way to prove that you booted the code you expected to boot. People have designed elaborate security implementations that can be bypassed in a number of ways.

[1] In this respect it is extremely similar to "Zero Trust"
[2] This is a bit of an oversimplification - once we get into dynamic roots of trust like Intel's TXT this story gets more complicated, but let's stick to the simple case today
[3] I'm kind of using "firmware" in an x86ish manner here, so for embedded devices just think of "firmware" as "the first code executed out of flash and signed by someone other than the SoC vendor"
[4] In the Intel case this isn't strictly true, since the keys are stored in the motherboard chipset rather than the CPU, and so taking a board with Boot Guard enabled and swapping out the CPU won't disable Boot Guard because the CPU reads the configuration from the chipset. But many mobile Intel parts have the chipset in the same package as the CPU, so in theory swapping out that entire package would disable Boot Guard. I am not good enough at soldering to demonstrate that.

