Search Results: "alex"

15 April 2024

Andreas Rönnquist: Status update for Allegro packaging in Debian

I have mailed a Debian bug on allegro4.4 describing my reasoning regarding the allegro libraries. In short: allegro4.4 is pretty much dead upstream, and my interest was basically to keep alex4 (which is cool) in Debian, but since it migrated to non-free, my interest in allegro4.4 has waned. So, if anybody would still like to see allegro4.4 in Debian, please step up now and help out. Since it is dead upstream, my reasoning is that it is better to remove it from Debian if no maintainer who wants to help steps up. Previously Tobias Hansen helped out, but it is now 8 (!) years since his last upload of either package. (Please don't interpret this as judgement; I am very happy for the help he has provided and all the work he has done on the packages.) Allegro5 is another deal: it is still active upstream, and I have kept it up to date in Debian. While I have held the latest upload back a short while because of the time_t transition, it will come sooner or later. There I am also waiting on a final decision on this bug from upstream. Other than that, allegro5 is in a very good state, and I will keep maintaining it as long as I can. But help would of course be appreciated on allegro5 too.

11 April 2024

Reproducible Builds: Reproducible Builds in March 2024

Welcome to the March 2024 report from the Reproducible Builds project! In our reports, we attempt to outline what we have been up to over the past month, as well as mentioning some of the important things happening more generally in software supply-chain security. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website. Table of contents:
  1. Arch Linux minimal container userland now 100% reproducible
  2. Validating Debian's build infrastructure after the XZ backdoor
  3. Making Fedora Linux (more) reproducible
  4. Increasing Trust in the Open Source Supply Chain with Reproducible Builds and Functional Package Management
  5. Software and source code identification with GNU Guix and reproducible builds
  6. Two new Rust-based tools for post-processing determinism
  7. Distribution work
  8. Mailing list highlights
  9. Website updates
  10. Delta chat clients now reproducible
  11. diffoscope updates
  12. Upstream patches
  13. Reproducibility testing framework

Arch Linux minimal container userland now 100% reproducible In remarkable news, Reproducible Builds developer kpcyrd reported that the Arch Linux minimal container userland is now 100% reproducible after work by developers dvzv and Foxboron on the one remaining package. This represents a real-world, widely-used Linux distribution being reproducible. Their post, which kpcyrd suffixed with the question "now what?", continues on to outline some potential next steps, including validating whether the container image itself could be reproduced bit-for-bit. The post, which was itself a followup to an Arch Linux update earlier in the month, generated a significant number of replies.

Validating Debian's build infrastructure after the XZ backdoor From our mailing list this month, Vagrant Cascadian wrote about being asked to perform concrete reproducibility checks for recent Debian security updates, in an attempt to gain some confidence in Debian's build infrastructure, given that builds had been performed in environments running the high-profile vulnerable version of XZ. Vagrant reports (with some caveats):
So far, I have not found any reproducibility issues; everything I tested I was able to get to build bit-for-bit identical with what is in the Debian archive.
That is to say, reproducibility testing permitted Vagrant and Debian to claim with some confidence that builds performed when this vulnerable version of XZ was installed were not interfered with.
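As a rough illustration of what such a check involves (a hedged sketch rather than Vagrant's exact procedure; the package name, version and paths are placeholders), one can rebuild a package in a clean environment and compare the result against the archive:
# Fetch the source and the corresponding official binary package
apt-get source hello=2.10-3
apt-get download hello=2.10-3
# Rebuild from source in a clean chroot
sbuild -d unstable hello_2.10-3.dsc
# Identical checksums mean a bit-for-bit identical rebuild...
sha256sum hello_2.10-3_amd64.deb rebuilt/hello_2.10-3_amd64.deb
# ...and diffoscope will explain any difference in depth
diffoscope hello_2.10-3_amd64.deb rebuilt/hello_2.10-3_amd64.deb
In practice, recreating the original build environment (exact dependency versions, build path and so on) from a package's .buildinfo file is the hard part; tooling such as debrebuild aims to automate that.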

Making Fedora Linux (more) reproducible In March, Davide Cavalca gave a talk at the 2024 Southern California Linux Expo (aka SCALE 21x) about the ongoing effort to make the Fedora Linux distribution reproducible. Documented in more detail on Fedora's website, the talk touched on topics such as the specifics of implementing reproducible builds in Fedora, the challenges encountered, the current status and what's coming next. (YouTube video)

Increasing Trust in the Open Source Supply Chain with Reproducible Builds and Functional Package Management Julien Malka published a brief but interesting paper in the HAL open archive on Increasing Trust in the Open Source Supply Chain with Reproducible Builds and Functional Package Management:
Functional package managers (FPMs) and reproducible builds (R-B) are technologies and methodologies that are conceptually very different from the traditional software deployment model, and that have promising properties for software supply chain security. This thesis aims to evaluate the impact of FPMs and R-B on the security of the software supply chain and propose improvements to the FPM model to further improve trust in the open source supply chain. (PDF)
Julien's paper poses a number of research questions on how the model of distributions such as GNU Guix and NixOS can be leveraged to "further improve the safety of the software supply chain", etc.

Software and source code identification with GNU Guix and reproducible builds In a long line of commendably detailed blog posts, Ludovic Courtès, Maxim Cournoyer, Jan Nieuwenhuizen and Simon Tournier have together published two interesting posts on the GNU Guix blog this month. In early March, they wrote about software and source code identification and how that might be performed using Guix, rhetorically posing the questions: "What does it take to identify software? How can we tell what software is running on a machine to determine, for example, what security vulnerabilities might affect it?" Later in the month, Ludovic Courtès wrote a solo post describing adventures on the quest for long-term reproducible deployment. Ludovic's post touches on GNU Guix's aim to support "time travel", the ability to reliably (and reproducibly) revert to an earlier point in time, employing the iconic image of Harold Lloyd hanging off the clock in Safety Last! (1923) to poetically illustrate both the slapstick nature of modern technology and the gymnastics required to navigate hazards of our own making.

Two new Rust-based tools for post-processing determinism Zbigniew Jędrzejewski-Szmek announced add-determinism, a work-in-progress reimplementation of the Reproducible Builds project's own strip-nondeterminism tool in the Rust programming language, intended to be used as a post-processor in RPM-based distributions such as Fedora. In addition, Yossi Kreinin published a blog post titled "refix: fast, debuggable, reproducible builds" that describes a tool that post-processes binaries in such a way that they remain debuggable with gdb, etc. Yossi's post details the motivation and techniques behind the (fast) performance of the tool.

Distribution work In Debian this month, since the testing framework no longer varies the build path, James Addison performed a bulk downgrade of the bug severity for issues filed with a level of normal to a new level of wishlist. In addition, 28 reviews of Debian packages were added, 38 were updated and 23 were removed this month, adding to our ever-growing knowledge about identified issues. As part of this effort, a number of issue types were updated, including Chris Lamb adding a new ocaml_include_directories toolchain issue [ ], and James Addison adding a new filesystem_order_in_java_jar_manifest_mf_include_resource issue [ ] and updating the random_uuid_in_notebooks_generated_by_nbsphinx issue to reference a relevant discussion thread [ ]. In addition, Roland Clobus posted his 24th status update on reproducible Debian ISO images. Roland highlights that the images for Debian unstable often cannot be generated due to changes in that distribution related to the 64-bit time_t transition. Lastly, Bernhard M. Wiedemann posted another monthly update for his reproducibility work in openSUSE.

Mailing list highlights Elsewhere on our mailing list this month:

Website updates A number of improvements were made to our website this month, including:
  • Pol Dellaiera noticed the frequent need to correctly cite the website itself in academic work. To facilitate easier citation across multiple formats, Pol contributed a Citation File Format (CFF) file. As a result, an export in BibTeX format is now available in the Academic Publications section. Pol encourages community contributions to further refine the CITATION.cff file. Pol also added a substantial new section to the buy-in page documenting the role of Software Bills of Materials (SBOMs) and ephemeral development environments. [ ][ ]
  • Bernhard M. Wiedemann added a new commandments page to the documentation [ ][ ] and fixed some incorrect YAML elsewhere on the site [ ].
  • Chris Lamb added three recent academic papers to the publications page of the website. [ ]
  • Mattia Rizzolo and Holger Levsen collaborated to add Infomaniak as a sponsor of amd64 virtual machines. [ ][ ][ ]
  • Roland Clobus updated the stable outputs page, dropping version numbers from Python documentation pages [ ] and noting that Python's set data structure is also affected by the PYTHONHASHSEED functionality. [ ]

Delta chat clients now reproducible Delta Chat, an open source messaging application that can work over email, announced this month that the Rust-based core library underlying the Delta Chat application is now reproducible.

diffoscope diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb uploaded versions 259, 260 and 261 to Debian and made the following additional changes:
  • New features:
    • Add support for the zipdetails tool from the Perl distribution. Thanks to Fay Stegerman and Larry Doolittle et al. for the pointer and thread about this tool. [ ]
  • Bug fixes:
    • Don't identify Redis database dumps as GNU R database files based simply on their filename. [ ]
    • Add a missing call to File.recognizes so we actually perform the filename check for GNU R data files. [ ]
    • Don't crash if we encounter an .rdb file without an equivalent .rdx file. (#1066991)
    • Correctly check for 7z being available and not lz4 when testing 7z. [ ]
    • Prevent a traceback when comparing a contentful .pyc file with an empty one. [ ]
  • Testsuite improvements:
    • Fix .epub tests after supporting the new zipdetails tool. [ ]
    • Don't use parentheses within test-skipping messages, as PyTest adds its own parentheses. [ ]
    • Factor out Python version checking in test_zip.py. [ ]
    • Skip some Zip-related tests under Python 3.10.14, as a potential regression may have been backported to the 3.10.x series. [ ]
    • Actually test 7z support in the test_7z set of tests, not the lz4 functionality. (Closes: reproducible-builds/diffoscope#359). [ ]
In addition, Fay Stegerman updated diffoscope's monkey patch for supporting the unusual Mozilla ZIP file format after Python's zipfile module changed to detect potentially insecure overlapping entries within .zip files. (#362) Chris Lamb also updated the trydiffoscope command-line client, dropping a build-dependency on the deprecated python3-distutils package to fix Debian bug #1065988 [ ], taking a moment to also refresh the packaging to the latest Debian standards [ ]. Finally, Vagrant Cascadian submitted an update for diffoscope version 260 in GNU Guix. [ ]
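For readers unfamiliar with the tool, a minimal invocation looks something like this (the filenames are placeholders):
# Compare two builds of the same package; an HTML report is handy for sharing
diffoscope --html report.html first.deb second.deb
# The exit status is non-zero when differences are found, so it can gate scripts
diffoscope first.deb second.deb && echo "bit-for-bit identical"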

Upstream patches This month, we wrote a large number of patches, including: Bernhard M. Wiedemann used reproducibility tooling to detect and fix packages that made changes in their %check section, thus failing when built with the --no-checks option. Only half of all openSUSE packages have been tested so far, but a large number of bugs were filed, including ones against caddy, exiv2, gnome-disk-utility, grisbi, gsl, itinerary, kosmindoormap, libQuotient, med-tools, plasma6-disks, pspp, python-pypuppetdb, python-urlextract, rsync, vagrant-libvirt and xsimd. Similarly, Jean-Pierre De Jesus DIAZ employed reproducible builds techniques in order to test a proposed refactor of the ath9k-htc-firmware package. As the change produced bit-for-bit identical binaries to the previously shipped pre-built binaries:
I don't have the hardware to test this firmware, but the build produces the same hashes for the firmware so it's safe to say that the firmware should keep working.

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In March, an enormous number of changes were made by Holger Levsen:
  • Debian-related changes:
    • Sleep less after a so-called "404" package state has occurred. [ ]
    • Schedule package builds more often. [ ][ ]
    • Regenerate all our HTML indexes every hour, but only every 12h for the released suites. [ ]
    • Create and update unstable and experimental base systems on armhf again. [ ][ ]
    • Don't reschedule so many depwait packages due to the current size of the i386 architecture queue. [ ]
    • Redefine our scheduling thresholds and amounts. [ ]
    • Schedule untested packages with a higher priority, otherwise slow architectures cannot keep up with the experimental distribution growing. [ ]
    • Only create the stats_buildinfo.png graph once per day. [ ][ ]
    • Reproducible Debian dashboard: refactoring, update several more static stats only every 12h. [ ]
    • Document how to use systemctl with new systemd-based services. [ ]
    • Temporarily disable armhf and i386 continuous integration tests in order to get some stability back. [ ]
    • Use the deb.debian.org CDN everywhere. [ ]
    • Remove the rsyslog logging facility on bookworm systems. [ ]
    • Add zst to the list of packages which are false-positive diskspace issues. [ ]
    • Detect failures to bootstrap Debian base systems. [ ]
  • Arch Linux-related changes:
    • Temporarily disable builds because the pacman package manager is broken. [ ][ ]
    • Split reproducible_html_live_status and split the scheduling timing. [ ][ ][ ]
    • Improve handling when database is locked. [ ][ ]
  • Misc changes:
    • Show failed services that require manual cleanup. [ ][ ]
    • Integrate two new Infomaniak nodes. [ ][ ][ ][ ]
    • Improve IRC notifications for artifacts. [ ]
    • Run diffoscope in different systemd slices. [ ]
    • Run the node health check more often, as it can now repair some issues. [ ][ ]
    • Also include the string "Bot" in the userAgent for Git. (Re: #929013). [ ]
    • Document increased tmpfs size on our OUSL nodes. [ ]
    • Disable memory accounting for the reproducible_build service. [ ][ ]
    • Allow 10 times as many open files for the Jenkins service. [ ]
    • Set OOMPolicy=continue and OOMScoreAdjust=-1000 for both the Jenkins and the reproducible_build service. [ ]
Mattia Rizzolo also made the following changes:
  • Debian-related changes:
    • Define a systemd slice to group all relevant services. [ ][ ]
    • Add a bunch of quotes in scripts to assuage the shellcheck tool. [ ]
    • Add stats on how many packages have been built today so far. [ ]
    • Instruct systemd-run to handle diffoscope's exit codes specially. [ ]
    • Prefer the pgrep tool over grepping the output of ps. [ ]
    • Re-enable a couple of i386 and armhf architecture builders. [ ][ ]
    • Fix some stylistic issues flagged by the Python flake8 tool. [ ]
    • Cease scheduling Debian unstable and experimental on the armhf architecture due to the time_t transition. [ ]
    • Start a few more i386 & armhf workers. [ ][ ][ ]
    • Temporarily skip pbuilder updates in the unstable distribution, but only on the armhf architecture. [ ]
  • Other changes:
    • Perform some large-scale refactoring on how the systemd service operates. [ ][ ]
    • Move the list of workers into a separate file so it's accessible to a number of scripts. [ ]
    • Refactor the powercycle_x86_nodes.py script to use the new IONOS API and its new Python bindings. [ ]
    • Also fix nph-logwatch after the worker changes. [ ]
    • Do not install the stunnel tool anymore; it shouldn't be needed by anything anymore. [ ]
    • Move temporary directories related to Arch Linux into a single directory for clarity. [ ]
    • Update the arm64 architecture host keys. [ ]
    • Use a common Postfix configuration. [ ]
The following changes were also made by:
  • Jan-Benedict Glaw:
    • Initial work to clean up a messy NetBSD-related script. [ ][ ]
  • Roland Clobus:
    • Show the installer log if the installer fails to build. [ ]
    • Avoid the minus character (i.e. -) in a variable in order to allow for tags in openQA. [ ]
    • Update the schedule of Debian live image builds. [ ]
  • Vagrant Cascadian:
    • Maintenance on the virt* nodes is completed so bring them back online. [ ]
    • Use the fully qualified domain name in configuration. [ ]
Node maintenance was also performed by Holger Levsen, Mattia Rizzolo [ ][ ] and Vagrant Cascadian [ ][ ][ ][ ].

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:

24 March 2024

Jacob Adams: Regular Reboots

Uptime is often considered a measure of system reliability, an indication that the running software is stable and can be counted on. However, this hides the insidious build-up of state throughout the system as it runs, the slow drift from the expected to the strange. As Nolan Lawson highlights in an excellent post entitled "Programmers are bad at managing state", state is the most challenging part of programming. It's why "did you try turning it off and on again" is a classic tech support response to any problem. In addition to the problem of state, installing regular updates periodically requires a reboot, even if the rest of the process is automated through a tool like unattended-upgrades. For my personal homelab, I manage a handful of different machines running various services. I used to just schedule a day to update and reboot all of them, but that got very tedious very quickly. I then moved the reboot to a cronjob, and then recently to a systemd timer and service. I figure that laying out my path to better management of this might help others, and will almost certainly lead to someone telling me a better way to do this. UPDATE: Turns out there's another option for better systemd cron integration. See systemd-cron below.

Stage One: Reboot Cron The first, and easiest, approach is a simple cron job. Just adding the following line to /var/spool/cron/crontabs/root [1] is enough to get your machine to reboot once a month [2] on the 6th at 8:00 AM [3]:
0 8 6 * * reboot
I had this configured for many years and it worked well, but you have no indication as to whether it succeeds, short of checking your uptime regularly yourself.

Stage Two: Reboot systemd Timer The next evolution of this approach for me was to use a systemd timer. I created a regular-reboot.timer with the following contents:
[Unit]
Description=Reboot on a Regular Basis
[Timer]
Unit=regular-reboot.service
OnBootSec=1month
[Install]
WantedBy=timers.target
This timer will trigger the regular-reboot.service systemd unit when the system reaches one month of uptime. I've seen some guides to creating timer units recommend adding a Wants=regular-reboot.service to the [Unit] section, but this has the consequence of running that service every time the timer starts; in this case that would just reboot your system on startup, which is not what you want. Care needs to be taken to use the OnBootSec directive instead of OnCalendar or any of the other time specifications, as otherwise your system could reboot, discover it's still within the expected window, and reboot again. With OnBootSec your system will not have that problem. Technically, this same problem could have occurred with the cronjob approach, but in practice it never did, as the systems took long enough to come back up that they were no longer within the expected window for the job. I then added the regular-reboot.service:
[Unit]
Description=Reboot on a Regular Basis
Wants=regular-reboot.timer
[Service]
Type=oneshot
ExecStart=shutdown -r 02:45
You'll note that this service actually schedules a specific reboot time via the shutdown command instead of just immediately rebooting. This is a bit of a hack, needed because I can't control exactly when the timer runs when using OnBootSec. This way different systems have different reboot times, so that everything doesn't just reboot and fail all at once. Were something to fail to come back up, I would have some time to fix it, as each machine has a few hours between scheduled reboots. Once you have both files in place, you'll simply need to reload the configuration and then enable and start the timer unit:
systemctl daemon-reload
systemctl enable --now regular-reboot.timer
You can then check when it will fire next:
# systemctl status regular-reboot.timer
  regular-reboot.timer - Reboot on a Regular Basis
     Loaded: loaded (/etc/systemd/system/regular-reboot.timer; enabled; preset: enabled)
     Active: active (waiting) since Wed 2024-03-13 01:54:52 EDT; 1 week 4 days ago
    Trigger: Fri 2024-04-12 12:24:42 EDT; 2 weeks 4 days left
   Triggers:   regular-reboot.service
Mar 13 01:54:52 dorfl systemd[1]: Started regular-reboot.timer - Reboot on a Regular Basis.

Sidenote: Replacing all Cron Jobs with systemd Timers More generally, I've now replaced all cronjobs on my personal systems with systemd timer units, mostly because I can now actually track failures via prometheus-node-exporter. There are plenty of ways to hack cron support into the node exporter, but just moving to systemd units provides support for both failure tracking and logging, both of which make system administration much easier when things inevitably go wrong.
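For reference, one way to get that failure tracking is node_exporter's optional systemd collector (a sketch; the collector is disabled by default, and the flag and metric names are as in current node_exporter releases):
# Start node_exporter with the (non-default) systemd collector enabled
node_exporter --collector.systemd
# A Prometheus expression that fires when any unit has entered the failed state:
#   node_systemd_unit_state{state="failed"} == 1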

systemd-cron An alternative to converting everything by hand, if you happen to have a lot of cronjobs, is systemd-cron. It will turn each crontab and /etc/cron.* directory into automatic service and timer units. Thanks to Alexandre Detiste for letting me know about this project. I have few enough cron jobs that I've already converted them, but for anyone looking at a large number of jobs to convert, you'll want to check it out!
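On Debian, trying it out looks roughly like this (a sketch; the names of the generated units follow systemd-cron's own conventions):
sudo apt install systemd-cron
# Existing crontab entries and /etc/cron.* scripts now appear as timer units
systemctl list-timers | grep -i cron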

Stage Three: Monitor that it's working The final step here is to confirm that these units actually work, beyond just firing regularly. I now have the following rule in my prometheus-alertmanager rules:
  - alert: UptimeTooHigh
    expr: (time() - node_boot_time_seconds{job="node"}) / 86400 > 35
    annotations:
      summary: "Instance {{ $labels.instance }} Has Been Up Too Long!"
      description: "Instance {{ $labels.instance }} Has Been Up Too Long!"
This will trigger an alert any time I have a machine up for more than 35 days. This actually helped me track down one machine that I had forgotten to set up this new unit on [4].

Not everything needs to scale One of the most common fallacies programmers fall into is that we will jump to automating a solution before we stop and figure out how much time it would even save ("Is It Worth The Time?"). In taking a slow improvement route to solve this problem for myself, I've managed not to invest too much time [5] in worrying about this, while still achieving a meaningful improvement beyond my first approach of doing it all by hand.
  1. You could also add a line to /etc/crontab or drop a script into /etc/cron.monthly depending on your system.
  2. Why once a month? Mostly to avoid regular disruptions, but still be reasonably timely on updates.
  3. If you re looking to understand the cron time format I recommend crontab guru.
  4. In the long term I really should set up something like Ansible to automatically push fleet-wide changes like this, but with fewer machines than fingers this seems like overkill.
  5. Of course, by now writing about it I've probably doubled the amount of time I've spent thinking about this topic, but oh well...

9 March 2024

Reproducible Builds: Reproducible Builds in February 2024

Welcome to the February 2024 report from the Reproducible Builds project! In our reports, we try to outline what we have been up to over the past month as well as mentioning some of the important things happening in software supply-chain security.

Reproducible Builds at FOSDEM 2024 Core Reproducible Builds developer Holger Levsen presented in the main track at FOSDEM on Saturday 3rd February this year in Brussels, Belgium. That wasn't the only talk related to Reproducible Builds, however: please see our comprehensive FOSDEM 2024 news post for the full details and links.

Maintainer Perspectives on Open Source Software Security Bernhard M. Wiedemann spotted that a recent report entitled Maintainer Perspectives on Open Source Software Security, written by Stephen Hendrick and Ashwin Ramaswami of the Linux Foundation, sports an infographic which mentions that "56% of [polled] projects support reproducible builds".

Mailing list highlights From our mailing list this month:

Distribution work In Debian this month, 5 reviews of Debian packages were added, 22 were updated and 8 were removed, adding to Debian's knowledge about identified issues. A number of issue types were updated as well. [ ][ ][ ][ ] In addition, Roland Clobus posted his 23rd update on the status of reproducible ISO images on our mailing list. In particular, Roland helpfully summarised that "all major desktops build reproducibly with bullseye, bookworm, trixie and sid provided they are built for a second time within the same DAK run (i.e. [within] 6 hours)" and that there will likely be further work at a MiniDebCamp in Hamburg. Furthermore, Roland also responded in depth to a query about a previous report.
Fedora developer Zbigniew Jędrzejewski-Szmek announced a work-in-progress script called fedora-repro-build that attempts to reproduce an existing package within a koji build environment. Although the project's README file lists a number of fields that will always or almost always vary, and there is a non-zero list of other known issues, this is an excellent first step towards full Fedora reproducibility.
Jelle van der Waa introduced a new linter rule for Arch Linux packages in order to detect cache files leftover by the Sphinx documentation generator which are unreproducible by nature and should not be packaged. At the time of writing, 7 packages in the Arch repository are affected by this.
Elsewhere, Bernhard M. Wiedemann posted another monthly update for his reproducibility work in openSUSE.

diffoscope diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb uploaded versions 256, 257 and 258 to Debian and made the following additional changes:
  • Use a deterministic name instead of trusting gpg's use-embedded-filenames. Many thanks to Daniel Kahn Gillmor <dkg@debian.org> for reporting this issue and providing feedback. [ ][ ]
  • Don't error out with a traceback if we encounter struct.unpack-related errors when parsing Python .pyc files. (#1064973). [ ]
  • Don't try and compare rdb_expected_diff on non-GNU systems as %p formatting can vary, especially with respect to MacOS. [ ]
  • Fix compatibility with pytest 8.0. [ ]
  • Temporarily fix support for Python 3.11.8. [ ]
  • Use the 7zip package (over p7zip-full) after a Debian package transition. (#1063559). [ ]
  • Bump the minimum Black source code reformatter requirement to 24.1.1+. [ ]
  • Expand an older changelog entry with a CVE reference. [ ]
  • Make test_zip black clean. [ ]
In addition, James Addison contributed a patch to correctly parse the headers from diff(1) [ ][ ], thanks! And lastly, Vagrant Cascadian pushed updates in GNU Guix for diffoscope to versions 255, 256 and 258, and updated trydiffoscope to 67.0.6.

reprotest reprotest is our tool for building the same source code twice in different environments and then checking the binaries produced by each build for any differences (a usage sketch follows the list below). This month, Vagrant Cascadian made a number of changes, including:
  • Create a (working) proof of concept for enabling a specific number of CPUs. [ ][ ]
  • Consistently use 398 days for time variation rather than choosing randomly and update README.rst to match. [ ][ ]
  • Support a new --vary=build_path.path option. [ ][ ][ ][ ]
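By way of illustration, a typical reprotest invocation looks something like the following (a hedged sketch: the build command and artifact pattern are placeholders, and option syntax may differ between reprotest versions):
# Build twice, varying (almost) everything, and compare the named artifact
reprotest 'make all' 'build/output.bin'
# Or isolate a single variation, such as the build path
reprotest --vary=-all,+build_path 'make all' 'build/output.bin'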

Website updates A number of improvements were made to our website this month, including:

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework (available at tests.reproducible-builds.org) in order to check packages and other artifacts for reproducibility. In February, a number of changes were made by Holger Levsen:
  • Debian-related changes:
    • Temporarily disable upgrading/bootstrapping Debian unstable and experimental as they are currently broken. [ ][ ]
    • Use the 64-bit amd64 kernel on all i386 nodes; no more 686 PAE kernels. [ ]
    • Add an Erlang package set. [ ]
  • Other changes:
    • Grant Jan-Benedict Glaw shell access to the Jenkins node. [ ]
    • Enable debugging for NetBSD reproducibility testing. [ ]
    • Use /usr/bin/du --apparent-size in the Jenkins shell monitor. [ ]
    • Revert "reproducible nodes: mark osuosl2 as down". [ ]
    • Thanks again to Codethink, for they have doubled the RAM on our arm64 nodes. [ ]
    • Only set /proc/$pid/oom_score_adj to -1000 if it has not already been done. [ ]
    • Add the opemwrt-target-tegra and jtx task to the list of zombie jobs. [ ][ ]
Vagrant Cascadian also made the following changes:
  • Overhaul the handling of OpenSSH configuration files after updating from Debian bookworm. [ ][ ][ ]
  • Add two new armhf architecture build nodes, virt32z and virt64z, and insert them into the Munin monitoring. [ ][ ] [ ][ ]
In addition, Alexander Couzens updated the OpenWrt configuration in order to replace the tegra target with mpc85xx [ ], Jan-Benedict Glaw updated the NetBSD build script to use a separate $TMPDIR to mitigate out of space issues on a tmpfs-backed /tmp [ ] and Zheng Junjie added a link to the GNU Guix tests [ ]. Lastly, node maintenance was performed by Holger Levsen [ ][ ][ ][ ][ ][ ] and Vagrant Cascadian [ ][ ][ ][ ].

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:

31 January 2024

Bits from Debian: New Debian Developers and Maintainers (November and December 2023)

The following contributors got their Debian Developer accounts in the last two months: The following contributor was added as Debian Maintainer in the last two months: Congratulations!

9 January 2024

Louis-Philippe Véronneau: 2023 - A Musical Retrospective

I ended 2022 with a musical retrospective and very much enjoyed writing that blog post. As such, I have decided to do the same for 2023! From now on, this will probably be an annual thing :) Albums In 2023, I added 73 new albums to my collection, nearly 2 albums every three weeks! I listed them below in the order in which I acquired them. I purchased most of these albums when I could and borrowed the rest at libraries. If you want to browse through, I added links to the album covers pointing either to websites where you can buy them or to Discogs when digital copies weren't available. Once again this year, it seems that Punk (mostly Oi!) and Metal dominate my list, mostly fueled by Angry Metal Guy and the amazing Montréal Skinhead/Punk concert scene. Concerts A trend I started in 2022 was to go to as many concerts of artists I like as possible. I'm happy to report I went to around 80% more concerts in 2023 than in 2022! Looking back at my list, April was quite a busy month... Here are the concerts I went to in 2023: Although metalfinder continues to work as intended, I'm very glad to have discovered that the Montréal underground scene has departed from Facebook/Instagram and adopted en masse Gancio, a FOSS community agenda that supports ActivityPub. Our local instance, askapunk.net, is pretty much all I could ask for :) That's it for 2023!

31 December 2023

Chris Lamb: Favourites of 2023

This post should have marked the beginning of my yearly roundups of the favourite books and movies that I read and watched in 2023. However, due to coming down with a nasty bout of flu recently and other sundry commitments, I wasn't able to undertake writing the necessary four or five blog posts. In lieu of this, however, I will simply present my (unordered and unadorned) highlights for now. Do get in touch if this (or any of my previous posts) has spurred you into picking something up yourself.

Books

Peter Watts: Blindsight (2006) Reyner Banham: Los Angeles: The Architecture of Four Ecologies (2006) Joanne McNeil: Lurking: How a Person Became a User (2020) J. L. Carr: A Month in the Country (1980) Hilary Mantel: A Memoir of My Former Self: A Life in Writing (2023) Adam Higginbotham: Midnight in Chernobyl (2019) Tony Judt: Postwar: A History of Europe Since 1945 (2005) Tony Judt: Reappraisals: Reflections on the Forgotten Twentieth Century (2008) Peter Apps: Show Me the Bodies: How We Let Grenfell Happen (2021) Joan Didion: Slouching Towards Bethlehem (1968) Erik Larson: The Devil in the White City (2003)

Films Recent releases

Unenjoyable experiences included Alejandro Gómez Monteverde's Sound of Freedom (2023), Alex Garland's Men (2022) and Steven Spielberg's The Fabelmans (2022).
Older releases (Films released before 2022, and not including rewatches from previous years.) Distinctly unenjoyable watches included Ocean's Eleven (1960), El Topo (1970), Léolo (1992), Hotel Mumbai (2018), Bulworth (1998) and The Big Red One (1980).

29 December 2023

Ulrike Uhlig: How do kids conceive the internet? - part 4

Read all parts of the series Part 1 // Part 2 // Part 3 // Part 4 I've been wanting to write this post for over a year, but lacked energy and time. Before 2023 comes to an end, I want to close this series and share some more insights with you, and hopefully provide you with a smile here and there. For this round of interviews, four more kids around the ages of 8 to 13 were interviewed; 3 of them have a US background, and these 3 interviews were done by a friend who recorded them for me, thank you! As opposed to the previous interviews, these four kids have parents with a more technical professional background. And this seems to make a difference: even though none of these kids actually knew much better than the other kids I interviewed how the internet really works, specifically in terms of physical infrastructure, they were much more confident in using the internet, they were able to more correctly name things they see on the internet, and they had partly radical ideas about what they would like to learn or what they would want to change about the internet! Looking at these results, I think it's safe to say that social reproduction is at work, and that we need to improve education for kids who do not profit from this type of social and cultural wealth at home. But let's dive into the details.

The boy and the aliens (I'll be mostly transcribing the interview, which was short, and which I find difficult to sum up because some of the questions are written in a way that encourages the kids to tell a story, and this particular kid had a thing going on with aliens.) He's a 13-year-old boy living in the US. He has his own computer, which technically belongs to his school but can be used by him freely, and he can also take it home. He's the first kid saying he reads the news on the internet; he does not actually use social media, besides sometimes watching TikTok. When asked: "Imagine that aliens land and come to you and say: We've heard about this internet thing you all talk about, what is it? What do you tell them?" he replied:
Well, I mean they're aliens, so I don't know if I wanna tell them much.
(Parents laughing in the background.) Let's assume they're friendly aliens.
Well, I would say you can look anything up and play different games. And there are alien games. But mostly the enemies are aliens, which you might be a little offended by. And you can get work done, if you needed to spy on humans. There's cameras, you can film yourself, yeah. And you can text people and call people who are far away...
And what would be in a drawing that would explain the internet? [Drawing: Google, an alien using Twitch, Google search results, and the interface of an IM app on an iPhone, drawn by a 13-year-old boy.] And here's what he explains about his drawing:
First, I would draw what I see when you open a new tab, Google.
On the right side of the drawing we see something like Twitch.
I don't wanna offend the aliens, but you can film yourself playing a game, so here is the alien and he's playing a game.
And then you can ask questions like: "How did aliens come to the Earth?" And the answer will be here (below). And there'll be different websites that you can click on.
And you can also look up "Who won the alien contest?" And that would be Usmushgagu, and that guy won the alien contest.
Do you think the information about alien intergalactic football is already on the internet?
Yeah! That's how fast the internet is.
On the bottom of the drawing we see an iPhone and an instant messaging software.
There's also a device called an iPhone and with it you can text your friends. So here's the alien asking: "How was ur day?" and the friend might answer "IDK" [I don't know].
Imagine that a wise and friendly dragon could teach you one thing about the internet that you've always wanted to know. What would you ask the dragon to teach you about?
Is there a way you don't have to pay for any channels or subscriptions and you can get through any firewall?
Imagine you could make the internet better for everyone. What would you do first?
Well, you wouldn't have to pay for it [paywalls].
Can you describe what happens between your device and a website when you visit a website?
Well, it takes 0.025 seconds. [...] It's connecting.
Wow, that's indeed fast! We were not able to obtain more details about what exactly that fast thing is, though...

The software engineer's kid This kid identifies as neither boy nor girl, is 10 years old, and lives in Germany. Their father works as a software engineer, or, in the words of the child:
My dad knows everything.
The kid has a laptop and a mobile phone, both with parental controls; they don't think that the controlling is fair. This kid uses the internet foremost for listening to music and watching prank channels on Youtube, but also to work with Purple Mash (a teaching platform for the computing curriculum used at their school) and to find 3D printing models (which they ask their father to print with them, because they haven't managed to use the printer by themselves yet). Interestingly, and very differently from the non-tech-parent kids, this kid insists on using Firefox and Signal - the latter is not only used by their dad to tell them to come downstairs for dinner, but also to call their grandmother. This kid also shops online, with the help of the father, who does the actual shopping for them using money that the kid earned by reading books. If you would need to explain to an alien who has landed on Earth what the internet is, what would you tell them?
The internet is something where you search, for example, you can look for music. You can also watch videos from around the world, and you can program stuff.
Like most of the kids interviewed, this kid uses the internet mostly for media consumption, but with the difference that they also engage with technology by way of programming using Purple Mash. [Drawing of the internet by a 10-year-old, showing a Youtube prank channel, an external trackpad, and headphones.] In their drawing we see a Youtube prank channel on a screen, an external trackpad on the right (likely it's not a touch screen), and headphones. Notice how there is no keyboard, or maybe it's folded away. If you could ask a nice and friendly dragon anything you'd like to learn about the internet, what would it be?
How do I shut down my dad's computer forever?
And what is it that they would do to improve the internet for everyone? Contrary to the kid living in the US, they think that
It takes too much time to load stuff!
I wonder if this kid experiences the internet as slow because they use the mobile network, or because their connection somehow gets throttled as a way to control media consumption, or if the German internet infrastructure is just so much worse in certain regions... If you could improve the internet for everyone, what would you do first? I'd make a new Firefox app that loads the internet much faster.

The software engineer's daughter This girl is only 8 years old, she hates unicorns, and her dad is also a software engineer. She uses a smartphone, controlled by her parents. My impression from the interview is that at this age, kids slightly mix up the internet with the devices they use to access it. [Drawing of the internet by an 8-year-old girl, showing Google and the interface to call and text someone.] In her drawing, we see again Google - it's clearly everywhere - and also the interfaces for calling and texting someone. To explain what the internet is, besides the fact that one can use it for calling and listening to music, she says:
[The internet] is something that you can [use to] see someone who is far away, so that you don't need to take time to get to them.
Now, that's a great explanation: the internet providing the possibility for communication over a distance :) If she could ask a friendly dragon something she always wanted to know, she'd ask how to make her phone come alive:
that it can talk to you, that it can see you, that it can smile and has eyes. It's like a new family member, you can talk to it.
Sounds a bit like Siri, Alexa, or Furby, doesn't it? If you could improve the internet for everyone, what would you do first? She'd have the phone be able to decide over her free time, her phone time. That would make the world better - not for the kids, but certainly for the parents.

The antifascist kid This German boy's dad has a background in electrotechnical engineering. He's 10 years old and he told me he uses the internet a lot for searching for things, for example about his passion: the firefighters. For him, the internet is:
An invisible world. A virtual world. But there's also the darknet.
He told me he always watches that German show on public TV for kids that explains stuff: Checker Tobi. (In 2014, Checker Tobi actually produced an episode about the internet, which I'd criticize for having only male characters, except for one female character: a "secretary Google", a nice and friendly woman guiding the way through the huge library that is the internet.) This kid was the only one interviewed who managed to actually explain something about the internet, or rather about the hypertextual structure of the web. When I asked him to draw the internet, he made a drawing of a pin board. He explained:
Many items are attached to the pin board, and in the top left corner there's a computer, for example with Youtube, and one can navigate like that between all the items, and start again from the beginning when done.
[Drawing: a hypertext structure representing the internet, drawn by a kid.] When I asked if he knew what actually happens between the device and a website he visits, he put forth the hypothesis of the existence of some kind of
Waves, internet waves - all this stuff somehow needs to be transmitted.
What he'd like to learn:
How to get into the darknet? How do you become a Whitehat? I've heard these words on the internet; the internet makes me clever.
And what would he change on the internet if he could?
I want right-wing extremist stuff to not be accessible anymore, or at least, that it rains turds ("Kackwürste") whenever people watch such stuff. Or that people are always told: "This video is scum."
I suspect that his father has been talking with him about these things, and maybe these are also subjects he has heard about when listening to punk music (he told me he does), or browsing Youtube.

Future projects To me, this has been pretty insightful. I might share some more internet drawings by adults in the future, which I think are also really interesting, as they show very different things depending on the age of the person. I've been using the information gathered to work on a children's book, which I hope to be able to share with you next year.

13 December 2023

Melissa Wen: 15 Tips for Debugging Issues in the AMD Display Kernel Driver

A self-help guide for examining and debugging the AMD display driver within the Linux kernel/DRM subsystem. It's based on my experience as an external developer working on the driver, and is shared with the goal of helping others navigate the driver code. Acknowledgments: These tips were gathered thanks to the countless help received from AMD developers during the driver development process. The list below was obtained by examining open source code, reviewing public documentation, playing with tools, asking in public forums, and also with the help of my former GSoC mentor, Rodrigo Siqueira.

Pre-Debugging Steps: Before diving into an issue, it's crucial to perform two essential steps: 1) Check the latest changes: Ensure you're working with the latest AMD driver modifications located in the amd-staging-drm-next branch maintained by Alex Deucher. You may also find bug fixes for newer kernel versions on branches that follow the name pattern drm-fixes-<date>. 2) Examine the issue tracker: Confirm that your issue isn't already documented and addressed in the AMD display driver issue tracker. If you find a similar issue, you can team up with others and speed up the debugging process.
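For example, fetching that branch might look like this (a sketch; the URL assumes Alex Deucher's tree on freedesktop.org GitLab):
# Add Alex Deucher's tree and check out the staging branch
git remote add agd5f https://gitlab.freedesktop.org/agd5f/linux.git
git fetch agd5f amd-staging-drm-next
git checkout -b amd-staging-drm-next agd5f/amd-staging-drm-next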

Understanding the issue: Do you really need to change this? Where should you start looking for changes? 3) Is the issue in the AMD kernel driver or in userspace?: Identifying the source of the issue is essential regardless of the GPU vendor. Sometimes this can be challenging, so here are some helpful tips:
  • Record the screen: Capture the screen using a recording app while experiencing the issue. If the bug appears in the capture, it s likely a userspace issue, not the kernel display driver.
  • Analyze the dmesg log: Look for error messages related to the display driver in the dmesg log. If the error message appears before the message "[drm] Display Core v...", it's likely not a display driver issue. If this message doesn't appear in your log, the display driver wasn't fully loaded and you will see a notification that something went wrong here. (A quick triage sketch follows this list.)
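A quick dmesg triage along those lines might look like this (a sketch; exact messages vary by kernel version):
# Did the display driver finish loading? Look for the Display Core banner...
sudo dmesg | grep -i "display core"
# ...and check whether DRM errors were logged before or after that banner
sudo dmesg | grep -iE "\[drm\].*(error|fail)"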
4) AMD Display Manager vs. AMD Display Core: The AMD display driver consists of two components:
  • Display Manager (DM): This component interacts directly with the Linux DRM infrastructure. Occasionally, issues can arise from misinterpretations of DRM properties or features. If the issue doesn't occur on other platforms with the same AMD hardware - for example, it only happens on Linux but not on Windows - it's more likely related to the AMD DM code.
  • Display Core (DC): This is the platform-agnostic part responsible for setting and programming hardware features. Modifications to the DC usually require validation on other platforms, like Windows, to avoid regressions.
5) Identify the DC HW family: Each AMD GPU has variations in its hardware architecture. Features and helpers differ between families, so determining the relevant code for your specific hardware is crucial.
  • Find GPU product information in Linux/AMD GPU documentation
  • Check the dmesg log for the Display Core version (since this commit in Linux kernel v6.3). For example:
    • [drm] Display Core v3.2.241 initialized on DCN 2.1
    • [drm] Display Core v3.2.237 initialized on DCN 3.0.1

Investigating the relevant driver code: Keep unrelated driver code from affecting your investigation. 6) Narrow the code inspection down to one DC HW family: the relevant code resides in a directory named after the DC number. For example, the DCN 3.0.1 driver code is located at drivers/gpu/drm/amd/display/dc/dcn301. We all know that AMD's shared code is huge, and you can use these boundaries to rule out code unrelated to your issue. 7) Newer families may inherit code from older ones: you can find dcn301 using code from dcn30, dcn20 and dcn10 files. It's crucial to verify which hooks and helpers your driver utilizes in order to investigate the right portion. You can leverage ftrace for supplemental validation (see the sketch after the capability listing below). To give an example, it was useful when I was updating DCN3 color mapping to correctly use its new post-blending color capabilities. Additionally, you can use two different HW families to compare behaviours: if you see the issue in one but not in the other, you can compare the code, understand what has changed, and determine whether the implementation from a previous family fits the new HW resources or design. You can also count on the help of the community on the Linux AMD issue tracker to validate your code on other hardware and/or systems. This approach helped me debug a 2-year-old issue where the cursor gamma adjustment was incorrect on DCN3 hardware but working correctly for the DCN2 family; I solved the issue in two steps, thanks to community feedback and validation. 8) Check the hardware capability screening in the driver: You can currently find a list of display hardware capabilities in the drivers/gpu/drm/amd/display/dc/dcn*/dcn*_resource.c file, more precisely in the dcn*_resource_construct() function. Using DCN301 for illustration, here is the list of its hardware caps:
	/*************************************************
	 *  Resource + asic cap harcoding                *
	 *************************************************/
	pool->base.underlay_pipe_index = NO_UNDERLAY_PIPE;
	pool->base.pipe_count = pool->base.res_cap->num_timing_generator;
	pool->base.mpcc_count = pool->base.res_cap->num_timing_generator;
	dc->caps.max_downscale_ratio = 600;
	dc->caps.i2c_speed_in_khz = 100;
	dc->caps.i2c_speed_in_khz_hdcp = 5; /*1.4 w/a enabled by default*/
	dc->caps.max_cursor_size = 256;
	dc->caps.min_horizontal_blanking_period = 80;
	dc->caps.dmdata_alloc_size = 2048;
	dc->caps.max_slave_planes = 2;
	dc->caps.max_slave_yuv_planes = 2;
	dc->caps.max_slave_rgb_planes = 2;
	dc->caps.is_apu = true;
	dc->caps.post_blend_color_processing = true;
	dc->caps.force_dp_tps4_for_cp2520 = true;
	dc->caps.extended_aux_timeout_support = true;
	dc->caps.dmcub_support = true;
	/* Color pipeline capabilities */
	dc->caps.color.dpp.dcn_arch = 1;
	dc->caps.color.dpp.input_lut_shared = 0;
	dc->caps.color.dpp.icsc = 1;
	dc->caps.color.dpp.dgam_ram = 0; // must use gamma_corr
	dc->caps.color.dpp.dgam_rom_caps.srgb = 1;
	dc->caps.color.dpp.dgam_rom_caps.bt2020 = 1;
	dc->caps.color.dpp.dgam_rom_caps.gamma2_2 = 1;
	dc->caps.color.dpp.dgam_rom_caps.pq = 1;
	dc->caps.color.dpp.dgam_rom_caps.hlg = 1;
	dc->caps.color.dpp.post_csc = 1;
	dc->caps.color.dpp.gamma_corr = 1;
	dc->caps.color.dpp.dgam_rom_for_yuv = 0;
	dc->caps.color.dpp.hw_3d_lut = 1;
	dc->caps.color.dpp.ogam_ram = 1;
	// no OGAM ROM on DCN301
	dc->caps.color.dpp.ogam_rom_caps.srgb = 0;
	dc->caps.color.dpp.ogam_rom_caps.bt2020 = 0;
	dc->caps.color.dpp.ogam_rom_caps.gamma2_2 = 0;
	dc->caps.color.dpp.ogam_rom_caps.pq = 0;
	dc->caps.color.dpp.ogam_rom_caps.hlg = 0;
	dc->caps.color.dpp.ocsc = 0;
	dc->caps.color.mpc.gamut_remap = 1;
	dc->caps.color.mpc.num_3dluts = pool->base.res_cap->num_mpc_3dlut; //2
	dc->caps.color.mpc.ogam_ram = 1;
	dc->caps.color.mpc.ogam_rom_caps.srgb = 0;
	dc->caps.color.mpc.ogam_rom_caps.bt2020 = 0;
	dc->caps.color.mpc.ogam_rom_caps.gamma2_2 = 0;
	dc->caps.color.mpc.ogam_rom_caps.pq = 0;
	dc->caps.color.mpc.ogam_rom_caps.hlg = 0;
	dc->caps.color.mpc.ocsc = 1;
	dc->caps.dp_hdmi21_pcon_support = true;
	/* read VBIOS LTTPR caps */
	if (ctx->dc_bios->funcs->get_lttpr_caps) {
		enum bp_result bp_query_result;
		uint8_t is_vbios_lttpr_enable = 0;
		bp_query_result = ctx->dc_bios->funcs->get_lttpr_caps(ctx->dc_bios, &is_vbios_lttpr_enable);
		dc->caps.vbios_lttpr_enable = (bp_query_result == BP_RESULT_OK) && !!is_vbios_lttpr_enable;
	}
	if (ctx->dc_bios->funcs->get_lttpr_interop) {
		enum bp_result bp_query_result;
		uint8_t is_vbios_interop_enabled = 0;
		bp_query_result = ctx->dc_bios->funcs->get_lttpr_interop(ctx->dc_bios, &is_vbios_interop_enabled);
		dc->caps.vbios_lttpr_aware = (bp_query_result == BP_RESULT_OK) && !!is_vbios_interop_enabled;
	}
Keep in mind that the documentation of the color capabilities is available in the Linux kernel documentation.
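As mentioned in tip 7, ftrace can confirm which hooks and helpers actually run on your hardware. Here is a minimal sketch using the tracefs interface (the filter pattern is only an example; it assumes tracefs is mounted at /sys/kernel/tracing):
cd /sys/kernel/tracing
# Trace only functions from the DC hardware family under suspicion
echo 'dcn30*' > set_ftrace_filter
echo function > current_tracer
echo 1 > tracing_on
# ...reproduce the issue, then inspect which helpers were actually hit
head -50 trace
echo 0 > tracing_on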

Understanding the development history: What has brought us to the current state? 9) Pinpoint relevant commits: Use git log and git blame to identify commits targeting the code section you're interested in. 10) Track regressions: If you're examining the amd-staging-drm-next branch, check for regressions between DC release versions. These are defined by DC_VER in the drivers/gpu/drm/amd/display/dc/dc.h file. Alternatively, find a commit with the format "drm/amd/display: 3.2.221" that determines a display release; this is useful for bisecting. This information helps you understand how outdated your branch is and identify potential regressions. You can consider that each DC_VER takes around one week to be bumped. Finally, check the testing log of each release in the report provided on the amd-gfx mailing list (Tested-by: Daniel Wheeler).
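In practice, that digging might look like this (a sketch; the paths and version string are examples):
# Who last touched this code, and in which commits?
git log --oneline -- drivers/gpu/drm/amd/display/dc/dcn301/
git blame drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c
# List DC_VER bumps, which mark display releases and make handy bisect points
git log --oneline --grep='drm/amd/display: 3.2.' -- drivers/gpu/drm/amd/display/dc/dc.h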

Reducing the inspection area: Focus on what really matters. 11) Identify involved HW blocks: This helps isolate the issue. You can find more information about DCN HW blocks in the DCN Overview documentation. In summary:
  • Plane issues are closer to HUBP and DPP.
  • Blending/Stream issues are closer to MPC, OPP and OPTC. They are related to DRM CRTC subjects.
This information was useful when debugging a hardware rotation issue where the cursor plane got clipped off in the middle of the screen; that issue was eventually addressed by two patches. 12) Issues around bandwidth (glitches) and clocks: These may be affected by calculations done in these HW blocks and by HW-specific values. The recalculation equations are found in the DML folder. DML stands for Display Mode Library; it's in charge of all required configuration parameters supported by the hardware for multiple scenarios. See more in the AMD DC Overview kernel docs. It's a math library that optimally configures hardware to find the best balance between power efficiency and performance in a given scenario. Finding some clk variables that affect device behavior may be a sign that this area is involved. It's hard for an external developer to debug this part, since it involves information from HW specs and firmware programming that we don't have access to. The best option is to provide all the relevant debugging information you have and ask AMD developers to check the values from your suspicions.
  • Do a trick: If you suspect the power setup is degrading performance, try setting the amount of power supplied to the GPU to the maximum and see if it affects the system behavior with this command: sudo bash -c "echo high > /sys/class/drm/card0/device/power_dpm_force_performance_level"
I learned this when debugging glitches with hardware cursor rotation on the Steam Deck. My first attempt was changing the clock calculation; in the end, Rodrigo Siqueira proposed the right solution, targeting bandwidth in two steps.

Checking implicit programming and hardware limitations: Bring implicit programming to the level of consciousness and recognize hardware limitations. 13) Implicit update types: Check whether the selected type for atomic update may affect your issue. The update type depends on the mode settings, since programming some modes demands more time for hardware processing. More details in the source code:
/* Surface update type is used by dc_update_surfaces_and_stream
 * The update type is determined at the very beginning of the function based
 * on parameters passed in and decides how much programming (or updating) is
 * going to be done during the call.
 *
 * UPDATE_TYPE_FAST is used for really fast updates that do not require much
 * logical calculations or hardware register programming. This update MUST be
 * ISR safe on windows. Currently fast update will only be used to flip surface
 * address.
 *
 * UPDATE_TYPE_MED is used for slower updates which require significant hw
 * re-programming however do not affect bandwidth consumption or clock
 * requirements. At present, this is the level at which front end updates
 * that do not require us to run bw_calcs happen. These are in/out transfer func
 * updates, viewport offset changes, recout size changes and pixel depth changes.
 * This update can be done at ISR, but we want to minimize how often this happens.
 *
 * UPDATE_TYPE_FULL is slow. Really slow. This requires us to recalculate our
 * bandwidth and clocks, possibly rearrange some pipes and reprogram anything
 * front end related. Any time viewport dimensions, recout dimensions, scaling
 * ratios or gamma need to be adjusted or pipe needs to be turned on (or
 * disconnected) we do a full update. This cannot be done at ISR level and
 * should be a rare event. Unless someone is stress testing mpo enter/exit,
 * playing with colour or adjusting underscan we don't expect to see this call
 * at all.
 */
enum surface_update_type {
	UPDATE_TYPE_FAST, /* super fast, safe to execute in isr */
	UPDATE_TYPE_MED,  /* ISR safe, most of programming needed, no bw/clk change */
	UPDATE_TYPE_FULL, /* may need to shuffle resources */
};
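A practical reading of the comment above: flipping only the surface address stays on the UPDATE_TYPE_FAST path; transfer function updates, viewport offset or recout size changes land on UPDATE_TYPE_MED; and anything touching viewport or recout dimensions, scaling ratios, gamma or pipe topology forces UPDATE_TYPE_FULL with a bandwidth/clock recalculation. If your bug only reproduces on mode changes or MPO enter/exit, the FULL path is the first suspect.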

Using tools: Observe the current state, validate your findings, continue improvements. 14) Use AMD tools to check hardware state and driver programming: they help you understand your driver settings and check the behavior when changing those settings.
  • DC Visual confirmation: Check multiple planes and pipe split policy.
  • DTN logs: Check display hardware state, including rotation, size, format, underflow, blocks in use, color block values, etc.
  • UMR: Check ASIC info, register values, KMS state - links and elements (framebuffers, planes, CRTCs, connectors). Source: UMR project documentation
15) Use generic DRM/KMS tools:
  • IGT test tools: Use generic KMS tests or develop your own to isolate the issue in the kernel space. Compare results across different GPU vendors to understand their implementations and find potential solutions. AMD also has specific IGT tests for its GPUs that are expected to work without failures on any AMD GPU. You can check the results of HW-specific tests across different display hardware families, or compare the expected differences between the generic workflow and the AMD workflow (see the note after this list).
  • drm_info: This tool summarizes the current state of a display driver (capabilities, properties and formats) per element of the DRM/KMS workflow. Output can be helpful when reporting bugs.
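As a concrete starting point (an illustrative suggestion, not from the original list): IGT's kms_color test exercises the generic degamma/gamma LUT and CTM properties, and running drm_info before and after a driver change and diffing the output is a quick way to see what the driver actually advertises.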

Don't give up! Debugging issues in the AMD display driver can be challenging, but by following these tips and leveraging available resources, you can significantly improve your chances of success. Worth mentioning: This blog post builds upon my talk "I'm not an AMD expert, but...", presented at the 2022 XDC. It shares guidelines that helped me debug AMD display issues as an external developer of the driver. Open Source Display Driver: The Linux kernel/AMD display driver is open source, allowing you to actively contribute by addressing issues listed in the official tracker. Tackling existing issues or resolving your own can be a rewarding learning experience and a valuable contribution to the community. Additionally, the tracker serves as a valuable resource for finding similar bugs, troubleshooting tips, and suggestions from AMD developers. Finally, it's a platform for seeking help when needed. Remember, contributing to the open source community through issue resolution and collaboration is mutually beneficial for everyone involved.

6 December 2023

Reproducible Builds: Reproducible Builds in November 2023

Welcome to the November 2023 report from the Reproducible Builds project! In these reports we outline the most important things that we have been up to over the past month. As a rather rapid recap, whilst anyone may inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries (more).

Reproducible Builds Summit 2023 Between October 31st and November 2nd, we held our seventh Reproducible Builds Summit in Hamburg, Germany! Amazingly, the agenda and all notes from all sessions are all online; many thanks to everyone who wrote notes from the sessions. As a followup on one idea started at the summit, Alexander Couzens and Holger Levsen started work on a cache (or tailored front-end) for the snapshot.debian.org service. The general idea is that, when rebuilding Debian, you do not actually need the whole ~140TB of data from snapshot.debian.org; rather, only a very small subset of the packages are ever used for building. It turns out that, for amd64, arm64, armhf, i386, ppc64el, riscv64 and s390 for Debian trixie, unstable and experimental, this is only around 500GB, i.e. less than 1%. Although the new service is not yet ready for use, it has already provided a promising outlook in this regard. More information is available on https://rebuilder-snapshot.debian.net and we hope that this service becomes usable in the coming weeks. The adjacent picture shows a sticky note authored by Jan-Benedict Glaw at the summit in Hamburg, confirming Holger Levsen's theory that rebuilding all Debian packages needs only a very small subset of packages; the text states that 69,200 packages (in Debian sid) list 24,850 packages in their .buildinfo files, in 8,0200 variations. This little piece of paper was the beginning of rebuilder-snapshot and is a direct outcome of the summit! The Reproducible Builds team would like to thank our event sponsors, who include Mullvad VPN, openSUSE, Debian, Software Freedom Conservancy, Allotropia and Aspiration Tech.

Beyond Trusting FOSS presentation at SeaGL On November 4th, Vagrant Cascadian presented Beyond Trusting FOSS at SeaGL in Seattle, WA in the United States. Founded in 2013, SeaGL is a free, grassroots technical summit dedicated to spreading awareness and knowledge about free source software, hardware and culture. The summary of Vagrant's talk mentions that it will:
[ ] introduce the concepts of Reproducible Builds, including best practices for developing and releasing software, the tools available to help diagnose issues, and touch on progress towards solving decades-old deeply pervasive fundamental security issues Learn how to verify and demonstrate trust, rather than simply hoping everything is OK!
Germane to the contents of the talk, the slides for Vagrant's talk can be built reproducibly, resulting in a PDF with a SHA1 of cfde2f8a0b7e6ec9b85377eeac0661d728b70f34 when built on Debian bookworm and c21fab273232c550ce822c4b0d9988e6c49aa2c3 on Debian sid at the time of writing.

Human Factors in Software Supply Chain Security Marcel Fourné, Dominik Wermke, Sascha Fahl and Yasemin Acar have published an article in a Special Issue of the IEEE's Security & Privacy magazine. Entitled A Viewpoint on Human Factors in Software Supply Chain Security: A Research Agenda, the paper justifies the need for reproducible builds to reach developers and end-users specifically, and furthermore points out some under-researched topics that we have seen mentioned in interviews. An author pre-print of the article is available in PDF form.

Community updates On our mailing list this month:

openSUSE updates Bernhard M. Wiedemann has created a wiki page outlining a proposal to create a general-purpose Linux distribution which consists of 100% bit-reproducible packages, albeit minus the embedded signature within RPM files. It would be based on openSUSE Tumbleweed or, if available, its Slowroll variant. In addition, Bernhard posted another monthly update for his work elsewhere in openSUSE.

Ubuntu Launchpad now supports .buildinfo files Back in 2017, Steve Langasek filed a bug against Ubuntu's Launchpad code hosting platform to report that .changes files (artifacts of building Ubuntu and Debian packages) reference .buildinfo files that aren't actually exposed by Launchpad itself. This was causing issues when attempting to process .changes files with tools such as Lintian. However, it was noticed last month that, in early August of this year, Simon Quigley had resolved this issue, and .buildinfo files are now available from the Launchpad system.

PHP reproducibility updates There have been two updates from the PHP programming language this month. Firstly, the widely-deployed PHPUnit framework has recently released version 10.5.0, which introduces the inclusion of a composer.lock file, ensuring total reproducibility of the shipped binary file. Further details and the discussion that went into their particular implementation can be found on the associated GitHub pull request. In addition, the presentation Leveraging Nix in the PHP ecosystem was given in late October at the PHP International Conference in Munich by Pol Dellaiera. While the video replay is not yet available, the (reproducible) presentation slides and speaker notes are available.

diffoscope changes diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made a number of changes, including:
  • Improving DOS/MBR extraction by adding support for 7z. [ ]
  • Adding a missing RequiredToolNotFound import. [ ]
  • As a UI/UX improvement, try and avoid printing an extended traceback if diffoscope runs out of memory. [ ]
  • Mark diffoscope as stable on PyPI.org. [ ]
  • Uploading version 252 to Debian unstable. [ ]

Website updates A huge number of notes were added to our website that were taken at our recent Reproducible Builds Summit held between October 31st and November 2nd in Hamburg, Germany. In particular, a big thanks to Arnout Engelen, Bernhard M. Wiedemann, Daan De Meyer, Evangelos Ribeiro Tzaras, Holger Levsen and Orhun Parmaksız. In addition to this, a number of other changes were made, including:

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework (available at tests.reproducible-builds.org) in order to check packages and other artifacts for reproducibility. In November, a number of changes were made by Holger Levsen:
  • Debian-related changes:
    • Track packages marked as Priority: important in a new package set. [ ][ ]
    • Stop scheduling packages that fail to build from source in bookworm [ ] and bullseye. [ ].
    • Add old releases dashboard link in web navigation. [ ]
    • Permit the pool_buildinfos script to be re-run for a specific year. [ ]
    • Grant jbglaw access to the osuosl4 node [ ][ ] along with lynxis [ ].
    • Increase RAM on the amd64 Ionos builders from 48 GiB to 64 GiB; thanks IONOS! [ ]
    • Move buster to archived suites. [ ][ ]
    • Reduce the number of arm64 architecture workers from 24 to 16 in order to improve stability [ ], reduce the workers for amd64 from 32 to 28 and, for i386, reduce from 12 down to 8 [ ].
    • Show the entire build history of each Debian package. [ ]
    • Stop scheduling already tested package/version combinations in Debian bookworm. [ ]
  • Snapshot service for rebuilders
    • Add an HTTP-based API endpoint. [ ][ ]
    • Add a Gunicorn instance to serve the HTTP API. [ ]
    • Add an NGINX config [ ][ ][ ][ ]
  • System-health:
    • Detect failures due to HTTP 503 Service Unavailable errors. [ ]
    • Detect failures to update package sets. [ ]
    • Detect unmet dependencies. (This usually occurs with builds of Debian live-build.) [ ]
  • Misc-related changes:
    • do install systemd-oomd on jenkins. [ ]
    • fix harmless typo in squid.conf for codethink04. [ ]
    • fixup: reproducible Debian: add gunicorn service to serve /api for rebuilder-snapshot.d.o. [ ]
    • Increase codethink04 s Squid cache_dir size setting to 16 GiB. [ ]
    • Don't install systemd-oomd as it unfortunately kills sshd [ ]
    • Use debootstrap from backports when commissioning nodes. [ ]
    • Add the live_build_debian_stretch_gnome, debsums-tests_buster and debsums-tests_buster jobs to the zombie list. [ ][ ]
    • Run jekyll build with the --watch argument when building the Reproducible Builds website. [ ]
    • Misc node maintenance. [ ][ ][ ]
Other changes were made as well, however, including Mattia Rizzolo fixing rc.local's Bash syntax so it can actually run [ ], commenting out some file cleanup code that is (potentially) deleting too much [ ] and fixing the html_brekages page for Debian package builds [ ]. Finally, an issue was diagnosed and a patch submitted to add an AddEncoding gzip .gz line to the tests.reproducible-builds.org Apache configuration so that Gzip files aren't re-compressed as Gzip, which some clients can't deal with (as well as being a waste of time). [ ]

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:

11 November 2023

Reproducible Builds: Reproducible Builds in October 2023

Welcome to the October 2023 report from the Reproducible Builds project. In these reports we outline the most important things that we have been up to over the past month. As a quick recap, whilst anyone may inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries.

Reproducible Builds Summit 2023 Between October 31st and November 2nd, we held our seventh Reproducible Builds Summit in Hamburg, Germany! Our summits are a unique gathering that brings together attendees from diverse projects, united by a shared vision of advancing the Reproducible Builds effort, and this instance was no different. During this enriching event, participants had the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. A number of concrete outcomes from the summit will be documented in the report for November 2023 and elsewhere. Amazingly, the agenda and all notes from all sessions are already online. The Reproducible Builds team would like to thank our event sponsors, who include Mullvad VPN, openSUSE, Debian, Software Freedom Conservancy, Allotropia and Aspiration Tech.

Reflections on Reflections on Trusting Trust Russ Cox posted a fascinating article on his blog prompted by the fortieth anniversary of Ken Thompson's award-winning paper, Reflections on Trusting Trust:
[ ] In March 2023, Ken gave the closing keynote [and] during the Q&A session, someone jokingly asked about the Turing award lecture, specifically can you tell us right now whether you have a backdoor into every copy of gcc and Linux still today?
Although Ken reveals (or at least claims!) that he has no such backdoor, he does admit that he has the actual code which Russ requests and subsequently dissects in great but accessible detail.

Ecosystem factors of reproducible builds Rahul Bajaj, Eduardo Fernandes, Bram Adams and Ahmed E. Hassan from the Maintenance, Construction and Intelligence of Software (MCIS) laboratory within the School of Computing, Queen's University in Ontario, Canada have published a paper on the time to fix, causes and correlation with external ecosystem factors of unreproducible builds. The authors compare various response times within the Debian and Arch Linux distributions including, for example:
Arch Linux packages become reproducible a median of 30 days quicker when compared to Debian packages, while Debian packages remain reproducible for a median of 68 days longer once fixed.
A full PDF of their paper is available online, as are many other interesting papers on the MCIS publication page.

NixOS installation image reproducible On the NixOS Discourse instance, Arnout Engelen (raboof) announced that NixOS have created an independent, bit-for-bit identical rebuilding of the nixos-minimal image that is used to install NixOS. In their post, Arnout details what exactly can be reproduced, and even includes some of the history of this endeavour:
You may remember a 2021 announcement that the minimal ISO was 100% reproducible. While back then we successfully tested that all packages that were needed to build the ISO were individually reproducible, actually rebuilding the ISO still introduced differences. This was due to some remaining problems in the hydra cache and the way the ISO was created. By the time we fixed those, regressions had popped up (notably an upstream problem in Python 3.10), and it isn't until this week that we were back to having everything reproducible and being able to validate the complete chain.
Congratulations to the NixOS team for reaching this important milestone! Discussion about this announcement can be found underneath the post itself, as well as on Hacker News.

CPython source tarballs now reproducible Seth Larson published a blog post investigating the reproducibility of the CPython source tarballs. Using diffoscope, reprotest and other tools, Seth documents his work that led to a pull request to make these files reproducible, which was merged by Łukasz Langa.

New arm64 hardware from Codethink Long-time sponsor of the project, Codethink, have generously replaced our old Moonshot-Slides machines, which they have hosted since 2016, with new KVM-based arm64 hardware. Holger Levsen integrated these new nodes into the Reproducible Builds continuous integration framework.

Community updates On our mailing list during October 2023 there were a number of threads, including:
  • Vagrant Cascadian continued a thread about the implementation details of a snapshot archive server required for reproducing previous builds. [ ]
  • Akihiro Suda shared an update on BuildKit, a toolkit for building Docker container images. Akihiro links to an interesting talk they recently gave at DockerCon titled Reproducible builds with BuildKit for software supply-chain security.
  • Alex Zakharov started a thread discussing and proposing fixes for various tools that create ext4 filesystem images. [ ]
Elsewhere, Pol Dellaiera made a number of improvements to our website, including fixing typos and links [ ][ ], adding a NixOS Flake file [ ] and sorting our publications page by date [ ]. Vagrant Cascadian presented Reproducible Builds All The Way Down at the Open Source Firmware Conference.

Distribution work distro-info is a Debian-oriented tool that can provide information about Debian (and Ubuntu) distributions such as their codenames (e.g. bookworm) and so on. This month, Benjamin Drung uploaded a new version of distro-info that added support for the SOURCE_DATE_EPOCH environment variable in order to close bug #1034422. In addition, 8 reviews of packages were added, 74 were updated and 56 were removed this month, all adding to our knowledge about identified issues. Bernhard M. Wiedemann published another monthly report about reproducibility within openSUSE.

Software development The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including: In addition, Chris Lamb fixed an issue in diffoscope where, if the equivalent of file -i returns text/plain, it falls back to comparing as a text file. This was originally filed as Debian bug #1053668 by Niels Thykier. [ ] This was then uploaded to Debian (and elsewhere) as version 251.

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework (available at tests.reproducible-builds.org) in order to check packages and other artifacts for reproducibility. In October, a number of changes were made by Holger Levsen:
  • Debian-related changes:
    • Refine the handling of package blacklisting, such as sending blacklisting notifications to the #debian-reproducible-changes IRC channel. [ ][ ][ ]
    • Install systemd-oomd on all Debian bookworm nodes (re. Debian bug #1052257). [ ]
    • Detect more cases of failures to delete schroots. [ ]
    • Document various bugs in bookworm which are (currently) being manually worked around. [ ]
  • Node-related changes:
    • Integrate the new arm64 machines from Codethink. [ ][ ][ ][ ][ ][ ]
    • Improve various node cleanup routines. [ ][ ][ ][ ]
    • General node maintenance. [ ][ ][ ][ ]
  • Monitoring-related changes:
    • Remove unused Munin monitoring plugins. [ ]
    • Complain less visibly about too many installed kernels. [ ]
  • Misc:
    • Enhance the firewall handling on Jenkins nodes. [ ][ ][ ][ ]
    • Install the fish shell everywhere. [ ]
In addition, Vagrant Cascadian added some packages and configuration for snapshot experiments. [ ]

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:

7 November 2023

Melissa Wen: AMD Driver-specific Properties for Color Management on Linux (Part 2)

TL;DR: This blog post explores the color capabilities of AMD hardware and how they are exposed to userspace through driver-specific properties. It discusses the different color blocks in the AMD Display Core Next (DCN) pipeline and their capabilities, such as predefined transfer functions, 1D and 3D lookup tables (LUTs), and color transformation matrices (CTMs). It also highlights the differences in AMD HW blocks for pre- and post-blending adjustments, and how these differences are reflected in the available driver-specific properties. Overall, this blog post provides a comprehensive overview of the color capabilities of AMD hardware and how they can be controlled by userspace applications through driver-specific properties. This information is valuable for anyone who wants to develop applications that can take advantage of the AMD color management pipeline. Get a closer look at each hardware block's capabilities, unlock a wealth of knowledge about AMD display hardware, and enhance your understanding of graphics and visual computing. Stay tuned for future developments as we embark on a quest for GPU color capabilities in the ever-evolving realm of rainbow treasures.
Operating systems can use the power of GPUs to ensure consistent color reproduction across graphics devices. We can use GPU-accelerated color management to manage the diversity of color profiles, do color transformations to convert between High-Dynamic-Range (HDR) and Standard-Dynamic-Range (SDR) content, and do color enhancements for wide color gamut (WCG). However, to make use of GPU display capabilities, we need an interface between userspace and the kernel display drivers that is currently absent in the Linux/DRM KMS API. In the previous blog post I presented how we are expanding the Linux/DRM color management API to expose specific properties of AMD hardware. Now, I'll guide you through the color features of the Linux/AMD display driver. We embark on a journey through DRM/KMS, AMD Display Manager, and AMD Display Core and delve into the color blocks to uncover the secrets of color manipulation within AMD hardware. Here we'll talk less about the color tools and more about where to find them in the hardware. We resort to driver-specific properties to reach AMD hardware blocks with color capabilities. These blocks offer features like predefined transfer functions, color transformation matrices, and 1-dimensional (1D LUT) and 3-dimensional lookup tables (3D LUT). Here, we will understand how these color features are strategically placed into color blocks both before and after blending in the Display Pipe and Plane (DPP) and Multiple Pipe/Plane Combined (MPC) blocks. That said, welcome back to the second part of our thrilling journey through AMD's color management realm!

AMD Display Driver in the Linux/DRM Subsystem: The Journey In my 2022 XDC talk "I'm not an AMD expert, but...", I briefly explained the organizational structure of the Linux/AMD display driver, where the driver code is bifurcated into a Linux-specific section and a shared-code portion. To reveal AMD's color secrets through the Linux kernel DRM API, our journey led us through these layers of the Linux/AMD display driver's software stack. It includes traversing the DRM/KMS framework, the AMD Display Manager (DM), and the AMD Display Core (DC) [1]. The DRM/KMS framework provides the atomic API for color management through KMS properties represented by struct drm_property. We extended the color management interface exposed to userspace by leveraging existing resources and connecting them with driver-specific functions for managing modeset properties. On the AMD DC layer, the interface with hardware color blocks is established. The AMD DC layer contains OS-agnostic components that are shared across different platforms, making it an invaluable resource. This layer already implements hardware programming and resource management, simplifying the external developer's task. While examining the DC code, we gain insights into the color pipeline and capabilities, even without direct access to specifications. Additionally, AMD developers provide essential support by answering queries and reviewing our work upstream. The primary challenge involved identifying and understanding the relevant AMD DC code to configure each color block in the color pipeline. However, the ultimate goal was to bridge the DC color capabilities with the DRM API. For this, we changed the AMD DM, the OS-dependent layer connecting the DC interface to the DRM/KMS framework. We defined and managed driver-specific color properties, facilitated the transport of user space data to the DC, and translated DRM features and settings to the DC interface. Considerations were also made for differences in the color pipeline based on hardware capabilities.

Exploring Color Capabilities of the AMD display hardware Now, let's dive into the exciting realm of AMD color capabilities, where an abundance of techniques and tools awaits to make your colors look extraordinary across diverse devices. First, we need to know a little about the color transformation and calibration tools and techniques that you can find in different blocks of the AMD hardware. I borrowed some images from [2] [3] [4] to help you understand the information.

Predefined Transfer Functions (Named Fixed Curves): Transfer functions serve as the bridge between the digital and visual worlds, defining the mathematical relationship between digital color values and linear scene/display values and ensuring consistent color reproduction across different devices and media. You can learn more about curves in the chapter GPU Gems 3 - The Importance of Being Linear by Larry Gritz and Eugene d'Eon. ITU-R 2100 introduces three main types of transfer functions:
  • OETF: the opto-electronic transfer function, which converts linear scene light into the video signal, typically within a camera.
  • EOTF: electro-optical transfer function, which converts the video signal into the linear light output of the display.
  • OOTF: opto-optical transfer function, which has the role of applying the rendering intent.
AMD's display driver supports the following pre-defined transfer functions (aka named fixed curves):
  • Linear/Unity: linear/identity relationship between pixel value and luminance value;
  • Gamma 2.2, Gamma 2.4, Gamma 2.6: pure power functions;
  • sRGB: the piece-wise transfer function from IEC 61966-2-1:1999;
  • BT.709: has a linear segment in the bottom part and then a power function with a 0.45 (~1/2.22) gamma for the rest of the range; standardized by ITU-R BT.709-6;
  • PQ (Perceptual Quantizer): used for HDR display, allows luminance range capability of 0 to 10,000 nits; standardized by SMPTE ST 2084.
These capabilities vary depending on the hardware block, with some utilizing hardcoded curves and others relying on AMD's color module to construct curves from standardized coefficients. It also supports user/custom curves built from a lookup table.
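To make the PQ entry above concrete, here is a small standalone sketch of the SMPTE ST 2084 EOTF, the math such a hardcoded curve implements (the function name is mine, not the driver's; constants are the ones defined by the standard):
#include <math.h>

/* SMPTE ST 2084 (PQ) EOTF: non-linear signal in [0, 1] -> luminance in nits. */
static double pq_eotf(double signal)
{
	const double m1 = 2610.0 / 16384.0;        /* 0.1593017578125 */
	const double m2 = 2523.0 / 4096.0 * 128.0; /* 78.84375 */
	const double c1 = 3424.0 / 4096.0;         /* 0.8359375 */
	const double c2 = 2413.0 / 4096.0 * 32.0;  /* 18.8515625 */
	const double c3 = 2392.0 / 4096.0 * 32.0;  /* 18.6875 */
	double e = pow(signal, 1.0 / m2);

	/* Peak input (1.0) maps to the full 10,000 nits range. */
	return 10000.0 * pow(fmax(e - c1, 0.0) / (c2 - c3 * e), 1.0 / m1);
}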

1D LUTs (1-dimensional Lookup Table): A 1D LUT is a versatile tool, defining a one-dimensional color transformation based on a single parameter. It's very well explained by Jeremy Selan in GPU Gems 2 - Chapter 24: Using Lookup Tables to Accelerate Color Transformations. It enables adjustments to color, brightness, and contrast, making it ideal for fine-tuning. In the Linux AMD display driver, the atomic API offers a 1D LUT with 4096 entries and 8-bit depth, while legacy gamma uses a size of 256.
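For intuition about what the hardware does with those entries, here is a minimal standalone sketch (not driver code; real hardware may interpolate differently) of applying a 1D LUT to a normalized channel value with linear interpolation:
/* x is a normalized [0, 1] channel value; the LUT stores the curve
 * sampled at "size" evenly spaced points. */
static double apply_1d_lut(const double *lut, int size, double x)
{
	double pos = x * (size - 1);
	int i = (int)pos;

	if (i >= size - 1)
		return lut[size - 1];
	/* Linear interpolation between the two nearest entries. */
	return lut[i] + (pos - i) * (lut[i + 1] - lut[i]);
}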

3D LUTs (3-dimensional Lookup Table): These tables work in three dimensions: red, green, and blue. They're perfect for complex color transformations and adjustments between color channels. They're also more complex to manage and require more computational resources. Jeremy also explains 3D LUTs in GPU Gems 2 - Chapter 24: Using Lookup Tables to Accelerate Color Transformations.

CTM (Color Transformation Matrices): Color transformation matrices facilitate the transition between different color spaces, playing a crucial role in color space conversion.
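As a quick illustration (a sketch, not the driver's implementation), applying a 3x3 CTM to an RGB triplet is a plain matrix-vector product, with the matrix stored in row-major order:
static void apply_ctm_3x3(const double m[9], const double in[3], double out[3])
{
	/* Each output channel is a weighted sum of the three input channels. */
	for (int row = 0; row < 3; row++)
		out[row] = m[3 * row + 0] * in[0] +
			   m[3 * row + 1] * in[1] +
			   m[3 * row + 2] * in[2];
}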

HDR Multiplier: HDR multiplier is a factor applied to the color values of an image to increase their overall brightness.

AMD Color Capabilities in the Hardware Pipeline First, let's take a closer look at the AMD Display Core Next hardware pipeline in the Linux kernel documentation for the AMDGPU driver - Display Core Next. In the AMD Display Core Next hardware pipeline, we encounter two hardware blocks with color capabilities: the Display Pipe and Plane (DPP) and the Multiple Pipe/Plane Combined (MPC). The DPP handles color adjustments per plane before blending, while the MPC engages in post-blending color adjustments. In short, we expect DPP color capabilities to match up with DRM plane properties, and MPC color capabilities to play nice with DRM CRTC properties. Note: here's the catch: there are some DRM CRTC color transformations that don't have a corresponding AMD MPC color block, and vice versa. It's like a puzzle, and we're here to solve it!

AMD Color Blocks and Capabilities We can finally talk about the color capabilities of each AMD color block. As it varies based on the generation of hardware, let's take the DCN3+ family as reference. What's possible to do before and after blending depends on hardware capabilities described in the kernel driver by struct dpp_color_caps and struct mpc_color_caps. The AMD Steam Deck hardware provides a tangible example of these capabilities. Therefore, we take the SteamDeck/DCN301 driver as an example and look at the color pipeline capabilities described in the file drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resources.c:
/* Color pipeline capabilities */
dc->caps.color.dpp.dcn_arch = 1; // If it is a Display Core Next (DCN): yes. Zero means DCE.
dc->caps.color.dpp.input_lut_shared = 0;
dc->caps.color.dpp.icsc = 1; // Input Color Space Conversion (CSC) matrix.
dc->caps.color.dpp.dgam_ram = 0; // The old degamma block for degamma curve (hardcoded and LUT). "Gamma correction" is the new one.
dc->caps.color.dpp.dgam_rom_caps.srgb = 1; // sRGB hardcoded curve support
dc->caps.color.dpp.dgam_rom_caps.bt2020 = 1; // BT2020 hardcoded curve support (seems not actually in use)
dc->caps.color.dpp.dgam_rom_caps.gamma2_2 = 1; // Gamma 2.2 hardcoded curve support
dc->caps.color.dpp.dgam_rom_caps.pq = 1; // PQ hardcoded curve support
dc->caps.color.dpp.dgam_rom_caps.hlg = 1; // HLG hardcoded curve support
dc->caps.color.dpp.post_csc = 1; // CSC matrix
dc->caps.color.dpp.gamma_corr = 1; // New "Gamma Correction" block for degamma user LUT;
dc->caps.color.dpp.dgam_rom_for_yuv = 0;
dc->caps.color.dpp.hw_3d_lut = 1; // 3D LUT support. If so, it's always preceded by a shaper curve. 
dc->caps.color.dpp.ogam_ram = 1; // "Blend Gamma" block for custom curve just after blending
// no OGAM ROM on DCN301
dc->caps.color.dpp.ogam_rom_caps.srgb = 0;
dc->caps.color.dpp.ogam_rom_caps.bt2020 = 0;
dc->caps.color.dpp.ogam_rom_caps.gamma2_2 = 0;
dc->caps.color.dpp.ogam_rom_caps.pq = 0;
dc->caps.color.dpp.ogam_rom_caps.hlg = 0;
dc->caps.color.dpp.ocsc = 0;
dc->caps.color.mpc.gamut_remap = 1; // Post-blending CTM (pre-blending CTM is always supported)
dc->caps.color.mpc.num_3dluts = pool->base.res_cap->num_mpc_3dlut; // Post-blending 3D LUT (preceded by shaper curve)
dc->caps.color.mpc.ogam_ram = 1; // Post-blending regamma.
// No pre-defined TF supported for regamma.
dc->caps.color.mpc.ogam_rom_caps.srgb = 0;
dc->caps.color.mpc.ogam_rom_caps.bt2020 = 0;
dc->caps.color.mpc.ogam_rom_caps.gamma2_2 = 0;
dc->caps.color.mpc.ogam_rom_caps.pq = 0;
dc->caps.color.mpc.ogam_rom_caps.hlg = 0;
dc->caps.color.mpc.ocsc = 1; // Output CSC matrix.
I included some inline comments in each element of the color caps to quickly describe them, but you can find the same information in the Linux kernel documentation. See more in struct dpp_color_caps, struct mpc_color_caps and struct rom_curve_caps. Now, using this guideline, we go through color capabilities of DPP and MPC blocks and talk more about mapping driver-specific properties to corresponding color blocks.

DPP Color Pipeline: Before Blending (Per Plane) Let's explore the capabilities of DPP blocks and what you can achieve with each color block. The very first thing to pay attention to is the display architecture of the display hardware: previously AMD used a display architecture called DCE - Display and Compositing Engine; newer hardware follows DCN - Display Core Next. The architecture is described by: dc->caps.color.dpp.dcn_arch

AMD Plane Degamma: TF and 1D LUT Described by: dc->caps.color.dpp.dgam_ram, dc->caps.color.dpp.dgam_rom_caps, dc->caps.color.dpp.gamma_corr AMD Plane Degamma data is mapped to the initial stage of the DPP pipeline. It is utilized to transition from scanout/encoded values to linear values for arithmetic operations. Plane Degamma supports both pre-defined transfer functions and 1D LUTs, depending on the hardware generation. DCN2 and older families handle both types of curve in the Degamma RAM block (dc->caps.color.dpp.dgam_ram); DCN3+ separates hardcoded curves and 1D LUT into two blocks: Degamma ROM (dc->caps.color.dpp.dgam_rom_caps) and the Gamma correction block (dc->caps.color.dpp.gamma_corr), respectively. Pre-defined transfer functions:
  • they are hardcoded curves (read-only memory - ROM);
  • supported curves: sRGB EOTF, BT.709 inverse OETF, PQ EOTF and HLG OETF, Gamma 2.2, Gamma 2.4 and Gamma 2.6 EOTF.
The 1D LUT currently accepts 4096 entries of 8-bit. The data is interpreted as an array of struct drm_color_lut elements. Setting TF = Identity/Default and LUT as NULL means bypass. References:
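As a side note for userspace developers, a minimal sketch of building such a drm_color_lut blob with libdrm follows; it assumes an open DRM file descriptor, fills an identity ramp purely as an example, and omits error handling:
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

static uint32_t create_identity_lut_blob(int fd)
{
	struct drm_color_lut lut[4096];
	uint32_t blob_id = 0;

	for (int i = 0; i < 4096; i++) {
		/* Evenly spaced identity ramp over the full 16-bit range. */
		uint16_t v = (uint16_t)(((uint32_t)i * 0xffff) / 4095);
		lut[i].red = lut[i].green = lut[i].blue = v;
		lut[i].reserved = 0;
	}
	drmModeCreatePropertyBlob(fd, lut, sizeof(lut), &blob_id);
	return blob_id; /* assign to the LUT property via an atomic commit */
}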

AMD Plane 3x4 CTM (Color Transformation Matrix) AMD Plane CTM data goes to the DPP Gamut Remap block, supporting a 3x4 fixed point (s31.32) matrix for color space conversions. The data is interpreted as a struct drm_color_ctm_3x4. Setting NULL means bypass. References:
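One practical detail worth spelling out: the s31.32 coefficients are sign-magnitude fixed-point values (1 sign bit, 31 integer bits, 32 fractional bits), so userspace has to pack them accordingly. A small sketch of the conversion (helper name is mine):
#include <stdint.h>

/* Pack a double into the sign-magnitude S31.32 fixed-point format used by
 * the KMS CTM properties. */
static uint64_t ctm_double_to_s3132(double v)
{
	uint64_t sign = 0;

	if (v < 0.0) {
		sign = 1ULL << 63;
		v = -v;
	}
	return sign | (uint64_t)(v * 4294967296.0); /* magnitude * 2^32 */
}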

AMD Plane Shaper: TF + 1D LUT Described by: dc->caps.color.dpp.hw_3d_lut The Shaper block fine-tunes color adjustments before applying the 3D LUT, optimizing the use of the limited entries in each dimension of the 3D LUT. On AMD hardware, a 3D LUT always means a preceding shaper 1D LUT used for delinearizing and/or normalizing the color space before applying a 3D LUT, so this entry on the DPP color caps (dc->caps.color.dpp.hw_3d_lut) means support for both the shaper 1D LUT and the 3D LUT. A pre-defined transfer function enables delinearizing content with or without a shaper LUT, where the AMD color module calculates the resulting shaper curve. Shaper curves go from linear values to encoded values. If we are already in a non-linear space and/or don't need to normalize values, we can set an Identity TF for the shaper, which works similarly to bypass and is also the default TF value. Pre-defined transfer functions:
  • there is no DPP Shaper ROM. Curves are calculated by AMD color modules. Check the calculate_curve() function in the file amd/display/modules/color/color_gamma.c.
  • supported curves: Identity, sRGB inverse EOTF, BT.709 OETF, PQ inverse EOTF, HLG OETF, and Gamma 2.2, Gamma 2.4, Gamma 2.6 inverse EOTF.
The 1D LUT currently accepts 4096 entries of 8-bit. The data is interpreted as an array of struct drm_color_lut elements. When setting Plane Shaper TF (!= Identity) and LUT at the same time, the color module will combine the pre-defined TF and the custom LUT values into the LUT that's actually programmed. Setting TF = Identity/Default and LUT as NULL works as bypass. References:

AMD Plane 3D LUT Described by: dc->caps.color.dpp.hw_3d_lut The 3D LUT in the DPP block facilitates complex color transformations and adjustments. A 3D LUT is a three-dimensional array where each element is an RGB triplet. As mentioned before, dc->caps.color.dpp.hw_3d_lut describes whether the DPP 3D LUT is supported. The AMD driver-specific interface advertises the size of a single dimension via the LUT3D_SIZE property. Plane 3D LUT is a blob property where the data is interpreted as an array of struct drm_color_lut elements and the number of entries is LUT3D_SIZE cubed. The array contains samples from the approximated function; values between samples are estimated by tetrahedral interpolation. The array is accessed with three indices, one for each input dimension (color channel), blue being the outermost dimension, red the innermost. This distribution is better visualized when examining the code in [RFC PATCH 5/5] drm/amd/display: Fill 3D LUT from userspace by Alex Hung:
+	for (nib = 0; nib < 17; nib++) {
+		for (nig = 0; nig < 17; nig++) {
+			for (nir = 0; nir < 17; nir++) {
+				ind_lut = 3 * (nib + 17*nig + 289*nir);
+
+				rgb_area[ind].red = rgb_lib[ind_lut + 0];
+				rgb_area[ind].green = rgb_lib[ind_lut + 1];
+				rgb_area[ind].blue = rgb_lib[ind_lut + 2];
+				ind++;
+			}
+		}
+	}
In our driver-specific approach we opted to advertise its behavior to the userspace instead of implicitly dealing with it in the kernel driver. AMD's hardware supports 3D LUTs with a single-dimension size of 17 or 9 (4913 and 729 entries respectively), and you can choose between 10-bit and 12-bit values. In the current driver-specific work we focus on enabling only the 17-size, 12-bit 3D LUT, as in [PATCH v3 25/32] drm/amd/display: add plane 3D LUT support:
+		/* Stride and bit depth are not programmable by API yet.
+		 * Therefore, only supports 17x17x17 3D LUT (12-bit).
+		 */
+		lut->lut_3d.use_tetrahedral_9 = false;
+		lut->lut_3d.use_12bits = true;
+		lut->state.bits.initialized = 1;
+		__drm_3dlut_to_dc_3dlut(drm_lut, drm_lut3d_size, &lut->lut_3d,
+					lut->lut_3d.use_tetrahedral_9,
+					MAX_COLOR_3DLUT_BITDEPTH);
A refined control of 3D LUT parameters should go through a follow-up version or generic API. Setting 3D LUT to NULL means bypass. References:

AMD Plane Blend/Out Gamma: TF + 1D LUT Described by: dc->caps.color.dpp.ogam_ram The Blend/Out Gamma block applies the final touch-up before blending, allowing users to linearize content after the 3D LUT and just before the blending. It supports both 1D LUT and pre-defined TF. We can see the Shaper and Blend LUTs as 1D LUTs that sandwich the 3D LUT. So, if we don't need 3D LUT transformations, we may want to use only the Degamma block to linearize and skip Shaper, 3D LUT and Blend. Pre-defined transfer function:
  • there is no DPP Blend ROM. Curves are calculated by AMD color modules;
  • supported curves: Identity, sRGB EOTF, BT.709 inverse OETF, PQ EOTF, HLG inverse OETF, and Gamma 2.2, Gamma 2.4, Gamma 2.6 EOTF.
The 1D LUT currently accepts 4096 entries of 8-bit. The data is interpreted as an array of struct drm_color_lut elements. If plane_blend_tf_property != Identity TF, the AMD color module will combine the user LUT values with the pre-defined TF into the LUT parameters to be programmed. Setting TF = Identity/Default and LUT to NULL means bypass. References:

MPC Color Pipeline: After Blending (Per CRTC)

DRM CRTC Degamma 1D LUT The degamma lookup table (LUT) for converting framebuffer pixel data before applying the color conversion matrix. The data is interpreted as an array of struct drm_color_lut elements. Setting NULL means bypass. Not really supported: the driver is currently reusing the DPP degamma LUT block (dc->caps.color.dpp.dgam_ram and dc->caps.color.dpp.gamma_corr) to support the DRM CRTC Degamma LUT, as explained in [PATCH v3 20/32] drm/amd/display: reject atomic commit if setting both plane and CRTC degamma.

DRM CRTC 3x3 CTM Described by: dc->caps.color.mpc.gamut_remap It sets the current transformation matrix (CTM) applied to pixel data after the lookup through the degamma LUT and before the lookup through the gamma LUT. The data is interpreted as a struct drm_color_ctm. Setting NULL means bypass.

DRM CRTC Gamma 1D LUT + AMD CRTC Gamma TF Described by: dc->caps.color.mpc.ogam_ram After all that, you might still want to convert the content to wire encoding. No worries: in addition to the DRM CRTC 1D LUT, we've got an AMD CRTC gamma transfer function (TF) to make it happen. Possible TF values are defined by enum amdgpu_transfer_function. Pre-defined transfer functions:
  • there is no MPC Gamma ROM. Curves are calculated by AMD color modules.
  • supported curves: Identity, sRGB inverse EOTF, BT.709 OETF, PQ inverse EOTF, HLG OETF, and Gamma 2.2, Gamma 2.4, Gamma 2.6 inverse EOTF.
The 1D LUT currently accepts 4096 entries of 8-bit. The data is interpreted as an array of struct drm_color_lut elements. When setting CRTC Gamma TF (!= Identity) and LUT at the same time, the color module will combine the pre-defined TF and the custom LUT values into the LUT that's actually programmed. Setting TF = Identity/Default and LUT to NULL means bypass. References:

Others

AMD CRTC Shaper and 3D LUT We have previously worked on exposing the CRTC shaper and CRTC 3D LUT, but they were removed from the AMD driver-specific color series because they lack a userspace use case. The CRTC shaper and 3D LUT work similarly to the plane shaper and 3D LUT, but after blending (MPC block). The difference here is that setting (rather than bypassing) the Shaper and Gamma blocks together is not expected, since both blocks are used to delinearize the input space. In summary, we either set Shaper + 3D LUT or Gamma.

Input and Output Color Space Conversion There are two other color capabilities of AMD display hardware that were integrated into DRM by previous work and are worth a brief explanation here. The DC Input CSC sets pre-defined coefficients from the values of the DRM plane color_range and color_encoding properties. It is used for color space conversion of the input content. On the other hand, we have the DC Output CSC (OCSC), which sets pre-defined coefficients from DRM connector colorspace properties. It is used for color space conversion of the composed image to the one supported by the sink. References:

The search for rainbow treasures is not over yet If you want to understand a little more about this work, be sure to watch the two talks Joshua and I presented at XDC 2023 about AMD/Steam Deck colors on Gamescope. In the time between the first and second part of this blog post, Uma Shashank and Chaitanya Kumar Borah published the plane color pipeline for Intel, and Harry Wentland implemented a generic API for DRM based on VKMS support. We discussed these two proposals and the next steps for Color on Linux during the Color Management workshop at XDC 2023, and I briefly shared the workshop results in the 2023 XDC lightning talk session. The search for rainbow treasures is not over yet! We plan to meet again next year at the 2024 Display Hackfest in A Coruña, Spain (Igalia's HQ) to keep up the pace and continue advancing today's display needs on Linux. Finally, a HUGE thank you to everyone who worked with me on exploring AMD's color capabilities and making them available in userspace.

22 October 2023

Aigars Mahinovs: Figuring out finances part 3

So now that I have something that looks very much like a budgeting setup going, I am going to .. delete it! Why? Well, at the end of the last part of this, the Firefly III instance was running on a tiny Debian server in a Docker container right next to another Docker container that is running the main user of this server - a Home Assistant instance that has been managing my home for several years already. So why change that? See, there is one bit of knowledge that is very crucial to your Home Assistant experience, which is not really emphasised enough in the Home Assistant documentation. In fact, back when I was getting into Home Assistant, both the main documentation and basically all the guides around were just coming off the hype of Docker disrupting everything, and that is a big reason why everyone suggested installing and using Home Assistant as a Docker container on top of any kind of stable OS. In fact I used to run it for years on my TerraMaster NAS, just so that I don't have a separate home server running 24/7 at home and just have everything inside the very compact NAS case. So here is the thing you NEED to know - Home Assistant Container is a DEMO version of Home Assistant! If you want to have a full Home Assistant experience and use the knowledge of the huge community around the HA space, you have to use the Home Assistant OS. Ideally on dedicated hardware. Ideally on a HA Green box, but any tiny PC would also work great. Raspberry Pi 4+ is common, but quite weak as the network size grows, and especially the SD card for storage gets old very fast. Get a real small x86 PC with at least 4Gb RAM and a NVME SSD (eMMC is fine too). You want to have an Ethernet port and a few free USB ports. I would also suggest immediately getting the HA SkyConnect adapter that can do Zigbee networking and will do Matter soon (tm). I am making do with a SonOff Zigbee gateway, but it is quite hacky to get working and your whole Zigbee communication breaks down if the WiFi goes down - suboptimal. So I took a backup of the Home Assistant instance using its built-in tools. I took an export of my fully configured Firefly III instance and proceeded to wipe the drive of the NUC. That was not a smart idea. :D On the Home Assistant side I was really frustrated by the documentation that was really focused on users that are (likely) using Windows and are using an SD card in something like a Raspberry Pi to get Home Assistant OS running. It recommended downloading Etcher to write the image to the boot medium. That is a really weird piece of software that managed to actually crash consistently when I was trying to run it from Debian Live or Ubuntu Live on my NUC. It took me way too long to give up and try something much simpler - dd. xzcat haos_generic-x86-64-11.0.img.xz | dd of=/dev/mmcblk0 bs=1M That just worked, perfectly and really fast. If you want to use a GUI in a live environment, then just using the gnome-disk-utility ("Disks" in the Gnome menu) and using "Restore Disk Image ..." on a partition would work just as well. It even supports decompressing the XZ images directly while writing. But that image is small, will it not have a ton of unused disk space behind the fixed install partition? Yes, it will ... until first boot. The HA OS takes over the empty space after its install partition on the first boot-up and just grows its main partition to take up all the remaining space. Smart.
After the first boot is completed, the first boot wizard can be accessed via your web browser and one of the prominent buttons there is restoring from backup. So you just give it the backup file and wait. Sadly the restore does not actually give any kind of progress, so your only way to figure out when it is done is opening the same web address in another browser tab and refreshing periodically - after restoring from backup it just boots into the same config as it had before - all the settings, all the devices, all the history is preserved. Even authentication tokens are preserved, so if you had the Home Assistant Mobile app installed on your phone (both for remote access and to send location info and phone state, like charging, to HA to trigger automations) then it will just suddenly start working again without further actions needed from your side. That is an almost perfect backup/restore experience. The first thing you get for using the OS version of HA is easy automatic updates that also automatically take a backup before upgrade, so if anything breaks you can roll back with one click. There is also a command-line tool that allows you to upgrade, but also downgrade, ha-core and other modules. I had to use it today as HA version 23.10.4 actually broke support for the Sonoff bridge that I am using to control Zigbee devices, which are like 90% of all smart devices in my home. Really helpful stuff, but not a must have. What is a must have, and what you can (really) only get with the Home Assistant Operating System, are Addons. Some addons are just normal servers you can run alongside HA on the same HA OS server, like MariaDB or Plex or a file server. That is not the most important bit, but even there the software comes pre-configured for use in a home server configuration and has a very simple config UI to pre-configure key settings, like users, passwords and database accesses for MariaDB - you can literally, in a few clicks and a few strings, make several users each with their own access to their own database. A couple more clicks and the DB is running and will be kept restarted in case of failures. But the real gems in the Home Assistant Addon Store are modules that extend Home Assistant core functionality in a way that would be really hard or near impossible to configure in Home Assistant Container manually, especially because no documentation has ever existed for such manual config - everyone just tells you to install the addon from the HA Addon store or from HACS. Or you can read the addon metadata in various repos and figure out what containers it actually runs with what settings and configs and what hooks it puts into the HA Core to make them cooperate. And then do it all over again when a new version breaks everything 6 months later when you have already forgotten everything. In the Addons that show up immediately after installation are addons to install the new Matter server, a MariaDB and MQTT server (that other addons can use for data storage and message exchange), Z-Wave support and ESPHome integration, a very handy File manager that includes editors to edit Home Assistant configs directly in the browser, and an SSH/Terminal addon that both allows SSH connections and provides a web based terminal that gives access to the OS itself and also to a command line interface, for example, to do package downgrades if needed or see detailed logs. And that is also where you can get the features that are the focus this year for HA developers - voice enablers. However that is only a beginning.
Like in Debian you can add additional repositories to expand your list of available addons. Unlike Debian, most of the amazing software that is available for Home Assistant is outside the main, official addon store. For now I have added the most popular addon repository - HACS (Home Assistant Community Store) - and the repository maintained by Alexbelgium. The first includes things like NodeRED (a workflow based automation programming UI), Tailscale/Wirescale for VPN servers, motionEye for CCTV control, Plex for home streaming. HACS also includes a lot of HA UI enhancement modules, like themes, custom UI control panels like Mushroom or mini-graph-card, and integrations that provide more advanced functions but also require more knowledge to use, like Local Tuya - that is harder to set up, but allows fully local control of (normally) cloud-based devices. And it has AppDaemon - basically a Python based automation framework where you put in Python scripts that get run in a special environment where they get fed events from Home Assistant and can trigger back events that can control everything HA can and also do anything Python can do. This I will need to explore later. And the repository by Alex includes the thing that is actually the focus of this blog post (I know :D) - the Firefly III addon and Firefly Importer addon that you can then add to your Home Assistant OS with a few clicks. It also has all kinds of addons for NAS management, photo/video servers, book servers and Portainer, which lets us set up and run any Docker container inside the HA OS structure. HA OS will detect this and warn you about unsupported processes running on your HA OS instance (nice security feature!), but you can just dismiss that. This will be very helpful very soon. This whole environment of OS and containers and apps really made me think - what was missing in Debian that made the talented developers behind all of that spend the immense time and effort to set up a completely new OS and app infrastructure and develop a completely parallel developer community for Home Assistant apps, interfaces and configurations? Is there anything that can still be done to bring the HA community and the general open source and Debian community closer together? HA devs are not doing anything wrong: they are using the best open source can provide, they bring it to people who could not install and use it otherwise, and they are contributing fixes and improvements as well. But there must be some way to do this better, together. So I installed MariaDB, created a user and database for Firefly. I installed Firefly III and configured it to use the MariaDB with the web config UI. When I went into the Firefly III web UI I was confronted with the normal wizard to set up a new instance. And no reference to any backup restore. Hmm, ok. Maybe that goes via the Importer? So I made an access token again, configured the Importer to use that, and configured the Nordlinger bank connection settings. Then I tried to import the export that I had downloaded from Firefly III before. The importer did not auto-recognize the format. Turns out it is just a list of transactions ... It can only be barely useful if you first manually create all the asset accounts with the same names as before, and even then you'll again have to deal with resolving the problem of transfers showing up twice. And all of your categories (that have not been used yet) are gone, your automation rules and bills are gone, your budgets and piggy banks are gone. Boooo.
It will be easier for me to recreate my account data from bank exports again than to resolve the data in that transaction export. Turns out that the Firefly III documentation explicitly recommends making a mysqldump of your own and not relying on anything in the app itself for backup purposes. Kind of sad this was not mentioned on the export page that sure looked a lot like a backup :D After doing all that work all over again I needed to make something new, so as not to feel like I wasted days of work for no real gain. So I started solving a problem I have had for a while already - how do I add cash transactions to the system when I am out of the house with just my phone in hand? So far my workaround has been just sending myself messages in WhatsApp with the amount and description of any cash expenses. Two solutions are possible: app and bot. There are actually multiple Android-based phone apps that work with the Firefly III API to do full financial management from the phone. However, after trying it out, that is not what I will be using most of the time. First of all this requires your Firefly III instance to be accessible from the Internet. Either via direct API access using some port forwarding and secured with HTTPS and good access tokens, or via a VPN server redirect that is installed on both HA and your phone. Tailscale was really easy to get working. But the power has its drawbacks - adding a new cash transaction requires opening the app, choosing the new transaction view, entering description and amount, choosing "Cash" as the source account and optionally choosing a destination expense account, choosing category and budget and then submitting the form to the server. Sadly none of that really works if you have no Internet or bad Internet at the place where you are using cash. And it's just too many steps. Annoying. An easier alternative is setting up a Telegram bot - it is running in a custom Docker container right next to your Firefly (via Portainer) and you talk to it via a custom Telegram chat channel that you create very easily and quickly. And then you can just tell it "Coffee 5" and it will create a transaction of amount 5 from the (default) cash account with description "Coffee". This part also works if you are offline at the moment - the bot will receive the message once you get back online. You can use the Telegram bot menu system to edit the transaction to add categories or expense accounts, but this part only works if you are online. And the Firefly instance does not have to be online at all. Really nifty. So next week I will need to write up all the regular payments as bills in Firefly (again) and then I can start writing a Python script to predict my (financial) future!

21 August 2023

Melissa Wen: AMD Driver-specific Properties for Color Management on Linux (Part 1)

TL;DR: Color is a visual perception. Human eyes can detect a broader range of colors than any devices in the graphics chain. Since each device can generate, capture or reproduce a specific subset of colors and tones, color management controls color conversion and calibration across devices to ensure a more accurate and consistent color representation. We can expose a GPU-accelerated display color management pipeline to support this process and enhance results, and this is what we are doing on Linux to improve color management on Gamescope/SteamDeck. Even with the challenges of being external developers, we have been working on mapping AMD GPU color capabilities to the Linux kernel color management interface, which is a combination of DRM and AMD driver-specific color properties. This more extensive color management pipeline includes pre-defined Transfer Functions, 1-Dimensional LookUp Tables (1D LUTs), and 3D LUTs before and after the plane composition/blending.
The study of color is well-established and has been explored for many years. Color science and research findings have also guided technology innovations. As a result, color in Computer Graphics is a very complex topic that I'm putting a lot of effort into becoming familiar with. I always find myself rereading all the materials I have collected about color space and operations since I started this journey (about one year ago). I also understand how hard it is to find consensus on some color subjects, as exemplified by all explanations around the 2015 online viral phenomenon of The Black and Blue Dress. Have you heard about it? What is the color of the dress for you? So, taking into account my skills with colors and building consensus, this blog post only focuses on GPU hardware capabilities to support color management :-D If you want to learn more about color concepts and color on Linux, you can find useful links at the end of this blog post.

Linux Kernel, show me the colors ;D The DRM color management interface only exposes a small set of post-blending color properties. Proposals to enhance the DRM color API from different vendors have landed on the subsystem mailing list over the last few years. On one hand, we got some suggestions to extend the DRM post-blending/CRTC color API: DRM CRTC 3D LUT for R-Car (2020 version); DRM CRTC 3D LUT for Intel (draft - 2020); DRM CRTC 3D LUT for AMD by Igalia (v2 - 2023); DRM CRTC 3D LUT for R-Car (v2 - 2023). On the other hand, some proposals to extend the DRM pre-blending/plane API: DRM plane colors for Intel (v2 - 2021); DRM plane API for AMD (v3 - 2021); DRM plane 3D LUT for AMD - 2021. Finally, Simon Ser sent the latest proposal in May 2023: Plane color pipeline KMS uAPI, from discussions in the 2023 Display/HDR Hackfest, and it is still under evaluation by the Linux Graphics community. All previous proposals seek a generic solution for expanding the API, but many seem to have stalled due to the uncertainty of matching the hardware capabilities of all vendors well. Meanwhile, the use of AMD color capabilities on Linux remained limited by the DRM interface, as the DCN 3.0 family color caps and mapping diagram shows for the Linux/DRM color interface without driver-specific color properties [*]. Bearing in mind that we need to know the variety of color pipelines in the subsystem to be clear about a generic solution, we decided to approach the issue from a different perspective and worked on enabling a set of Driver-Specific Color Properties for AMD Display Drivers. As a result, I recently sent another round of the AMD driver-specific color mgmt API. For those who have been following the AMD driver-specific proposal since the beginning (see [RFC][V1]), the main new features of the latest version [v2] are the addition of a pre-blending Color Transformation Matrix (plane CTM) and the differentiation of Pre-defined Transfer Functions (TF) supported by color blocks. For those who just got here, I will recap this work in two blog posts. This one describes the current status of the AMD display driver in the Linux kernel/DRM subsystem and what changes with the driver-specific properties. In the next post, we go deeper to describe the features of each color block and provide a better picture of what is available in terms of color management for Linux.

The Linux kernel color management API and AMD hardware color capabilities Before discussing colors in the Linux kernel with AMD hardware, consider accessing the Linux kernel documentation (version 6.5.0-rc5). In the AMD Display documentation, you will find my previous work documenting AMD hardware color capabilities and the Color Management Properties. It describes how AMD Display Manager (DM) intermediates requests between the AMD Display Core component (DC) and the Linux/DRM kernel interface for color management features. It also describes the relevant function to call the AMD color module in building curves for content space transformations. A subsection also describes hardware color capabilities and how they evolve between versions. This subsection, DC Color Capabilities between DCN generations, is a good starting point to understand what we have been doing on the kernel side to provide a broader color management API with AMD driver-specific properties.

Why do we need more kernel color properties on Linux? Blending is the process of combining multiple planes (framebuffers abstraction) according to their mode settings. Before blending, we can manage the colors of various planes separately; after blending, we have combined those planes in only one output per CRTC. Color conversions after blending would be enough in a single-plane scenario or when dealing with planes in the same color space on the kernel side. Still, it cannot help to handle the blending of multiple planes with different color spaces and luminance levels. With plane color management properties, userspace can get a more accurate representation of colors to deal with the diversity of color profiles of devices in the graphics chain, bring a wide color gamut (WCG), and convert High-Dynamic-Range (HDR) content to Standard-Dynamic-Range (SDR) content (and vice-versa). With a GPU-accelerated display color management pipeline, we can use hardware blocks for color conversions and color mapping and support advanced color management. The current DRM color management API enables us to perform some color conversions after blending, but there is no interface to calibrate input space by planes. Note that here I'm not considering some workarounds in the AMD display manager mapping of the DRM CRTC de-gamma and DRM CRTC CTM properties to the pre-blending DC de-gamma and gamut remap blocks, respectively. So, in more detail, it only exposes three post-blending features (a small inspection sketch follows the list):
  • DRM CRTC de-gamma: used to convert the framebuffer's colors to linear gamma;
  • DRM CRTC CTM: used for color space conversion;
  • DRM CRTC gamma: used to convert colors to the gamma space of the connected screen.
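As a small illustration (my addition, not part of the original post, and the driver name is an assumption): the modetest tool that ships with libdrm can list which of these color properties a driver actually exposes on its CRTCs and planes, and the driver-specific properties described below show up in the same listing:
# list object properties exposed by the driver and filter for the
# color management properties discussed here
modetest -M amdgpu -p | grep -iE 'DEGAMMA_LUT|CTM|GAMMA_LUT'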

AMD driver-specific color management interface We can compare the Linux color management API with and without the driver-specific color properties. From now on, we denote driver-specific properties with the AMD prefix and generic properties with the DRM prefix. For visual comparison, I bring the DCN 3.0 family color caps and mapping diagram closer and present it here again. Mixing AMD driver-specific color properties with DRM generic color properties, we have a broader Linux color management system with the following features exposed by properties in the plane and CRTC interface, as summarized by this updated diagram. The blocks highlighted by red lines are the new properties in the driver-specific interface developed by me (Igalia) and Joshua (Valve). The red dashed lines are new links between the API and AMD driver components implemented by us to connect the Linux/DRM interface to AMD hardware blocks, mapping components accordingly. In short, we have the following color management properties exposed by the DRM/AMD display driver:
  • Pre-blending - AMD Display Pipe and Plane (DPP):
    • AMD plane de-gamma: 1D LUT and pre-defined transfer functions; used to linearize the input space of a plane;
    • AMD plane CTM: 3x4 matrix; used to convert plane color space;
    • AMD plane shaper: 1D LUT and pre-defined transfer functions; used to delinearize and/or normalize colors before applying 3D LUT;
    • AMD plane 3D LUT: 17x17x17 size with 12 bit-depth; a three-dimensional lookup table used for advanced color mapping;
    • AMD plane blend/out gamma: 1D LUT and pre-defined transfer functions; used to linearize back the color space after 3D LUT for blending.
  • Post-blending - AMD Multiple Pipe/Plane Combined (MPC):
    • DRM CRTC de-gamma: 1D LUT (can't be set together with plane de-gamma);
    • DRM CRTC CTM: 3x3 matrix (remapped to post-blending matrix);
    • DRM CRTC gamma: 1D LUT + AMD CRTC gamma TF; added to take advantage of driver pre-defined transfer functions.
Note: You can find more about AMD display blocks in the Display Core Next (DCN) - Linux kernel documentation, provided by Rodrigo Siqueira (Linux/AMD display developer) in a 2021-documentation series. In the next post, I'll revisit this topic, explaining display and color blocks in detail.

How did we get a large set of color features from AMD display hardware? So, looking at AMD hardware color capabilities in the first diagram, we can see no post-blending (MPC) de-gamma block in any hardware family. We can also see that the AMD display driver maps CRTC/post-blending CTM to pre-blending (DPP) gamut_remap, but there is a post-blending (MPC) gamut_remap (DRM CTM) from newer hardware versions that include SteamDeck hardware. You can find more details about hardware versions in the Linux kernel documentation/AMDGPU Product Information. I needed to rework these two mappings mentioned above to provide pre-blending/plane de-gamma and CTM for SteamDeck. I changed the DC mapping to detach stream gamut remap matrices from the DPP gamut remap block. That means mapping AMD plane CTM directly to the DPP/pre-blending gamut remap block and DRM CRTC CTM to the MPC/post-blending gamut remap block. In this sense, I also limited plane CTM properties to those hardware versions with MPC/post-blending gamut_remap capabilities, since older versions cannot support this feature without clashes with DRM CRTC CTM. Unfortunately, I couldn't prevent conflict between AMD plane de-gamma and DRM plane de-gamma since post-blending de-gamma isn't available in any AMD hardware version until now. The fact is that a post-blending de-gamma makes little sense in the AMD color pipeline, where plane blending works better in a linear space, and there are enough color blocks to linearize content before blending. To deal with this conflict, the driver now rejects atomic commits if users try to set both AMD plane de-gamma and DRM CRTC de-gamma simultaneously. Finally, we had no other clashes when enabling other AMD driver-specific color properties for our use case, Gamescope/SteamDeck. Our main work for the remaining properties was understanding the data flow of each property, the hardware capabilities and limitations, and how to shape the data for programming the registers - AMD color block capabilities (and limitations) are the topics of the next blog post. Besides that, we fixed some driver bugs along the way, since it was the first Linux use case for most of the new color properties, and some behaviors are only exposed when exercising the engine. Take a look at the Gamescope/Steam Deck Color Pipeline[**], and see how Gamescope uses the new API to manage color space conversions and calibration. In the next blog post, I'll describe the implementation and technical details of each pre- and post-blending color block/property on the AMD display driver. * Thanks to Harry Wentland for helping with diagrams, color concepts and AMD capabilities. ** Thanks to Joshua Ashton for providing and explaining the Gamescope/Steam Deck color pipeline. *** Thanks to the Linux Graphics community - explicitly Harry, Joshua, Pekka, Simon, Sebastian, Siqueira, Alex H. and Ville - for all the learning during this Linux DRM/AMD color journey. Also, thanks to Carlos and Tomas for organizing the 2023 Display/HDR Hackfest, where we had a great and immersive opportunity to discuss Color & HDR on Linux.

19 July 2023

Shirish Agarwal: RISC-V, Chips Act, Burning of Books, Manipur

RISC-V Motherboard, SBC While I didn't want to, a part of me is hyped about this motherboard. This would probably be launched somewhere in November. There are obvious issues with this, the first being that unlike regular motherboards you wouldn't be able to upgrade it as you normally would: you can't upgrade the memory, and you can't upgrade the CPU (although new versions of instructions could be uploaded, similar to BIOS updates), as the hardware is integrated (the quad-core SiFive Performance P550 core complex), so it would really depend. If the final pricing is around INR 4-5k then it may be able to sell handsomely, provided there are people to push and provide support around it. A 500 GB or 1 TB SSD coupled with it and a cheap display unit and you could use it anywhere, although it's more for tinkering, as the name suggests. Another board that could perhaps be of more immediate use would be the BeagleBoard. They launched the same a couple of days back and called it BeagleV-Ahead. Again, costs are going to be a concern. Just a year before the pandemic the BeagleBone Black (BB) used to cost in the sub-4k range; today it costs 8k+ for the end user, more than twice the price. How much Brexit is to blame for this and how much Indian customs, we will never know. The RS Group that is behind that shop is headquartered in the UK. As said before, we do not know the price of either board, as it will probably take a few months for the V-Ahead to worm its way into the Indian market, maybe another 6 months or so. Even so, with the limited info on both boards, I am tilting more towards the other HiFive one. We should come to know about the boards in, say, 3-5 months' time.

CHIPS Act I had shared about the CHIPS Act a few times here as well as on SM. Two articles tell how the CHIPS Act 2023 is more of a political tool, an industrial defence policy, rather than just business as most people tend to think of it.

Cancelation of Books, Book Burning etc. Almost 2400 years ago, Plato released his work called Plato's Republic, and perhaps the most famous of the seminal works within it is the Allegory of the Cave. That is used again and again in myriad ways, mostly in science fiction though, and mostly to do with utopian and dystopian movies, webseries etc. I did share how books are being canceled in the States, also a bit here. But the most damning thing has happened throughout history: huge quantities of books burned, almost all for politics. But part of it has been neglect as well, as this Time article shares. What we have lost and continue to lose is just priceless. Every book has a grain of truth in it, some more, some less, but all equally enjoyable. Most harmful is the neglect towards books, and this is more true today than at any other time in history. Kids today have a wide variety of tools to keep themselves happy or occupied, from anime, VR, gaming; the list goes on and on. In that scenario, how can the humble book compete? People think of Kindle, but most e-readers like Kindle are nothing but obsolescence by design. I have tried out Kindle a few times but find it a bit on the flimsy side. Books are much better IMHO, or call me old-school. While there are many advantages, one of the things that I like about books is that you can easily put yourself in either the protagonist or the antagonist or somewhere in the middle and think of the possible scenarios wherever you are in a particular book. I could go on but it would be a blog post or two in itself. Till later. Happy Reading.

Update: Manipur Extremely horrifying visuals, articles and statements continue to emanate from Manipur. Today, 19th July 2023, just a couple of hours back, a video surfaced showing two Kuki women stripped naked, with Meitei men touching their private parts. Later on, we came to know that this was in response to disinformation spread by the Meiteis about a few women being raped, although no documentary evidence of the same surfaced: no names, nothing. While I don't want to share the video, I will however share the statement shared by the Kuki-Zo tribal community on it. The Print gives a bit more context to what has been going on.
Update, a few hours later: The Print also shared more context about six days ago. The reason we saw the video only now is that for the last 2.5 months Manipur has been under an Internet shutdown, so those videos got uploaded only now. There was a huge backlash from the Twitter community, and the GOI ordered the Manipur Police to issue this press release yesterday night, or just a few hours before, with yesterday's time-stamp.
IndianExpress shared an article that does state that while an FIR had been registered immediately, there have been no arrests so far, and this is when you can see the faces of all the accused. Not one of them tried to hide their face behind a mask or anything. So, if the police wanted, they could have easily identified who they are. They know which community the accused belong to; they even know where they came from. If they wanted to, they could have easily used mobile data and triangulation to find the accused and their helpers. So, it does seem to be an attempt to whitewash and protect a certain community while letting it prey on the other. Another piece of news that did come in: because of the furious reaction on Twitter, YouTube has constantly been taking down the video, as some people, more so from the majoritarian community, are getting a sort of high from it and making lewd remarks. Twitter has been somewhat quick when people are making lewd remarks against the two girls/women. Quite a bit of the above seems like a cover-up. Lastly, apparently the GOI has agreed to have a conversation about it in the Lok Sabha, but without any voting or passing any resolutions as of right now. Will update as and when things change. Update: Smriti Irani, the Child and Development Minister, gave the weakest statement possible.
As can be noticed, she said sexual assault rather than rape. The women were under police custody for safety when they were whisked away by the mob. No mention of that. She spoke to the Chief Minister, who has been publicly known as one of the provocateurs or instigators of the whole thing. The CM had publicly called the Kukis and Nagas foreigners, although both of them claim to have been residing there for thousands of years and they apparently have documentary evidence of the same. Also, it is not clear who is doing the condemning here. No word of support for the women, no offer of intervention. Why is she the Minister of Child and Women Development (CDW) if she can't use harsh words or give support to the women who have gone and are going through horrific things? Update: CM Biren Singh's statement after the video surfaced.
This tweet is contradictory to the statements made by Mr. Singh a couple of months ago. At that point in time, Mr. Singh had said that the NIA, State Intelligence Departments etc. were giving him minute-to-minute reports on the ground situation. The Police itself has suo-moto (on its own) powers to investigate and apprehend criminals for any crime. In fact, the Police can call anybody for questioning in relation to any crime and question them for up to 48 hours before charging them. In fact, many cases have been lodged where innocent persons have been framed, or have served much more time in jail than the crime they are alleged to have committed would warrant. For example, just a few days before, there was a media report of a boy who has been in jail for 3 years. His alleged crime: stealing a mere INR 200/- to feed himself. The court doesn't have time to listen to him yet. And there are millions like him. The Quint eloquently shares the tale, telling how both the State and the Centre have been explicitly complicit in the incidents ravaging Manipur. In fact, what has been shared in the article has been very true as far as greed for land is concerned. Just a couple of weeks back there have been a ton of floods emanating from Uttarakhand and elsewhere. Just before the flooding began, what the CM was doing can be seen here. Apart from the newspapers I have shared and the online resources, most of the mainstream media has been silent on the above. In fact, they were silent on the Manipur issue until the said video came into the limelight. Just now, in the Lok Sabha everybody is present except the Prime Minister and the Home Minister. The PM did say that the law will take its own course, but that's about it. Again, no support for the women concerned. Update: The CJI (Chief Justice of India) has taken suo-moto cognizance and has warned both the State and the Centre to move quickly, otherwise the court will take the matter into its own hands.
Update: Within 2 hours of the CJI taking suo-moto cognizance, they have arrested one of the main accused, Heera Das.
The above tells you why the ban on the Internet was put in place to begin with. They wanted to cover it all up. Of all the celebs, only one could find a bit of spine, a bit of backbone, to speak about it; all the rest stayed mum.
Just imagine: one of the women is around my age, while the younger one could have been a daughter if I had married on time, or a younger sister for sure. If I ever came face to face with them, I just wouldn't be able to look them in the eye. Even their whole whataboutery is built on a sham. In their view, the Kukis are from Burma or of Burmese descent. All of which could easily be proved or disproved by DNA testing. But let's leave that for a sec. Let's take their own argument that the Kukis are Burmese. Their idea of Akhand Bharat stretches all the way to Burma (now called Myanmar). They want all the land but have no idea what to do with the citizens living on it. Even after the video, the whataboutery isn't stopping; that shows how much hatred there is. And they do not know that they too will be victims of the same venom one day or another. Update: The Opposition was told there would be a debate on Manipur. The whole day went by; no debate. That's the shamelessness of this Govt. Update 20th July 19:25: The Centre may or may not act against the perpetrators, but they will act against Twitter, which showed the crime. Talk about shooting the messenger.
We are now in the last stage. In 2014, we were at 6

8 July 2023

Russell Coker: Sandboxing Phone Apps

As a follow up to Wayland [1]: A difficult problem with Linux desktop systems (which includes phones and tablets) is restricting application access so that applications can't mess with each other's data or configuration but also allowing them to share data as needed. This has been mostly solved for Android but that involved giving up all legacy Linux apps. I think that we need to get phones capable of running a full desktop environment and having Android-level security on phone apps and regular desktop apps. My interest in this is phones running Debian and derivatives such as PureOS. But everything I describe in this post should work equally well for all full featured Linux distributions for phones such as Arch, Gentoo, etc and phone based derivatives of those such as Manjaro. It may be slightly less applicable to distributions such as Alpine Linux and its phone derivative PostmarketOS; I would appreciate comments from contributors to PostmarketOS or Alpine Linux about this. I've investigated some of the ways of solving these problems. Many of the ways of doing this involve namespaces. The LWN articles about namespaces are a good background to some of these technologies [2]. The LCA keynote lecture Containers aka crazy user space fun by Jess Frazelle has a good summary of some of the technology [3]. One part that I found particularly interesting was the bit about recognising the container access needed at compile time. This can also be part of recognising bad application design at compile time; it's quite common for security systems to flag bad security design in programs. Firejail To sandbox applications you need to have some method of restricting what they do, this means some combination of namespaces and similar features. Here's an article on sandboxing with firejail [4]. Firejail uses namespaces, seccomp-bpf, and capabilities to restrict programs. It allows bind mounts if run as root, and if not run as root it can restrict file access by name and access to networking, system calls, and other features. It has a convenient learning mode that generates policy for you, so if you have a certain restricted set of tasks that an application is to perform you can run it once and then force it to do only the same operations in future. I recommend that everyone who is still reading at this point try out firejail. Here's an example of what you can do:
# create a profile
firejail --build=xterm.profile xterm
# now this run can only do what the previous run did
firejail --profile=xterm.profile xterm
Note that firejail is SETUID root so it can potentially reduce system security, and it has had security issues in the past. In spite of that it can be good for allowing a trusted user to run programs with less access to the system. Also it is a good way to start learning about such things. I don't think it's a good solution for what I want to do, but I won't rule out the possibility of using it at some future time for special situations. Bubblewrap I tried out firejail with the browser Epiphany (Debian package epiphany-browser) on my Librem5, but that didn't work as Epiphany uses /usr/bin/bwrap (bubblewrap) for its internal sandboxing (here is an informative blog post about the history of bubblewrap AKA xdg-app-helper, which was developed as part of flatpak [5]). The Epiphany bubblewrap sandbox is similar to the situation with Chrome/Chromium, which have internal sandboxing that's incompatible with firejail. The firejail man page notes that it's not compatible with Snap, Flatpak, and similar technologies. One issue this raises is that we can't have a namespace-based sandboxing system applied to all desktop apps with no extra configuration, as some desktop apps won't work with it. Bubblewrap requires setting kernel.unprivileged_userns_clone=1 to run as non-root (IE provide the normal and expected functionality), which potentially reduces system security. Here is an example of a past kernel bug that was exploitable by creating a user namespace with CAP_SYS_ADMIN [6]. But it's the default in recent Debian kernels, which means that the issues have been reviewed and determined to be a reasonable trade-off, and also means that many programs will use the feature and break if it's disabled. Here is an example of how to use Bubblewrap on Debian: after installing the bubblewrap package, run the following command. Note that the new-session option (to prevent injecting characters into the keyboard buffer with TIOCSTI) makes the session mostly unusable for a shell.
bwrap --ro-bind /usr /usr --symlink usr/lib64 /lib64 --symlink usr/lib /lib --proc /proc --dev /dev --unshare-pid --die-with-parent bash
Here is an example of using Bubblewrap to sandbox the game Warzone2100 running with Wayland/Vulkan graphics and Pulseaudio sound.
bwrap --bind $HOME/.local/share/warzone2100 $HOME/.local/share/warzone2100 --bind /run/user/$UID/pulse /run/user/$UID/pulse --bind /run/user/$UID/wayland-0 /run/user/$UID/wayland-0 --bind /run/user/$UID/wayland-0.lock /run/user/$UID/wayland-0.lock --ro-bind /usr /usr --symlink usr/bin /bin --symlink usr/lib64 /lib64 --symlink usr/lib /lib --proc /proc --dev /dev --unshare-pid --dev-bind /dev/dri /dev/dri --ro-bind $HOME/.pulse $HOME/.pulse --ro-bind $XAUTHORITY $XAUTHORITY --ro-bind /sys /sys --new-session --die-with-parent warzone2100
Here is an example of using Bubblewrap to sandbox the Debian bug reporting tool reportbug
bwrap --bind /tmp /tmp --ro-bind /etc /etc --ro-bind /usr /usr --ro-bind /var/lib/dpkg /var/lib/dpkg --symlink usr/sbin /sbin --symlink usr/bin /bin --symlink usr/lib64 /lib64 --symlink usr/lib /lib --symlink /usr/lib32 /lib32 --symlink /usr/libx32 /libx32 --proc /proc --dev /dev --die-with-parent --unshare-ipc --unshare-pid reportbug
Here is an example shell script to wrap the build process for Debian packages. This needs to run with unshare-user and specifying the UID as 0 because fakeroot doesn't work in the container; I haven't worked out why, but doing it through the container is a better method anyway. This script shares read-write the parent of the current directory, as the Debian build process creates packages and metadata files in the parent directory. This will prevent the automatic signing scripts from running, which is a feature not a bug, so after building packages you have to sign the .changes file with debsign (see the usage sketch after the script). One thing I just learned is that the Debian build system Sbuild can use chroots for building packages for similar benefits [7]. Some people believe that sbuild is the correct way of doing it regardless of the chroot issue. I think it's too heavy-weight for most of my Debian package building, but even if I had been convinced I'd still share the information about how to use bwrap, as Debian is about giving users choice.
#!/bin/bash
set -e
BUILDDIR=$(realpath $(pwd)/..)
exec bwrap --bind /tmp /tmp --bind $BUILDDIR $BUILDDIR --ro-bind /etc /etc --ro-bind /usr /usr --ro-bind /var/lib/dpkg /var/lib/dpkg --symlink usr/bin /bin --symlink usr/lib64 /lib64 --symlink usr/lib /lib --proc /proc --dev /dev --die-with-parent --unshare-user --unshare-ipc --unshare-net --unshare-pid --new-session --uid 0 --gid 0 "$@"
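A usage sketch (my example; the script name and package names are hypothetical): save the script above as bwrap-build, run the build from the package source directory, and sign the results outside the sandbox, since the automatic signing is blocked as described.
# build inside the sandbox; packages land in the parent directory
cd mypackage-1.0
bwrap-build dpkg-buildpackage -us -uc
# sign the .changes file outside the sandbox
debsign ../mypackage_1.0-1_amd64.changes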
Here is an informative blog post about using Bubblewrap with Seccomp (BPF) [8]. In a future post I'll write about how to get this sort of thing going, but I'll just leave the URL here for people who want to do it on their own. The source for the flatpak-run program is the only example I could find of using Seccomp with Bubblewrap [9]. A lot of that code is worth copying for application sandboxing, maybe the entire program. Unshare The unshare command from the util-linux package has a large portion of the Bubblewrap functionality. The things that it doesn't do, like creating a new session, can be done by other utilities. Here is an example of creating a container with unshare and then using cgroups with it [10]. systemd --user Recent distributions have systemd support for running a user session; the Arch Linux Wiki has a good description of how this works [11]. The units for a user are .service files stored in /usr/lib/systemd/user/ (distribution provided), ~/.local/share/systemd/user/ (user-installed applications; in Debian a link to ~/.config/systemd/user/), ~/.config/systemd/user/ (for manual user config), and /etc/systemd/user/ (local sysadmin provided). Here are some example commands for manipulating this:
# show units running for the current user
systemctl --user
# show status of one unit
systemctl --user status kmail.service
# add an environment variable to the list for all user units
systemctl --user import-environment XAUTHORITY
# start a user unit
systemctl --user start kmail.service
# analyse security for all units for the current user
systemd-analyze --user security
# analyse security for one unit
systemd-analyze --user security kmail.service
Here is a test kmail.service file I wrote to see what could be done for kmail. I don't think that kmail is the app most needing to be restricted (it is in more need of being protected from other apps), but it still makes a good test case. This service file took it from the default risk score of 9.8 (UNSAFE) to 6.3 (MEDIUM), even though I was getting the error code=exited, status=218/CAPABILITIES when I tried anything that used capabilities (apparently due to systemd having some issue talking to the kernel).
[Unit]
Description=kmail
[Service]
ExecStart=/usr/bin/kmail
# can not limit capabilities (code=exited, status=218/CAPABILITIES)
#CapabilityBoundingSet=~CAP_SYS_TIME CAP_SYS_PACCT CAP_KILL CAP_WAKE_ALARM CAP_DAC_OVERRIDE CAP_DAC_READ_SEARCH CAP_FOWNER CAP_IPC_OWNER CAP_LINUX_IMMUTABLE CAP_IPC_LOCK CAP_SYS_MODULE CAP_SYS_TTY_CONFIG CAP_SYS_BOOT CAP_SYS_CHROOT CAP_BLOCK_SUSPEND CAP_LEASE CAP_MKNOD CAP_CHOWN CAP_FSETID CAP_SETFCAP CAP_SETGID CAP_SETUID CAP_SETPCAP CAP_SYS_RAWIO CAP_SYS_PTRACE CAP_SYS_NICE CAP_SYS_RESOURCE CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SYS_ADMIN CAP_SYSLOG
# also 218 for ProtectKernelModules PrivateDevices ProtectKernelLogs ProtectClock
# MemoryDenyWriteExecute stops it displaying message content (bad)
# needs @resources and @mount to startup
# needs @privileged to display message content
SystemCallFilter=~@cpu-emulation @debug @raw-io @reboot @swap @obsolete
SystemCallArchitectures=native
UMask=077
NoNewPrivileges=true
ProtectControlGroups=true
PrivateMounts=false
RestrictNamespaces=~user pid net uts mnt cgroup ipc
RestrictSUIDSGID=true
ProtectHostname=true
LockPersonality=true
ProtectKernelTunables=true
RestrictAddressFamilies=~AF_PACKET
RestrictRealtime=true
ProtectSystem=strict
ProtectProc=invisible
PrivateUsers=true
[Install]
When I tried to use the TemporaryFileSystem=%h directive (to make the home directory a tmpfs, the most basic step in restricting what a regular user application can do) I got the error (code=exited, status=226/NAMESPACE). So I don't think the systemd user setup competes with bubblewrap for restricting user processes. But if anyone else can start where I left off and go further then that will be interesting. Systemd-run The following shell script runs firefox as a dynamic user via systemd-run. Running this asks for the root password, and any mechanism for allowing that sort of thing opens potential security holes. So at this time, while it's an interesting feature, I don't think it is suitable for running regular applications on a phone or Linux desktop.
#!/bin/bash
# systemd-run Firefox with DynamicUser and home directory.
#
# Run as a non-root user.
# Or, run as root and change $USER below.
SANDBOX_MINIMAL=(
    --property=DynamicUser=1
    --property=StateDirectory=openstreetmap
    # --property=RootDirectory=/debian_sid
)
SANDBOX_X11=(
    # Sharing Xorg always defeats security, regardless of any sandboxing tech,
    # but the config is almost ready for Wayland, and there's Xephyr.
#    --property=LoadCredential=.ICEauthority:/home/$USER/.ICEauthority
    --property=LoadCredential=.Xauthority:/home/$USER/.Xauthority
    --property=Environment=DISPLAY=:0
)
SANDBOX_FIREFOX=(
    # hardware-accelerated rendering
    --property=BindPaths=/dev/dri
    # webcam
    # --property=SupplementaryGroups=video
)
systemd-run \
    "${SANDBOX_MINIMAL[@]}" "${SANDBOX_X11[@]}" "${SANDBOX_FIREFOX[@]}" \
    bash -c '
        export XAUTHORITY="$CREDENTIALS_DIRECTORY"/.Xauthority
        export ICEAUTHORITY="$CREDENTIALS_DIRECTORY"/.ICEauthority
        export HOME="$STATE_DIRECTORY"/home
        firefox --no-remote about:blank
    '
Qubes OS Here is an interesting demo video of QubesOS [12] which shows how it uses multiple VMs to separate different uses. Here is an informative LCA presentation about Qubes which shows how it asks the user about communication between VMs and access to hardware [13]. I recommend that everyone who hasn't seen Qubes in operation watch the first video, and everyone who isn't familiar with the computer science behind it watch the second video. Qubes appears to be a free software equivalent to NetTop, as far as I can tell without ever being able to use NetTop. I don't think Qubes is a good match for my needs in general use and it definitely isn't a good option for phones (VMs use excessive CPU on phones). But its methods for controlling access have some ideas that are worth copying. File Access XDG Desktop Portal One core issue for running sandboxed applications is to allow them to access files permitted by the user but no other files. There are two main parts to this problem; the easier one is to have each application have its own private configuration directory, which can be addressed by bind mounts, MAC systems, running each application under a different UID or GID, and many other ways. The hard part of file access is to allow the application to access random files that the user wishes. For example, I want my email program, IM program, and web browser to be able to save files and also to be able to send arbitrary files via email, IM, and upload to web sites. But I don't want one of those programs to be able to access all the files from the others if it's compromised. So only giving programs access to arbitrary files when the user chooses such a file makes sense. There is a package xdg-desktop-portal which provides a dbus interface for opening files etc for a sandboxed application [14]. This portal has backends for KDE, GNOME, and Wayland among others, which allow the user to choose which file or files the application may access. Chrome/Chromium is one well known program that uses the xdg-desktop-portal and does its own sandboxing. To use xdg-desktop-portal an application must be modified to use that interface instead of opening files directly, so getting this going with all Internet-facing applications will take some work. But the documentation notes that the portal API gives a consistent user interface for operations such as opening files, so it can provide benefits even without a sandboxed environment. This technology was developed for Flatpak and is now also used for Snap. It also has a range of APIs for accessing other services [15]. Flatpak Flatpak is a system for distributing containerised versions of applications with some effort made to improve security. Their development of bubblewrap and xdg-desktop-portal is really good work. However the idea of having software packaged with all the libraries it needs isn't a good one; here's a blog post covering some of the issues [16]. The web site flatkill.org has been registered to complain about some Flatpak problems [17]. They have some good points about the approach that Flatpak project developers have taken towards some issues. They also make some points about the people who package software not keeping up to date with security fixes and not determining a good security policy for their pak. But this doesn't preclude usefully using parts of the technology for real security benefits.
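Returning to the xdg-desktop-portal interface described above, here is a minimal sketch of poking it directly over D-Bus (my example, not from the original post; a real sandboxed application would go through its toolkit or libportal instead). The call returns a Request object path, and the URI of the file the user picked arrives asynchronously via that object's Response signal:
# ask the FileChooser portal to open a file; the portal service draws
# the dialog, so the application never sees the rest of the filesystem
gdbus call --session \
    --dest org.freedesktop.portal.Desktop \
    --object-path /org/freedesktop/portal/desktop \
    --method org.freedesktop.portal.FileChooser.OpenFile \
    "" "Select a file" "{}"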
If parts of Flatpak like Bubblewrap and xdg-portal are used with good security policies on programs that are well packaged for a distribution then these issues would be solved. The Flatpak app author's documentation about package requirements [18] has an overview of security features that is quite reasonable. If most paks follow that then it probably isn't too bad. I reviewed the manifests of a few of the recent paks and they seemed to have good settings. In the amount of time I was prepared to spend investigating this I couldn't find evidence to support the Flatkill claim about Flatpaks granting obviously inappropriate permissions. But the fact that the people who run Flathub decided to put a graph of installs over time on the main page for each pak, while making the security settings only available by clicking the Manifest github link, clicking on a JSON or YAML file, and then searching for the right section in that, shows where their priorities lie. The finish-args section of the Flatpak manifest (the section that describes the access to the system) seems reasonably capable and not difficult for users to specify, as well as being in common use. It seems like it will be easy enough to take some code from Flatpak for converting the finish-args into Bubblewrap parameters and then use the manifest files from Flathub as a starting point for writing application security policy for Debian packages. Snap Snap is developed by Canonical and seems like their version of Flatpak with some Docker features for managing versions; the Getting Started document is worth reading [19]. They have Connections between different snaps and the system, where a snap registers a plug that connects to a socket which can be exposed by the system (EG the camera socket) or another snap. The local admin can connect and disconnect them. The connections design reminds me of the Android security model with permitting access to various devices. The KDE Neon extension [20] has been written to add Snap support to KDE. Snap seems quite usable if you have an ecosystem of programs using it, which Canonical has developed. But it has all the overheads of loopback mounts etc that you don't want on a mobile device, and it has the security issues of libraries included in snaps not being updated. A quick inspection of an Ubuntu 22.04 system I run (latest stable release) has Firefox 114.0.2-1 installed which includes libgcrypt.so.20.2.5, which is apparently libgcrypt 1.8.5, and there are CVEs relating to libgcrypt versions before 1.9.4 and 1.8.x versions before 1.8.8 which were published in 2021 and updated in 2022. Further investigation showed that libgcrypt came from the gnome-3-38-2004 snap (good that it doesn't require all shared objects to be in the same snap, but bad that it has old versions in dependencies). The gnome-3-38-2004 snap is the latest version, so anyone using the Snap of Firefox seems to have no choice but to have what appear to be insecure libraries. The strict mode means that the Snap in question has no system access other than through interfaces [21]. SE Linux and Apparmor The Librem5 has Apparmor running by default. I looked into writing Apparmor policy to prevent Epiphany from accessing all files under the home directory, but that would be a lot of work. Also at least one person has given up on maintaining an Epiphany profile for Apparmor because it changes often and its sandbox doesn't work well with Apparmor [22]. This was not a surprise to me at all; SE Linux policy has the same issues as Apparmor in this regard.
The Ubuntu Security Team Application Confinement document [23] is worth reading. They have some good plans for using AppArmor as part of solving some of these problems. I plan to use SE Linux for that. Slightly Related Things One thing for the future is some sort of secure boot technology; the LCA lecture Becoming a tyrant: Implementing secure boot in embedded devices [24] has some ideas for the future. The Betrusted project seems really interesting; see Bunnie's lecture about how to create a phone-size security device with a custom OS [25]. The Betrusted project web page is worth reading too [26]. It would be ironic to have a phone as your main PC that is the same size as your security device, but that seems to be the logical solution to several computing problems. Whonix is a Linux distribution that has one VM for doing Tor stuff and another VM for all other programs which is only allowed to have network access via the Tor VM [27]. Xpra does for X programs what screen/tmux do for text mode programs [28]. It allows isolating X programs from each other in ways that are difficult to impossible with a regular X session. In an ideal situation we could probably get the benefits we need with just using Wayland, but if there are legacy apps that only have X support this could help. Conclusion I think that currently the best option for confining desktop apps is Bubblewrap on Wayland, maybe with a modified version of Flatpak-run to run it and with app modifications to use the xdg-portal interfaces as much as possible. That should be efficient enough in storage space, storage IO performance, memory use, and CPU use to run on phones while giving some significant benefits. Things to investigate are how much code from Flatpak to use, how to most efficiently do the configuration (maybe the Flatpak way because it's known and seems effective), how to test this (and have regression tests), and what default settings to use. Also BPF is a big thing to investigate.

22 June 2023

Russ Allbery: Review: Furious Heaven

Review: Furious Heaven, by Kate Elliott
Series: Sun Chronicles #2
Publisher: Tor
Copyright: 2023
ISBN: 1-250-86701-0
Format: Kindle
Pages: 725
Furious Heaven is the middle book of a trilogy and a direct sequel to Unconquerable Sun. Don't start here. I also had some trouble remembering what happened in the previous book (grumble recaps mutter), and there are a lot of threads, so I would try to minimize the time between books unless you have a good memory for plot details. This is installment two of gender-swapped Alexander the Great in space. When we last left Sun and her Companions, Elliott had established the major players in this interstellar balance of power and set off some opening skirmishes, but the real battles were yet to come. Sun was trying to build her reputation and power base while carefully staying on the good side of Queen-Marshal Eirene, her mother and the person credited with saving the Republic of Chaonia from foreign dominance. The best parts of the first book weren't Sun herself but wily Persephone, one of her Companions, whose viewpoint chapters told a more human-level story of finding her place inside a close-knit pre-existing friendship group. Furious Heaven turns that all on its head. The details are spoilers (insofar as a plot closely tracking the life of Alexander the Great can contain spoilers), but the best parts of the second book are the chapters about or around Sun. What I find most impressive about this series so far is Elliott's ability to write Sun as charismatic in a way that I can believe as a reader. That was hit and miss at the start of the series, got better towards the end of Unconquerable Sun, and was wholly effective here. From me, that's high but perhaps unreliable praise; I typically find people others describe as charismatic to be some combination of disturbing, uncomfortable, dangerous, or obviously fake. This is a rare case of intentionally-written fictional charisma that worked for me. Elliott does not do this by toning down Sun's ambition. Sun, even more than her mother, is explicitly trying to gather power and bend the universe (and the people in it) to her will. She treats people as resources, even those she's the closest to, and she's ruthless in pursuit of her goals. But she's also honorable, straightforward, and generous to the people around her. She doesn't lie about her intentions; she follows a strict moral code of her own, keeps her friends' secrets, listens sincerely to their advice, and has the sort of battlefield charisma where she refuses to ask anyone else to take risks she personally wouldn't take. And her use of symbolism and spectacle isn't just superficial; she finds the points of connection between the symbols and her values so that she can sincerely believe in what she's doing. I am fascinated by how Elliott shapes the story around her charisma. Writing an Alexander analogue is difficult; one has to write a tactical genius with the kind of magnetic attraction that enabled him to lead an army across the known world, and make this believable to the reader. Elliott gives Sun good propaganda outlets and makes her astonishingly decisive (and, of course, uses the power of the author to ensure those decisions are good ones), but she also shows how Sun is constantly absorbing information and updating her assumptions to lay the groundwork for those split-second decisions. Sun uses her Companions like a foundation and a recovery platform, leaning on them and relying on them to gather her breath and flesh out her understanding, and then leaping from them towards her next goal. 
Elliott writes her as thinking just a tiny bit faster than the reader, taking actions I was starting to expect but slightly before I had put together my expectation. It's a subtle but difficult tightrope to walk as the writer, and it was incredibly effective for me. The downside of Furious Heaven is that, despite kicking the action into a much higher gear, this book sprawls. There are five viewpoint characters (Persephone and the Phene Empire character Apama from the first book, plus two new ones), as well as a few interlude chapters from yet more viewpoints. Apama's thread, which felt like a minor subplot of the first book, starts paying off in this book by showing the internal political details of Sun's enemy. That already means the reader has to track two largely separate and important stories. Add on a Persephone side plot about her family and a new plot thread about other political factions and it's a bit too much. Elliott does a good job avoiding reader confusion, but she still loses narrative momentum and reader interest due to the sheer scope. Persephone's thread in particular was a bit disappointing after being the highlight of the previous book. She spends a lot of her emotional energy on tedious and annoying sniping at Jade, which accomplishes little other than making them both seem immature and out of step with the significance of what's going on elsewhere. This is also a middle book of a trilogy, and it shows. It provides a satisfying increase in intensity and gets the true plot of the trilogy well underway, but nothing is resolved and a lot of new questions and plot threads are raised. I had similar problems with Cold Fire, the middle book of the other Kate Elliott trilogy I've read, and this book is 200 pages longer. Elliott loves world-building and huge, complex plots; I have a soft spot for them too, but they mean the story is full of stuff, and it's hard to maintain the same level of reader interest across all the complications and viewpoints. That said, I truly love the world-building. Elliott gives her world historical layers, with multiple levels of lost technology, lost history, and fallen empires, and backs it up with enough set pieces and fragments of invented history that I was enthralled. There are at least five major factions with different histories, cultures, and approaches to technology, and although they all share a history, they interpret that history in fascinatingly different ways. This world feels both lived in and full of important mysteries. Elliott also has a knack for backing the ambitions of her characters with symbolism that defines the shape of that ambition. The title comes from a (translated) verse of an in-universe song called the Hymn of Leaving, which is sung at funerals and is about the flight on generation ships from the now-lost Celestial Empire, the founding myth of this region of space:
Crossing the ocean of stars we leave our home behind us.
We are the spears cast at the furious heaven
And we will burn one by one into ashes
As with the last sparks we vanish.
This memory we carry to our own death which awaits us
And from which none of us will return.
Do not forget. Goodbye forever.
This is not great poetry, but it explains so much about the psychology of the characters. Sun repeatedly describes herself and her allies as spears cast at the furious heaven. Her mother's life mission was to make Chaonia a respected independent power. Hers is much more than that, reaching back into myth for stories of impossible leaps into space, burning brightly against the hostile power of the universe itself. A question about a series like this is why one should want to read about a gender-swapped Alexander the Great in space, rather than just reading about Alexander himself. One good (and sufficient) answer is that both the gender swap and the space parts are inherently interesting. But the other place that Elliott uses the science fiction background is to give Sun motives beyond sheer personal ambition. At a critical moment in the story, just like Alexander, Sun takes a detour to consult an Oracle. Because this is a science fiction novel, it's a great SF set piece involving a mysterious AI. But also because this is a science fiction story, Sun doesn't only ask about her personal ambitions. I won't spoil the exact questions; I think the moment is better not knowing what she'll ask. But they're science fiction questions, reader questions, the kinds of things Elliott has been building curiosity about for a book and a half by the time we reach that scene. Half the fun of reading a good epic space opera is learning the mysteries hidden in the layers of world-building. Aligning the goals of the protagonist with the goals of the reader is a simple storytelling trick, but oh, so effective. Structurally, this is not that great of a book. There's a lot of build-up and only some payoff, and there were several bits I found grating. But I am thoroughly invested in this universe now. The third book can't come soon enough. Followed by Lady Chaos, which is still being written at the time of this review. Rating: 7 out of 10

6 May 2023

Reproducible Builds: Reproducible Builds in April 2023

Welcome to the April 2023 report from the Reproducible Builds project! In these reports we outline the most important things that we have been up to over the past month. And, as always, if you are interested in contributing to the project, please visit our Contribute page on our website.

General news Trisquel is a fully-free operating system building on the work of Ubuntu Linux. This month, Simon Josefsson published an article on his blog titled Trisquel is 42% Reproducible!. Simon wrote:
The absolute number may not be impressive, but what I hope is at least a useful contribution is that there actually is a number on how much of Trisquel is reproducible. Hopefully this will inspire others to help improve the actual metric.
Simon wrote another blog post this month on a new tool to ensure that updates to Linux distribution archive metadata (eg. via apt-get update) will only use files that have been recorded in a globally immutable and tamper-resistant ledger. A similar solution exists for Arch Linux (called pacman-bintrans), which was announced in August 2021, where an archive of all issued signatures is publicly accessible.
Joachim Breitner wrote an in-depth blog post on a bootstrap-capable GHC, the primary compiler for the Haskell programming language. As a quick background to what this is trying to solve: in order to generate a fully trustworthy compile chain, trustworthy root binaries are needed, and a popular approach to address this problem is called bootstrappable builds, where the core idea is to address previously-circular build dependencies by creating a new dependency path using simpler prerequisite versions of software. Joachim takes a somewhat recursive approach to the problem for Haskell, leading to the inadvertently humorous question: Can I turn all of GHC into one module, and compile that? Elsewhere in the world of bootstrapping, Janneke Nieuwenhuizen and Ludovic Courtès wrote a blog post on the GNU Guix blog announcing The Full-Source Bootstrap, specifically:
[ ] the third reduction of the Guix bootstrap binaries has now been merged in the main branch of Guix! If you run guix pull today, you get a package graph of more than 22,000 nodes rooted in a 357-byte program, something that had never been achieved, to our knowledge, since the birth of Unix.
More info about this change is available on the post itself, including:
The full-source bootstrap was once deemed impossible. Yet, here we are, building the foundations of a GNU/Linux distro entirely from source, a long way towards the ideal that the Guix project has been aiming for from the start. There are still some daunting tasks ahead. For example, what about the Linux kernel? The good news is that the bootstrappable community has grown a lot; from two people six years ago, there are now around 100 people in the #bootstrappable IRC channel.

Michael Ablassmeier created a script called pypidiff as they were looking for a way to track differences between packages published on PyPI. According to Michael, pypidiff uses diffoscope to create reports on the published releases and automatically pushes them to a GitHub repository. This can be seen on the pypi-diff GitHub page (example).
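For context (a hedged sketch, not from the report, and the file names are made up): diffoscope is driven from the command line, and a wrapper like pypidiff performs roughly this kind of invocation for each pair of published releases:
# compare two releases and write an HTML report of all differences
diffoscope --html report.html mypkg-1.0.tar.gz mypkg-1.1.tar.gz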
Eleuther AI, a non-profit AI research group, recently unveiled Pythia, a collection of 16 Large Language Model (LLMs) trained on public data in the same order designed specifically to facilitate scientific research. According to a post on MarkTechPost:
Pythia is the only publicly available model suite that includes models that were trained on the same data in the same order [and] all the corresponding data and tools to download and replicate the exact training process are publicly released to facilitate further research.
These properties are intended to allow researchers to understand how gender bias (etc.) can be affected by training data and model scale.
Back in February's report we reported on a series of changes to the Sphinx documentation generator that were initiated after attempts to get the alembic Debian package to build reproducibly. Although Chris Lamb was able to identify the source problem and provided a potential patch that might fix it, James Addison has taken the issue in hand, leading to a large amount of activity resulting in a proposed pull request that is waiting to be merged.
WireGuard is a popular Virtual Private Network (VPN) implementation that aims to be faster, simpler and leaner than other solutions for creating secure connections between computing devices. According to a post on the WireGuard developer mailing list, the WireGuard Android app can now be built reproducibly so that its contents can be publicly verified. Jason A. Donenfeld writes that the F-Droid project now performs this verification by comparing its build of WireGuard to the build that the WireGuard project publishes; when they match, the new version becomes available. This is very positive news.
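The essence of this kind of verification is a bit-for-bit comparison of two independently produced artifacts. A rough sketch of the final comparison step (in practice F-Droid's process also has to account for signing metadata, which is omitted here, and the file names are hypothetical):

    import hashlib
    import sys

    def digest(path: str) -> str:
        # SHA-256 of a build artifact, e.g. an APK, read in 1 MiB chunks.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    theirs = digest("wireguard-upstream.apk")  # build published by the project
    ours = digest("wireguard-rebuilt.apk")     # independent rebuild

    if theirs == ours:
        print("reproducible: builds match bit-for-bit")
    else:
        sys.exit("builds differ; do not publish")
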
Author and public speaker V. M. Brasseur published a sample chapter from her upcoming book on corporate open source strategy; the chapter covers the Software Bill of Materials (SBOM):
A software bill of materials (SBOM) is defined as a nested inventory for software, a list of ingredients that make up software components. When you receive a physical delivery of some sort, the bill of materials tells you what's inside the box. Similarly, when you use software created outside of your organisation, the SBOM tells you what's inside that software. The SBOM is a file that declares the software supply chain (SSC) for that specific piece of software. […]
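As a concrete illustration of "a nested inventory for software", here is a minimal sketch of what such a file can look like, loosely modelled on the CycloneDX JSON format; the component entries are invented for the example:

    import json

    # A minimal, illustrative SBOM loosely following the CycloneDX JSON shape.
    # The components listed here are invented examples.
    sbom = {
        "bomFormat": "CycloneDX",
        "specVersion": "1.4",
        "components": [
            {"type": "library", "name": "openssl", "version": "3.0.13"},
            {"type": "library", "name": "zlib", "version": "1.3.1"},
        ],
    }

    with open("sbom.json", "w") as f:
        json.dump(sbom, f, indent=2)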

Several distributions noticed that recent versions of the Linux kernel are no longer reproducible because the BPF Type Format (BTF) metadata is not generated in a deterministic way. This was discussed on the #reproducible-builds IRC channel, but no solution appears to be in sight for now.

Community news
On our mailing list this month: Holger Levsen gave a talk at foss-north 2023 in Gothenburg, Sweden on the topic of Reproducible Builds, the first ten years. There were also a number of updates to our website, including:
  • Chris Lamb tried a number of ways to fix a literal {: .lead} appearing in the page [ ][ ][ ], made all the Back to who is involved links italic [ ], and corrected the syntax of the _data/sponsors.yml file [ ].
  • Holger Levsen added his recent talk [ ], added Simon Josefsson, Mike Perry and Seth Schoen to the contributors page [ ][ ][ ], reworked the People page a little [ ] [ ], as well as fixing the spelling of Arch Linux [ ].
Lastly, Mattia Rizzolo moved some old sponsors to a former sponsors section [ ] and Simon Josefsson added Trisquel GNU/Linux. [ ]

Debian
  • Vagrant Cascadian reported on Debian's build-essential package set, inspired by how close we are to making the Debian build-essential set reproducible and how important that set of packages is in general. Vagrant mentioned: I have some progress, some hope, and I daresay, some fears. [ ]
  • Debian Developer Cyril Brulebois (kibi) filed a bug against snapshot.debian.org after noticing that there are many missing dinstalls, that is to say, the snapshot service is not capturing 100% of all historical states of the Debian archive. This is relevant to reproducibility because, without the availability of historical versions, it becomes impossible to repeat a build at a future date in order to correlate checksums.
  • 20 reviews of Debian packages were added, 21 were updated and 5 were removed this month, adding to our knowledge about identified issues. Chris Lamb added a new build_path_in_line_annotations_added_by_ruby_ragel toolchain issue. [ ]
  • Mattia Rizzolo announced that the data for the stretch archive on tests.reproducible-builds.org has been archived. This matches the archival of stretch within Debian itself. This is of some historical interest, as stretch was the first Debian release regularly tested by the Reproducible Builds project.

Upstream patches
The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

diffoscope development
diffoscope version 241 was uploaded to Debian unstable by Chris Lamb. It included contributions already covered in previous months, as well as a change by Chris Lamb to add a missing raise statement that was accidentally dropped in a previous commit. [ ]
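This class of bug is easy to illustrate (the snippet below is a generic example, not diffoscope's actual code): constructing an exception object without raising it silently discards the error.

    def check_size(n: int) -> None:
        if n < 0:
            # Bug: the exception object is created but never raised,
            # so callers never see the error.
            ValueError(f"negative size: {n}")

    def check_size_fixed(n: int) -> None:
        if n < 0:
            raise ValueError(f"negative size: {n}")  # the missing 'raise' restored
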

Testing framework
The Reproducible Builds project operates a comprehensive testing framework (available at tests.reproducible-builds.org) in order to check packages and other artifacts for reproducibility. In April, a number of changes were made, including:
  • Holger Levsen:
    • Significant work on a new Documented Jenkins Maintenance (djm) script to support logged maintenance of nodes, etc. [ ][ ][ ][ ][ ][ ]
    • Add the new APT repo URL for Jenkins itself with a new signing key. [ ][ ]
    • In the Jenkins shell monitor, allow 40 GiB of files for diffoscope for the Debian experimental distribution, as Debian is frozen around the release at the moment. [ ]
    • Updated Arch Linux testing to clean up leftover files in /tmp/archlinux-ci/ after three days. [ ][ ][ ]
    • Mark a number of nodes hosted by Oregon State University Open Source Lab (OSUOSL) as online and offline. [ ][ ][ ]
    • Update the node health checks to detect failures to end schroot sessions. [ ]
    • Filter out another duplicate contributor from the contributor statistics. [ ]
  • Mattia Rizzolo:



If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:

27 April 2023

Arturo Borrero González: Kubecon and CloudNativeCon 2023 Europe summary

This post serves as a report from my attendance at Kubecon and CloudNativeCon 2023 Europe, which took place in Amsterdam in April 2023. It was my second time physically attending this conference; the first was in Austin, Texas (USA) in 2017. I also attended once in a virtual fashion. The content here is mostly generated for the sake of my own recollection and learnings, and is written from the notes I took during the event.

The very first session was the opening keynote, which reunited the whole crowd to bootstrap the event and share the excitement about the days ahead. Some astonishing numbers were announced: there were more than 10,000 people attending, and apparently it could confidently be said that it was the largest open source technology conference taking place in Europe in recent times. It was also communicated that the next couple of iterations of the event will be run in China in September 2023 and Paris in March 2024. More numbers: the CNCF was hosting about 159 projects, involving 1,300 maintainers and about 200,000 contributors. The cloud-native community is ever-increasing, and there seems to be a strong trend in the industry towards cloud-native technology adoption and all things related to PaaS and IaaS.

The event program had different tracks, and in each one there was an interesting mix of low-level and higher-level talks for a variety of audiences. On many occasions I found that reading the talk title alone was not enough to know in advance if a talk was a 101 kind of thing or for experienced engineers. But unlike in previous editions, I didn't have the feeling that the purpose of the conference was to try to sell me anything. Obviously, speakers would make sure to mention, or highlight in a subtle way, the involvement of a given company in a given solution or piece of the ecosystem. But it was non-invasive and fair enough for me.

On a different note, I found the breakout rooms to be often small. I think there were only a couple of rooms that could accommodate more than 500 people, which is a fairly small allowance for 10,000 attendees. I realized with frustration that the more interesting talks were immediately fully booked, with people waiting in line some 45 minutes before the session time. Because of this, I missed a few important sessions that I'll hopefully watch online later.

Finally, on the more technical side, I learned many things, which instead of grouping by session I'll group by topic, given how some subjects were mentioned in several talks.

On gitops and CI/CD pipelines
Most of the mentions went to FluxCD and ArgoCD. At this point there were no doubts that gitops is a mature approach and that both Flux and ArgoCD can do an excellent job. ArgoCD seemed a bit more over-engineered, as a more general-purpose CD pipeline, and Flux felt a bit more tailored to simpler gitops setups. I discovered that both have nice web user interfaces that I wasn't previously familiar with. However, in two different talks I got the impression that the initial setup of them is simple, but that migrating your current workflow to gitops can result in a bumpy ride. That is, the challenge is not deploying flux/argo itself, but moving everything into a state that both humans and flux/argo can understand. I also saw some curious mentions of the config drifts that can happen in some cases, even though the goal of gitops is precisely for that to never happen. Such mentions were usually accompanied by some hints on how to handle the situation by hand.
Worth mentioning, I missed any practical information about one of the key pieces of this whole gitops story: building container images. Most of the showcased scenarios were using pre-built container images, so in that sense they were simple. Building and pushing to an image registry is one of the two key points we would need to solve in Toolforge Kubernetes if adopting gitops. In general, even if gitops was already on our radar for Toolforge Kubernetes, I think it climbed a few steps in my priority list after the conference. Another learning was this site: https://opengitops.dev/.

On etcd, performance and resource management
I attended a talk focused on etcd performance tuning that was very encouraging. They were basically talking about the exact same problems we have had in Toolforge Kubernetes, like api-server and etcd failure modes, and how sensitive etcd is to disk latency, IO pressure and network throughput. Even though the Toolforge Kubernetes scale is small compared to other Kubernetes deployments out there, I found it very interesting to see others' approaches to the same set of challenges.

I learned how most Kubernetes components and apps can overload the api-server, because even the api-server talks to itself. Simple things like kubectl may have a completely different impact on the API depending on usage, for example when listing the whole set of objects (very expensive) versus a single object. The conclusion was to try to avoid hitting the api-server with LIST calls, and to use ResourceVersion, which avoids full dumps from etcd (full dumps being, by the way, the default when using bare kubectl get calls). I already knew some of this, and for example the jobs-framework-emailer was already making use of this ResourceVersion functionality.
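As an illustration of that advice, here is a minimal sketch using the official Kubernetes Python client (a hedged example, not the jobs-framework-emailer's actual code): passing resource_version="0" lets the api-server answer the LIST from its own cache instead of performing a quorum read, i.e. a full dump, against etcd.

    from kubernetes import client, config

    # Load credentials from ~/.kube/config (use load_incluster_config()
    # when running inside a pod).
    config.load_kube_config()
    v1 = client.CoreV1Api()

    # resource_version="0" tells the api-server that any recent state is
    # acceptable, so it can serve the list from its watch cache rather
    # than doing an expensive quorum read against etcd.
    pods = v1.list_namespaced_pod(namespace="default", resource_version="0")
    for pod in pods.items:
        print(pod.metadata.name)
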
There have been a lot of improvements on the performance side of Kubernetes in recent times, or more specifically, in how resources are managed and used by the system. I saw a review of resource management from the perspective of the container runtime and kubelet, and plans to support fancy things like topology-aware scheduling decisions and dynamic resource claims (changing the pod resource claims without re-defining/re-starting the pods).

On cluster management, bootstrapping and multi-tenancy
I attended a couple of talks that mentioned kubeadm, and one in particular was from the maintainers themselves. This was of interest to me because, as of today, we use it for Toolforge. They shared all the latest developments and improvements, and the plans and roadmap for the future, with a special mention of something they called the kubeadm operator, apparently capable of auto-upgrading the cluster, auto-renewing certificates and such. I also saw a comparison between the different cluster bootstrappers, which to me confirmed that kubeadm was the best, from the point of view of being a well-established and well-known workflow, plus having a very active contributor base. The kubeadm developers invited the audience to submit feature requests, so I did.

The different talks confirmed that the basic unit for multi-tenancy in Kubernetes is the namespace; any serious multi-tenant usage should leverage this. There were some ongoing conversations, in official sessions and in the hallway, about the right tool to implement K8s-within-K8s, and vcluster was mentioned enough times for me to be convinced it was the right candidate. This was despite my impression that multiclusters / multicloud are regarded as hard topics in the general community. I definitely would like to play with it sometime down the road.

On networking
I attended a couple of basic sessions that served really well to understand how Kubernetes instruments the network to achieve its goals. The conference program had sessions covering topics ranging from network debugging recommendations and CNI implementations to IPv6 support. Also, one of the keynote sessions had a reference to how kube-proxy is not able to perform NAT for SIP connections, which is interesting because I believe Netfilter Conntrack could do it if properly configured. One of the conclusions on the CNI front was that Calico has massive community adoption (in Netfilter mode), which is reassuring, especially considering it is the one we use for Toolforge Kubernetes.

On jobs
I attended a couple of talks related to HPC/grid-like usages of Kubernetes. I was truly impressed by some folks out there who were using Kubernetes Jobs on massive scales, such as to train machine learning models and other fancy AI projects. It is acknowledged in the community that the early implementation of things like Jobs and CronJobs had some limitations that are now gone, or at least greatly improved. Some new functionalities have been added as well. Indexed Jobs, for example, give each Job pod a number (index) so it can process a chunk of a larger batch of data based on that index. This would allow for full grid-like features such as sequential (or again, indexed) processing, coordination between Jobs, and more graceful Job restarts. My first reaction was: is that something we would like to enable in the Toolforge Jobs Framework? A small sketch of the indexed pattern follows below.
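For illustration, an Indexed Job exposes each pod's index through the JOB_COMPLETION_INDEX environment variable, which a worker can use to select its slice of the input. A minimal, hedged sketch of such a worker; the input file, the TOTAL_CHUNKS variable and the chunking scheme are invented for the example:

    import os

    # Kubernetes sets JOB_COMPLETION_INDEX in pods of a Job that uses
    # completionMode: Indexed.
    index = int(os.environ["JOB_COMPLETION_INDEX"])

    # Hypothetical: the total number of chunks, passed via the Job spec.
    total = int(os.environ.get("TOTAL_CHUNKS", "10"))

    with open("input.txt") as f:  # hypothetical shared input file
        lines = f.readlines()

    # Striped partitioning: pod N handles lines N, N+total, N+2*total, ...
    for line in lines[index::total]:
        print(f"worker {index}: processing {line.strip()}")
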
On policy and security
A surprisingly good number of sessions covered interesting topics related to policy and security. It was nice to learn two realities:
  1. Kubernetes is capable of doing pretty much anything security-wise and of creating greatly secured environments.
  2. But it does not do so by default; the defaults are not security-strict on purpose.
It kind of made sense to me: Kubernetes is used for a wide range of use cases, and the developers didn't know beforehand which particular setup the default security levels should accommodate. One session in particular covered the most basic security features that should be enabled for any Kubernetes system that will be exposed to random end users. In my opinion, the Toolforge Kubernetes setup was already doing a good job in that regard. To my joy, some sessions referred to the Pod Security Admission mechanism, which is one of the key security features we're about to adopt (when migrating away from Pod Security Policy). I also learned a bit more about Secret resources, their current implementation, and how to leverage a combination of CSI and RBAC for more secure usage of external secrets.

Finally, one of the major takeaways from the conference was learning about kyverno and kubeaudit. I was previously aware of OPA Gatekeeper. From the several demos I saw, it was clear to me that kyverno should help us make Toolforge Kubernetes more sustainable by replacing all of our custom admission controllers with it. I already opened a ticket to track this idea, which I'll be proposing to my team soon.

Final notes
In general, I believe I learned many things, and perhaps even more importantly I re-learned some stuff I had forgotten because of lack of daily exposure. I'm really happy that the cloud-native way of thinking was reinforced in me, which I still need because most of my muscle memory for approaching systems architecture and engineering is from the old pre-cloud days.

List of sessions I attended on the first day:

List of sessions I attended on the second day:

List of sessions I attended on the third day:

The videos have been published on YouTube.

22 March 2023

Russ Allbery: Review: The Kaiju Preservation Society

Review: The Kaiju Preservation Society, by John Scalzi
Publisher: Tor
Copyright: 2022
ISBN: 0-7653-8913-4
Format: Kindle
Pages: 264
As this novel opens, Jamie Gray, our first-person narrator, is working for the business side of a startup food delivery service named füdmüd. He's up for his six-month performance review and has some great ideas for how to improve the company's market standing going into pandemic lockdown. His boss has other ideas: Jamie ends up at the bottom of the corporate ladder, delivering food door-to-door. One of his deliveries is to Tom, who is working for a semi-secret organization with a last-minute, COVID-induced worker shortage. He needs someone who can lift things. Jamie used to go to some of the same parties, can lift things, and is conveniently available. And that's how Jamie ends up joining the Kaiju Preservation Society, because it turns out the things that need lifting are in a different dimension.

This book was so bad. I think this may be the worst-written novel at a technical level that I have read since I started writing book reviews. It's become trendy in some circles to hate Scalzi, so I want to be clear that I normally get along fine with his writing. Scalzi is an unabashedly commercial writer of light, occasionally humorous popcorn SF. It's not great literature, and he's unlikely to write your new favorite novel, but his books are easy to read and reliably deliver a few hours of comfortable entertainment. The key word is "reliably"; Scalzi doesn't have a lot of dynamic range, but you know what you're getting and can decide to read him when that matches your mood.

When I give a book a bad review, it's usually because I found the ideas deeply unpleasant (genocidal theology, for instance, or creepy voyeuristic sexism). That's not the problem here. The ideas are fine: a variation on the Jurassic Park setup but with kaiju and less commercialism, an everyman narrator to look at everything for the reader, a few assholes thrown in to provide some conflict; sure, sign me up, that sounds like the kind of light entertainment I expect from a Scalzi novel. The excuse for interdimensional portals was clever (and consistent with kaiju story themes), and the biological handwaving created a lot of good story hooks. The material for a fun novel is all present.

The problem, instead, is that this book was not finished. It's the bare skeleton of a story with almost-nonexistent characters and plot, stuck in a novel-shaped box and filled in with repetitive banter and dad jokes of the approximate consistency of styrofoam packing material.

When I complain about the characterization, I fear people who haven't read the book won't understand what I mean. Scalzi has always had dialogue quirks that tend to show up in all of his characters and make them sound similar. I noticed this in other books, but it wasn't a big deal. The characterization problems in this book are a big deal. I can identify four characters, total, from the entire novel: the first-person protagonist, the villain, the pilot, and the woman who does the forest floor safety training. None of those characters are memorable or interesting, but at least they're somewhat distinct. Apart from them, you could write a computer program that randomly selected character names for each dialogue line and I wouldn't be able to tell the difference. I have never before given up on character identity and started ignoring all the dialogue tags in a book. Everyone says the same thing, makes the same jokes, has the same emotional reactions, and has the same total lack of interiority or distinguishing characteristics.
The only way I can imagine telling the characters apart is if you memorized the association between names and professions, and I have no idea why you'd bother.

The descriptions are, if anything, worse. Scalzi is not a heavily descriptive author, but usually he gives me something to hang my imagination on. You would think that if you were writing a book about kaiju, one where kaiju are quite actively involved in the story (fighting, roaring, menacing, being central to the plot), you would describe a kaiju at some point during the novel. They're visually impressive giant monsters! This is an inherently visual story genre! And at no point in this entire novel does Scalzi ever describe a kaiju in any detail. Not once! The most we get is that one has tentacles and a sort of eye spot. And sometimes there are wings. There are absolutely no overall impressions, comparisons, attempts to sketch what the characters are seeing, nothing.

Or, for another example, consider the base, the place where the characters live for most of the story and where much of the dialogue happens. Here is the sum total of all the sensory information I can recall about the characters' home: it has stairs, and there's a plant in Jamie's room. (The person who left the plant, who never appears on screen, gets more characterization in two pages than anyone else gets in the whole novel.) What does the base look like from the outside? The inside? How many stories does it have? What are the common spaces like? What does it smell like? Does it feel institutional, or welcoming, or dirty, or sparkling? How long does it take to get from one end of it to the other? Does it make weird noises at night? I have no impressions of this place whatsoever. Maybe a few of these things were mentioned in passing and I missed them, but that's because the narrator of this book never describes his surroundings in detail, stops to look at something eye-catching, thinks about how he feels about a place, or otherwise gives the reader any meaningful emotional engagement with the spaces around him.

And it's not as if this story was instead stuffed with action. There is barely a novelette's worth of plot, and most of that is predictable: the setup, the initial confrontation, the discovery of the evil plan, the final confrontation. For most of the book, nothing of any consequence happens. It's just endless pages of vaguely bantering dialogue between totally indistinguishable characters while Jamie repeats "I lift things." (That was funny the first couple of times; by the fifth time, the funny wore off.) The climax, when it finally happens, is mostly monologuing and half-hearted repartee that is cringeworthy and vaguely embarrassing for everyone involved.

I don't really blame Scalzi for this book. I wish he had realized that it was half-baked at best and needed some major revisions, but the author's note at the end makes it clear that the process of bringing this book into the world was a train wreck. It was written in two months, in a rush, after Scalzi had already missed a deadline for a different book that failed to come together. Life happens, and in 2020 and early 2021 a whole lot of life was happening. The tone of the author's note is vaguely apologetic; I think Scalzi realizes at some level that this is not his best work.

The person I do blame is Patrick Nielsen Hayden, Scalzi's editor, who is a multiple-Hugo-award-winning book editor and the managing editor of science fiction at Tor and absolutely should have known better.
It was his responsibility to look at this book and say "this is not ready yet"; this is part of the function of the traditional publishing apparatus. This could have been a good book. The ideas and the hook were there; it just needed some actual substance in the middle and a whole lot of character work. Instead, he was the one who made the decision to publish the book in this state.

But, well, the joke's on me, because The Kaiju Preservation Society sold a ton of copies, got nominated for several awards, won an Alex Award, and made Amazon's best of 2022 list, so I guess this was a brilliant publishing decision and the book was everything it needed to be? Maybe I'm just bad at reading and have no sense of humor? I have no explanation; I am truly and completely baffled. There are books that I don't like but that have obvious merits for people who are not me. There are styles of writing that I don't like and other people do. But I would have sworn this book was objectively unfinished and half-assed at a craft and construction level, in ways that don't depend as much on personal taste. I recommend quietly forgetting it was ever published and waiting for a better Scalzi novel, but it has a 4.04 star rating on Goodreads with nearly 32,000 reviews, so what do I know.

Anyway, I was warned that I wasn't going to like this book and I read it anyway, for silly reasons, because I figured it was a Scalzi novel and how bad could it be, really. I brought this on myself, and I at least got the fun of ranting about it. Apparently this book found its people and they got a lot of joy out of it, and good for them.

Rating: 2 out of 10
