Search Results: "cas"

15 October 2024

Jonathan Dowland: Whisper (pipewire tool)

It's time to mint a new blog tag: I want to pour praise on some software I recently discovered. I'm not up to speed on Pipewire, the latest piece of Linux plumbing related to audio, nor how it relates to the other bits (PulseAudio, ALSA, JACK, what else?). I recently tried to plug something into the line-in port on my external audio interface, and wished to hear it on the machine. A simple task, you'd think. I'll refrain from writing about the stuff that didn't work well and focus on the thing that did: a little tool called Whisper, which is designed to let you listen to a microphone through your speakers.
Whisper's UI. Screenshot from upstream.
Whisper does a great job of hiding the complexity of what lies beneath and asking two questions: which microphone, and which speakers? In my case this alone was not quite enough, as I was presented with two identically-named "SB Live Extigy" "microphone" devices, but that's easily resolved with trial and error. More stuff like this please!

Lukas Märdian: Waiting for a Linux system to be online


What is an online system? Networking is a complex topic, and there is lots of confusion around the definition of an online system. Sometimes the boot process gets delayed up to two minutes, because the system still waits for one or more network interfaces to be ready. Systemd provides the network-online.target that other service units can rely on, if they are deemed to require network connectivity. But what does online actually mean in this context? Is a link-local IP address enough? Do we need a routable gateway, and what about DNS name resolution? The requirements for an online network interface depend very much on the services using an interface. For some services it might be good enough to reach their local network segment (e.g. to announce Zeroconf services), while others need to reach domain names (e.g. to mount an NFS share) or reach the global internet to run a web server. On the other hand, the implementation of network-online.target varies, depending on which networking daemon is in use, e.g. systemd-networkd-wait-online.service or NetworkManager-wait-online.service. For Ubuntu, we created a specification that describes what we as a distro expect an online system to be. Having a definition in place, we are able to tackle the network-online ordering issues that got reported over the years and can work out solutions to avoid delayed boot times on Ubuntu systems. In essence, we want systems to reach the following networking state to be considered online (a rough illustration in code follows the list):
  1. Do not wait for optional interfaces to receive network configuration
  2. Have IPv6 and/or IPv4 link-local addresses on every network interface
  3. Have at least one interface with a globally routable connection
  4. Have functional domain name resolution on any routable interface
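To make these criteria concrete, here is a rough Python approximation of checks 3 and 4 (an illustration only: the authoritative checks live in the wait-online services, and deb.debian.org is just an arbitrary public hostname chosen for this sketch):

import socket

def has_global_route():
    # Criterion 3, roughly: does the kernel have a route towards a global
    # address? Connecting a UDP socket sends no packets; it only consults
    # the routing table. 192.0.2.1 is a TEST-NET-1 address, never contacted.
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.connect(("192.0.2.1", 53))
        return True
    except OSError:
        return False

def dns_works(name="deb.debian.org"):
    # Criterion 4, roughly: can we resolve a public domain name?
    try:
        socket.getaddrinfo(name, None)
        return True
    except OSError:
        return False

print("routable:", has_global_route(), "dns:", dns_works())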

A common implementation NetworkManager and systemd-networkd are two very common networking daemons used on modern Linux systems. But they originate from different contexts and therefore show different behaviours in certain scenarios, such as wait-online. Luckily, on Ubuntu we already have Netplan as a unification layer on top of those networking daemons, which allows for common network configuration, and can also be used to tweak the wait-online logic. With the recent release of Netplan v1.1 we introduced initial functionality to tweak the behaviour of systemd-networkd-wait-online.service, as used on Ubuntu Server systems. When Netplan is used to drive the systemd-networkd backend, it will emit an override configuration file in /run/systemd/system/systemd-networkd-wait-online.service.d/10-netplan.conf, listing the specific non-optional interfaces that should receive link-local IP configuration. In parallel to that, it defines a list of network interfaces that Netplan detected to be potential global connections, and waits for any of those interfaces to reach a globally routable state. Such an override config file might look like this:
[Unit]
ConditionPathIsSymbolicLink=/run/systemd/generator/network-online.target.wants/systemd-networkd-wait-online.service

[Service]
ExecStart=
ExecStart=/lib/systemd/systemd-networkd-wait-online -i eth99.43:carrier -i lo:carrier -i eth99.42:carrier -i eth99.44:degraded -i bond0:degraded
ExecStart=/lib/systemd/systemd-networkd-wait-online --any -o routable -i eth99.43 -i eth99.45 -i bond0
In addition to the new features implemented in Netplan, we reached out to upstream systemd, proposing an enhancement to the systemd-networkd-wait-online service, integrating it with systemd-resolved to check for the availability of DNS name resolution. Once this is implemented upstream, we will be able to fully control the systemd-networkd backend on Ubuntu Server systems, to behave consistently and according to the definition of an online system that was laid out above.

Future work The story doesn't end there, because Ubuntu Desktop systems are using NetworkManager as their networking backend. This daemon provides its very own nm-online utility, utilized by the NetworkManager-wait-online systemd service. It implements a much higher-level approach, looking at the networking daemon in general instead of the individual network interfaces. By default, it considers a system to be online once every autoconnect profile got activated (or failed to activate), meaning that either an IPv4 or IPv6 address got assigned. There are considerable enhancements to be implemented to this tool, for it to be controllable in a fine-grained way similar to systemd-networkd-wait-online, so that it can be instructed to wait for specific networking states on selected interfaces.

A note of caution Making a service depend on network-online.target is considered an antipattern in most cases. This is because networking on Linux systems is very dynamic and the systemd target can only ever reflect the networking state at a single point in time. It cannot guarantee that this state will be maintained over the uptime of your system, and it has the potential to delay the boot process considerably. Cables can be unplugged, wireless connectivity can drop, or remote routers can go down at any time, affecting the connectivity state of your local system. Therefore, instead of wondering what to do about network.target, please just fix your program to be friendly to dynamically changing network configuration. [source]
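A minimal sketch of that friendlier pattern in Python: rather than trusting a one-shot readiness signal at boot, retry with capped exponential backoff whenever the network is not usable (the host and port here are placeholders):

import socket
import time

def connect_with_retry(host, port, max_delay=60):
    # Keep retrying until the network is actually usable, instead of
    # relying on network-online.target having been reached at boot.
    delay = 1
    while True:
        try:
            return socket.create_connection((host, port), timeout=5)
        except OSError:
            time.sleep(delay)
            delay = min(delay * 2, max_delay)  # exponential backoff, capped

conn = connect_with_retry("example.org", 443)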

Iustin Pop: Optical media lifetime - one data point

Way back (more than 10 years ago) when I was doing DVD-based backups, I knew that normal DVDs/Blu-rays are no long-term archival solution, and that if I was serious about doing optical media backups, I needed to switch to M-Disc. I actually bought a (small) stack of M-Disc Blu-rays, but never used them. I then switched to other backup solutions, and forgot about the whole topic. Until, this week, while sorting stuff, I happened upon a set of DVD backups from a range of years, and was very curious whether they are still readable after many years. And, to my surprise, there were no surprises! Going backward in time: I also found a stack of dual-layer DVD+Rs from 2012-2014, some for sure Verbatim, and some unmarked (they were intended to be printed on), but likely Verbatim as well. All worked just fine. Just that, even at ~8GiB per disk, backing up raw photo files took way too many disks, even in 2014. At this point I was happy that all 12+ DVDs I found, ranging from 10 to 14 years old, were all good. Then I found a batch of 3 CDs! Here the results were mixed. I think the takeaway is that all the explicitly selected media - TDK, JVC and Verbatim - hold for 10-20 years. Valid reads from summer 2003 are mind-boggling to me, for (IIRC) organic media - not sure about the TDK metallic substrate. And when you just pick whatever ("Creation"), well, the results are mixed. Note that all of this was about CDs and DVDs. I have no idea how Blu-rays behave, since I don't think I ever wrote a Blu-ray. In any case, this was surprising to me, and makes me rethink my backup options a bit. Sizes from 25 to 100GB for Blu-rays are reasonable for most critical data. And they're WORM, as opposed to most LTO media, which is re-writable (and to some small extent, prone to accidental wiping). Now, I should check those M-Discs to see if they can still be written to, after 10 years.

13 October 2024

Andy Simpkins: The state of the art

A long time ago… A long time ago, a computer was a woman (I think almost exclusively a woman, not a man) who was employed to do a lot of repetitive mathematics, typically for accounting and stock / order processing. Then along came Lyons, who deployed an artificial computer to perform the same task, only with fewer errors in less time. Modern-day computing was born: we had entered the age of the digital computer. These computers were large and consumed huge amounts of power, but were precise, and gave repeatable, verifiable results. Over time the huge mainframe digital computers have shrunk in size, increased in performance, and consume far less power - so much so that they often didn't need the specialist CFC-based, refrigerated liquid cooling systems of their bigger mainframe counterparts, only requiring forced air flow, and occasionally just convection cooling. They shrank so far and became cheap enough that the personal computer came to be, replacing the mainframe with its time-shared resources with a machine per user. Desktop or even portable laptop computers were everywhere. We networked them together, so now we could share information around the office; a few computers were given the specialist task of being available all the time so we could share documents, or host databases - these servers were basically PCs designed to operate 24/7, usually more powerful than their desktop counterparts (or at least with faster storage and networking). Next we joined these networks together and the internet was born. The dream of a paperless office might actually be realised: we can now send email (and documents) from one organisation (or individual) to another. We can make our specialist computer applications available beyond the office, and web servers / web apps come of age. Fast forward a few years and all of a sudden we need huge data-halls filled with rack-scale machines augmented with exotic GPUs and NPUs, again with refrigerated liquid cooling, all to do the same task that we were doing previously without the magical buzzword that has been named AI; because we all need another dot-com bubble or blockchain bandwagon to jump aboard. Our AI-enabled searches take slightly longer, consume orders of magnitude more power, and best of all the results we are given may or may not be correct. Progress: less precise answers, taking longer, consuming more power, without any verification, and often giving a different result if you repeat your question - AND we still need a personal computing device to access this wondrous thing. Remind me again why we are here? (Timelines and huge swathes of history simply ignored to make an attempted comic point; this is intended to make a point and not be scholarly work.)

Taavi Väänänen: Bulk downloading Wikimedia Commons categories

Wikimedia Commons, the Wikimedia project for freely licensed media files, also contains a bunch of photos by me and photos of me at various events. While I don't think Commons is going away anytime soon, I would still like to have a local copy of those images available on my own storage hardware. Obviously this requires some way to query for photos you want to download. I'm using Commons categories for this, since that's easy to implement and works for both use cases. The Commons community tends to come up with very specific categories that you can use, and if not, you can usually categorize the files yourself.
Me replying 'shh' to a Discord message showing myself categorizing photos about me and accusing me of COI editing
thankfully Commons has no such thing as a Conflict of interest (COI) policy
There is almost an existing tool for this: Sam Wilson's mwcli project has support for exporting images one has uploaded to Commons. However, I couldn't use that to download photos of me that others have uploaded, plus it's written in PHP and I don't exactly want to deal with the problem of figuring out how to package it in a way I could neatly install it on my NAS. So I wrote my own tool for it, called comload. It's written in Python because Python is easy to deploy (I can just throw it in a .deb and upload it to my internal repository), and because I did not find a Go library to handle Action API pagination for me. The basic usage is like this:
$ comload --subcats "Taavi Väänänen"
This will download any files in Category:Taavi Väänänen and its sub-categories to the current directory. Former image versions, as well as the image description and SDC data, if any, are also included. And it's smart enough not to download any files that are already there on future runs, so you can just throw it in a systemd timer to get any future files. I'd still like it to handle moved files without creating a duplicate copy, but otherwise I'm really happy with the current state. comload is available from PyPI and from my Git server directly, and is licensed under the GPLv3.
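For the curious, MediaWiki Action API pagination boils down to re-submitting the same request with whatever continuation parameters the server returns. A rough Python sketch of the protocol (using the requests library; this illustrates the idea and is not comload's actual code):

import requests

API = "https://commons.wikimedia.org/w/api.php"

def category_files(category):
    # Yield all files in a Commons category, following API continuation.
    params = {
        "action": "query",
        "list": "categorymembers",
        "cmtitle": category,
        "cmtype": "file",
        "cmlimit": "max",
        "format": "json",
    }
    session = requests.Session()
    while True:
        reply = session.get(API, params=params).json()
        yield from reply["query"]["categorymembers"]
        if "continue" not in reply:
            break
        params.update(reply["continue"])  # carry continuation tokens forward

for page in category_files("Category:Taavi Väänänen"):
    print(page["title"])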

12 October 2024

Jonathan Dowland: Code formatting in documents

I've been exploring typesetting and formatting code within text documents such as papers, or my thesis. Up until now, I've been using the listings package without thinking much about it. By default, some sample Haskell code processed by listings looks like this (click any of the images to see larger, non-blurry versions):
default output of listings on a Haskell code sample
It's formatted with a monospaced font, with some keywords highlighted, but not syntactic symbols. There are several other options for typesetting and formatting code in LaTeX documents. For Haskell in particular, there is the preprocessor lhs2tex, the default output of which looks like this:
default output of lhs2tex on a Haskell code sample
A proportional font, but it's taken pains to preserve vertical alignment, which is syntactically significant for Haskell. It looks a little cluttered to me, and I'm not a fan of nearly everything being italic. Again, symbols aren't differentiated, but it has substituted them for more typographically pleasing alternatives: -> has become →, and \ is now λ. Another option is perhaps the newest, the LaTeX package minted, which leverages the Python Pygments program. Here's the same code again. It defaults to monospace (the choice of font seems a lot clearer to me than the default for listings), no symbolic substitution, and liberal use of colour:
default output of minted on a Haskell code sample
An informal survey of the samples so far showed that the minted output was the most popular. All of these packages can be configured to varying degrees. Here are some examples of what I've achieved with a bit of tweaking:
listings adjusted with colour and some symbols substituted (but sadly not the two together)
lhs2tex adjusted to be less italic, sans-serif and use some colour
All of this has got me wondering whether there are straightforward empirical answers to some of these questions of style. Firstly, I'm pretty convinced that symbolic substitution is valuable. When writing Haskell, we write ->, \, /= etc. not because it's most legible, but because it's most practical to type those symbols on the most widely available keyboards and popular keyboard layouts.1 Of the three options listed here, symbolic substitution is possible with listings and lhs2tex, but I haven't figured out if minted can do it (which is really the question: can pygments do it?) I'm unsure about proportional versus monospaced fonts. We typically use monospaced fonts for editing computer code, but that's at least partly for historical reasons. Vertical alignment is often very important in source code, and it can be easily achieved with monospaced text; it's also sometimes important to have individual characters (., etc.) not be de-emphasised by being smaller than any other character. lhs2tex, at least, addresses vertical alignment whilst using proportional fonts. I guess the importance of identifying individual significant characters is just as true in a code sample within a larger document as it is within plain source code. From a (brief) scan of research on this topic, it seems that proportional fonts result in marginally quicker reading times for regular prose. It's not clear whether those results carry over into reading computer code in particular, and the margin is slim in any case. The drawbacks of monospaced text mostly apply when the volume of text is large, which is not the case for the short code snippets I am working with. I still have a few open questions:

  1. The Haskell package Data.List.Unicode lets the programmer use a range of Unicode symbols in place of ASCII approximations, such as ∈ instead of elem, ≠ instead of /=. Sadly, it's not possible to replace the denotation for an anonymous function, \, with λ this way.
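Returning to the question of whether Pygments can do symbolic substitution: it supports custom token filters, so something along these lines ought to work. An untested sketch (whether the Haskell lexer really emits ->, \ and /= as single Operator tokens is exactly what would need checking):

from pygments import highlight
from pygments.filter import Filter
from pygments.formatters import LatexFormatter
from pygments.lexers import HaskellLexer
from pygments.token import Operator

class SymbolFilter(Filter):
    # Replace ASCII operator spellings with typographic equivalents.
    SUBST = {"->": "→", "\\": "λ", "/=": "≠"}

    def filter(self, lexer, stream):
        for ttype, value in stream:
            if ttype in Operator and value in self.SUBST:
                value = self.SUBST[value]
            yield ttype, value

lexer = HaskellLexer()
lexer.add_filter(SymbolFilter())
print(highlight(r"f = \x -> x /= 0", lexer, LatexFormatter()))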

11 October 2024

Steve McIntyre: Rock 5 ITX

It's been a while since I've posted about arm64 hardware. The last machine I spent my own money on was a SolidRun Macchiatobin, about 7 years ago. It's a small (mini-ITX) board with a 4-core arm64 SoC (4 * Cortex-A72) on it, along with things like a DIMM socket for memory, lots of networking, and 3 SATA disk interfaces. The Macchiatobin was a nice machine compared to many earlier systems, but it took quite a bit of effort to get it working to my liking. I replaced the on-board U-Boot firmware binary with an EDK2 build, and that helped. After a few iterations we got a new build including graphical output on a PCIe graphics card. Now it worked much more like a "normal" x86 computer. I still have that machine running at home, and it's been a reasonably reliable little build machine for arm development and testing. It's starting to show its age, though - the onboard USB ports no longer work, and so it's no longer useful for doing things like installation testing. :-/ So... I was involved in a conversation in the #debian-arm IRC channel a few weeks ago, and diederik suggested the Radxa Rock 5 ITX. It's another mini-ITX board, this time using a Rockchip RK3588 CPU. Things have moved on - the CPU is now an 8-core big.LITTLE config: 4*Cortex-A76 and 4*Cortex-A55. The board has NVMe on-board, 4*SATA, built-in Mali graphics from the CPU, and soldered-on memory. Just about everything you need on an SBC for a small low-power desktop, a NAS or whatever. And for about half the price I paid for the Macchiatobin. I hit "buy" on one of the listed websites. :-) A few days ago, the new board landed. I picked the version with 24GB of RAM and bought the matching heatsink and fan. I set it up in an existing case borrowed from another old machine and tried the Radxa "Debian" build. All looked OK, but I clearly wasn't going to stay with that. Onwards to running a native Debian setup! I installed an EDK2 build from https://github.com/edk2-porting/edk2-rk3588 onto the onboard SPI flash, then rebooted with a Debian 12.7 (Bookworm) arm64 installer image on a USB stick. How much trouble could this be? I was shocked! It Just Worked (TM)! I'm running a standard Debian arm64 system. The graphical installer ran just fine. I installed onto the NVMe, adding an Xfce desktop for some simple tests. Everything Just Worked. After many years of fighting with a range of different arm machines (from simple SBCs to desktops and servers), this was without doubt the most straightforward setup I've ever done. Wow! It's possible to go and spend a lot of money on an Ampere machine, and I've seen them work well too. But for a hobbyist user (or even a smaller business), the Rock 5 ITX is a lovely option. Total cost to me for the board with shipping fees, import duty, etc. was just over £240. That's great value, and I can wholeheartedly recommend this board! The two things that are missing compared to the Macchiatobin? It has soldered-on memory (but hey, 24GB is plenty for me!), and it doesn't have a PCIe slot; but it has sufficient onboard network, video and storage interfaces that I think it will cover most people's needs. Where's the catch? It seems these are very popular right now, so it can be difficult to find these machines in stock online. FTAOD, I should also point out: I bought this machine entirely with my own money, for my own use for development and testing. I've had no contact with the Radxa or Rockchip folks at all here, I'm just so happy with this machine that I've felt the need to shout about it! :-) Here are some pictures...
Rock 5 ITX top view · Rock 5 ITX back panel view · Rock 5 EDK2 startup · Rock 5 xfce login · Rock 5 ITX running Firefox

Freexian Collaborators: Monthly report about Debian Long Term Support, September 2024 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian's Debian LTS offering.

Debian LTS contributors In September, 18 contributors were paid to work on Debian LTS. Their reports are available:
  • Abhijith PA did 7.0h (out of 0.0h assigned and 14.0h from previous period), thus carrying over 7.0h to the next month.
  • Adrian Bunk did 51.75h (out of 9.25h assigned and 55.5h from previous period), thus carrying over 13.0h to the next month.
  • Arturo Borrero Gonzalez did 10.0h (out of 0.0h assigned and 10.0h from previous period).
  • Bastien Roucariès did 20.0h (out of 20.0h assigned).
  • Ben Hutchings did 20.0h (out of 12.0h assigned and 12.0h from previous period), thus carrying over 4.0h to the next month.
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Daniel Leidert did 23.0h (out of 26.0h assigned), thus carrying over 3.0h to the next month.
  • Emilio Pozuelo Monfort did 23.5h (out of 22.25h assigned and 37.75h from previous period), thus carrying over 36.5h to the next month.
  • Guilhem Moulin did 22.25h (out of 20.0h assigned and 2.5h from previous period), thus carrying over 0.25h to the next month.
  • Lucas Kanashiro did 10.0h (out of 5.0h assigned and 15.0h from previous period), thus carrying over 10.0h to the next month.
  • Markus Koschany did 40.0h (out of 40.0h assigned).
  • Ola Lundqvist did 6.5h (out of 14.5h assigned and 9.5h from previous period), thus carrying over 17.5h to the next month.
  • Roberto C. Sánchez did 24.75h (out of 21.0h assigned and 3.75h from previous period).
  • Santiago Ruano Rincón did 19.0h (out of 19.0h assigned).
  • Sean Whitton did 0.75h (out of 4.0h assigned and 2.0h from previous period), thus carrying over 5.25h to the next month.
  • Sylvain Beucler did 16.0h (out of 42.0h assigned and 18.0h from previous period), thus carrying over 44.0h to the next month.
  • Thorsten Alteholz did 11.0h (out of 11.0h assigned).
  • Tobias Frost did 17.0h (out of 7.5h assigned and 9.5h from previous period).

Evolution of the situation In September, we released 52 DLAs. September marked the first full month of Debian 11 bullseye under the responsibility of the LTS Team, and the team immediately got to work, publishing more than 4 dozen updates. Some notable updates include ruby2.7 (denial-of-service, information leak, and remote code execution), git (various arbitrary code execution vulnerabilities), firefox-esr (multiple issues), gnutls28 (information disclosure), thunderbird (multiple issues), cacti (cross-site scripting and SQL injection), redis (unauthorized access, denial of service, and remote code execution), mariadb-10.5 (arbitrary code execution), and cups (arbitrary code execution). Several LTS contributors have also contributed package updates which either resulted in a DSA (a Debian Security Announcement, which applies to Debian 12 bookworm) or in an upload that will be published at the next stable point release of Debian 12 bookworm. This list of packages includes cups, cups-filters, booth, nghttp2, puredata, python3.11, sqlite3, and wireshark. This sort of work, contributing fixes to newer Debian releases (and sometimes even to unstable), helps to ensure that upgrades from a release in the LTS phase of its lifecycle to a newer release do not expose users to vulnerabilities which have been closed in the older release. Looking beyond Debian, LTS contributor Bastien Roucariès has worked with the upstream developers of apache2 to address regressions introduced upstream by some recent vulnerability fixes, and he has also reached out to the community regarding a newly discovered security issue in the dompurify package. LTS contributor Santiago Ruano Rincón has undertaken the work of triaging and reproducing nearly 4 dozen CVEs potentially affecting the freeimage package. The upstream development of freeimage appears to be dormant and some of the issues have languished for more than 5 years. It is unclear how much can be done without the aid of upstream, but we will do our best to provide as much help to the community as we can feasibly manage. Finally, it is sometimes necessary to limit or discontinue support for certain packages. The transition of a release from being under the responsibility of the Debian Security Team to that of the LTS Team is an occasion where we assess any pending decisions in this area and formalize them. Please see the announcement for a complete list of packages which have been designated as unsupported.

Thanks to our sponsors Sponsors that joined recently are in bold.

10 October 2024

Gunnar Wolf: Started a guide to writing FUSE filesystems in Python

As DebConf22 was coming to an end, in Kosovo, talking with Eeveelweezel they invited me to prepare a talk to give for the Chicago Python User Group. I replied that I'm not really that much of a Python guy… but would think about a topic. Two years passed. I met Eeveelweezel again at DebConf24 in Busan, South Korea. And the topic came up again. I had thought of some ideas, but none really pleased me. Again, I do write some Python when needed, and I teach using Python, as it's the language I find my students can best cope with. But delivering a talk to ChiPy? On the other hand, I have long used a very simplistic and limited filesystem I've designed as an implementation project at class: FIUnamFS (for Facultad de Ingeniería, Universidad Nacional Autónoma de México: the Engineering Faculty of Mexico's National University, where I teach. Sorry, the link is in Spanish, but you will find several implementations of it from the students). It is a toy filesystem, with as many bad characteristics as you can think of, but easy to specify and implement. It is based on contiguous file allocation, has no support for sub-directories, and is often limited to the size of a 1.44MB floppy disk. As I give this filesystem as a project to my students (and not as mere homework), I always ask them to try and provide a good, polished, professional interface, not just the simplistic menu I often get. And I tell them the best possible interface would be if they provide support for FIUnamFS transparently, usable by the user without thinking too much about it. With high probability, that would mean: use FUSE. Python FUSE But, in the six semesters I've used this project (with 30-40 students per semester group), only one student has bitten the bullet and presented a FUSE implementation. Maybe this is because it's not easy to understand how to build a FUSE-based filesystem from a high-level language such as Python? Yes, I've seen several implementation examples and even nice web pages (i.e. the examples shipped with the python-fuse module, Stavros' passthrough filesystem, Dave's filesystem based upon, and further explaining, Stavros', and several others) explaining how to provide basic functionality. I found a particularly useful presentation by Matteo Bertozzi presented ~15 years ago at PyCon4. But none of those is, IMO, followable enough by itself. Also, most of them are very old (maybe the world is telling me something that I refuse to understand?). And of course, there isn't a single interface to work from. In Python alone, we can find python-fuse, Pyfuse, Fusepy… where to start from? So I set out to try and help. Over the past couple of weeks, I have been slowly working on my own version, and presenting it as a progressive set of tasks, adding filesystem calls, and being careful to thoroughly document what I write (but maybe my documentation ends up obfuscating the intent? I hope not and, read on, I've provided some remediation). I registered a GitLab project for a hand-holding guide to writing FUSE-based filesystems in Python. This is a project where I present several working FUSE filesystem implementations, some of them RAM-based, some passthrough-based, and I intend to also add filesystems backed by pseudo-block-devices (for implementations such as my FIUnamFS). So far, I have added five stepwise pieces, starting from the barest possible empty filesystem, and adding system calls (and functionality) until reaching (so far) either a read-write filesystem in RAM with basic stat() support or a read-only passthrough filesystem.
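To give a taste of how little code a minimal FUSE filesystem needs, here is a tiny read-only example using the fusepy interface (my own illustrative sketch, not one of the guide's implementations):

import errno
import stat
import sys

from fuse import FUSE, FuseOSError, Operations  # provided by the fusepy package

FILES = {"hello.txt": b"Hello from FUSE!\n"}

class HelloFS(Operations):
    # A single-directory, read-only filesystem kept in RAM.

    def getattr(self, path, fh=None):
        if path == "/":
            return {"st_mode": stat.S_IFDIR | 0o755, "st_nlink": 2}
        data = FILES.get(path.lstrip("/"))
        if data is None:
            raise FuseOSError(errno.ENOENT)
        return {"st_mode": stat.S_IFREG | 0o444, "st_nlink": 1,
                "st_size": len(data)}

    def readdir(self, path, fh):
        return [".", ".."] + list(FILES)

    def read(self, path, size, offset, fh):
        return FILES[path.lstrip("/")][offset:offset + size]

if __name__ == "__main__":
    # Usage: python3 hellofs.py /some/mountpoint
    FUSE(HelloFS(), sys.argv[1], foreground=True)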
I think providing fun or useful examples is also a good way to get students to use what I'm teaching, so I've added some ideas I've had: a DNS filesystem, an on-the-fly markdown compiling filesystem, an unzip filesystem and an uncomment filesystem. They all provide something that could be seen as useful, in a way that's easy to teach, in just some tens of lines. And, in case my comments/documentation are too long to read, uncommentfs will happily strip all comments and whitespace automatically! So I will be delivering my talk tomorrow (2024.10.10, 18:30 GMT-6) at ChiPy (virtually). I am also presenting this talk at Jornadas Regionales de Software Libre in Santa Fe, Argentina, next week (virtually as well). And also in November, in person, at nerdear.la, which will be held in Mexico City for the first time. Of course, I will also share this project with my students in the next couple of weeks… and hope it manages to lure them into implementing FUSE in Python. At some point, I shall report! Update: After delivering my ChiPy talk, I have uploaded it to YouTube: A hand-holding guide to writing FUSE-based filesystems in Python, and after presenting at Jornadas Regionales, I present you the video in Spanish here: Aprendiendo y enseñando a escribir sistemas de archivo en espacio de usuario con FUSE y Python.

Freexian Collaborators: Debian Contributions: Packaging Pydantic v2, Reworking of glib2.0 for cross bootstrap, Python archive rebuilds and more! (by Anupa Ann Joseph)

Debian Contributions: 2024-09 Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Pydantic v2, by Colin Watson Pydantic is a useful library for validating data in Python using type hints: Freexian uses it in a number of projects, including Debusine. Its Debian packaging had been stalled at 1.10.17 in testing for some time, partly due to needing to make sure everything else could cope with the breaking changes introduced in 2.x, but mostly due to needing to sort out packaging of its new Rust dependencies. Several other people (notably Alexandre Detiste, Andreas Tille, Drew Parsons, and Timo Röhling) had made some good progress on this, but nobody had quite got it over the line and it seemed a bit stuck. Colin upgraded a few Rust libraries to new upstream versions, packaged rust-jiter, and chased various failures in other packages. This eventually allowed getting current versions of both pydantic-core and pydantic into testing. It should now be much easier for us to stay up to date routinely.

Reworking of glib2.0 for cross bootstrap, by Helmut Grohne Simon McVittie (not affiliated with Freexian) earlier restructured the libglib2.0-dev package such that it would absorb more functionality and in particular provide tools for working with .gir files. Those tools practically require being run for their host architecture (in practice this means running under qemu-user), which is at odds with the requirements of architecture cross bootstrap. The qemu requirement was expressed in package dependencies, and also made people unhappy when attempting to use libglib2.0-dev for i386 on amd64 without resorting to qemu. The use of qemu in architecture bootstrap is particularly problematic, as it tends not to be ready at the time bootstrapping is needed. As a result, Simon proposed and implemented the introduction of a libgio-2.0-dev package providing a subset of libglib2.0-dev that does not require qemu. Packages should continue to use libglib2.0-dev in their Build-Depends unless involved in architecture bootstrap. Helmut reviewed and tested the implementation and integrated the necessary changes into rebootstrap. He also prepared a patch for libverto to use the new package and proposed adding forward compatibility to glib2.0. Helmut continued working on adding cross-exe-wrapper to architecture-properties and implemented autopkgtests later improved by Simon. The cross-exe-wrapper package now provides a generic mechanism to run a program for a different architecture, using qemu only when needed. For instance, a dependency on cross-exe-wrapper:i386 provides an i686-linux-gnu-cross-exe-wrapper program that can be used to wrap an ELF executable for the i386 architecture. When installed on amd64 or i386 it will skip installing or running qemu, but for other architectures qemu will be used automatically. This facility can be used to support cross building with targeted use of qemu in cases where running host code is unavoidable, as is the case for GObject introspection. This concludes the joint work with Simon and Niels Thykier on glib2.0 and architecture-properties, resolving known architecture bootstrap regressions arising from the glib2.0 refactoring earlier this year.

Analyzing binary package metadata, by Helmut Grohne As Guillem Jover (not affiliated with Freexian) continues to work on adding metadata tracking to dpkg, the question arises of how this affects existing packages. The dedup.debian.net infrastructure provides an easy playground to answer such questions, so Helmut gathered file metadata from all binary packages in unstable and performed an explorative analysis. Some results include: Guillem also performed a cursory analysis and reported other problem categories, such as mismatching directory permissions for directories installed by multiple packages, and thus gained a better understanding of what consistency checks dpkg can enforce.

Python archive rebuilds, by Stefano Rivera Last month Stefano started to write some tooling to do large-scale rebuilds in debusine, starting with finding packages that had already started to fail to build from source (FTBFS) due to the removal of setup.py test. This month, Stefano did some more rebuilds, starting with experimental versions of dh-python. During the Python 3.12 transition, we had added a dependency on python3-setuptools to dh-python, to ease the transition. Python 3.12 removed distutils from the stdlib, but many packages were expecting it to still be available. Setuptools contains a version of distutils, and dh-python was a convenient place to depend on setuptools for most package builds. This dependency was never meant to be permanent. A rebuild without it resulted in mass-filing about 340 bugs (and around 80 more by mistake). A new feature in Python 3.12 was to have unittest's test runner exit with a non-zero return code if no tests were run. This feature was added to be able to detect tests that are, by mistake, not being discovered. We are ignoring this failure, as we wouldn't want to suddenly cause hundreds of packages to fail to build if they have no tests. Stefano did a rebuild to see how many packages were affected, and found that around 1000 were. The Debian Python community has not come to a conclusion on how to move forward with this. As soon as Python 3.13 release candidate 2 was available, Stefano did a rebuild of the Python packages in the archive against it. This was a more complex rebuild than the others, as it had to be done in stages. Many packages need other Python packages at build time, typically to run tests. So transitions like this involve some manual bootstrapping, followed by several rounds of builds. Not all packages could be tested, as not all their dependencies support 3.13 yet. The result was around 100 bugs in packages that need work to support Python 3.13. Many other packages will need additional work to properly support Python 3.13, but being able to build (and run tests) is an important first step.
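As an aside, the unittest change mentioned above is easy to observe. A quick sketch (empty_tests is a hypothetical directory containing no test cases; the precise non-zero exit code is an implementation detail of unittest in 3.12+):

import subprocess
import sys

# Under Python 3.12+, `python -m unittest` exits non-zero when discovery
# finds and runs zero tests, instead of reporting success as older
# versions did.
proc = subprocess.run(
    [sys.executable, "-m", "unittest", "discover", "-s", "empty_tests"]
)
print(proc.returncode)  # non-zero on 3.12+ when no tests ran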

Miscellaneous contributions
  • Carles prepared an update of the python-pyaarlo package to a new upstream release.
  • Carles worked on updating python-ring-doorbell to a new upstream release. Unfinished; pending packaging of a new dependency, python3-firebase-messaging (RFP #1082958), and its dependency python3-http-ece (RFP #1083020).
  • Carles improved po-debconf-manager. The main new feature is that it can open Salsa merge requests. He is aiming for it to be functional end to end by a lightning talk at MiniDebConf Toulouse (November), and to get feedback from the wider public on this proof of concept.
  • Carles helped one translator to use po-debconf-manager (added compatibility for bullseye, fixed other issues) and reviewed 17 package templates.
  • Colin upgraded the OpenSSH packaging to 9.9p1.
  • Colin upgraded the various YubiHSM packages to new upstream versions, enabled more tests, fixed yubihsm-shell build failures on some 32-bit architectures, made yubihsm-shell build reproducibly, and fixed yubihsm-connector to apply udev rules to existing devices when the package is installed. As usual, bookworm-backports is up to date with all these changes.
  • Colin fixed quite a bit of fallout from setuptools 72.0.0 removing setup.py test, backported a large upstream patch set to make buildbot work with SQLAlchemy 2.0, and upgraded 25 other Python packages to new upstream versions.
  • Enrico worked with Jakob Haufe to get him up to speed for managing sso.debian.org.
  • Raphaël removed spam entries in the list of teams on tracker.debian.org (see #1080446), and he applied a few external contributions, fixing a rendering issue and replacing the DDPO link with a more useful alternative. He also gave feedback on a couple of merge requests that required more work. As part of the analysis of the underlying problem, he suggested to the ftpmasters (via #1083068) to auto-reject packages having the too-many-contacts lintian error, and he raised the severity of #1076048 to serious to actually have that 4-year-old bug fixed.
  • Raphaël uploaded zim and hamster-time-tracker to fix issues with Python 3.12 getting rid of setuptools. He also uploaded a new gnome-shell-extension-hamster to cope with the upcoming transition to GNOME 47.
  • Helmut sent seven patches and sponsored one upload for cross build failures.
  • Helmut uploaded a Nagios/Icinga plugin check-smart-attributes for monitoring the health of physical disks.
  • Helmut collaborated on sbuild reviewing and improving a MR for refactoring the unshare backend.
  • Helmut sent a patch fixing coinstallability of gcc-defaults.
  • Helmut continued to monitor the evolution of the /usr-move. With more and more key packages such as libvirt or fuse3 fixed, we're moving into the boring long tail of the transition.
  • Helmut proposed updating the meson buildsystem in debhelper to use env2mfile.
  • Helmut continued to update patches maintained in rebootstrap. Due to the work on glib2.0 above, rebootstrap moves a lot further, but still fails for any architecture.
  • Santiago reviewed some merge requests in Salsa CI, such as !478, proposed by Otto to extend the information about how to use additional runners in the pipeline, and !518, proposed by Ahmed to add support for Ubuntu images, which will help test how some Debian packages, including the complex MariaDB, are built on Ubuntu. Santiago also prepared !545, which will make the reprotest job more consistent with the results seen on reproducible-builds.
  • Santiago worked on different tasks related to DebConf 25. In particular, he drafted the fundraising brochure (which is almost ready).
  • Thorsten Alteholz uploaded the package libcupsfilter to fix the autopkgtest and a dependency problem of this package. After the package splix was abandoned by upstream and OpenPrinting.org adopted its maintenance, Thorsten uploaded their first release.
  • Anupa published posts on the Debian Administrators group in LinkedIn and moderated the group, one of the tasks of the Debian Publicity Team.
  • Anupa helped organize DebUtsav 2024. It had over 100 attendees, with hands-on sessions on making initial contributions to the Linux kernel, Debian packaging, submitting documentation to the Debian wiki, and assisting with Debian installations.

9 October 2024

Ben Hutchings: FOSS activity in September 2024

7 October 2024

Reproducible Builds: Reproducible Builds in September 2024

Welcome to the September 2024 report from the Reproducible Builds project! Our reports attempt to outline what we've been up to over the past month, highlighting news items from elsewhere in tech where they are related. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website. Table of contents:
  1. New binsider tool to analyse ELF binaries
  2. Unreproducibility of GHC Haskell compiler 95% fixed
  3. Mailing list summary
  4. Towards a 100% bit-for-bit reproducible OS
  5. Two new reproducibility-related academic papers
  6. Distribution work
  7. diffoscope
  8. Other software development
  9. Android toolchain core count issue reported
  10. New Gradle plugin for reproducibility
  11. Website updates
  12. Upstream patches
  13. Reproducibility testing framework

New binsider tool to analyse ELF binaries Reproducible Builds developer Orhun Parmaksız has announced a fantastic new tool to analyse the contents of ELF binaries. According to the project's README page:
Binsider can perform static and dynamic analysis, inspect strings, examine linked libraries, and perform hexdumps, all within a user-friendly terminal user interface!
More information about Binsider's features and how it works can be found within Binsider's documentation pages.

Unreproducibility of GHC Haskell compiler 95% fixed A seven-year-old bug about the nondeterminism of object code generated by the Glasgow Haskell Compiler (GHC) received a recent update, consisting of Rodrigo Mesquita noting that the issue is:
95% fixed by [merge request] !12680 when -fobject-determinism is enabled. [ ]
The linked merge request has since been merged, and Rodrigo goes on to say that:
After that patch is merged, there are some rarer bugs in both interface file determinism (eg. #25170) and in object determinism (eg. #25269) that need to be taken care of, but the great majority of the work needed to get there should have been merged already. When merged, I think we should close this one in favour of the more specific determinism issues like the two linked above.

Mailing list summary On our mailing list this month:
  • Fay Stegerman let everyone know that she started a thread on the Fediverse about the problems caused by unreproducible zlib/deflate compression in .zip and .apk files and later followed up with the results of her subsequent investigation.
  • Long-time developer kpcyrd wrote that there has been a recent public discussion on the Arch Linux GitLab [instance] about the "challenges and possible opportunities for making the Linux kernel package reproducible", all relating to the CONFIG_MODULE_SIG flag. [ ]
  • Bernhard M. Wiedemann followed up on an in-person conversation at our recent Hamburg 2024 summit about the potential presence of Reproducible Builds in recognised standards. [ ]
  • Fay Stegerman also wrote about her worry about the possible repercussions for RB tooling of Debian migrating from zlib to zlib-ng, as reproducibility requires identical compressed data streams. [ ]
  • Martin Monperrus wrote to the list announcing the latest release of maven-lockfile, which is designed to aid building Maven projects with integrity. [ ]
  • Lastly, Bernhard M. Wiedemann wrote about the potential role of reproducible builds in combatting silent data corruption, as detailed in a recent Tweet and scholarly paper on faulty CPU cores. [ ]

Towards a 100% bit-for-bit reproducible OS Bernhard M. Wiedemann began writing about his journey towards a 100% bit-for-bit reproducible operating system on the openSUSE wiki:
This is a report of Part 1 of my journey: building 100% bit-reproducible packages for every package that makes up [openSUSE's] minimalVM image. This target was chosen as the smallest useful result/artifact. The larger package-sets get, the more disk-space and build-power is required to build/verify all of them.
This work was sponsored by NLnet's NGI Zero fund.

Distribution work In Debian this month, 14 reviews of Debian packages were added, 12 were updated and 20 were removed, all adding to our knowledge about identified issues. A number of issue types were updated as well. [ ][ ] In addition, Holger opened 4 bugs against the debrebuild component of the devscripts suite of tools. In particular:
  • #1081047: Fails to download .dsc file.
  • #1081048: Does not work with a proxy.
  • #1081050: Fails to create a debrebuild.tar.
  • #1081839: Fails with an "E: mmdebstrap failed to run" error.
Last month, an issue was filed to update the Salsa CI pipeline (used by 1,000s of Debian packages) to no longer test for reproducibility with reprotest's build_path variation. Holger Levsen provided a rationale for this change in the issue, which has already been made to the tests being performed by tests.reproducible-builds.org. This month, this issue was closed by Santiago R. R., nicely explaining that build path variation is no longer the default, and, if desired, how developers may enable it again. In openSUSE news, Bernhard M. Wiedemann published another report for that distribution.

diffoscope diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading version 278 to Debian:
  • New features:
    • Add a helpful contextual message to the output if comparing Debian .orig tarballs within .dsc files without the ability to fuzzy-match away the leading directory. [ ]
  • Bug fixes:
    • Drop removal of calculated os.path.basename from GNU readelf output. [ ]
    • Correctly invert "X% similar" value and do not emit "100% similar". [ ]
  • Misc:
    • Temporarily remove procyon-decompiler from Build-Depends as it was removed from testing (via #1057532). (#1082636)
    • Update copyright years. [ ]
For trydiffoscope, the command-line client for the web-based version of diffoscope, Chris Lamb also:
  • Added an explicit python3-setuptools dependency. (#1080825)
  • Bumped the Standards-Version to 4.7.0. [ ]

Other software development disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into system calls to reliably flush out reproducibility issues. This month, version 0.5.11-4 was uploaded to Debian unstable by Holger Levsen making the following changes:
  • Replace build-dependency on the obsolete pkg-config package with one on pkgconf, following a Lintian check. [ ]
  • Bump Standards-Version field to 4.7.0, with no related changes needed. [ ]

In addition, reprotest is our tool for building the same source code twice in different environments and then checking the binaries produced by each build for any differences. This month, version 0.7.28 was uploaded to Debian unstable by Holger Levsen including a change by Jelle van der Waa to move away from the pipes Python module to shlex, as the former will be removed in Python version 3.13 [ ].

Android toolchain core count issue reported Fay Stegerman reported an issue with the Android toolchain where a part of the build system generates a different classes.dex file (and thus a different .apk) depending on the number of cores available during the build, thereby breaking Reproducible Builds:
We've rebuilt [tag v3.6.1] multiple times (each time in a fresh container): with 2, 4, 6, 8, and 16 cores available, respectively:
  • With 2 and 4 cores we always get an unsigned APK with SHA-256 14763d682c9286ef….
  • With 6, 8, and 16 cores we get an unsigned APK with SHA-256 35324ba4c492760… instead.

New Gradle plugin for reproducibility A new plugin for the Gradle build tool for Java has been released. This easily-enabled plugin results in:
reproducibility settings [being] applied to some of Gradle's built-in tasks that should really be the default. Compatible with Java 8 and Gradle 8.3 or later.

Website updates There were a rather substantial number of improvements made to our website this month, including:

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In September, a number of changes were made by Holger Levsen, including:
  • Debian-related changes:
    • Upgrade the osuosl4 node to Debian trixie in anticipation of running debrebuild and rebuilderd there. [ ][ ][ ]
    • Temporarily mark the osuosl4 node as offline due to ongoing xfs_repair filesystem maintenance. [ ][ ]
    • Do not warn about (very old) broken nodes. [ ]
    • Add the riscv64 architecture to the multiarch version skew tests for Debian trixie and sid. [ ][ ][ ]
    • Mark the virt{32,64}b nodes as down. [ ]
  • Misc changes:
    • Add support for powercycling OpenStack instances. [ ]
    • Update fail2ban to ban hosts for 4 weeks in total [ ][ ] and take care to never ban our own Jenkins instance. [ ]
In addition, Vagrant Cascadian recorded a disk failure for the virt32b and virt64b nodes [ ], performed some maintenance of the cbxi4a node [ ][ ] and marked most armhf architecture systems as being back online.

Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:

1 October 2024

Colin Watson: Free software activity in September 2024

Almost all of my Debian contributions this month were sponsored by Freexian. You can also support my work directly via Liberapay. Pydantic My main Debian project for the month turned out to be getting Pydantic back into a good state in Debian testing. I've used Pydantic quite a bit in various projects, most recently in Debusine, so I have an interest in making sure it works well in Debian. However, it had been stalled on 1.10.17 for quite a while due to the complexities of getting 2.x packaged. This was partly making sure everything else could cope with the transition, but in practice mostly sorting out packaging of its new Rust dependencies. Several other people (notably Alexandre Detiste, Andreas Tille, Drew Parsons, and Timo Röhling) had made some good progress on this, but nobody had quite got it over the line and it seemed a bit stuck. Learning Rust is on my to-do list, but merely not knowing a language hasn't stopped me before. So I learned how the Debian Rust team's packaging works, upgraded a few packages to new upstream versions (including rust-half and upstream rust-idna test fixes), and packaged rust-jiter. After a lot of waiting around for various things and chasing some failures in other packages I was eventually able to get current versions of both pydantic-core and pydantic into testing. I'm looking forward to being able to drop our clunky v1 compatibility code once debusine can rely on running on trixie! OpenSSH I upgraded the Debian packaging to OpenSSH 9.9p1. YubiHSM I upgraded python-yubihsm, yubihsm-connector, and yubihsm-shell to new upstream versions. I noticed that I could enable some tests in python-yubihsm and yubihsm-shell; I'd previously thought the whole test suite required a real YubiHSM device, but when I looked closer it turned out that this was only true for some tests. I fixed yubihsm-shell build failures on some 32-bit architectures (upstream PRs #431, #432), and also made it build reproducibly. Thanks to Helmut Grohne, I fixed yubihsm-connector to apply udev rules to existing devices when the package is installed. As usual, bookworm-backports is up to date with all these changes. Python team setuptools 72.0.0 removed the venerable setup.py test command. This caused some fallout in Debian, some of which was quite non-obvious, as packaging helpers sometimes fell back to different ways of running test suites that didn't quite work. I fixed django-guardian, manuel, python-autopage, python-flask-seeder, python-pgpdump, python-potr, python-precis-i18n, python-stopit, serpent, straight.plugin, supervisor, and zope.i18nmessageid. As usual for new language versions, the addition of Python 3.13 caused some problems. I fixed psycopg2, python-time-machine, and python-traits. I fixed build/autopkgtest failures in keymapper, python-django-test-migrations, python-rosettasciio, routes, transmissionrpc, and twisted. buildbot was in a bit of a mess due to being incompatible with SQLAlchemy 2.0. Fortunately by the time I got to it upstream had committed a workable set of patches, and the main difficulty was figuring out what to cherry-pick since they haven't made a new upstream release with all of that yet. I figured this out and got us up to 4.0.3. Adrian Bunk asked whether python-zipp should be removed from trixie. I spent some time investigating this and concluded that the answer was no, but looking into it was an interesting exercise anyway. On the other hand, I looked into flask-appbuilder, concluded that it should be removed, and filed a removal request.
I upgraded some embedded CSS files in nbconvert. I upgraded importlib-resources, ipywidgets, jsonpickle, pydantic-settings, pylint (fixing a test failure), python-aiohttp-session, python-apptools, python-asyncssh, python-django-celery-beat, python-django-rules, python-limits, python-multidict, python-persistent, python-pkginfo, python-rt, python-spur, python-zipp, stravalib, transmissionrpc, vulture, zodbpickle, zope.exceptions (adopting it), zope.i18nmessageid, zope.proxy, and zope.security to new upstream versions. debmirror The experimental and *-proposed-updates suites used to not have Contents-* files, and a long time ago debmirror was changed to just skip those files in those suites. They were added to the Debian archive some time ago, but debmirror carried on skipping them anyway. Once I realized what was going on, I removed these unnecessary special cases (#819925, #1080168).

Guido Günther: Free Software Activities September 2024

Another short status update of what happened on my side last month. Besides the usual amount of housekeeping, last month was a lot about getting old issues resolved by finishing some stale merge requests and work-in-progress MRs. I also pushed out the Phosh 0.42.0 Release. phosh phoc phosh-mobile-settings libphosh-rs phosh-osk-stub phosh-wallpapers meta-phosh Debian ModemManager Calls bluez gnome-text-editor feedbackd Chatty libcall-ui glib wlr-protocols git-buildpackage iio-sensor-proxy Fotema Help Development If you want to support my work see donations. This includes a list of hardware we want to improve support for. Thanks a lot to all current and past donors.

29 September 2024

Reproducible Builds: Supporter spotlight: Kees Cook on Linux kernel security

The Reproducible Builds project relies on several projects, supporters and sponsors for financial support, but they are also valued as ambassadors who spread the word about our project and the work that we do. This is the eighth installment in a series featuring the projects, companies and individuals who support the Reproducible Builds project. We started this series by featuring the Civil Infrastructure Platform project, and followed this up with a post about the Ford Foundation as well as recent ones about ARDC, the Google Open Source Security Team (GOSST), Bootstrappable Builds, the F-Droid project, David A. Wheeler and Simon Butler. Today, however, we will be talking with Kees Cook, founder of the Kernel Self-Protection Project.

Vagrant Cascadian: Could you tell me a bit about yourself? What sort of things do you work on? Kees Cook: I'm a Free Software junkie living in Portland, Oregon, USA. I have been focusing on the upstream Linux kernel's protection of itself. There is a lot of support that the kernel provides userspace to defend itself, but when I first started focusing on this there was not as much attention given to the kernel protecting itself. As userspace got more hardened the kernel itself became a bigger target. Almost 9 years ago I formally announced the Kernel Self-Protection Project because the work necessary was way more than my time and expertise could do alone. So I just try to get people to help as much as possible: people who understand the ARM architecture, people who understand the memory management subsystem, people who understand how to make the kernel less buggy.
Vagrant: Could you describe the path that led you to working on this sort of thing? Kees: I have always been interested in security through the aspect of exploitable flaws. I always thought it was like a magic trick to make a computer do something that it was very much not designed to do, and seeing how easy it is to subvert bugs. I wanted to improve that fragility. In 2006, I started working at Canonical on Ubuntu and was mainly focusing on bringing Debian and Ubuntu up to what was the state of the art for Fedora and Gentoo's security hardening efforts. Both had really pioneered a lot of userspace hardening with compiler flags and ELF stuff and many other things for hardened binaries. On the whole, Debian had not really paid attention to it. Debian's package-building process at the time was sort of a chaotic free-for-all, as there wasn't a centralized build methodology for defining things. Luckily that did slowly change over the years. In Ubuntu we had the opportunity to apply top-down build rules for hardening all the packages. In 2011 Chrome OS was following along and took advantage of a bunch of the security hardening work, as they were based on ebuild out of Gentoo, and when they looked for someone to help out they reached out to me. We recognized the Linux kernel was pretty much the weakest link in the Chrome OS security posture and I joined them to help solve that. Their userspace was pretty well handled but the kernel had a lot of weaknesses, so focusing on hardening was the next place to go. When I compared notes with other users of the Linux kernel within Google there were a number of common concerns and desires. Chrome OS already had an upstream-first requirement, so I tried to consolidate the concerns and solve them upstream. It was challenging to land anything in other kernel team repos at Google, as they (correctly) wanted to minimize their delta from upstream, so I needed to work on any major improvements entirely in upstream and had a lot of support from Google to do that. As such, my focus shifted further from working directly on Chrome OS into being entirely upstream and being more of a consultant to internal teams, helping with integration or sometimes backporting. Since the volume of needed work was so gigantic I needed to find ways to inspire other developers (both inside and outside of Google) to help. Once I had a budget I tried to get folks paid (or hired) to work on these areas when it wasn't already their job.
Vagrant: So my understanding of some of your recent work is basically defining undefined behavior in the language or compiler? Kees: I've found the term "undefined behavior" to have a really strict meaning within the compiler community, so I have tried to redefine my goal as eliminating "unexpected behavior" or "ambiguous language constructs". At the end of the day ambiguity leads to bugs, and bugs lead to exploitable security flaws. I've been taking a four-pronged approach: supporting the work people are doing to get rid of ambiguity, identifying new areas where ambiguity needs to be removed, actually removing that ambiguity from the C language, and then dealing with any needed refactoring in the Linux kernel source to adapt to the new constraints. None of this is particularly novel; people have recognized how dangerous some of these language constructs are for decades and decades, but I think it is a combination of hard problems and a lot of refactoring that nobody has the interest/resources to do. So, we have been incrementally going after the lowest-hanging fruit. One clear example in recent years was the elimination of C's implicit fall-through in switch statements. The language would just fall through between adjacent cases if a break (or other code flow directive) wasn't present. But this is ambiguous: is the code meant to fall through, or did the author just forget a break statement? By defining the [[fallthrough]] statement, and requiring its use in Linux, all switch statements now have explicit code flow, and the entire class of bugs disappeared. During our refactoring we actually found that 1 in 10 added [[fallthrough]] statements were actually missing break statements. This was an extraordinarily common bug! So getting rid of that ambiguity is where we have been focusing. Another area I've been spending a bit of time on lately is looking at how defensive security work has challenges associated with metrics. How do you measure your defensive security impact? You can't say "because we installed locks on the doors, 20% fewer break-ins have happened." Much of our signal is always secondary or retrospective, which is frustrating: this class of flaw was used X much over the last decade, so if we have eliminated that class of flaw and will never see it again, what is the impact? Is the impact infinity? Attackers will just move to the next easiest thing. But it means that exploitation gets incrementally more difficult. As attack surfaces are reduced, the expense of exploitation goes up.
Vagrant: So it is hard to identify how effective this is; how bad would it be if people just gave up? Kees: I think it would be pretty bad, because as we have seen, using secondary factors, the work we have done in the industry at large, not just the Linux kernel, has had an impact. What we, Microsoft, Apple, and everyone else are doing for their respective software ecosystems has shown that the price of functional exploits on the black market has gone up. Especially for really egregious stuff like a zero-click remote code execution. If those were cheap then obviously we are not doing something right, and it becomes clear that it's trivial for anyone to attack the infrastructure that our lives depend on. But thankfully we have seen over the last two decades that prices for exploits keep going up and up into millions of dollars. I think it is important to keep working on that because, as a central piece of modern computer infrastructure, the Linux kernel has a giant target painted on it. If we give up, we have to accept that our computers are not doing what they were designed to do, which I can't accept. The safety of my grandparents shouldn't be any different from the safety of journalists, and political activists, and anyone else who might be the target of attacks. We need to be able to trust our devices; otherwise why use them at all?
Vagrant: What has been your biggest success in recent years? Kees: I think with all these things I am not the only actor. Almost everything that we have been successful at has been because of a lot of people's work, and one of the big ones that has been coordinated across the ecosystem and across compilers was initializing stack variables to 0 by default. This feature was added in Clang, GCC, and MSVC across the board, even though there were a lot of fears about forking the C language. The worry was that developers would come to depend on zero-initialized stack variables, but this hasn't been the case because we still warn about uninitialized variables when the compiler can figure that out. So you still get the warnings at compile time, but now you can count on the contents of your stack at run-time, and we drop an entire class of uninitialized variable flaws. While the exploitation of this class has mostly been around memory content exposure, it has also been used for control flow attacks. So that was politically and technically a large challenge: convincing people it was necessary, showing its utility, and implementing it in a way that everyone would be happy with, resulting in the elimination of a large and persistent class of flaws in C.
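For reference, the compiler switches behind the feature described above look like the following; this is a minimal sketch, assuming GCC 12+ or a recent Clang (the file name is hypothetical):
gcc -ftrivial-auto-var-init=zero -c example.c      # zero-initialize otherwise-uninitialized stack variables
clang -ftrivial-auto-var-init=zero -c example.c    # same flag spelling in Clang
# The Linux kernel enables the same behaviour through its build configuration:
# CONFIG_INIT_STACK_ALL_ZERO=y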
Vagrant: In a world where things are generally Reproducible do you see ways in which that might affect your work? Kees: One of the questions I frequently get is, "What version of the Linux kernel has feature $foo?" If I know how things are built, I can answer with just a version number. In a Reproducible Builds scenario I can count on the compiler version, compiler flags, kernel configuration, etc.; all those things are known, so I can actually answer definitively that a certain feature exists. So that is an area where Reproducible Builds affects me most directly. Indirectly, being able to trust that the binaries you are running will behave the same for the same build environment is critical for sane testing.
Vagrant: Have you used diffoscope? Kees: I have! One subset of tree-wide refactoring that we do when getting rid of ambiguous language usage in the kernel is when we have to make source-level changes to satisfy some new compiler requirement but where the binary output is not expected to change at all. It is mostly about getting the compiler to understand what is happening and what is intended, in the cases where the old ambiguity does actually match the new unambiguous description. The binary shouldn't change. We have used diffoscope to compare the before and after binaries to confirm that "yep, there is no change in binary".
Vagrant: You cannot just use checksums for that? Kees: For the most part, we need to only compare the text segments. We try to hold as much stable as we can, following the Reproducible Builds documentation for the kernel, but there are macros in the kernel that are sensitive to source line numbers and as a result those will change the layout of the data segment (and sometimes the text segment too). With diffoscope there's flexibility where I can exclude or include different comparisons. Sometimes I just go look at what diffoscope is doing and do that manually, because I can tweak that a little harder, but diffoscope is definitely the default. Diffoscope is awesome!
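To make that workflow concrete, here is a minimal sketch of such a comparison; the file names are hypothetical, and exactly which sections to compare varies by build:
objdump -d --section=.text vmlinux.before > before.txt   # disassemble only the text section
objdump -d --section=.text vmlinux.after > after.txt
diff -u before.txt after.txt                             # expect no differences for a behaviour-preserving refactor
diffoscope vmlinux.before vmlinux.after                  # or get a full structured report instead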
Vagrant: Where else have reproducible builds affected you? Kees: One of the notable wins of reproducible builds lately was dealing with the fallout of the XZ backdoor: just being able to ask the question "is my build environment running the expected code?" and to be able to compare the output generated from one install that never had a vulnerable XZ against one that did. That was important for kernel builds, because the XZ threat actor was working to expand their influence and capabilities to include Linux kernel builds, but they didn't finish their work before they were noticed. I think what happened with Debian proving the build infrastructure was not affected is an important example of how people would have needed to verify the kernel builds too.
Vagrant: What do you want to see for the near or distant future in security work? Kees: For reproducible builds in the kernel, in the work that has been going on in the ClangBuiltLinux project, one of the driving forces of code and usability quality has been the continuous integration work. As soon as something breaks, on the kernel side, the Clang side, or something in between the two, we get a fast signal and can chase it and fix the bugs quickly. I would like to see someone with funding maintain a reproducible kernel build CI. There have been places, with certain architecture configurations or certain build configurations, where we lose reproducibility, and right now we have sort of a standard open source development feedback loop where those things get fixed, but the time between introduction and fix can be large. Getting a CI for reproducible kernels would give us the opportunity to shorten that time.
Vagrant: Well, thanks for that! Any last closing thoughts? Kees: I am a big fan of reproducible builds, thank you for all your work. The world is a safer place because of it.
Vagrant: Likewise for your work!


For more information about the Reproducible Builds project, please see our website at reproducible-builds.org. If you are interested in ensuring the ongoing security of the software that underpins our civilisation and wish to sponsor the Reproducible Builds project, please reach out to the project by emailing contact@reproducible-builds.org.

28 September 2024

Dave Hibberd: EuroBSDCon 2024 Report

This year I attended EuroBSDCon 2024 in Dublin. I always appreciate an excuse to head over to Ireland, and this seemed like a great chance to spend some time in Dublin and learn new things. Due to constraints on my time I didn't go to the 2-day devsummit that precedes the conference, only the main event itself.

The Event EuroBSDCon was attended by about 200-250 people, the hardcore of the BSD community! Attendees came from all over; I met Canadians, USAians, Germanians, Belgians and Irelandians amongst other nationalities! The event was at UCD Dublin, which is a gorgeous university campus about 10km south of Dublin proper in Stillorgan. The speaker hotel was a 20-minute walk (at my ~9min/km pace) from the venue, or a quick bus journey. It was a pleasant walk, through the leafy campus and then along some pretty broad pavements, albeit beside a dual carriageway. The cycle infrastructure was pretty excellent too, but I sadly was unable to lease a city bike and make my way around on 2 wheels - Dublinbikes don't extend that far out of the city. Lunch each day was Irish-themed food - Saturday was beef stew (a Frenchman asked me what it was called - his only equivalent words were "Beef Bourguignon") and Sunday was Bangers & Mash! The kitchen struggled a bit - food was brought out in bowls in waves, and that ensured there was artificial scarcity that clearly left anxiety for some that they weren't going to be fed! Everyone I met was friendly from the day I arrived, and that set me very much at ease and made the event much more enjoyable - things are better shared with others. Big shout out to dch and Blake Willis for spending a lot of time talking to me over the weekend!

Talks I Attended

Keynote: Evidence based Policy formation in the EU Tom Smyth This talk given by Tom Smyth was an interesting look into his work with EU Policymakers in ensuring fair competition for his small, Irish ISP. It was an enlightening look into the workings of the EU and the various bodies that set, and manage policy. It truly is a complicated beast, but the feeling I left with was that there are people all through the organisation who are desperate to do the right thing for EU citizens at all costs. Sadly none of it is directly applicable to me living in the UK, but I still get to have a say on policy and vote in polls as an Irish citizen abroad.

10(ish) years of FreeBSD/arm64 Andrew Turner I have been a fan of ARM platforms for a long, long time. I had an early ARM Chromebook and have been equal measures excited and frustrated by the Raspberry Pi since first contact. I tend to find other ARM people at events and this was no exception! It was an interesting view into one person's dedication to making arm64 a platform for FreeBSD, from starting out with no documentation or hardware to becoming a first-class platform. It's interesting to see the roadmap and things upcoming too, and it makes me hopeful for the future of arm64 in various OSes!

1-800-RC(8)-HELP: Dial Into FreeBSD Service Scripts Mastery! Mateusz Piotrowski rc scripts and startup applications scare me a bit. I'm better at systemd units than sysvinit scripts, but that isn't really a transferable skill! This was a deep dive into lots of the functionality that FreeBSD's RC offers, and it highlighted things that I only thought were limited to Linux's systemd. I am much more aware of what it's capable of now, but I'm still scared to take it on! Afterwards I had a great chat in the hallway with Mateusz about our OSes' different approaches to this problem and was impressed with the pragmatic view he had on startup, systemd, rc and the future!

Package management without borders. Using Ravenports on multiple BSDs Michael Reim Ports on the BSDs interest me, but I hadn't realised that outside of each major BSD's collection there were other, cross-platform ports collections offered. Ravenports is one of these under development, and it was good to understand the hows, whys and what's-happenings of the system. Plus, with my hibbian obsession with building other people's software as my own packages, it's interesting to see how others are doing it!

Building a Modern Packet Radio Network using Open Software me I spoke for 45 minutes to share my passion and frustration for amateur radio, packet radio, the law, the technology and what we're doing in the UK Packet network. This was a lot of fun - it felt like I had a busy room, lots of people interested in the stupid stuff I do with technology, and I had lots of conversations after the fact about radio, telecoms and networking, and at one point was cornered by what I describe as the Erlang Mafia to talk about how they could help!

Hacking - 30 years ago Walter Belgers This unrecorded talk looked at the history of the Dutch hacker scene, and a young group of hackers' explorations of the early internet before modern security was a thing. It was exciting, enrapturing, well presented and a great story of a well-spent youth in front of computers.

Social By 1730 I was pretty drained, so I took myself back to the hotel, missing the last talks, had some downtime, and got the DART train to the social event at Brewdog. This involved about an hour's walking and some train time, and that was a nice time to reset my head and just watch the world. The train I was on had a particularly interesting feature where, when the motors were not loaded (slowdown or coasting), the lights slowly flickered dim-bright-dim. I don't know if this is across the fleet or just this one, but it was fun to pontificate as I looked out the window at South Dublin passing by. The social was good: a few beer tokens (cider in my case, trying to avoid beer-driven hangovers still), some pleasant junk food and plenty of good company to talk to, with lots of people wanting to talk about radio and packet to me! Brewdog struggled a bit, both in bar speed (a linear queue formed despite the staff preferring the crowd-around method of queueing) and in buffet food that appeared in somewhat disjointed waves, meaning that people loitered around the food tables and cleared the plates of wings, sliders, fries, onion rings and mac & cheese as they appeared 4-5 plates at a time. Perhaps a few hundred hungry bodies was a bit too much for them to feed at once. They had shuffleboard that was played all night by various groups! I caught the last bus home, which was relatively painless!

Is our software sustainable? Kent Inge Fagerland Simonsen This was an interesting look into reducing the footprint of software to make it a net benefit. Lots of examples of how little changes can barrel up to big, gigawatt-hour changes when aggregated over the entire install base of Android or iOS!

A Packet's Journey Through the OpenBSD Network Stack Alexander Bluhm This was an analysis of what happens at each stage of networking in OpenBSD and was pretty interesting to see. Lots of it was out of my depth, but it's cool to get an explanation and appreciation for the various elements of how the software handles each packet that arrives, and the differences between the IPv4 and IPv6 stacks!

FreeBSD at 30 Years: Its Secrets to Success Kirk McKusick This was a great statistical breakdown of FreeBSD since inception, including top committers, why certain parts of the system and community work so well, and what has given it staying power compared to some projects on the internet that peter out after just a few years! Kirk's excitement and passion for the project really shone through, and I want to read his similarly titled article in the FreeBSD Journal now!

Building an open native FreeBSD CI system from scratch with lua, C, jails & zfs Dave Cottlehuber Dave spoke pretty excitedly about his work on a CI system using tools that FreeBSD ships with, and introduced me to the integration of C and Lua, which I wasn't fully aware of before. Or I was, and my brain forgot it! With my interest in software builds this year, it was quite a timely look at how others are thinking of doing things (I am doing similar stuff with zfs!). I look forward to playing with it when it is finally released to the Real World!

Building an Appliance Allan Jude This was an interesting look into the tools that FreeBSD provides which can be used to make immutable, appliance OSes without too much overhead. Fail safe upgrades and boots with ZFS, running approved code with secure boot, factory resetting and more were discussed! I have had thoughts around this in the recent past, so it was good to have some ideas validated, some challenged and gave me food for thought.

Experience as a speaker I really enjoyed being a speaker at the event! I've spoken at other things before, but this really was a cut above. The event having money to provide me a hotel was a really welcome surprise, and receiving a gorgeous scarf as a speaker gift was another great surprise (it has already been worn with the change of temperature here in Scotland this week!). I would definitely consider returning, either as an attendee or as a speaker. The community of attendees was pragmatic, interesting, engaging and welcoming, the organising committee were spot-on in their work making it happen, and the whole event, while turning my brain to mush with all the information, was really enjoyable; I left energised and excited by things instead of ground down and tired.

26 September 2024

Vasudev Kamath: Disabling Lockdown Mode with Secure Boot on Distro Kernel

In my previous post, I mentioned that Lockdown mode is activated when Secure Boot is enabled. One way to override this was to use a self-compiled upstream kernel. However, sometimes we might want to use the distribution kernel itself. This post explains how to disable lockdown mode while keeping Secure Boot enabled with a distribution kernel.
Understanding Secure Boot Detection To begin, we need to understand how the kernel detects whether Secure Boot is enabled. This is done by the efi_get_secureboot function, as shown in the screenshot below: Secure Boot status check
Disabling Kernel Lockdown The kernel code uses the value of MokSBStateRT to identify the Secure Boot state, assuming that Secure Boot can only be enabled via shim. This assumption holds true when using the Microsoft certificate for signature validation (as Microsoft currently only signs shim). However, if we're using our own keys, we don't need shim and can sign the bootloader ourselves. In this case, the Secure Boot state of the system doesn't need to be tied to the MokSBStateRT variable. To disable kernel lockdown, we need to set the UEFI runtime variable MokSBStateRT. This essentially tricks the kernel into thinking Secure Boot is disabled when it's actually enabled. This is achieved using a UEFI initializing driver. The code for this was written by an anonymous colleague who also assisted me with various configuration guidance for setting up UKI and Secure Boot on my system. The code is available here.
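For orientation, the variable in question is visible through efivarfs on a running system. A minimal sketch of inspecting it (the GUID shown is shim's vendor GUID, stated here as an assumption for illustration):
ls /sys/firmware/efi/efivars/ | grep MokSBStateRT
# e.g. MokSBStateRT-605dab50-e046-4300-abb6-3dd810dd8b23
hexdump -C /sys/firmware/efi/efivars/MokSBStateRT-605dab50-e046-4300-abb6-3dd810dd8b23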
Implementation Detailed instructions for compiling and deploying the code are provided in the repository, so I won't repeat them here.
Results I've tested this method with the default distribution kernel on my Debian unstable system, and it successfully disables lockdown while maintaining Secure Boot integrity. See the screenshot below for confirmation: Distribution kernel lockdown disabled

Melissa Wen: Reflections on 2024 Linux Display Next Hackfest

Hey everyone! The 2024 Linux Display Next hackfest concluded in May, and its outcomes continue to shape the Linux Display stack. Igalia hosted this year's event in A Coruña, Spain, bringing together leading experts in the field. Samuel Iglesias and I organized this year's edition and this blog post summarizes the experience and its fruits. One of the highlights of this year's hackfest was the wide range of backgrounds represented by our 40 participants (both on-site and remote). Developers and experts from various companies and open-source projects came together to advance the Linux Display ecosystem. You can find the list of participants here. The event covered a broad spectrum of topics affecting the development of Linux projects, user experiences, and the future of display technologies on Linux. From cutting-edge topics to long-term discussions, you can check the event agenda here.

Organization Highlights The hackfest was marked by in-depth discussions and knowledge sharing among Linux contributors, making everyone inspired, informed, and connected to the community. Building on feedback from the previous year, we refined the unconference format to enhance participant preparation and engagement.
Structured Agenda and Timeboxes: Each session had a defined scope, time limit (1h20 or 2h10), and began with an introductory talk on the topic.
  • Participant-Led Discussions: We pre-selected in-person participants to lead discussions, allowing them to prepare introductions, resources, and scope.
  • Transparent Scheduling: The schedule was shared in advance as GitHub issues, encouraging participants to review and prepare for sessions of interest.
Engaging Sessions: The hackfest featured a variety of topics, including presentations and discussions on how participants were addressing specific subjects within their companies.
  • No Breakout Rooms, No Overlaps: All participants chose to attend all sessions, eliminating the need for separate breakout rooms. We also adapted the run-time schedule to keep everybody involved in the same topics.
  • Real-time Updates: We provided notifications and updates through dedicated emails and the event matrix room.
Strengthening Community Connections: The hackfest offered ample opportunities for networking among attendees.
  • Social Events: Igalia sponsored coffee breaks, lunches, and a dinner at a local restaurant.
  • Museum Visit: Participants enjoyed a sponsored visit to the Museum of Estrela Galicia Beer (MEGA).

Fruitful Discussions and Follow-up The structured agenda and breaks allowed us to cover multiple topics during the hackfest. These discussions have led to new display feature development and improvements, as evidenced by patches, merge requests, and implementations in project repositories and mailing lists. With the KMS color management API taking shape, we discussed refinements and the best approaches to cover the variety of color pipelines from different hardware vendors. We are also investigating techniques for performant SDR<->HDR content reproduction and for reducing latency and power consumption when using the color blocks of the hardware.

Color Management/HDR Color Management and HDR continued to be the hottest topic of the hackfest. We had three sessions dedicated to discussing Color and HDR across the Linux Display stack layers.

Color/HDR (Kernel-Level) Harry Wentland (AMD) led this session. Here, kernel developers shared the Color Management pipelines of AMD, Intel and NVIDIA. We had diagrams and explanations from HW vendors' developers, who discussed differences, constraints and paths to fit them into the KMS generic color management properties, such as advertising modeset needs, IN_FORMAT, segmented LUTs, interpolation types, etc. Developers from Qualcomm and ARM also added information regarding their hardware. Upstream work related to this session:

Color/HDR (Compositor-Level) Sebastian Wick (RedHat) led this session. It started with Sebastian's presentation covering Wayland color protocols and compositor implementation, followed by an explanation of the APIs provided by Wayland and how they can be used to achieve better color management for applications, plus discussions around ICC profiles and color representation metadata. There was also an intensive Q&A about LittleCMS with Marti Maria. Upstream work related to this session:

Color/HDR (Use Cases and Testing) Christopher Cameron (Google) and Melissa Wen (Igalia) led this session. In contrast to the other sessions, here we focused less on implementation and more on brainstorming and reflections on real-world SDR and HDR transformations (use and validation) and gainmaps. Christopher gave a nice presentation explaining HDR gainmap images and how we should think of HDR. This presentation and Q&A were important to put participants on the same page about how to transition between SDR and HDR and how to emulate HDR. We also discussed the usage of a kernel background color property. Finally, we discussed a bit about Chamelium and the future of VKMS (future work and maintainership).

Power Savings vs Color/Latency Mario Limonciello (AMD) led this session. Mario gave an introductory presentation about AMD ABM (adaptive backlight management), which is similar to Intel DPST. After some discussions, we agreed on exposing a kernel property for power-saving policy. This work was already merged in the kernel and the userspace support is under development. Upstream work related to this session:

Strategy for video and gaming use-cases Leo Li (AMD) led this session. Miguel Casas (Google) started this session with a presentation of Overlays in Chrome/OS Video, explaining the main goal of saving power by switching off the GPU for accelerated compositing, and the challenges of different colorspace/HDR handling for video on Linux. Then Leo Li presented different strategies for video and gaming, and we discussed userspace's need for more detailed feedback mechanisms to understand failures when offloading. Also, creating a debugFS interface came up as a tool for debugging and analysis.

Real-time scheduling and async KMS API Xaver Hugl (KDE/BlueSystems) led this session. Compositor developers have exposed some issues with doing real-time scheduling and async page flips. One is that the kernel limits the lifetime of realtime threads, and if a modeset takes too long, the thread will be killed, and thus the compositor as well. Also, simple page flips take longer than expected and drivers should optimize them. Another issue is the lack of feedback to compositors about hardware programming time and commit deadlines (the latest possible time to commit). This is difficult to predict from drivers, since it varies greatly with the type of properties. For example, color management updates take much longer. In this regard, we discussed implementing a hw_done callback to timestamp when the hardware programming of the last atomic commit is complete, as well as an API to pre-program the color pipeline in a kind of A/B scheme. It may not be supported by all drivers, but it might be useful in different ways.

VRR/Frame Limit, Display Mux, Display Control, and more and beer We also had sessions to discuss a new KMS API to mitigate headaches with VRR and Frame Limit, such as different brightness levels at different refresh rates, abrupt changes of refresh rates, low frame rate compensation (LFC), and precise timing in VRR. On Display Control we discussed features missing in the current KMS interface for HDR mode, atomic backlight settings, source-based tone mapping, etc. We also discussed the need for a place where compositor developers can post TODOs to be developed by KMS people. The Content-adaptive Scaling and Sharpening session focused on sharpening and scaling filters. In the Display Mux session, we discussed proposals to expose the capability of dynamically mux-switching the display signal between discrete and integrated GPUs. In the last session of the 2024 Display Next Hackfest, participants representing different compositors summarized current and future work and built a Linux Display "wish list", which includes: improvements to VTTY and HDR switching, a better dmabuf API for multi-GPU support, definition of tone mapping, blending and scaling semantics, and Wayland protocols for advertising to clients which colorspaces are supported. We closed this session with a status update on feature development by compositors, including but not limited to: plane offloading (from libcamera to output) / HDR video offloading (dma-heaps) / plane-based scrolling for web pages, color management / HDR / ICC profiles support, and addressing issues such as flickering when color primaries don't match. After three days of intensive discussions, all in-person participants went on a guided tour of the Museum of Estrela Galicia Beer (MEGA), pouring and tasting the most famous local beer.

Feedback and Future Directions Participants provided valuable feedback on the hackfest, including suggestions for future improvements.
  • Schedule and Break-time Setup: Having a pre-defined agenda and schedule provided a better balance between long discussions and mental refreshments, preventing the fatigue caused by endless discussions.
  • Action Points: Some participants recommended explicitly asking for action points at the end of each session and assigning people to follow-up tasks.
  • Remote Participation: Remote attendees appreciated the inclusive setup and opportunities to actively participate in discussions.
  • Technical Challenges: There were bandwidth and video streaming issues during some sessions due to the large number of participants.

Thank you for joining the 2024 Display Next Hackfest We can't help but thank the 40 participants, who engaged in person or virtually in relevant discussions, for a collaborative evolution of the Linux display stack and for building an insightful agenda. A big thank you to the leaders and presenters of the nine sessions: Christopher Cameron (Google), Harry Wentland (AMD), Leo Li (AMD), Mario Limonciello (AMD), Sebastian Wick (RedHat) and Xaver Hugl (KDE/BlueSystems) for the effort in preparing the sessions, explaining the topics and guiding discussions. My acknowledgment to the other in-person participants who made such an effort to travel to A Coruña: Alex Goins (NVIDIA), David Turner (Raspberry Pi), Georges Stavracas (Igalia), Joan Torres (SUSE), Liviu Dudau (Arm), Louis Chauvet (Bootlin), Robert Mader (Collabora), Tian Mengge (GravityXR), Victor Jaquez (Igalia) and Victoria Brekenfeld (System76). It was an awesome opportunity to meet you and chat face-to-face. Finally, thanks to the virtual participants who couldn't make it in person but organized their days to actively participate in each discussion, adding different perspectives and valuable inputs even remotely: Abhinav Kumar (Qualcomm), Chaitanya Borah (Intel), Christopher Braga (Qualcomm), Dor Askayo (Red Hat), Jiri Koten (RedHat), Jonas Ådahl (Red Hat), Leandro Ribeiro (Collabora), Marti Maria (Little CMS), Marijn Suijten, Mario Kleiner, Martin Stransky (Red Hat), Michel Dänzer (Red Hat), Miguel Casas-Sanchez (Google), Mitulkumar Golani (Intel), Naveen Kumar (Intel), Niels De Graef (Red Hat), Pekka Paalanen (Collabora), Pichika Uday Kiran (AMD), Shashank Sharma (AMD), Sriharsha PV (AMD), Simon Ser, Uma Shankar (Intel) and Vikas Korjani (AMD). We look forward to another successful Display Next hackfest, continuing to drive innovation and improvement in the Linux display ecosystem!

25 September 2024

Russell Coker: The PiKVM

Hardware I have just set up a PiKVM; here's the Amazon link for the KVM hardware (case and Pi hat etc.) and here's an Amazon link for a Pi4 to match. The PiKVM web site has good documentation [1] and they have a YouTube channel with videos showing how to assemble the devices [2]. It's really convenient being able to change the playback speed from low speeds (like 1/4 of the original speed) to double speed when watching such a video. One thing to note is that there are some revisions to the hardware that aren't covered in the videos; the device I received had some improvements that made it easier to assemble which weren't in the video. When you buy the device and Pi you also need to get an SD card of at least 4G in size, a CR1220 battery for the real-time clock, and a USB-2/3 to USB-C cable for keyboard/mouse. It MUST NOT BE a USB-C to USB-C cable! When I first tried using it I used a USB-C to USB-C cable for keyboard and mouse and it didn't work for reasons I don't understand (I welcome comments with theories about this). You also need a micro-HDMI to HDMI cable to get video output if you want to set it up without having to find the IP address and ssh to it. The system has a bright OLED display to show the IP address and some other information, which is very handy. The hardware is easy enough for a 12yo to set up. The construction of the parts is solid and well engineered, with everything fitting together nicely. It has a PCI/PCIe slot adaptor for controlling power and sending LED status over the connection, which I didn't test. I definitely recommend this. Software This is the download link for the RaspberryPi images for the PiKVM [3]. The v3 image matches the hardware from the Amazon link I provided. The default username/password is root/root. Connect it to a HDMI monitor and USB keyboard to change the password etc. If you control the DHCP server you can find the IP address it's using and ssh to it to change the password (it is configured to allow ssh as root with password authentication). If you get the kit to assemble (as opposed to buying a completed unit already assembled) then you need to run the following commands as root to enable the OLED display. This means that after assembling it you can't get the IP address without plugging in a monitor with a micro-HDMI to HDMI cable or having access to the DHCP server logs.
rw    # remount the root filesystem read-write
systemctl enable --now kvmd-oled kvmd-oled-reboot kvmd-oled-shutdown    # enable the OLED display services
systemctl enable --now kvmd-fan    # enable fan control
ro    # remount the root filesystem read-only again
The default webadmin username/password is admin/admin. To change the passwords run the following commands:
rw                         # remount the root filesystem read-write
kvmd-htpasswd set admin    # change the web UI admin password
passwd root                # change the system root password
ro                         # remount the root filesystem read-only again
It is configured to have the root filesystem mounted read-only, which is something I thought had gone out of fashion decades ago. I don't think that modern versions of the Ext3/4 drivers are going to corrupt your filesystem if you have it mounted read-write when you reboot. By default it uses a self-signed SSL certificate, so with a Chrome-based browser you get an error when you connect where you have to select "advanced" and then tell it to proceed regardless. I presume you could use the DNS method of Certbot authentication to get an SSL certificate to use on an internal view of your DNS to make it work normally with SSL. The web-based software has all the features you expect from a KVM. It shows the screen in any resolution up to 1920*1080 and proxies keyboard and mouse. Strangely, lsusb on the machine being managed only reports a single USB device entry for it, which covers both keyboard and mouse. Managing Computers For a tower PC, disconnect any regular monitor(s) and connect a HDMI port to the HDMI input on the KVM. Connect a regular USB port (not USB-C) to the OTG port on the KVM, and then it should all just work. For a laptop, connect the HDMI port to the HDMI input on the KVM. Connect a regular USB port (not USB-C) to the OTG port on the KVM. Then boot it up and press Fn-F8 for Dell, Fn-F7 for Lenovo, or whatever the vendor code is to switch display output to HDMI during the BIOS initialisation; Linux will then follow the BIOS and send all output to the HDMI port for the early stages of booting. Apparently Lenovo systems have the Fn key mapped in the BIOS so an external keyboard could be used to switch between display outputs, but the PiKVM software doesn't appear to support that. For other systems (probably including the Dell laptops that interest me) the Fn key apparently can't be simulated externally. So for using this to work on laptops in another city I need to have someone local press Fn-F8 at the right time to allow me to change BIOS settings. It is possible to configure the Linux kernel to mirror the display to external HDMI and an internal laptop screen. But this doesn't seem useful to me, as the use cases for this device don't require that. If you are using it for a server that doesn't have iDRAC/ILO or other management hardware there will be no other monitor and all the output will go through the only connected HDMI device. My main use for it in the near future will be for supporting remote laptops when Linux has a problem on boot, as an easier option than talking someone through Linux commands; for such use it will be a temporary thing and not something that is desired all the time. For the gdm3 login program you can copy the .config/monitors.xml file from a GNOME user session to the gdm home directory to keep the monitor settings (a sketch of this copy follows below). This configuration option is decent for the case where a fixed set of monitors is used, but not so great if your requirement is "display a login screen on anything that's available". Is there an xdm-type program in Debian/Ubuntu that supports this by default or with easy reconfiguration?
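A minimal sketch of that copy on Debian, assuming a GNOME user named alice and gdm3's home directory at /var/lib/gdm3 (both are assumptions; the path and the gdm user name differ between distributions):
sudo install -d -o Debian-gdm -g Debian-gdm /var/lib/gdm3/.config    # ensure gdm's config dir exists with the right owner
sudo install -m 0644 -o Debian-gdm -g Debian-gdm /home/alice/.config/monitors.xml /var/lib/gdm3/.config/monitors.xml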
Conclusion The PiKVM is a well-engineered and designed product that does what's expected at a low price. There are lots of minor issues with using it which aren't the fault of the developers but are due to historical decisions in the design of BIOS and Linux software. We need to change the Linux software in question and lobby hardware vendors for BIOS improvements. The feature for connecting to an ATX PSU was unexpected and could be really handy for some people; it's not something I have an immediate use for, but I could possibly use it in future. I like the way they shipped the hardware for it as part of the package, giving the user choices about how they use it; many vendors would make it an optional extra that costs another $100. This gives the PiKVM more functionality than many devices that are much more expensive. The web UI wasn't as user-friendly as it might have been, but it's a lot better than iDRAC, so I don't have a serious complaint about it. It would be nice if there was an option for creating macros for keyboard scancodes so I could try to emulate the Fn options and keys for volume control on systems that support it.
