
20 October 2023

Iustin Pop: How to set a per-app locale in MacOS

After spending ~20+ years with a Linux desktop, I'm trying to expand my desktop setup to include MacOS (well, desktop/laptop, I mean end user in general). And to my surprise, there's no clear repository of MacOS info. Man pages yes, some StackOverflow, some Apple forums, but no canonical version. Or, I didn't find it, please enlighten me. Another issue is that Apple apparently changes behaviour without clearly documenting it. In this specific case, the region part of the locale went through significant churn lately. So, my goal: In Linux, this would simply mean running the app with the correct environment variables. But MacOS deprecated this a while back (it used to work). After reading what I could, the solution is quite easy, just not obvious:
% defaults read .GlobalPreferences | grep en_
    AKLastLocale = "en_CH";
    AppleLocale = "en_CH";
% defaults read -app FooBar
(has no AppleLocale key)
% defaults write -app FooBar AppleLocale en_US
And that's it. Now, the defaults man page says the global-global is NSGlobalDomain, I don't know where I got the .GlobalPreferences. But I only needed to know the key name (in this case, AppleLocale - of course it couldn't be LC_ALL/LANG). One day I'll know MacOS better, but I've been trying to learn it for 2+ years now, and it's not a smooth ride. Old dog new tricks, right?
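To double-check that the override took effect, or to remove it again later, the same defaults tool can be used (FooBar is still just a placeholder application name):
% defaults read -app FooBar AppleLocale
en_US
% defaults delete -app FooBar AppleLocale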

Freexian Collaborators: Debian Contributions: Freexian meetup, debusine updates, lpr/lpd in Debian, and more! (by Utkarsh Gupta, Stefano Rivera)

Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Freexian Meetup, by Stefano Rivera, Utkarsh Gupta, et al. During DebConf, Freexian organized a meetup for its collaborators and those interested in learning more about Freexian and its services. It was well received and many people interested in Freexian showed up. Some developers who were interested in contributing to LTS came to get more details about joining the project. And some prospective customers came to get to know us and ask questions. Sadly, the tragic loss of Abraham shook DebConf, both individually and structurally. The meetup got rescheduled to a small room without video coverage. With that, we still had a wholesome interaction and here's a quick picture from the meetup taken by Utkarsh (which is also why he's missing!).

Debusine, by Raphaël Hertzog, et al. Freexian has been investing into debusine for a while, but development speed is about to increase dramatically thanks to funding from SovereignTechFund.de. Raphaël laid out the 5 milestones of the funding contract, and filed the issues for the first milestone. Together with Enrico and Stefano, they established a workflow for the expanded team. Among the first steps of this milestone, Enrico started to work on a developer-friendly description of debusine that we can use when we reach out to the many Debian contributors that we will have to interact with. And Raphaël started the design work of the autopkgtest and lintian tasks, i.e. what's the interface to schedule such tasks, what behavior and what associated options do we support? At this point you might wonder what debusine is supposed to be; let us try to answer this: Debusine manages scheduling and distribution of Debian-related build and QA tasks to a network of worker machines. It also manages the resulting artifacts and provides the results in an easy to consume way. We want to make it easy for Debian contributors to leverage all the great QA tools that Debian provides. We want to build the next generation of Debian's build infrastructure, one that will continue to reliably do what it already does, but that will also enable distribution-wide experiments, custom package repositories and custom workflows with advanced package reviews. If this all sounds interesting to you, don't hesitate to watch the project on salsa.debian.org and to contribute.

lpr/lpd in Debian, by Thorsten Alteholz During DebConf23, Till Kamppeter presented CPDB (Common Print Dialog Backend), a new way to handle print queues. After this talk it was discussed whether the old lpr/lpd-based printing system could be abandoned in Debian or whether there is still demand for it. So Thorsten asked on the debian-devel mailing list whether anybody uses it. Oddly enough, these old packages are still useful:
  • Within a small network it is easier to distribute a printcap file (see the example at the end of this section) than to properly configure cups clients.
  • One of the biggest manufacturers of WLAN routers and DSL boxes only supports raw queues when attaching a USB printer to their hardware. Admittedly, CPDB still has problems with such raw queues.
  • The Pharos printing system at MIT is still lpd-based.
As a result, the lpr/lpd stuff is not yet ready to be abandoned and Thorsten will adopt the relevant packages (or rather move them under the umbrella of the debian-printing team). Though it is not planned to develop new features, those packages should at least have a maintainer. This month Thorsten adopted rlpr, a utility for lpd printing without using /etc/printcap. The next one he is working on is lprng, an lpr/lpd printer spooling system. If you know of any other package that is also needed and still maintained by the QA team, please tell Thorsten.
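For illustration, a minimal printcap entry for a local queue pointing at a remote lpd server might look something like this (host, queue and spool directory names are made up):
lp|remote printer via lpd:\
        :rm=printserver.example.org:rp=lp:\
        :sd=/var/spool/lpd/lp:sh: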

/usr-merge, by Helmut Grohne Discussion about lifting the file move moratorium has been initiated with the CTTE and the release team. A formal lift is dependent on updating debootstrap in older suites though. A significant number of packages can automatically move their systemd unit files if dh_installsystemd and systemd.pc change their installation targets. Unfortunately, doing so makes some packages FTBFS and therefore patches have been filed. The analysis tool, dumat, has been enhanced to better understand which upgrade scenarios are considered supported to reduce false positive bug filings and gained a mode for local operation on a .changes file meant for inclusion in salsa-ci. The filing of bugs from dumat is still manual to improve the quality of reports. Since September, the moratorium has been lifted.

Miscellaneous contributions
  • Raphaël updated Django's backport in bullseye-backports to match the latest security release that was published in bookworm. Tracker.debian.org is still using that backport.
  • Helmut Grohne sent 13 patches for cross build failures.
  • Helmut Grohne performed a maintenance upload of debvm enabling its use in autopkgtests.
  • Helmut Grohne wrote an API-compatible reimplementation of autopkgtest-build-qemu. It is powered by mmdebstrap, therefore unprivileged, EFI-only and will soon be included in mmdebstrap.
  • Santiago continued the work regarding how to make it easier to (automatically) test reverse dependencies. An example of the ongoing work was presented during the Salsa CI BoF at DebConf 23.
    In fact, omniorb-dfsg test pipelines like the one above were used for the omniorb-dfsg 4.3.0 transition, verifying how the reverse dependencies (tango, pytango and omnievents) were built and how their autopkgtest jobs ran with the to-be-uploaded new omniorb-dfsg release.
  • Utkarsh and Stefano attended and helped run DebConf 23. Also continued winding up DebConf 22 accounting.
  • Anton Gladky did some science team uploads to fix RC bugs.

18 October 2023

Russ Allbery: Review: Wolf Country

Review: Wolf Country, by Mar Delaney
Publisher: Kalikoi
Copyright: September 2021
ASIN: B09H55TGXK
Format: Kindle
Pages: 144
Wolf Country is a short lesbian shifter romance by Mar Delaney, a pen name for Layla Lawlor (who is also one of the writers behind the shared pen name Zoe Chant). Dasha Volkova is a werewolf, a member of a tribe of werewolves who keep to themselves deep in the wilds of Alaska. She's just become an adult and is wandering, curious and exploring, seeing what's in the world outside of her sheltered childhood. A wild chase after a hare, purely for the fun of it, is sufficiently distracting that she doesn't notice the snare before she steps in it going full speed. Laney Rosen is not a werewolf. She's a landscape painter who lives a quiet and self-contained life in an isolated cabin in the wilderness. She only stumbles across Dasha because she got lost on the snowmobile tracks taking photographs. Laney assumes Dasha is a dog caught in a poacher's trap, and is quite surprised when the pain of getting her out of the snare causes Dasha to shapeshift into a naked woman.
This short book is precisely what it sounds like, which I appreciate in a romance novel. Woman meets wolf and discovers her secret accidentally, woman is of course entirely trustworthy although wolf can't know that, attraction at first sight, they have to pitch a tent in the wilderness and there's only one sleeping bag, etc. Nothing here is going to surprise you, but it's gentle and kind and fulfills the romance contract of a happy ending. It's not particularly steamy; the focus is on the relationship and the mutual attraction rather than on the sex.
The best part of this book is probably the backdrop. Delaney lives in Alaska, and it shows in both the attention to the details of survival and heat and in the landscape descriptions (and the descriptions of Laney's landscapes). Dasha's love of Laney's paintings is one of the most heart-warming parts of the book. Laney has retinitis pigmentosa and is slowly losing her vision, which I thought was handled gracefully and well in the story. It creates real problems and limitations for her, but it also doesn't define her or become central to her character.
Both Dasha and Laney are viewpoint characters and roughly alternate tight third-person viewpoint chapters. There are a few twists: potential parental disapproval on Dasha's part and some real physical danger from the person who set the trap, but most of the story is the two women getting to know each other and getting past the early hesitancy to name what they're feeling. Laney feels a bit older than Dasha just because she's out on her own and Dasha was homeschooled and very sheltered, but both of them feel very young. This is Dasha's first serious relationship.
Delaney does use the fated lover trope, which seems worth a warning in case you're not in the mood for that. Werewolves apparently know when they've found their fated mate and don't have a lot of choice in the matter. This is a common paranormal and fantasy romance trope that I find disturbing if I think about it too hard. Thankfully, here it's not much of a distraction. Dasha is such an impulsive, think-with-her-heart sort of character that the immediate conclusion that Laney is her fated mate felt in character even without the werewolf lore.
I read this based on a random recommendation from Yoon Ha Lee when I was in the mood for something light and kind and uncomplicated, and I got exactly what I expected and was in the mood for.
The writing isn't the best, but the landscape descriptions aren't bad and the characterization is reasonably good if you're in the mood for brightly curious but not particularly wise. Recommended if you're looking for this sort of thing. Rating: 7 out of 10

12 October 2023

Matthew Garrett: Defending abuse does not defend free software

The Free Software Foundation Europe and the Software Freedom Conservancy recently released a statement that they would no longer work with Eben Moglen, chairman of the Software Freedom Law Center. Eben was the general counsel for the Free Software Foundation for over 20 years, and was centrally involved in the development of version 3 of the GNU General Public License. He's devoted a great deal of his life to furthering free software.

But, as described in the joint statement, he's also acted abusively towards other members of the free software community. He's acted poorly towards his own staff. In a professional context, he's used graphically violent rhetoric to describe people he dislikes. He's screamed abuse at people attempting to do their job.

And, sadly, none of this comes as a surprise to me. As I wrote in 2017, after it became clear that Eben's opinions diverged sufficiently from the FSF's that he could no longer act as general counsel, he responded by threatening an FSF board member at an FSF-run event (various members of the board were willing to tolerate this, which is what led to me quitting the board). There's over a decade's evidence of Eben engaging in abusive behaviour towards members of the free software community, be they staff, colleagues, or just volunteers trying to make the world a better place.

When we build communities that tolerate abuse, we exclude anyone unwilling to tolerate being abused[1]. Nobody in the free software community should be expected to deal with being screamed at or threatened. Nobody should be afraid that they're about to have their sexuality outed by a former boss.

But of course there are some that will defend Eben based on his past contributions. There were people who were willing to defend Hans Reiser on that basis. We need to be clear that what these people are defending is not free software - it's the right for abusers to abuse. And in the long term, that's bad for free software.

[1] "Why don't people just get better at tolerating abuse?" is a terrible response to this. Why don't abusers stop abusing? There's fewer of them, and it should be easier.


Freexian Collaborators: Monthly report about Debian Long Term Support, September 2023 (by Santiago Ruano Rincón)

Like each month, have a look at the work funded by Freexian's Debian LTS offering.

Debian LTS contributors In September, 21 contributors have been paid to work on Debian LTS, their reports are available:
  • Abhijith PA did 10.0h (out of 0h assigned and 14.0h from previous period), thus carrying over 4.0h to the next month.
  • Adrian Bunk did 7.0h (out of 17.0h assigned), thus carrying over 10.0h to the next month.
  • Anton Gladky did 9.5h (out of 7.5h assigned and 7.5h from previous period), thus carrying over 5.5h to the next month.
  • Bastien Roucariès did 16.0h (out of 15.5h assigned and 1.5h from previous period), thus carrying over 1.0h to the next month.
  • Ben Hutchings did 17.0h (out of 17.0h assigned).
  • Chris Lamb did 17.0h (out of 17.0h assigned).
  • Emilio Pozuelo Monfort did 30.0h (out of 30.0h assigned).
  • Guilhem Moulin did 18.25h (out of 18.25h assigned).
  • Helmut Grohne did 10.0h (out of 10.0h assigned).
  • Lee Garrett did 17.0h (out of 16.5h assigned and 0.5h from previous period).
  • Markus Koschany did 40.0h (out of 40.0h assigned).
  • Ola Lundqvist did 4.5h (out of 0h assigned and 24.0h from previous period), thus carrying over 19.5h to the next month.
  • Roberto C. Sánchez did 5.0h (out of 12.0h assigned), thus carrying over 7.0h to the next month.
  • Santiago Ruano Rincón did 7.75h (out of 16.0h assigned), thus carrying over 8.25h to the next month.
  • Sean Whitton did 7.0h (out of 7.0h assigned).
  • Sylvain Beucler did 10.5h (out of 17.0h assigned), thus carrying over 6.5h to the next month.
  • Thorsten Alteholz did 14.0h (out of 14.0h assigned).
  • Tobias Frost did 13.25h (out of 16.0h assigned), thus carrying over 2.75h to the next month.

Evolution of the situation In September we released 44 DLAs. The month of September was a busy month for the LTS Team. A notable security issue fixed in September was the high-severity CVE-2023-4863, a heap buffer overflow that allowed remote attackers to perform an out-of-bounds memory write via a crafted WebP file. This CVE was covered by three DLAs for different packages: firefox-esr, libwebp and thunderbird. The libwebp backported patch was sent to upstream, who adapted and applied it to the 0.6.1 branch. It is also worth noting that LTS contributor Markus Koschany included in his work updates to packages in Debian Bullseye and Bookworm that are under the umbrella of the Security Team: xrdp, jetty9 and mosquitto. As every month, there was important behind-the-scenes work by the Front Desk staff, who triaged, analyzed and reviewed dozens of vulnerabilities to decide whether they warrant a security update. This is very important work, since we need to trade off between the frequency of updates and the stability of the LTS release.

Thanks to our sponsors Sponsors that joined recently are in bold.

8 October 2023

Sahil Dhiman: Lap 24

Twenty-four is a big number. More than a fourth of my life is behind me now. At this point, I truly feel like I have become an adult; mentally and physically. Another year seems to have gone by quickly. I still vividly remember writing 23 and Counting and here I am writing the next one so soon. Probably the lowest I have ever felt on my birthday; with the loss of Abraham on one hand and, on the other, medical issues with a dear one. Didn't even feel like my birthday was almost here. The loss of Abraham taught me to care for people more and to cherish everyone I meet. I'm grateful for all the people who supported and cared for me and others during times of grief when things went numb. Thank you! Also, for the first time ever, I went to the office on my birthday. This would probably become a norm in coming years. Didn't feel like doing anything, so I just went to the office. The cake, wishes and calls kept coming in throughout the day. I'm grateful to all the people around for remembering :) This year marked my first official job switch where I moved from MakeMyTrip to Unmukti as a GNU/Linux Network Systems engineer (that's a mouthful of a job role, I know) where I do anything and everything ranging from system admin, network engineering, a bit of social media, chronicling stuff on the company blog and bringing up new applications as per requirement. Moving from MMT to Unmukti was a big cultural shift. From a full-blown corporate with more than 3 thousand employees to a small 5-person team. People still think I work for a startup on hearing the low head count, though Unmukti is a 13-year-old organization. I get the freedom to work at my own pace and put my ideas in larger technical discussions, while also actively participating in the community, which I'm truly grateful for. I go full geek here and almost everyone here is on the same spectrum, so technical or societal discussions just naturally flow. The months of August-September again marked the Great Refresh. For reasons unforeseen, I have had to pack my stuff again and move, albeit just next door for now, but that gave me the much needed opportunity to sift through my belongings here. As usual, I threw out a boatload of stuff which was of no use and/or just hogging space. My wardrobe cupboard finally got cleaned and sorted, with old and new clothes getting (re)discovered. The Refresh is always a pain, with loads of collating stuff into carry-worthy bags and hauling stuff, but as usual, there's nothing else I can do other than just pack and move. This year also culminated our four-plus years of work to bring the annual Debian conference, DebConf, to India. DebConf23 happened in Kochi, Kerala from 3rd September to 17th September (including DebCamp). The first concrete work to bring the conference to India was done by Raju Dev, who made the first bid during DebConf18 in Hsinchu, Taiwan. We lost, but won the next year's bid at DebConf19 in Curitiba, Brazil in 2019. I joined the efforts after meeting the team online after DebConf20. I initially started with the publicity team, but as we didn't need much publicity for the event, I was later asked to join the sponsors/fundraising team. That turned out to be quite an experience. Then the conference itself turned out to be a good experience. More on that in an upcoming DebConf23 blog post, which will come eventually. After seeing how things work out in Debian in 2020, I had the goal to become a Debian Developer (DD) before DebConf23, which gave me almost three years to get involved and get recognized to become a DD.
I was more excited to grab sahil AT debian.org, a short email with only my name and no numbers or extra characters after it. After quite a while, I dropped the hope of becoming a DD because I wasn't successful in my attempts to meaningfully package and technically contribute to the project. But people in Debian India later convinced me that I had done enough to become a Debian Developer, non-uploading, purely by showing up and helping around for all the Debian conferences. I applied, and my request got sponsored (i.e. supported) by srud and Praveen. Finally, on 23rd Feb, I officially became part of the Debian project as the 14th (at the moment) Debian Developer from India. Got sahil AT debian.org too :) For some grace, I also became a DD before DebConf23. Becoming a DD didn't change much, though I still believe it might have helped secure me a job. Also worth mentioning is my increased interest in OpenStreetMap (OSM) mapping. I mapped heavily this year and went around for mapathon-meetups too. One step towards a better OSM and more community engagement around it. Looking back at my blog, this year it seems mostly dotted with Debian posts and one OSM post. A significant shift from the range of topics I used to write about in the past year, but blogging this year wasn't a go-to activity. Other stuff kept me busy. Living in Gurugram has shown me many facades of life from which I was shielded or which I didn't come across earlier. It made me realize all the privileges which have helped me along the way, which became apparent while living almost alone here and managing things by myself.

6 October 2023

Russ Allbery: Review: The Far Reaches

Review: The Far Reaches, edited by John Joseph Adams
Publisher: Amazon Original Stories
Copyright: June 2023
ISBN: 1-6625-1572-3
ISBN: 1-6625-1622-3
ISBN: 1-6625-1503-0
ISBN: 1-6625-1567-7
ISBN: 1-6625-1678-9
ISBN: 1-6625-1533-2
Format: Kindle
Pages: 219
Amazon has been releasing anthologies of original short SFF with various guest editors, free for Amazon Prime members. I previously tried Black Stars (edited by Nisi Shawl and Latoya Peterson) and Forward (edited by Blake Crouch). Neither were that good, but the second was much worse than the first. Amazon recently released a new collection, this time edited by long-standing SFF anthology editor John Joseph Adams and featuring a new story by Ann Leckie, which sounded promising enough to give them another chance. The definition of insanity is doing the same thing over and over again and expecting different results. As with the previous anthologies, each story is available separately for purchase or Amazon Prime "borrowing" with separate ISBNs. The sidebar cover is for the first in the sequence. Unlike the previous collections, which were longer novelettes or novellas, my guess is all of these are in the novelette range. (I did not do a word count.) If you're considering this anthology, read the Okorafor story ("Just Out of Jupiter's Reach"), consider "How It Unfolds" by James S.A. Corey, and avoid the rest. "How It Unfolds" by James S.A. Corey: Humans have invented a new form of physics called "slow light," which can duplicate any object that is scanned. The energy expense is extremely high, so the result is not a post-scarcity paradise. What the technology does offer, however, is a possible route to interstellar colonization: duplicate a team of volunteers and a ship full of bootstrapping equipment, and send copies to a bunch of promising-looking exoplanets. One of them might succeed. The premise is interesting. The twists Corey adds on top are even better. What can be duplicated once can be duplicated again, perhaps with more information. This is a lovely science fiction idea story that unfortunately bogs down because the authors couldn't think of anywhere better to go with it than relationship drama. I found the focus annoying, but the ideas are still very neat. (7) "Void" by Veronica Roth: A maintenance worker on a slower-than-light passenger ship making the run between Sol and Centauri unexpectedly is called to handle a dead body. A passenger has been murdered, two days outside the Sol system. Ace is in no way qualified to investigate the murder, nor is it her job, but she's watched a lot of crime dramas and she has met the victim before. The temptation to start poking around is impossible to resist. It's been a long time since I've read a story built around the differing experiences of time for people who stay on planets and people who spend most of their time traveling at relativistic speeds. It's a bit of a retro idea from an earlier era of science fiction, but it's still a good story hook for a murder mystery. None of the characters are that memorable and Roth never got me fully invested in the story, but this was still a pleasant way to pass the time. (6) "Falling Bodies" by Rebecca Roanhorse: Ira is the adopted son of a Genteel senator. He was a social experiment in civilizing the humans: rescue a human orphan and give him the best of Genteel society to see if he could behave himself appropriately. The answer was no, which is how Ira finds himself on Long Reach Station with a parole officer and a schooling opportunity, hopefully far enough from his previous mistakes for a second chance. Everyone else seems to like Rebecca Roanhorse's writing better than I do, and this is no exception. 
Beneath the veneer of a coming-of-age story with a twist of political intrigue, this is brutal, depressing, and awful, with an ending that needs a lot of content warnings. I'm sorry that I read it. (3) "The Long Game" by Ann Leckie: The Imperial Radch trilogy are some of my favorite science fiction novels of all time, but I am finding Leckie's other work a bit hit and miss. I have yet to read a novel of hers that I didn't like, but the short fiction I've read leans more heavily into exploring weird and alien perspectives, which is not my favorite part of her work. This story is firmly in that category: the first-person protagonist is a small tentacled alien creature, a bit like a swamp-dwelling octopus. I think I see what Leckie is doing here: balancing cynicism and optimism, exploring how lifespans influence thinking and planning, and making some subtle points about colonialism. But as a reading experience, I didn't enjoy it. I never liked any of the characters, and the conclusion of the story is the unsettling sort of main-character optimism that seems rather less optimistic to the reader. (4) "Just Out of Jupiter's Reach" by Nnedi Okorafor: K rm n scientists have found a way to grow living ships that can achieve a symbiosis with a human pilot, but the requirements for that symbiosis are very strict and hard to predict. The result was a planet-wide search using genetic testing to find the rare and possibly nonexistent matches. They found seven people. The deal was simple: spend ten years in space, alone, in her ship. No contact with any other human except at the midpoint, when the seven ships were allowed to meet up for a week. Two million euros a year, for as long as she followed the rules, and the opportunity to be part of a great experiment, providing data that will hopefully lead to humans becoming a spacefaring species. The core of this story is told during the seven days in the middle of the mission, and thus centers on people unfamiliar with human contact trying to navigate social relationships after five years in symbiotic ships that reshape themselves to their whims and personalities. The ships themselves link so that the others can tour, which offers both a good opportunity for interesting description and a concretized metaphor about meeting other people. I adore symbiotic spaceships, so this story had me at the premise. The surface plot is very psychological, and I didn't entirely click with it, but the sense of wonder vibes beneath that surface were wonderful. It also feels fresh and new: I've seen most of the ideas before, but not presented or written this way, or approached from quite this angle. Definitely the best story of the anthology. (8) "Slow Time Between the Stars" by John Scalzi: This, on the other hand, was a complete waste of time, redeemed only by being the shortest "story" in the collection. "Story" is generous, since there's only one character and a very dry, linear plot that exists only to make a philosophical point. "Speculative essay" may be closer. The protagonist is the artificial intelligence responsible for Earth's greatest interstellar probe. It is packed with a repository of all of human knowledge and the raw material to create life. Its mission is to find an exoplanet capable of sustaining that life, and then recreate it and support it. The plot, such as it is, follows the AI's decision to abandon that mission and cut off contact with Earth, for reasons that it eventually explains. Every possible beat of this story hit me wrong. 
The sense of wonder attaches to the most prosaic things and skips over the moments that could have provoked real wonder. The AI is both unbelievable and irritating, with all of the smug self-confidence of an Internet reply guy. The prose is overwrought in all the wrong places ("the finger of God, offering the spark to animate the dirt of another world" would totally be this AI's profile quote under their forum avatar). The only thing I liked about the story is the ethical point that it slowly meanders into, which I think I might agree with and at least find plausible. But it's delivered by the sort of character I would actively leave rooms to avoid, in a style that's about as engrossing as a tax form. Avoid. (2) Rating: 5 out of 10

3 October 2023

Bastian Blank: Introducing uploads to Debian by git tag

Several years ago, several people proposed a mechanism to upload packages to Debian just by doing "git tag" and "git push". Two not too long discussions on debian-devel (first and second) did not end with an agreement on how this mechanism could work.1 The central problem was the ability to properly trace uploads back to the ones who authorised them. Now, several years later and after re-reading those e-mail threads, I again was stopped at the question: can we do this? Yes, it would not be just "git tag", but could we do with something close enough? I have some rudimentary code ready to actually do uploads from the CI. However, the list of caveats is currently pretty long. Yes, it works in principle. It is still disabled here, because in practice it does not yet work.
Problems with this setup
So what are the problems? It requires the git tags to include both signed files for a successful source upload. This is solved by a new tool that could be a git sub-command. It just creates the source package, signs it and adds the signed files describing the upload (the .dsc and .changes files) to the tag to be pushed. The CI then extracts the signed files from the tag message and does its work as normal. It requires a sufficiently reproducible build for source packages. Right now it is only known to work with the special 3.0 (gitarchive) source format, but even that requires the latest version of this format. No idea if it is possible to use others, like 3.0 (quilt), for this purpose. The shared GitLab runners provided by Salsa do not allow ftp access to the outside. But Debian still uses ftp to do uploads. At least if you don't want to share your ssh key, which can't be restricted to uploads only, but ssh would not work either. And as the current host for those builds, the Google Cloud Platform, does not provide connection tracking support for ftp, there is no easy way to allow that without just allowing everything. So we have no way to currently actually perform uploads from this platform.
Further work
As this is code running in a CI under the control of the developer, we can easily do other workflows. Some teams do workflows that do tags after acceptance into the Debian archive. Or they don't use tags at all. With some other markers, like variables or branch names, this support can be expanded easily. Unrelated to this task here, we might want to think about tying the .changes files for uploads to the target archive. As this code makes all of them readily available in the form of tag messages, replaying them into possible other archives might be more of a concern now.
Conclusion
So to come back to the question, yes we can. We can prepare uploads using our CI in a way that they would be accepted into the Debian archive. It just needs some more work on infrastructure.
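To make the intended workflow a bit more concrete, here is a rough sketch of the manual equivalent of what such a tool could do (package name, version and tag naming are made up, and the real tool would automate all of this):
# build an unsigned source package, sign it, then embed the signed
# .dsc and .changes in the message of an annotated tag and push it
dpkg-buildpackage -S -us -uc
debsign ../hello_1.0-1_source.changes
cat ../hello_1.0-1.dsc ../hello_1.0-1_source.changes | git tag -a -F - upload/1.0-1
git push origin upload/1.0-1
# the CI then extracts the signed files from the tag message and performs the upload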

  1. Here I have to admit that, after reading it again, I'm not really proud of my own behaviour and have to apologise.

27 September 2023

Antoine Beaupré: How big is Debian?

Now this was quite a tease! For those who haven't seen it, I encourage you to check it out, it has a nice photo of a Debian t-shirt I did not know about, to quote the Fine Article:
Today, when going through a box of old T-shirts, I found the shirt I was looking for to bring to the occasion: [...] For the benefit of people who read this using a non-image-displaying browser or RSS client, they are respectively:
   10 years
  100 countries
 1000 maintainers
10000 packages
and
        1 project
       10 architectures
      100 countries
     1000 maintainers
    10000 packages
   100000 bugs fixed
  1000000 installations
 10000000 users
100000000 lines of code
20 years ago we celebrated eating grilled meat at J0rd1's house. This year, we had vegan tostadas in the menu. And maybe we are no longer that young, but we are still very proud and happy of our project! Now, how would numbers line up today for Debian, 20 years later? Have we managed to get the bugs fixed line increase by a factor of 10? Quite probably, the lines of code we also have, and I can only guess the number of users and installations, which was already just a wild guess back then, might have multiplied by over 10, at least if we count indirect users and installs as well.
Now I don't know about you, but I really expected someone to come up with an answer to this, directly on Debian Planet! I have patiently waited for such an answer but enough is enough, I'm a Debian member, surely I can cull all of this together. So, lo and behold, here are the actual numbers from 2023! So it doesn't line up as nicely, but it looks something like this:
         1 project
        10 architectures
        30 years
       100 countries (actually 63, but we'd like to have yours!)
      1000 maintainers (yep, still there!)
     35000 packages
    211000 *binary* packages
   1000000 bugs fixed
1000000000 lines of code
 uncounted installations and users, we don't track you
So maybe the more accurate version, rounding to the nearest logarithm, would look something like:
         1 project
        10 architectures
       100 countries (actually 63, but we'd like to have yours!)
      1000 maintainers (yep, still there!)
    100000 packages
   1000000 bugs fixed
1000000000 lines of code
 uncounted installations and users, we don't track you
I really like how the "packages" and "bugs fixed" still have an order of magnitude between them there, but that the "bugs fixed" vs "lines of code" have an extra order of magnitude, that is we have fixed ten times fewer bugs per line of code since we last did this count, 20 years ago. Also, I am tempted to put 100 years in there, but that would be rounding up too much. Let's give it another 30 years first. Hopefully, some real scientist is going to balk at this crude methodology and come up with some more interesting numbers for the next t-shirt. Otherwise I'm available for bar mitzvahs and children's parties.

21 September 2023

Jonathan Carter: DebConf23

I very, very nearly didn't make it to DebConf this year. I had a bad cold/flu for a few days before I left, and after a negative covid-19 test just minutes before my flight, I decided to take the plunge and travel. This is just everything in chronological order, more or less; it's the only way I could write it.

DebCamp I planned to spend DebCamp working on various issues. Very few of them actually got done: I spent the first few days in bed further recovering, took a covid-19 test when I arrived and after I felt better, and both were negative, so not sure what exactly was wrong with me, but between that and catching up with other Debian duties, I couldn't make any progress on catching up on the packaging work I wanted to do. I'll still post what I intended here, and I'll try to take a few days to focus on these some time next month: Calamares / Debian Live stuff:
  • #980209 installation fails at the install boot loader phase
  • #1021156 calamares-settings-debian: Confusing/generic program names
  • #1037299 Install Debian -> Untrusted application launcher
  • #1037123 Minimal HD space required too small for some live images
  • #971003 Console auto-login doesn t work with sysvinit
At least Calamares has been trixiefied in testing, so there's that! Desktop stuff:
  • #1038660 please set a placeholder theme during development, different from any release
  • #1021816 breeze: Background image not shown any more
  • #956102 desktop-base: unwanted metadata within images
  • #605915 please make it a non-native package
  • #681025 Put old themes in a new package named desktop-base-extra
  • #941642 desktop-base: split theme data files and desktop integrations in separate packages
The Egg theme that I want to develop for testing/unstable is based on Juliette Taka's Homeworld theme that was used for Bullseye. Egg, as in, something that hasn't quite hatched yet. Get it? (for #1038660) Debian Social:
  • Set up Lemmy instance
    • I started setting up a Lemmy instance before DebCamp, and meant to finish it.
  • Migrate PeerTube to new server
    • We got a new physical server for our PeerTube instance, we should have more space for growth and it would help us fix the streaming feature on our platform.
Loopy: I intended to get the loop for DebConf in good shape before I left, so that we can spend some time during DebCamp making some really nice content, unfortunately this went very tumbly, but at least we ended up with a loopy that kind of worked and wasn't too horrible. There's always another DebConf to try again, right?
So DebCamp as a usual DebCamp was pretty much a wash (fitting with all the rain we had?) for me, at least it gave me enough time to recover a bit for DebConf proper, and I had enough time left to catch up on some critical DPL duties and put together a few slides for the Bits from the DPL talk.

DebConf Bits From the DPL I had very, very little available time to prepare something for Bits from the DPL, but I managed to put some slides together (available on my wiki page). I mostly covered:
  • A very quick introduction of myself (I've done this so many times, it feels redundant giving my history every time), and some introduction on what it is that the DPL does. I declared my intent not to run for DPL again, and the reasoning behind it, and a few bits of information for people who may intend to stand for DPL next year.
  • The sentiment out there for the Debian 12 release (which has been very positive). How we include firmware by default now, and that we're saying goodbye to both the GNU/kFreeBSD and mipsel architectures.
  • Debian Day and the 30th birthday party celebrations from local groups all over the world (and a reminder about the Local Groups BoF later in the week).
  • I looked forward to Debian 13 (trixie!), and how we're gaining riscv64 as a release architecture, as well as loongarch64, and that plans seem to be forming to fix 2k38 in Debian, and hopefully largely by the time the Trixie release comes by.
  • I made some comments about Enterprise Linux as people refer to the RHEL eco-system these days, how really bizarre some aspects of it are (like the kernel maintenance), and that some big vendors are choosing to support systems outside of that eco-system now (like CPanel now supporting Ubuntu too). I closed with the quote below from Ian Murdock, and assured the audience that if they want to go out and make money with Debian, they are more than welcome to.
Job Fair I walked through the hallway where the Job Fair was hosted, and enjoyed all the buzz. It's not always easy to get this right, but this year it was very active and energetic, I hope lots of people made some connections! Cheese & Wine Due to state laws and alcohol licenses, we couldn't consume alcohol from outside the state of Kerala in the common areas of the hotel (only in private rooms), so this wasn't quite as big or as fun as our usual C&W parties since we couldn't share as much from our individual countries and cultures, but we always knew that this was going to be the case for this DebConf, and it still ended up being alright. Day Trip I opted for the forest / waterfalls daytrip. It was really, really long with lots of time in the bus. I think our trip's organiser underestimated how long it would take between the points on the route (all in all it wasn't that far, but on a bus on a winding mountain road, it takes long). We left at 8:00 and only found our way back to the hotel around 23:30. Even though we arrived tired and hungry, we saw some beautiful scenery, animals and also met indigenous river people who talked about their struggles against being driven out of their place of living multiple times as the government invests in new developments like dams and hydro power. Photos available in the DebConf23 public git repository. Losing a beloved Debian Developer during DebConf To our collective devastation, not everyone made it back from their day trips. Abraham Raji was out on the kayak day trip, and while swimming, got caught by a whirlpool from a drainage system. Even though all of us were properly exhausted and shocked in disbelief at this point, we had to stay up and make some tough decisions. Some initially felt that we had to cancel the rest of DebConf. We also had to figure out how to announce what happened asap both to the larger project and at DebConf in an official manner, while ensuring that due diligence took place and that the family was informed by the police first before making anything public. We ended up cancelling all the talks for the following day, with an address from the DPL in the morning to explain what had happened. Of all the things I've ever had to do as DPL, this was by far the hardest. The day after that, talks were also cancelled for the morning so that we could attend his funeral. Dozens of DebConf attendees headed out by bus to go pay their final respects, many wearing the t-shirts that Abraham had designed for DebConf. A book of condolences was set up so that everyone who wished to could write a message on how they remembered him. The book will be kept by his family.
Today marks a week since his funeral, and I still feel very raw about it. And even though there was uncertainty whether DebConf should even continue after his death, in hindsight I'm glad that everyone pushed forward. While we were all heartbroken, it was also heartwarming to see people care for each other in all of this. If anything, I think I needed more time at DebConf just to be in that warm aura of emotional support for just a bit longer. There are many people who I wanted to talk to who I barely even had a chance to see. Abraham, or Abru as he was called by some people (which I like because bru in Afrikaans is like bro in English, not sure if that's what it implied locally too) enjoyed artistic pursuits, but he was also passionate about knowledge transfer. He ran classes at DebConf both last year and this year (and I think at other local events too) where he taught people packaging via a quick course that he put together. His enthusiasm for Debian was contagious, a few of the people who he was mentoring came up to me and told me that they were going to see it through and become a DD in honor of him. I can't even remember how I reacted to that, my brain was already so worn out and stitching that together with the tragedy of what happened while at DebConf was just too much for me. I first met him in person last year in Kosovo, I already knew who he was, so I think we interacted during the online events the year before. He was just one of those people who showed so much promise, and I was curious to see what he'd achieve in the future. Unfortunately, he was taken away from us too soon. Poetry Evening Later in the week we had the poetry evening. This was the first time I had the courage to recite something. I read Ithaka by C.P. Cavafy (translated by Edmund Keeley). The first time I heard about this poem was in an interview with Julian Assange's wife, where she mentioned that he really loves this poem, and it caught my attention because I really like the Weezer song Return to Ithaka and always wondered what it was about, so needless to say, that was another rabbit hole at some point. Group Photo Our DebConf photographer organised another group photo for this event, links to high-res versions available on Aigars' website.
BoFs I didn't attend nearly as many talks this DebConf as I would've liked (fortunately I can catch up on video, should be released soon), but I did make it to a few BoFs. In the Local Groups BoF, representatives from various local teams were present who introduced themselves and explained what they were doing. From memory (sorry if I left someone out), we had people from Belgium, Brazil, Taiwan and South Africa. We talked about types of events a local group could do (BSPs, Mini DC, sprints, Debian Day, etc.), how to help local groups get started, booth kits for conferences, and setting up some form of calendar that lists important Debian events in a way that makes it easier for people to plan and co-ordinate. There's a mailing list for co-ordination of local groups, and the irc channel is -localgroups on oftc.
If you got one of these Cheese & Wine bags from DebConf, that s from the South African local group!
In the Debian.net BoF, we discussed the Debian.net hosting service, where Debian pays for VMs hosted for projects by individual DDs on Debian.net. The idea is that we start some form of census that monitors the services, whether they're still in use, whether the system is up to date, whether someone still cares for it, etc. We had some discussion about where the lines of responsibility are drawn, and we can probably make things a little bit more clear in the documentation. We also want to offer more in terms of backups and monitoring (currently DDs do get 500GB from rsync.net that could be used for backups of their services though). The intention is also to deploy some form of configuration management for some essentials across the hosts. We should also look at getting some sponsored hosting for this. In the Debian Social BoF, we discussed some services that need work / expansion. In particular, Matrix keeps growing at an increased rate as more users use it and more channels are bridged, so it will likely move to its own host with big disks soon. We might replace Pleroma with a fork called Akkoma, this will need some more homework and checking whether it's even feasible. Some services haven't really been used (like Writefreely and Plume), and it might be time to retire them. We might just have to help one or two users migrate some of their posts away if we do retire them. Mjolner seems to do a fine job at spam blocking, we haven't had any notable incidents yet. WordPress now has improved fediverse support, it's unclear whether it works on a multi-site instance yet, I'll test it at some point soon and report back. For upcoming services, we are implementing Lemmy and probably also Mobilizon. A request was made that we also look into Loomio. More Information Overload There's so much that happens at DebConf, it's tough to take it all in, and also, to find time to write about all of it, but I'll mention a few more things that are certainly worthy of note. During DebConf, we had some people from the Kite Linux team over. KITE supplies the ICT needs for the primary and secondary schools in the province of Kerala, where they all use Linux. They decided to switch all of these to Debian. There was an ad-hoc BoF where locals were listening and fielding questions that the Kite Linux team had. It was great seeing all the energy and enthusiasm behind this effort, I hope someone will properly blog about this! I learned about the VGLUG Foundation, who are doing a tremendous job at promoting GNU/Linux in the country. They are also training up 50 people a year to be able to provide tech support for Debian. I came across the booth for Mostly Harmless, they liberate old hardware by installing free firmware on there. It was nice seeing all the devices out there that could be liberated, and how it can breathe new life into old hardware.
Some hopefully harmless soldering.
Overall, the community and their activities in India are very impressive, and I wish I had more time to get to know everyone better. Food Oh yes, one more thing. The food was great. I tasted more different kinds of curry than I ever did in my whole life up to this point. The lunch on banana leaves was interesting, and also learning how to eat this food properly by hand (thanks to the locals who insisted on teaching me!), it was a fruitful experience? This might catch on at home too, fewer dishes to take care of! Special thanks to the DebConf23 Team I think this may have been one of the toughest DebConfs to organise yet, and I don't think many people outside of the DebConf team know about all the challenges and adversity this team has faced in organising it. Even just getting to the previous DebConf in Kosovo was a long and tedious and somewhat risky process. Through it all, they were absolute pros. Not once did I see them get angry or yell at each other; whenever a problem came up, they just dealt with it. They did a really stellar job and I did make a point of telling them on the last day that everyone appreciated all the work that they did. Back to my nest I bought Dax a ball back from India, he seems to have forgiven me for not taking him along.
I'll probably take a few days soon to focus a bit on my bugs and catch up on my original DebCamp goals. If you made it this far, thanks for reading! And thanks to everyone for being such fantastic people.

16 September 2023

Sergio Talens-Oliag: GitLab CI/CD Tips: Using a Common CI Repository with Assets

This post describes how to handle files that are used as assets by jobs and pipelines defined on a common gitlab-ci repository when we include those definitions from a different project.

Problem description
When a .gitlab-ci.yml file includes files from a different repository, its contents are expanded and the resulting code is the same as the one generated when the included files are local to the repository. In fact, even when the remote files include other files everything works right, as they are also expanded (see the description of how included files are merged for a complete explanation), allowing us to organise the common repository as we want. As an example, suppose that we have the following script on the assets/ folder of the common repository:
dumb.sh
#!/bin/sh
echo "The script arguments are: '$@'"
If we run the following job on the common repository:
job:
  script:
    - $CI_PROJECT_DIR/assets/dumb.sh ARG1 ARG2
the output will be:
The script arguments are: 'ARG1 ARG2'
But if we run the same job from a different project that includes the same job definition the output will be different:
/scripts-23-19051/step_script: eval: line 138: d./assets/dumb.sh: not found
The problem here is that we include and expand the YAML files, but if a script wants to use other files from the common repository as an asset (configuration file, shell script, template, etc.), the execution fails if the files are not available on the project that includes the remote job definition.

Solutions
We can solve the issue using multiple approaches; I'll describe two of them:
  • Create files using scripts
  • Download files from the common repository

Create files using scripts
One way to dodge the issue is to generate the non-YAML files from scripts included on the pipelines using HERE documents. The problem with this approach is that we have to put the content of the files inside a script on a YAML file and if it uses characters that can be replaced by the shell (remember, we are using HERE documents) we have to escape them (error prone) or encode the whole file into base64 or something similar, making maintenance harder. As an example, imagine that we want to use the dumb.sh script presented on the previous section and we want to call it from the same PATH of the main project (on the examples we are using the same folder; in practice we can create a hidden folder inside the project directory or use a PATH like /tmp/assets-$CI_JOB_ID to leave things outside the project folder and make sure that there will be no collisions if two jobs are executed on the same place, i.e. when using a ssh runner). To create the file we will use hidden jobs to write our script template and reference tags to add it to the scripts when we want to use them. Here we have a snippet that creates the file with cat:
.file_scripts:
  create_dumb_sh:
    - |
      # Create dumb.sh script
      mkdir -p "${CI_PROJECT_DIR}/assets"
      cat >"${CI_PROJECT_DIR}/assets/dumb.sh" <<EOF
      #!/bin/sh
      echo "The script arguments are: '\$@'"
      EOF
      chmod +x "${CI_PROJECT_DIR}/assets/dumb.sh"
Note that to make things work we've added 6 spaces before the script code and escaped the dollar sign. To do the same using base64 we replace the previous snippet by this:
.file_scripts:
  create_dumb_sh:
    - |
      # Create dumb.sh script
      mkdir -p "${CI_PROJECT_DIR}/assets"
      base64 -d >"${CI_PROJECT_DIR}/assets/dumb.sh" <<EOF
      IyEvYmluL3NoCmVjaG8gIlRoZSBzY3JpcHQgYXJndW1lbnRzIGFyZTogJyRAJyIK
      EOF
      chmod +x "${CI_PROJECT_DIR}/assets/dumb.sh"
Again, we have to indent the base64 version of the file using 6 spaces (all lines of the base64 output have to be indented) and to make changes we have to decode and re-code the file manually, making it harder to maintain. With either version we just need to add a !reference before using the script; if we add the call on the first lines of the before_script we can use the created file in the before_script, script or after_script sections of the job without problems:
job:
  before_script:
    - !reference [.file_scripts, create_dumb_sh]
  script:
    - ${CI_PROJECT_DIR}/assets/dumb.sh ARG1 ARG2
The output of a pipeline that uses this job will be the same as the one shown in the original example:
The script arguments are: 'ARG1 ARG2'
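If the embedded script needs changes, the base64 payload has to be regenerated by hand; assuming the plain-text version of the script is kept at assets/dumb.sh (a hypothetical location, adjust as needed), something like this prints the new line to paste into the snippet:
base64 -w0 assets/dumb.sh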

Download the files from the common repository
As we've seen, the previous solution works but is not ideal, as it makes the files harder to read, maintain and use. An alternative approach is to keep the assets on a directory of the common repository (in our examples we will name it assets) and prepare a YAML file that declares some variables (i.e. the URL of the templates project and the PATH where we want to download the files) and defines a script fragment to download the complete folder. Once we have the YAML file we just need to include it and add a reference to the script fragment at the beginning of the before_script of the jobs that use files from the assets directory and they will be available when needed. The following file is an example of the YAML file we just mentioned:
bootstrap.yml
variables:
  CI_TMPL_API_V4_URL: "${CI_API_V4_URL}/projects/common%2Fci-templates"
  CI_TMPL_ARCHIVE_URL: "${CI_TMPL_API_V4_URL}/repository/archive"
  CI_TMPL_ASSETS_DIR: "/tmp/assets-${CI_JOB_ID}"
.scripts_common:
  bootstrap_ci_templates:
    - |
      # Downloading assets
      echo "Downloading assets"
      mkdir -p "$CI_TMPL_ASSETS_DIR"
      wget -q -O - --header="PRIVATE-TOKEN: $CI_TMPL_READ_TOKEN" \
        "$CI_TMPL_ARCHIVE_URL?path=assets&sha=${CI_TMPL_REF:-main}" |
        tar --strip-components 2 -C "$CI_TMPL_ASSETS_DIR" -xzf -
The file defines the following variables:
  • CI_TMPL_API_V4_URL: URL of the common project, in our case we are using the project ci-templates inside the common group (note that the slash between the group and the project is escaped, that is needed to reference the project by name, if we don't like that approach we can replace the url encoded path by the project id, i.e. we could use a value like ${CI_API_V4_URL}/projects/31)
  • CI_TMPL_ARCHIVE_URL: Base URL to use the gitlab API to download files from a repository, we will add the arguments path and sha to select which sub path to download and from which commit, branch or tag (we will explain later why we use the CI_TMPL_REF, for now just keep in mind that if it is not defined we will download the version of the files available on the main branch when the job is executed).
  • CI_TMPL_ASSETS_DIR: Destination of the downloaded files.
It also uses variables defined in other places:
  • CI_TMPL_READ_TOKEN: a token that includes the read_api scope for the common project. We need it because the tokens created by the CI/CD pipelines of other projects can't be used to access the API of the common one. We define the variable in the GitLab CI/CD variables section so that we can change it if needed (i.e. if it expires).
  • CI_TMPL_REF: branch or tag of the common repo from which to get the files (we need it to make sure we are using the right version of the files; when testing we will use a branch, and on production pipelines we can use fixed tags to make sure that the assets don't change between executions unless we change the reference). We will set the value in the .gitlab-ci.yml file of the remote projects and will use the same reference when including the files to make sure that everything is coherent. The snippet right after this list shows how to test the download by hand, outside a pipeline.
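Before wiring this into a pipeline it can be useful to exercise the same API call from a shell; the following sketch assumes a GitLab instance URL, a token with the read_api scope, and the common/ci-templates project used in the examples:
# Download only the assets folder of common/ci-templates at a given ref and unpack it,
# mimicking what the bootstrap_ci_templates fragment does inside a job
GITLAB_API="https://gitlab.example.com/api/v4"   # adjust to your instance
TOKEN="glpat-your-read-api-token"                # a token with the read_api scope
REF="main"
mkdir -p /tmp/assets-test
curl -fsSL --header "PRIVATE-TOKEN: $TOKEN" \
  "$GITLAB_API/projects/common%2Fci-templates/repository/archive?path=assets&sha=$REF" |
  tar --strip-components 2 -C /tmp/assets-test -xzf -
ls -l /tmp/assets-test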
This is an example YAML file that defines a pipeline with a job that uses the script from the common repository:
pipeline.yml
include:
  - /bootstrap.yml
stages:
  - test
dumb_job:
  stage: test
  before_script:
    - !reference [.scripts_common, bootstrap_ci_templates]
  script:
    - ${CI_TMPL_ASSETS_DIR}/dumb.sh ARG1 ARG2
To use it from an external project we will use the following GitLab CI configuration:
gitlab-ci.yml
include:
  - project: 'common/ci-templates'
    ref: &ciTmplRef 'main'
    file: '/pipeline.yml'
variables:
  CI_TMPL_REF: *ciTmplRef
Where we use a YAML anchor to ensure that we use the same reference when including the file and when assigning the value to the CI_TMPL_REF variable (as far as I know we have to pass the ref value explicitly to know which reference was used when including the file; the anchor allows us to make sure that the value is always the same in both places).

The reference we use is quite important for the reproducibility of the jobs: if we don't use fixed tags or commit hashes as references, each time a job that downloads the files is executed we can get different versions of them. For that reason it is not a bad idea to create tags on our common repo and use them as references on the projects or branches that we want to behave as if their CI/CD configuration was local (if we point to a fixed version of the common repo, things work almost the same as having the pipelines directly in our repo).

While developing pipelines, using branches as references is a really useful option; it allows us to re-run the jobs that we want to test and they will download the latest versions of the asset files on the branch, speeding up the testing process. However, keep in mind that the trick only works with the asset files: if we change a job or a pipeline in the YAML files, restarting the job is not enough to test the new version, as the restart uses the same job created with the current pipeline. To try the updated jobs we have to create a new pipeline, either by a new action against the repository or by executing the pipeline manually.
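Creating and pushing one of those fixed tags on the common repository follows the usual git workflow; a minimal sketch (the tag name is just an example):
# In a checkout of the common/ci-templates repository
git tag -a v1.0.0 -m "Stable set of CI templates and assets"
git push origin v1.0.0
# External projects can then pin to it with: ref: &ciTmplRef 'v1.0.0'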

Conclusion

For now I'm using the second solution, and as it is working well my guess is that I'll keep using this approach unless GitLab itself provides a better or simpler way of doing the same thing.

13 September 2023

Matthew Garrett: Reconstructing an invalid TPM event log

TPMs contain a set of registers ("Platform Configuration Registers", or PCRs) that are used to track what a system boots. Each time a new event is measured, a cryptographic hash representing that event is passed to the TPM. The TPM appends that hash to the existing value in the PCR, hashes that, and stores the final result in the PCR. This means that while the PCR's value depends on the precise sequence and value of the hashes presented to it, the PCR value alone doesn't tell you what those individual events were. Different PCRs are used to store different event types, but there are still more events than there are PCRs so we can't avoid this problem by simply storing each event separately.

This is solved using the event log. The event log is simply a record of each event, stored in RAM. The algorithm the TPM uses to calculate the PCR values is known, so we can reproduce that by simply taking the events from the event log and replaying the series of events that were passed to the TPM. If the final calculated value is the same as the value in the PCR, we know that the event log is accurate, which means we now know the value of each individual event and can make an appropriate judgement regarding its security.
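To make that replay concrete, here is a minimal sketch of recomputing a SHA-256 PCR from a list of event digests (the hex digests and the use of sha256sum/xxd are illustrative assumptions; real logs are binary structures handled by dedicated tooling):
#!/bin/sh
# Replay event digests (hex-encoded SHA-256) into a simulated PCR.
# Extend rule: PCR_new = SHA256(PCR_old || event_digest); the PCR starts as all zeroes.
pcr="0000000000000000000000000000000000000000000000000000000000000000"
for digest in "$@"; do
  pcr=$(printf '%s%s' "$pcr" "$digest" | xxd -r -p | sha256sum | awk '{print $1}')
done
echo "Expected PCR value: $pcr"
Invoked with the hex digests as arguments, in log order, it prints the value the PCR should end up holding.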

If any value in the event log is invalid, we'll calculate a different PCR value and it won't match. This isn't terribly helpful - we know that at least one entry in the event log doesn't match what was passed to the TPM, but we don't know which entry. That means we can't trust any of the events associated with that PCR. If you're trying to make a security determination based on this, that's going to be a problem.

PCR 7 is used to track information about the secure boot policy on the system. It contains measurements of whether or not secure boot is enabled, and which keys are trusted and untrusted on the system in question. This is extremely helpful if you want to verify that a system booted with secure boot enabled before allowing it to do something security or safety critical. Unfortunately, if the device gives you an event log that doesn't replay correctly for PCR 7, you now have no idea what the security state of the system is.

We ran into that this week. Examination of the event log revealed an additional event other than the expected ones - a measurement accompanied by the string "Boot Guard Measured S-CRTM". Boot Guard is an Intel feature where the CPU verifies the firmware is signed with a trusted key before executing it, and measures information about the firmware in the process. Previously I'd only encountered this as a measurement into PCR 0, which is the PCR used to track information about the firmware itself. But it turns out that at least some versions of Boot Guard also measure information about the Boot Guard policy into PCR 7. The argument for this is that this is effectively part of the secure boot policy - having a measurement of the Boot Guard state tells you whether Boot Guard was enabled, which tells you whether or not the CPU verified a signature on your firmware before running it (as I wrote before, I think Boot Guard has user-hostile default behaviour, and that enforcing this on consumer devices is a bad idea).

But there's a problem here. The event log is created by the firmware, and the Boot Guard measurements occur before the firmware is executed. So how do we get a log that represents them? That one's fairly simple - the firmware simply re-calculates the same measurements that Boot Guard did and creates a log entry after the fact[1]. All good.

Except. What if the firmware screws up the calculation and comes up with a different answer? The entry in the event log will now not match what was sent to the TPM, and replaying will fail. And without knowing what the actual value should be, there's no way to fix this, which means there's no way to verify the contents of PCR 7 and determine whether or not secure boot was enabled.

But there's still a fundamental source of truth - the measurement that was sent to the TPM in the first place. Inspired by Henri Nurmi's work on sniffing Bitlocker encryption keys, I asked a coworker if we could sniff the TPM traffic during boot. The TPM on the board in question uses SPI, a simple bus that can have multiple devices connected to it. In this case the system flash and the TPM are on the same SPI bus, which made things easier. The board had a flash header for external reprogramming of the firmware in the event of failure, and all SPI traffic was visible through that header. Attaching a logic analyser to this header made it simple to generate a record of that. The only problem was that the chip select line on the header was attached to the firmware flash chip, not the TPM. This was worked around by simply telling the analysis software that it should invert the sense of the chip select line, ignoring all traffic that was bound for the flash and paying attention to all other traffic. This worked in this case since the only other device on the bus was the TPM, but would cause problems in the event of multiple devices on the bus all communicating.

With the aid of this analyser plugin, I was able to dump all the TPM traffic and could then search for writes that included the "0182" sequence that corresponds to the command code for a measurement event. This gave me a couple of accesses to the locality 3 registers, which was a strong indication that they were coming from the CPU rather than from the firmware. One was for PCR 0, and one was for PCR 7. This corresponded to the two Boot Guard events that we expected from the event log. The hash in the PCR 0 measurement was the same as the hash in the event log, but the hash in the PCR 7 measurement differed from the hash in the event log. Replacing the event log value with the value actually sent to the TPM resulted in the event log now replaying correctly, supporting the hypothesis that the firmware was failing to correctly reconstruct the event.

What now? The simple thing to do is for us to simply hard code this fixup, but longer term we'd like to figure out how to reconstruct the event so we can calculate the expected value ourselves. Unfortunately there doesn't seem to be any public documentation on this. Sigh.

[1] What stops firmware on a system with no Boot Guard faking those measurements? TPMs have a concept of "localities", effectively different privilege levels. When Boot Guard performs its initial measurement into PCR 0, it does so at locality 3, a locality that's only available to the CPU. This causes PCR 0 to be initialised to a different initial value, affecting the final PCR value. The firmware can't access locality 3, so can't perform an equivalent measurement, so can't fake the value.


12 September 2023

John Goerzen: A Maze of Twisty Little Pixels, All Tiny

Two years ago, I wrote Managing an External Display on Linux Shouldn t Be This Hard. Happily, since I wrote that post, most of those issues have been resolved. But then you throw HiDPI into the mix and it all goes wonky. If you re running X11, basically the story is that you can change the scale factor, but it only takes effect on newly-launched applications (which means a logout/in because some of your applications you can t really re-launch). That is a problem if, like me, you sometimes connect an external display that is HiDPI, sometimes not, or your internal display is HiDPI but others aren t. Wayland is far better, supporting on-the-fly resizes quite nicely. I ve had two devices with HiDPI displays: a Surface Go 2, and a work-issued Thinkpad. The Surface Go 2 is my ultraportable Linux tablet. I use it sparingly at home, and rarely with an external display. I just put Gnome on it, in part because Gnome had better on-screen keyboard support at the time, and left it at that. On the work-issued Thinkpad, I really wanted to run KDE thanks to its tiling support (I wound up using bismuth with it). KDE was buggy with Wayland at the time, so I just stuck with X11 and ran my HiDPI displays at lower resolutions and lived with the fuzziness. But now that I have a Framework laptop with a HiDPI screen, I wanted to get this right. I tried both Gnome and KDE. Here are my observations with both: Gnome I used PaperWM with Gnome. PaperWM is a tiling manager with a unique horizontal ribbon approach. It grew on me; I think I would be equally at home, or maybe even prefer it, to my usual xmonad-style approach. Editing the active window border color required editing ~/.local/share/gnome-shell/extensions/paperwm@hedning:matrix.org/stylesheet.css and inserting background-color and border-color items in the paperwm-selection section. Gnome continues to have an absolutely terrible picture for configuring things. It has no less than four places to make changes (Settings, Tweaks, Extensions, and dconf-editor). In many cases, configuration for a given thing is split between Settings and Tweaks, and sometimes even with Extensions, and then there are sometimes options that are only visible in dconf. That is, where the Gnome people have even allowed something to be configurable. Gnome installs a power manager by default. It offers three options: performance, balanced, and saver. There is no explanation of the difference between them. None. What is it setting when I change the pref? A maximum frequency? A scaling governor? A balance between performance and efficiency cores? Not only that, but there s no way to tell it to just use performance when plugged in and balanced or saver when on battery. In an issue about adding that, a Gnome dev wrote We re not going to add a preference just because you want one . KDE, on the other hand, aside from not mucking with your system s power settings in this way, has a nice panel with on AC and on battery and you can very easily tweak various settings accordingly. The hostile attitude from the Gnome developers in that thread was a real turnoff. While Gnome has excellent support for Wayland, it doesn t (directly) support fractional scaling. That is, you can set it to 100%, 200%, and so forth, but no 150%. Well, unless you manage to discover that you can run gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']" first. (Oh wait, does that make a FIFTH settings tool? Why yes it does.) 
Despite its name, that allows you to select fractional scaling under Wayland. For X11 apps, they will be blurry, a problem that is optional under KDE (more on that below). Gnome won t show the battery life time remaining on the task bar. Yikes. An extension might work in some cases. Not only that, but the Gnome battery icon frequently failed to indicate AC charging when AC was connected, a problem that didn t exist on KDE. Both Gnome and KDE support night light (warmer color temperatures at night), but Gnome s often didn t change when it should have, or changed on one display but not the other. The appindicator extension is pretty much required, as otherwise a number of applications (eg, Nextcloud) don t have their icon display anywhere. It does, however, generate a significant amount of log spam. There may be a fix for this. Unlike KDE, which has a nice inobtrusive popup asking what to do, Gnome silently automounts USB sticks when inserted. This is often wrong; for instance, if I m about to dd a Debian installer to it, I definitely don t want it mounted. I learned this the hard way. It is particularly annoying because in a GUI, there is no reason to mount a drive before the user tries to access it anyhow. It looks like there is a dconf setting, but then to actually mount a drive you have to open up Files (because OF COURSE Gnome doesn t have a nice removable-drives icon like KDE does) and it s a bunch of annoying clicks, and I didn t want to use the GUI file manager anyway. Same for unmounting; two clicks in KDE thanks to the task bar icon, but in Gnome you have to open up the file manager, unmount the drive, close the file manager again, etc. The ssh agent on Gnome doesn t start up for a Wayland session, though this is easily enough worked around. The reason I completely soured on Gnome is that after using it for awhile, I noticed my laptop fans spinning up. One core would be constantly busy. It was busy with a kworker events task, something to do with sound events. Logging out would resolve it. I believe it to be a Gnome shell issue. I could find no resolution to this, and am unwilling to tolerate the decreased battery life this implies. The Gnome summary: it looks nice out of the box, but you quickly realize that this is something of a paper-thin illusion when you try to actually use it regularly. KDE The KDE experience on Wayland was a little bit opposite of Gnome. While with Gnome, things start out looking great but you realize there are some serious issues (especially battery-eating), with KDE things start out looking a tad rough but you realize you can trivially fix them and wind up with a very solid system. Compared to Gnome, KDE never had a battery-draining problem. It will show me estimated battery time remaining if I want it to. It will do whatever I want it to when I insert a USB drive. It doesn t muck with my CPU power settings, and lets me easily define on AC vs on battery settings for things like suspend when idle. KDE supports fractional scaling, to any arbitrary setting (even with the gsettings thing above, Gnome still only supports it in 25% increments). Then the question is what to do with X11-only applications. KDE offers two choices. The first is Scaled by the system , which is also the only option for Gnome. With that setting, the X11 apps effectively run natively at 100% and then are scaled up within Wayland, giving them a blurry appearance on HiDPI displays. 
The advantage is that the scaling happens within Wayland, so the size of the app will always be correct even when the Wayland scaling factor changes. The other option is Apply scaling themselves , which uses native X11 scaling. This lets most X11 apps display crisp and sharp, but then if the system scaling changes, due to limitations of X11, you ll have to restart the X apps to get them to be the correct size. I appreciate the choice, and use Apply scaling by themselves because only a few of my apps aren t Wayland-aware. I did encounter a few bugs in KDE under Wayland: sddm, the display manager, would be slow to stop and cause a long delay on shutdown or reboot. This seems to be a known issue with sddm and Wayland, and is easily worked around by adding a systemd TimeoutStopSec. Konsole, the KDE terminal emulator, has weird display artifacts when using fractional scaling under Wayland. I applied some patches and rebuilt Konsole and then all was fine. The Bismuth tiling extension has some pretty weird behavior under Wayland, but a 1-character patch fixes it. On Debian, KDE mysteriously installed Pulseaudio instead of Debian s new default Pipewire, but that was easily fixed as well (and Pulseaudio also works fine). Conclusions I m sticking with KDE. Given that I couldn t figure out how to stop Gnome from deciding to eat enough battery to make my fan come on, the decision wasn t hard. But even if it weren t for that, I d have gone with KDE. Once a couple of things were patched, the experience is solid, fast, and flawless. Emacs (my main X11-only application) looks great with the self-scaling in KDE. Gimp, which I use occasionally, was terrible with the blurry scaling in Gnome. Update: Corrected the gsettings command
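For reference, the sddm shutdown delay mentioned above can be capped with a systemd drop-in; a minimal sketch (the drop-in path and the ten-second value are assumptions, not necessarily the exact fix the author used):
# Create a drop-in that limits how long systemd waits for sddm to stop
sudo mkdir -p /etc/systemd/system/sddm.service.d
printf '[Service]\nTimeoutStopSec=10s\n' | sudo tee /etc/systemd/system/sddm.service.d/timeout.conf
sudo systemctl daemon-reload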

8 September 2023

Reproducible Builds: Reproducible Builds in August 2023

Welcome to the August 2023 report from the Reproducible Builds project! In these reports we outline the most important things that we have been up to over the past month. As a quick recap, whilst anyone may inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries. The motivation behind the reproducible builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised. If you are interested in contributing to the project, please visit our Contribute page on our website.

Rust serialisation library moving to precompiled binaries

Bleeping Computer reported that Serde, a popular Rust serialization framework, had decided to ship its serde_derive macro as a precompiled binary. As Ax Sharma writes:
The move has generated a fair amount of push back among developers who worry about its future legal and technical implications, along with a potential for supply chain attacks, should the maintainer account publishing these binaries be compromised.
After intensive discussions, use of the precompiled binary was phased out.

Reproducible builds, the first ten years

On August 4th, Holger Levsen gave a talk at BornHack 2023 on the Danish island of Funen titled Reproducible Builds, the first ten years which promised to contain:
[ ] an overview about reproducible builds, the past, the presence and the future. How it started with a small [meeting] at DebConf13 (and before), how it grew from being a Debian effort to something many projects work on together, until in 2021 it was mentioned in an executive order of the president of the United States. (HTML slides)
Holger repeated the talk later in the month at Chaos Communication Camp 2023 in Zehdenick, Germany: A video of the talk is available online, as are the HTML slides.

Reproducible Builds Summit

Just another reminder that our upcoming Reproducible Builds Summit is set to take place from October 31st to November 2nd 2023 in Hamburg, Germany. Our summits are a unique gathering that brings together attendees from diverse projects, united by a shared vision of advancing the Reproducible Builds effort. During this enriching event, participants will have the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. If you're interested in joining us this year, please make sure to read the event page, the news item, or the invitation email that Mattia Rizzolo sent out, which have more details about the event and location. We are also still looking for sponsors to support the event, so do reach out to the organizing team if you are able to help. (Also of note: PackagingCon 2023 is taking place in Berlin just before our summit, and their schedule has just been published.)

Vagrant Cascadian on the Sustain podcast

Vagrant Cascadian was interviewed on the SustainOSS podcast on reproducible builds:
Vagrant walks us through his role in the project where the aim is to ensure identical results in software builds across various machines and times, enhancing software security and creating a seamless developer experience. Discover how this mission, supported by the Software Freedom Conservancy and a broad community, is changing the face of Linux distros, Arch Linux, openSUSE, and F-Droid. They also explore the challenges of managing random elements in software, and Vagrant's vision to make reproducible builds a standard best practice that will ideally become automatic for users. Vagrant shares his work in progress and their commitment to the last mile problem.
The episode is available to listen to (or download) from the Sustain podcast website. As it happens, the episode was recorded at FOSSY 2023, and the video of Vagrant's talk from this conference (Breaking the Chains of Trusting Trust) is now available on Archive.org. It was also announced that Vagrant Cascadian will be presenting at the Open Source Firmware Conference in October on the topic of Reproducible Builds All The Way Down.

On our mailing list

Carles Pina i Estany wrote to our mailing list during August with an interesting question concerning the practical steps to reproduce the hello-traditional package from Debian. The entire thread can be viewed from the archive page, as can Vagrant Cascadian's reply.

Website updates

Rahul Bajaj updated our website to add a series of environment variations related to reproducible builds [ ], Russ Cox added the Go programming language to our projects page [ ] and Vagrant Cascadian fixed a number of broken links and typos around the website [ ][ ][ ].

Software development

In diffoscope development this month, versions 247, 248 and 249 were uploaded to Debian unstable by Chris Lamb, who also added documentation for the new specialize_as method and expanded the documentation of the existing specialize as well [ ]. In addition, Fay Stegerman added specialize_as and used it to optimise .smali comparisons when decompiling Android .apk files [ ], Felix Yan and Mattia Rizzolo corrected some typos in code comments [ , ], and Greg Chabala merged the RUN commands into a single layer in the package's Dockerfile [ ], thus greatly reducing the final image size. Lastly, Roland Clobus updated tool descriptions to mark that the xb-tool has moved to another package within Debian [ ].
reprotest is our tool for building the same source code twice in different environments and then checking the binaries produced by each build for any differences. This month, Vagrant Cascadian updated the packaging to be compatible with Tox version 4. This was originally filed as Debian bug #1042918 and Holger Levsen uploaded this change to Debian unstable as version 0.7.26 [ ].
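As a pointer for readers who have not used these tools, comparing two builds of the same package with diffoscope is a one-liner; a minimal sketch with placeholder file names:
# Compare two builds of the same package and write an HTML report of the differences
diffoscope --html report.html mypkg_1.0-1_amd64.build1.deb mypkg_1.0-1_amd64.build2.deb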

Distribution work

In Debian, 28 reviews of Debian packages were added, 14 were updated and 13 were removed this month, adding to our knowledge about identified issues. A number of issue types were added, including Chris Lamb adding a new timestamp_in_documentation_using_sphinx_zzzeeksphinx_theme toolchain issue.
In August, F-Droid added 25 new reproducible apps and saw 2 existing apps switch to reproducible builds, making 191 apps in total that are published with Reproducible Builds and using the upstream developer's signature. [ ]
Bernhard M. Wiedemann published another monthly report about reproducibility within openSUSE.

Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Testing framework

The Reproducible Builds project operates a comprehensive testing framework (available at tests.reproducible-builds.org) in order to check packages and other artifacts for reproducibility. In August, a number of changes were made by Holger Levsen:
  • Debian-related changes:
    • Disable Debian live image creation jobs until an OpenQA credential problem has been fixed. [ ]
    • Run our maintenance scripts every 3 hours instead of every 2. [ ]
    • Export data for unstable to the reproducible-tracker.json data file. [ ]
    • Stop varying the build path, we want reproducible builds. [ ]
    • Temporarily stop updating the pbuilder.tgz for Debian unstable due to #1050784. [ ][ ]
    • Correctly document that we are not varying usrmerge. [ ][ ]
    • Mark two armhf nodes (wbq0 and jtx1a) as down; investigation is needed. [ ]
  • Misc:
    • Force reconfiguration of all Jenkins jobs, due to the recent rise of zombie processes. [ ]
    • In the node health checks, also try to restart failed ntpsec, postfix and vnstat services. [ ][ ][ ]
  • System health checks:
    • Detect Debian live build failures due to missing credentials. [ ][ ]
    • Ignore specific types of known zombie processes. [ ][ ]
In addition, Vagrant Cascadian updated the scripts to use a predictable build path that is consistent with the one used on buildd.debian.org. [ ][ ]

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

5 September 2023

Russ Allbery: Review: Before We Go Live

Review: Before We Go Live, by Stephen Flavall
Publisher: Spender Books
Copyright: 2023
ISBN: 1-7392859-1-3
Format: Kindle
Pages: 271
Stephen Flavall, better known as jorbs, is a Twitch streamer specializing in strategy games and most well-known as one of the best Slay the Spire players in the world. Before We Go Live, subtitled Navigating the Abusive World of Online Entertainment, is a memoir of some of his experiences as a streamer. It is his first book. I watch a lot of Twitch. For a long time, it was my primary form of background entertainment. (Twitch's baffling choices to cripple their app have subsequently made YouTube somewhat more attractive.) There are a few things one learns after a few years of watching a lot of streamers. One is that it's a precarious, unforgiving living for all but the most popular streamers. Another is that the level of behind-the-scenes drama is very high. And a third is that the prevailing streaming style has converged on fast-talking, manic, stream-of-consciousness joking apparently designed to satisfy people with very short attention spans. As someone for whom that manic style is like nails on a chalkboard, I am therefore very picky about who I'm willing to watch and rarely can tolerate the top streamers for more than an hour. jorbs is one of the handful of streamers I've found who seems pitched towards adults who don't need instant bursts of dopamine. He's calm, analytical, and projects a relaxed, comfortable feeling most of the time (although like the other streamers I prefer, he doesn't put up with nonsense from his chat). If you watch him for a while, he's also one of those people who makes you think "oh, this is an interestingly unusual person." It's a bit hard to put a finger on, but he thinks about things from intriguing angles. Going in, I thought this would be a general non-fiction book about the behind-the-scenes experience of the streaming industry. Before We Go Live isn't really that. It is primarily a memoir focused on Flavall's personal experience (as well as the experience of his business manager Hannah) with the streaming team and company F2K, supplemented by a brief history of Flavall's streaming career and occasional deeply personal thoughts on his own mental state and past experiences. Along the way, the reader learns a lot more about his thought processes and approach to life. He is indeed a fascinatingly unusual person. This is to some extent an expos , but that's not the most interesting part of this book. It quickly becomes clear that F2K is the sort of parasitic, chaotic, half-assed organization that crops up around any new business model. (Yes, there's crypto.) People who are good at talking other people out of money and making a lot of big promises try to follow a startup fast-growth model with unclear plans for future revenue and hope that it all works out and turns into a valuable company. Most of the time it doesn't, because most of the people running these sorts of opportunistic companies are better at talking people out of money than at running a business. When the new business model is in gaming, you might expect a high risk of sexism and frat culture; in this case, you would not be disappointed. This is moderately interesting but not very revealing if one is already familiar with startup culture and the kind of people who start businesses without doing any of the work the business is about. The F2K principals are at best opportunistic grifters, if not actual con artists. It's not long into this story before this is obvious. 
At that point, the main narrative of this book becomes frustrating; Flavall recognizes the dysfunction to some extent, but continues to associate with these people. There are good reasons related to his (and Hannah's) psychological state, but it doesn't make it easier to read. Expect to spend most of the book yelling "just break up with these people already" as if you were reading Captain Awkward letters. The real merit of this book is that people are endlessly fascinating, Flavall is charmingly quirky, and he has the rare mix of the introspection that allows him to describe himself without the tendency to make his self-story align with social expectations. I think every person is intriguingly weird in at least some ways, but usually the oddities are smoothed away and hidden under a desire to present as "normal" to the rest of society. Flavall has the right mix of writing skill and a willingness to write with direct honesty that lets the reader appreciate and explore the complex oddities of a real person, including the bits that at first don't make much sense. Parts of this book are uncomfortable reading. Both Flavall and his manager Hannah are abuse survivors, which has a lot to do with their reactions to their treatment by F2K, and those reactions are both tragic and maddening to read about. It's a good way to build empathy for why people will put up with people who don't have their best interests at heart, but at times that empathy can require work because some of the people on the F2K side are so transparently sleazy. This is not the sort of book I'm likely to re-read, but I'm glad I read it simply for that time spent inside the mind of someone who thinks very differently than I do and is both honest and introspective enough to give me a picture of his thought processes that I think was largely accurate. This is something memoir is uniquely capable of doing if the author doesn't polish all of the oddities out of their story. It takes a lot of work to be this forthright about one's internal thought processes, and Flavall does an excellent job. Rating: 7 out of 10

1 September 2023

Simon Josefsson: Trisquel on ppc64el: Talos II

The release notes for Trisquel 11.0 Aramo mention support for POWER and ARM architectures, however the download area only contains links for x86, and forum posts suggest there is a lack of instructions on how to run Trisquel on non-x86. Since the release of Trisquel 11 I have been busy migrating x86 machines from Debian to Trisquel. One would think that I would be finished after this time period, but re-installing and migrating machines is really time consuming, especially if you allow yourself to be distracted every time you notice something that Really Ought to be improved. Rabbit holes all the way down. One of my production machines is running Debian 11 bullseye on a Talos II Lite machine from Raptor Computing Systems, and migrating the virtual machines running on that host (including the VM that serves this blog) to an x86 machine running Trisquel felt unsatisfying to me. I want to migrate my computing towards hardware that harmonizes with FSF's Respects Your Freedom and not away from it. Here I had to choose between using the non-free software present in newer Debian or the non-free software implied by most x86 systems: not an easy choice. So I have ignored the dilemma for some time. After all, the machine was running Debian 11 bullseye, which was released before Debian started to require use of non-free software. With the end-of-life date for bullseye approaching, it seems that this isn't a sustainable choice. There is a report open about providing ppc64el ISOs that was created by Jason Self shortly after the release, but for many months nothing happened. About a month ago, Luis Guzmán mentioned an initial ISO build and I started testing it. The setup has worked well for a month, and with this post I want to contribute instructions on how to get it up and running, since this is still missing. The setup of my soon-to-be new production machine: According to the notes in issue 14 the ISO image is available at https://builds.trisquel.org/debian-installer-images/ and the following commands download, integrity check and write it to a USB stick:
wget -q https://builds.trisquel.org/debian-installer-images/debian-installer-images_20210731+deb11u8+11.0trisquel14_ppc64el.tar.gz
tar xfa debian-installer-images_20210731+deb11u8+11.0trisquel14_ppc64el.tar.gz ./installer-ppc64el/20210731+deb11u8+11/images/netboot/mini.iso
echo '6df8f45fbc0e7a5fadf039e9de7fa2dc57a4d466e95d65f2eabeec80577631b7  ./installer-ppc64el/20210731+deb11u8+11/images/netboot/mini.iso' | sha256sum -c
sudo wipefs -a /dev/sdX
sudo dd if=./installer-ppc64el/20210731+deb11u8+11/images/netboot/mini.iso of=/dev/sdX conv=sync status=progress
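As an optional sanity check before booting, the written stick can be compared byte-for-byte against the ISO; a small sketch assuming GNU stat and cmp:
# Compare the first ISO-sized bytes of the stick with the ISO itself
iso=./installer-ppc64el/20210731+deb11u8+11/images/netboot/mini.iso
sudo cmp -n "$(stat -c %s "$iso")" "$iso" /dev/sdX && echo "USB stick matches the ISO"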
Sadly, no hash checksums or OpenPGP signatures are published. Power off your device, insert the USB stick, and power it up, and you will see a Petitboot menu offering to boot from the USB stick. For some reason, "Expert Install" was the default in the menu, so instead I selected "Default Install" for the regular experience. For this post, I will ignore BMC/IPMI, as interacting with it is not necessary. Make sure not to connect the BMC/IPMI ethernet port unless you are willing to enter that dungeon. The VGA console works fine with a normal USB keyboard, and you can choose to use only the second enP4p1s0f1 network card in the network card selection menu. If you are familiar with Debian netinst ISOs, the installation is straightforward. I complicate the setup by partitioning two RAID1 partitions on the two NVMe sticks, one RAID1 for a 75GB ext4 root filesystem (discard,noatime) and one RAID1 for a 900GB LVM volume group for virtual machines, and two 20GB swap partitions on each of the NVMe sticks (to silence a warning about lack of swap; I'm not sure swap is still a good idea?). The 3x18TB disks use DM-integrity with RAID1; however, the installer does not support DM-integrity, so I had to create it after the installation. There are two additional matters worth mentioning: I have re-installed the machine a couple of times, and have now finished installing the production setup. I haven't run into any serious issues, and the system has been stable. Time to wrap up, and celebrate that I now run an operating system aligned with the Free System Distribution Guidelines on hardware that aligns with Respects Your Freedom. Happy Hacking indeed!

31 August 2023

Ian Jackson: Conferences take note: the pandemic is not over

Many people seem to be pretending that the pandemic is over. It isn't. People are still getting Covid, becoming sick, and even in some cases becoming disabled. People's plans are still being disrupted. Vulnerable people are still hiding. Conference organisers: please make robust Covid policies, publish them early, and enforce them. And, clearly set expectations for your attendees. Attendees: please don't be the superspreader.

Two conferences

This year I have attended a number of in-person events. For Eastercon I chose to participate online, remotely. This turns out to have been a very good decision. At least a quarter of attendees got Covid. At BiCon we had about 300 attendees. I'm not aware of any Covid cases. Part of the difference between the two may have been in the policies. BiCon's policy was rather more robust. Unlike Eastercon's it had a much better refund policy for people who got Covid and therefore shouldn't come; also BiCon asked attendees to actually show evidence of a negative test. Another part of the difference will have been the venue. The NTU buildings we used at BiCon were modern and well ventilated. But, I think the biggest difference was attendees' attitudes. BiCon attendees are disproportionately likely (compared to society at large) to have long term medical conditions. And the cultural norms are to value and protect those people. Conversely, in my experience, a larger proportion of Eastercon attendees don't always have the same level of consideration. I don't want to give details, but I have reliable reports of quite reprehensible behaviour by some attendees - even members of the convention volunteer staff.

Policies

Your conference should IMO at the very least: The rules should be published very early, so that people can see them, and decide if they want to go, before they have to book anything.

Don't recommend that people don't spread disease

Most of the things that attendees can do about Covid primarily protect others, rather than themselves. Making those things recommendations or advice is grossly unfair. You're setting up an arsehole filter: nice people will want to protect others, but less public spirited people will tell themselves it's only a recommendation. Make the rules mandatory.

But won't we be driving people away?

If you don't have a robust Covid policy, you are already driving people away. And the people who won't come because of reasonable measures like I've asked for above, are dickheads. You don't want them putting your other attendees at risk. And probably they're annoying in other ways too.

Example of something that is not OK

Yesterday (2023-08-30 13:44 UTC), less than two weeks before the conference, Debconf 23's Covid policy still looked like you see below. Today there is a policy, but it is still weak.
Edited 2023-09-01 00:58 +01:00 to fix the link to the Debconf policy.



27 August 2023

Shirish Agarwal: FSCKing /home

There is a bit of context that needs to be shared before I get to this and would be a long one. For reasons known and unknown, I have a lot of sudden electricity outages. Not just me, all those who are on my line. A discussion with a lineman revealed that around 200+ families and businesses are on the same line and when for whatever reason the electricity goes for all. Even some of the traffic lights don t work. This affects software more than hardware or in some cases, both. And more specifically HDD s are vulnerable. I had bought an APC unit several years for precisely this, but over period of time it just couldn t function and trips also when the electricity goes out. It s been 6-7 years so can t even ask customer service to fix the issue and from whatever discussions I have had with APC personnel, the only meaningful difference is to buy a new unit but even then not sure this is an issue that can be resolved, even with that. That comes to the issue that happens once in a while where the system fsck is unable to repair /home and you need to use an external pen drive for the same. This is my how my hdd stacks up
/ is on dev/sda7 /boot is on /dev/sda6, /boot/efi is on /dev/sda2 and /home is on /dev/sda8 so theoretically, if /home for some reason doesn t work I should be able drop down on /dev/sda7, unmount /dev/sda8, run fsck and carry on with my work. I tried it number of times but it didn t work. I was dropping down on tty1 and attempting the same, no dice as root/superuser getting the barest x-term. So first I tried asking couple of friends who live nearby me. Unfortunately, both are MS-Windows users and both use what are called as company-owned laptops . Surfing on those systems were a nightmare. Especially the number of pop-ups of ads that the web has become. And to think about how much harassment ublock origin has saved me over the years. One of the more interesting bits from both their devices were showing all and any downloads from fosshub was showing up as malware. I dunno how much of that is true or not as haven t had to use it as most software we get through debian archives or if needed, download from github or wherever and run/install it and you are in business. Some of them even get compiled into a good .deb package but that s outside the conversation atm. My only experience with fosshub was few years before the pandemic and that was good. I dunno if fosshub really has malware or malwarebytes was giving false positives. It also isn t easy to upload a 600 MB+ ISO file somewhere to see whether it really has malware or not. I used to know of a site or two where you could upload a suspicious file and almost 20-30 famous and known antivirus and anti-malware engines would check it and tell you the result. Unfortunately, I have forgotten the URL and seeing things from MS-Windows perspective, things have gotten way worse than before. So left with no choice, I turned to the local LUG for help. Fortunately, my mobile does have e-mail and I could use gmail to solicit help. While there could have been any number of live CD s that could have helped but one of my first experiences with GNU/Linux was that of Knoppix that I had got from Linux For You (now known as OSFY) sometime in 2003. IIRC, had read an interview of Mr. Klaus Knopper as well and was impressed by it. In those days, Debian wasn t accessible to non-technical users then and Knoppix was a good tool to see it. In fact, think he was the first to come up with the idea of a Live CD and run with it while Canonical/Ubuntu took another 2 years to do it. I think both the CD and the interview by distrowatch was shared by LFY in those early days. Of course, later the story changes after he got married, but I think that is more about Adriane rather than Knoppix. So Vishal Rao helped me out. I got an HP USB 3.2 32GB Type C OTG Flash Drive x5600c (Grey & Black) from a local hardware dealer around similar price point. The dealer is a big one and has almost 200+ people scattered around the city doing channel sales who in turn sell to end users. Asking one of the representatives about their opinion on stopping electronic imports (apparently more things were added later to the list including all sorts of sundry items from digital cameras to shavers and whatnot.) The gentleman replied that he hopes that it would not happen otherwise more than 90% would have to leave their jobs. They already have started into lighting fixtures (LED bulbs, tubelights etc.) but even those would come in the same ban  The main argument as have shared before is that Indian Govt. 
thinks we need our home grown CPU and while I have no issues with that, as shared before except for RISC-V there is no other space where India could look into doing that. Especially after the Chip Act, Biden has made that any new fabs or any new thing in chip fabrication will only be shared with Five Eyes only. Also, while India is looking to generate about 2000 GW by 2030 by solar, China has an ambitious 20,000 GW generation capacity by the end of this year and the Chinese are the ones who are actually driving down the module prices. The Chinese are also automating their factories as if there s no tomorrow. The end result of both is that China will continue to be the world s factory floor for the foreseeable future and whoever may try whatever policies, it probably is gonna be difficult to compete with them on prices of electronic products. That s the reason the U.S. has been trying so that China doesn t get the latest technology but that perhaps is a story for another day.

HP USB 3.2 Type C OTG Flash Drive x5600c For people who have had read this blog they know that most of the flash drives today are MLC Drives and do not have the longevity of the SLC Drives. For those who maybe are new, this short brochure/explainer from Kingston should enhance your understanding. SLC Drives are rare and expensive. There are also a huge number of counterfeit flash drives available in the market and almost all the companies efforts whether it s Kingston, HP or any other manufacturer, they have been like a drop in the bucket. Coming back to the topic at hand. While there are some tools that can help you to figure out whether a pen drive is genuine or not. While there are products that can tell you whether they are genuine or not (basically by probing the memory controller and the info. you get from that.) that probably is a discussion left for another day. It took me couple of days and finally I was able to find time to go Vishal s place. The journey of back and forth lasted almost 6 hours, with crazy traffic jams. Tells you why Pune or specifically the Swargate, Hadapsar patch really needs a Metro. While an in-principle nod has been given, it probably is more than 5-7 years or more before we actually have a functioning metro. Even the current route the Metro has was supposed to be done almost 5 years to the date and even the modified plan was of 3 years ago. And even now, most of the Stations still need a lot of work to be done. PMC, Deccan as examples etc. still have loads to be done. Even PMT (Pune Muncipal Transport) that that is supposed to do the last mile connections via its buses has been putting half-hearted attempts

Vishal Rao While Vishal had apparently seen me and perhaps we had also interacted, this was my first memory of him although we have been on a few boards now and then including stackexchange. He was genuine and warm and shared 4-5 distros with me, including Knoppix and System Rescue as shared by Arun Khan. While this is and was the first time I had heard about Ventoy apparently Vishal has been using it for couple of years now. It s a simple shell script that you need to download and run on your pen drive and then just dump all the .iso images. The easiest way to explain ventoy is that it looks and feels like Grub. Which also reminds me an interaction I had with Vishal on mobile. While troubleshooting the issue, I was unsure whether it was filesystem that was the issue or also systemd was corrupted. Vishal reminded me of putting fastboot to the kernel parameters to see if I m able to boot without fscking and get into userspace i.e. /home. Although journalctl and systemctl were responding even on tty1 still was a bit apprehensive. Using fastboot was able to mount the whole thing and get into userspace and that told me that it s only some of the inodes that need clearing and there probably are some orphaned inodes. While Vishal had got a mini-pc he uses that a server, downloads stuff to it and then downloads stuff from it. From both privacy, backup etc. it is a better way to do things but then you need to laptop to access it. I am sure he probably uses it for virtualization and other ways as well but we just didn t have time for that discussion. Also a mini-pc can set you back anywhere from 25 to 40k depending on the mini-pc and the RAM and the SSD. And you need either a lappy or an Raspberry Pi with some kinda visual display to interact with the mini-pc. While he did share some of the things, there probably could have been a far longer interaction just on that but probably best left for another day. Now at my end, the system I had bought is about 5-6 years old. At that time it only had 6 USB 2.0 drives and 2 USB 3.0 (A) drives.
The above image does tell of the various form factors. One of the other things is that I found the pendrive and its connectors to be extremely fiddly. It took me number of times fiddling around with it when I was finally able to put in and able to access the pen drive partitions. Unfortunately, was unable to see/use systemrescue but Knoppix booted up fine. I mounted the partitions briefly to see where is what and sure enough /dev/sda8 showed my /home files and folders. Unmounted it, then used $fsck -y /dev/sda8 and back in business. This concludes what happened. Updates Quite a bit was left out on the original post, part of which I didn t know and partly stuff which is interesting and perhaps need a blog post of their own. It s sad I won t be part of debconf otherwise who knows what else I would have come to know.
  1. One of the interesting bits that I came to know about last week is the Alibaba T-Head T-Head TH1520 RISC-V CPU and saw it first being demoed on a laptop and then a standalone tablet. The laptop is an interesting proposition considering Alibaba opened up it s chip thing only couple of years ago. To have an SOC within 18 months and then under production for lappies and tablets is practically unheard of especially of a newbie/startup. Even AMD took 3-4 years for its first chip.It seems they (Alibaba) would be parceling them out by quarter end 2023 and another 1000 pieces/Units first quarter next year, while the scale is nothing compared to the behemoths, I think this would be more as a matter of getting feedback on both the hardware and software. The value proposition is much better than what most of us get, at least in India. For example, they are doing a warranty for 5 years and also giving spare parts. RISC-V has been having a lot of resurgence in China in part as its an open standard and partly development will be far cheaper and faster than trying x86 or x86-64. If you look into both the manufacturers, due to monopoly, both of them now give 5-8% increment per year, and if you look back in history, you would find that when more chips were in competition, they used to give 15-20% performance increment per year.
2. While Vishal did share with me what he used and the various ways he uses the mini-pc, I did have a fun speculating on what he could use it. As shared by Romane as his case has shared, the first thing to my mind was backups. Filesystems are notorious in the sense they can be corrupted or can be prone to be corrupted very easily as can be seen above  . Backups certainly make a lot of sense, especially rsync. The other thing that came to my mind was having some sort of A.I. and chat server. IIRC, somebody has put quite a bit of open source public domain data in debian servers that could be used to run either a chatbot or an A.I. or both and use that similar to how chatGPT but with much limited scope than what chatgpt uses. I was also thinking a media server which Vishal did share he does. I may probably visit him sometime to see what choices he did and what he learned in the process, if anything. Another thing that could be done is just take a dump of any of commodity markets or any markets and have some sort of predictive A.I. or whatever. A whole bunch of people have scammed thousands of Indian users on this, but if you do it on your own and for your own purposes to aid you buy and sell stocks or whatever commodity you may fancy. After all, nowadays markets themselves are virtual. While Vishal s mini-pc doesn t have any graphics, if it was an AMD APU mini-pc, something like this he could have hosted games in the way of thick server, thin client where all graphics processing happens on the server rather than the client. With virtual reality I think the case for the same case could be made or much more. The only problem with VR/AR is that we don t really have mass-market googles, eye pieces or headset. The only notable project that Google has/had in that place is the Google VR Cardboard headset and the experience is not that great or at least was not that great few years back when I could hear and experience the same. Most of the VR headsets say for example the Meta Quest 2 is for around INR 44k/- while Quest 3 is INR 50k+ and officially not available. As have shared before, the holy grail of VR would be when it falls below INR 10k/- so it becomes just another accessory, not something you really have to save for. There also isn t much content on that but then that is also the whole chicken or egg situation. This again is a non-stop discussion as so much has been happening in that space it needs its own blog post/article whatever. Till later.

Gunnar Wolf: Interested in adopting the RPi images for Debian?

Back in June 2018, Michael Stapelberg put the Raspberry Pi image building up for adoption. He created the first set of unofficial, experimental Raspberry Pi images for Debian. I promptly answered him, and while it took me some time to actually wrap my head around Michael's work, I eventually managed to do so. By December, I started pushing some updates. Not only that: I didn't think much about it in the beginning, as the needed non-free package was called raspi3-firmware, but by early 2019, I had it running for all of the then-available Raspberry families (so the package was naturally renamed to raspi-firmware). I got my Raspberry Pi 4 at DebConf19 (thanks to Andy, who brought it from Cambridge), and it soon joined the happy Debian family. The images are built daily, and are available at https://raspi.debian.net. In the process, I also adopted Lars' great vmdb2 image building tool, and have kept it decently up to date (yes, I'm currently lagging behind, but I'll get to it soonish). Anyway, this year I have been seriously neglecting the Raspberry builds. I have simply not had time to regularly test built images, nor to debug why the builder has not picked up building for trixie (testing). And my time availability is not going to improve any time soon. We are close to one month away from moving for six months to Paraná (Argentina), where I'll be focusing on my PhD. And while I do contemplate taking my Raspberries along, I do not foresee being able to put much energy into them. So, this is basically a call for adoption for the Raspberry Debian images building service. I do intend to stick around and try to help. It's not only me (although I'm responsible for the build itself); we have a nice and healthy group of Debian people hanging out in the #debian-raspberrypi channel on OFTC IRC. Don't be afraid, and come ask. I hope giving this project up for adoption will breathe new life into it!

21 August 2023

Melissa Wen: AMD Driver-specific Properties for Color Management on Linux (Part 1)

TL;DR: Color is a visual perception. Human eyes can detect a broader range of colors than any devices in the graphics chain. Since each device can generate, capture or reproduce a specific subset of colors and tones, color management controls color conversion and calibration across devices to ensure a more accurate and consistent color representation. We can expose a GPU-accelerated display color management pipeline to support this process and enhance results, and this is what we are doing on Linux to improve color management on Gamescope/SteamDeck. Even with the challenges of being external developers, we have been working on mapping AMD GPU color capabilities to the Linux kernel color management interface, which is a combination of DRM and AMD driver-specific color properties. This more extensive color management pipeline includes pre-defined Transfer Functions, 1-Dimensional LookUp Tables (1D LUTs), and 3D LUTs before and after the plane composition/blending.
The study of color is well established and has been explored for many years. Color science and research findings have also guided technology innovations. As a result, color in Computer Graphics is a very complex topic that I'm putting a lot of effort into becoming familiar with. I always find myself rereading all the materials I have collected about color spaces and operations since I started this journey (about one year ago). I also understand how hard it is to find consensus on some color subjects, as exemplified by all the explanations around the 2015 online viral phenomenon of The Black and Blue Dress. Have you heard about it? What is the color of the dress for you? So, taking into account my skills with colors and building consensus, this blog post only focuses on GPU hardware capabilities to support color management :-D If you want to learn more about color concepts and color on Linux, you can find useful links at the end of this blog post.

Linux Kernel, show me the colors ;D The DRM color management interface only exposes a small set of post-blending color properties. Proposals to enhance the DRM color API from different vendors have landed on the subsystem mailing list over the last few years. On one hand, we got some suggestions to extend the DRM post-blending/CRTC color API: DRM CRTC 3D LUT for R-Car (2020 version); DRM CRTC 3D LUT for Intel (draft - 2020); DRM CRTC 3D LUT for AMD by Igalia (v2 - 2023); DRM CRTC 3D LUT for R-Car (v2 - 2023). On the other hand, some proposals to extend the DRM pre-blending/plane API: DRM plane colors for Intel (v2 - 2021); DRM plane API for AMD (v3 - 2021); DRM plane 3D LUT for AMD - 2021. Finally, Simon Ser sent the latest proposal in May 2023: Plane color pipeline KMS uAPI, from discussions in the 2023 Display/HDR Hackfest, and it is still under evaluation by the Linux Graphics community. All previous proposals seek a generic solution for expanding the API, but many seem to have stalled due to the uncertainty of matching the hardware capabilities of all vendors well. Meanwhile, the use of AMD color capabilities on Linux remained limited by the DRM interface, as the DCN 3.0 family color caps and mapping diagram below shows for the Linux/DRM color interface without driver-specific color properties [*]: Bearing in mind that we need to know the variety of color pipelines in the subsystem to be clear about a generic solution, we decided to approach the issue from a different perspective and worked on enabling a set of Driver-Specific Color Properties for AMD Display Drivers. As a result, I recently sent another round of the AMD driver-specific color mgmt API. For those who have been following the AMD driver-specific proposal since the beginning (see [RFC][V1]), the main new features of the latest version [v2] are the addition of a pre-blending Color Transformation Matrix (plane CTM) and the differentiation of Pre-defined Transfer Functions (TF) supported by color blocks. For those who just got here, I will recap this work in two blog posts. This one describes the current status of the AMD display driver in the Linux kernel/DRM subsystem and what changes with the driver-specific properties. In the next post, we go deeper to describe the features of each color block and provide a better picture of what is available in terms of color management for Linux.

The Linux kernel color management API and AMD hardware color capabilities Before discussing colors in the Linux kernel with AMD hardware, consider accessing the Linux kernel documentation (version 6.5.0-rc5). In the AMD Display documentation, you will find my previous work documenting AMD hardware color capabilities and the Color Management Properties. It describes how AMD Display Manager (DM) intermediates requests between the AMD Display Core component (DC) and the Linux/DRM kernel interface for color management features. It also describes the relevant function to call the AMD color module in building curves for content space transformations. A subsection also describes hardware color capabilities and how they evolve between versions. This subsection, DC Color Capabilities between DCN generations, is a good starting point to understand what we have been doing on the kernel side to provide a broader color management API with AMD driver-specific properties.

Why do we need more kernel color properties on Linux? Blending is the process of combining multiple planes (abstractions of framebuffers) according to their mode settings. Before blending, we can manage the colors of the various planes separately; after blending, we have combined those planes into only one output per CRTC. Color conversions after blending would be enough in a single-plane scenario or when dealing with planes in the same color space on the kernel side. Still, it cannot help to handle the blending of multiple planes with different color spaces and luminance levels. With plane color management properties, userspace can get a more accurate representation of colors, deal with the diversity of color profiles of devices in the graphics chain, support a wide color gamut (WCG), and convert High-Dynamic-Range (HDR) content to Standard-Dynamic-Range (SDR) content (and vice versa). With a GPU-accelerated display color management pipeline, we can use hardware blocks for color conversions and color mapping and support advanced color management. The current DRM color management API enables us to perform some color conversions after blending, but there is no interface to calibrate the input space per plane. Note that here I'm not considering some workarounds in the AMD display manager that map the DRM CRTC de-gamma and DRM CRTC CTM properties to the pre-blending DC de-gamma and gamut remap blocks, respectively. So, in more detail, it only exposes three post-blending features (a minimal userspace sketch follows this list):
  • DRM CRTC de-gamma: used to convert the framebuffer's colors to linear gamma;
  • DRM CRTC CTM: used for color space conversion;
  • DRM CRTC gamma: used to convert colors to the gamma space of the connected screen.
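Below is a minimal userspace sketch (mine, not from the kernel documentation) of how these three generic post-blending properties are typically programmed with libdrm: it builds an identity GAMMA_LUT and CTM as property blobs and attaches them to a CRTC. The device path, CRTC id and LUT size are placeholders; a real compositor would read GAMMA_LUT_SIZE and batch everything into one atomic commit instead of the legacy per-property calls used here for brevity.
    /* Minimal sketch: program an identity DRM CRTC GAMMA_LUT and CTM through
     * property blobs with libdrm. Device path, CRTC id and LUT size are
     * illustrative placeholders. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <string.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    static uint32_t find_prop(int fd, uint32_t obj, uint32_t type, const char *name)
    {
        drmModeObjectProperties *props = drmModeObjectGetProperties(fd, obj, type);
        uint32_t id = 0;
        if (!props)
            return 0;
        for (uint32_t i = 0; i < props->count_props; i++) {
            drmModePropertyRes *p = drmModeGetProperty(fd, props->props[i]);
            if (p && !strcmp(p->name, name))
                id = p->prop_id;
            drmModeFreeProperty(p);
        }
        drmModeFreeObjectProperties(props);
        return id;
    }

    int main(void)
    {
        const uint32_t crtc_id = 42;    /* placeholder: use your CRTC id */
        const uint32_t lut_size = 256;  /* placeholder: read GAMMA_LUT_SIZE */
        int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
        if (fd < 0)
            return 1;
        drmSetClientCap(fd, DRM_CLIENT_CAP_ATOMIC, 1);

        /* Identity 1D LUT: equally spaced 16-bit entries for R, G and B. */
        struct drm_color_lut lut[256];
        for (uint32_t i = 0; i < lut_size; i++) {
            uint16_t v = (uint16_t)((i * 0xffffu) / (lut_size - 1));
            lut[i].red = lut[i].green = lut[i].blue = v;
            lut[i].reserved = 0;
        }
        uint32_t lut_blob = 0, ctm_blob = 0;
        drmModeCreatePropertyBlob(fd, lut, sizeof(lut[0]) * lut_size, &lut_blob);

        /* Identity 3x3 CTM in DRM's sign-magnitude S31.32 fixed point. */
        struct drm_color_ctm ctm = { { 0 } };
        ctm.matrix[0] = ctm.matrix[4] = ctm.matrix[8] = 1ULL << 32;
        drmModeCreatePropertyBlob(fd, &ctm, sizeof(ctm), &ctm_blob);

        /* Attach the blobs to the generic post-blending CRTC properties. */
        drmModeObjectSetProperty(fd, crtc_id, DRM_MODE_OBJECT_CRTC,
                                 find_prop(fd, crtc_id, DRM_MODE_OBJECT_CRTC, "GAMMA_LUT"),
                                 lut_blob);
        drmModeObjectSetProperty(fd, crtc_id, DRM_MODE_OBJECT_CRTC,
                                 find_prop(fd, crtc_id, DRM_MODE_OBJECT_CRTC, "CTM"),
                                 ctm_blob);
        return 0;
    }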

AMD driver-specific color management interface We can compare the Linux color management API with and without the driver-specific color properties. From now on, we denote driver-specific properties with the AMD prefix and generic properties with the DRM prefix. For visual comparison, I bring the DCN 3.0 family color caps and mapping diagram closer and present it here again: Mixing AMD driver-specific color properties with DRM generic color properties, we get a broader Linux color management system with the following features, exposed by properties in the plane and CRTC interfaces, as summarized by this updated diagram: The blocks highlighted by red lines are the new properties in the driver-specific interface developed by me (Igalia) and Joshua (Valve). The red dashed lines are new links between the API and AMD driver components, implemented by us to connect the Linux/DRM interface to the AMD hardware blocks, mapping components accordingly. In short, we have the following color management properties exposed by the DRM/AMD display driver (a short sketch for discovering these properties from userspace follows the note below):
  • Pre-blending - AMD Display Pipe and Plane (DPP):
    • AMD plane de-gamma: 1D LUT and pre-defined transfer functions; used to linearize the input space of a plane;
    • AMD plane CTM: 3x4 matrix; used to convert plane color space;
    • AMD plane shaper: 1D LUT and pre-defined transfer functions; used to delinearize and/or normalize colors before applying 3D LUT;
    • AMD plane 3D LUT: 17x17x17 size with 12 bit-depth; three dimensional lookup table used for advanced color mapping;
    • AMD plane blend/out gamma: 1D LUT and pre-defined transfer functions; used to linearize the color space back after the 3D LUT, for blending.
  • Post-blending - AMD Multiple Pipe/Plane Combined (MPC):
    • DRM CRTC de-gamma: 1D LUT (can't be set together with plane de-gamma);
    • DRM CRTC CTM: 3x3 matrix (remapped to post-blending matrix);
    • DRM CRTC gamma: 1D LUT + AMD CRTC gamma TF; added to take advantage of driver pre-defined transfer functions.
Note: You can find more about AMD display blocks in the Display Core Next (DCN) - Linux kernel documentation, provided by Rodrigo Siqueira (Linux/AMD display developer) in a 2021 documentation series. In the next post, I'll revisit this topic, explaining display and color blocks in detail.
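Since these driver-specific properties sit outside the generic DRM uAPI, userspace first has to discover whether the running driver exposes them at all. Here is a hedged sketch that simply lists the color properties attached to a plane, assuming they carry the AMD_ name prefix used throughout this post; the exact property strings come from the patch series and may still change during review.
    /* Hedged sketch: list driver-specific color properties exposed on a DRM
     * plane. Assumes an "AMD_" name prefix as in the proposal discussed above;
     * the plane id and device path are placeholders. */
    #include <fcntl.h>
    #include <inttypes.h>
    #include <stdio.h>
    #include <string.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    int main(void)
    {
        const uint32_t plane_id = 31;   /* placeholder: use a real plane id */
        int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
        if (fd < 0)
            return 1;

        /* Universal planes + atomic, so that all plane properties are visible. */
        drmSetClientCap(fd, DRM_CLIENT_CAP_UNIVERSAL_PLANES, 1);
        drmSetClientCap(fd, DRM_CLIENT_CAP_ATOMIC, 1);

        drmModeObjectProperties *props =
            drmModeObjectGetProperties(fd, plane_id, DRM_MODE_OBJECT_PLANE);
        if (!props)
            return 1;

        for (uint32_t i = 0; i < props->count_props; i++) {
            drmModePropertyRes *p = drmModeGetProperty(fd, props->props[i]);
            if (!p)
                continue;
            /* Pre-blending color blocks (de-gamma, CTM, shaper, 3D LUT, blend
             * gamma) are expected to show up under this driver-specific prefix. */
            if (!strncmp(p->name, "AMD_", 4))
                printf("plane %u exposes %s (current value %" PRIu64 ")\n",
                       plane_id, p->name, (uint64_t)props->prop_values[i]);
            drmModeFreeProperty(p);
        }
        drmModeFreeObjectProperties(props);
        return 0;
    }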

How did we get a large set of color features from AMD display hardware? So, looking at the AMD hardware color capabilities in the first diagram, we can see there is no post-blending (MPC) de-gamma block in any hardware family. We can also see that the AMD display driver maps the CRTC/post-blending CTM to the pre-blending (DPP) gamut_remap block, but there is a post-blending (MPC) gamut_remap (DRM CTM) block in newer hardware versions, which include the SteamDeck hardware. You can find more details about hardware versions in the Linux kernel documentation/AMDGPU Product Information. I needed to rework these two mappings to provide pre-blending/plane de-gamma and CTM for the SteamDeck. I changed the DC mapping to detach the stream gamut remap matrices from the DPP gamut remap block. That means mapping AMD plane CTM directly to the DPP/pre-blending gamut remap block and DRM CRTC CTM to the MPC/post-blending gamut remap block. In this sense, I also limited the plane CTM property to those hardware versions with MPC/post-blending gamut_remap capabilities, since older versions cannot support this feature without clashing with DRM CRTC CTM. Unfortunately, I couldn't prevent a conflict between AMD plane de-gamma and DRM CRTC de-gamma, since a post-blending de-gamma isn't available in any AMD hardware version so far. The fact is that a post-blending de-gamma makes little sense in the AMD color pipeline, where plane blending works better in a linear space and there are enough color blocks to linearize content before blending. To deal with this conflict, the driver now rejects atomic commits if users try to set both AMD plane de-gamma and DRM CRTC de-gamma simultaneously. Finally, we had no other clashes when enabling the other AMD driver-specific color properties for our use case, Gamescope/SteamDeck. Our main work for the remaining properties was understanding the data flow of each property, the hardware capabilities and limitations, and how to shape the data for programming the registers - AMD color block capabilities (and limitations) are the topic of the next blog post. Besides that, we fixed some driver bugs along the way, since this was the first Linux use case for most of the new color properties, and some behaviors are only exposed when exercising the engine. Take a look at the Gamescope/Steam Deck Color Pipeline[**], and see how Gamescope uses the new API to manage color space conversions and calibration (please click on the image for a better view). In the next blog post, I'll describe the implementation and technical details of each pre- and post-blending color block/property in the AMD display driver.
* Thanks to Harry Wentland for helping with diagrams, color concepts and AMD capabilities.
** Thanks to Joshua Ashton for providing and explaining the Gamescope/Steam Deck color pipeline.
*** Thanks to the Linux Graphics community - explicitly Harry, Joshua, Pekka, Simon, Sebastian, Siqueira, Alex H. and Ville - for all the learning during this Linux DRM/AMD color journey. Also to Carlos and Tomas for organizing the 2023 Display/HDR Hackfest, where we had a great and immersive opportunity to discuss Color & HDR on Linux.
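As a small addendum to the de-gamma conflict described above: a userspace client can probe whether the driver will refuse a given property combination without touching the screen, by committing with the DRM_MODE_ATOMIC_TEST_ONLY flag. The sketch below is hypothetical and assumes the relevant property and blob ids have already been looked up (e.g. with helpers like the ones in the earlier sketches); it only illustrates the probing pattern, not the driver's internal check.
    /* Hypothetical sketch: use a TEST_ONLY atomic commit to probe whether the
     * driver rejects setting the DRM CRTC de-gamma and the AMD plane de-gamma
     * at the same time. All ids (CRTC/plane, property and blob ids) are
     * placeholders assumed to have been looked up beforehand. */
    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    int probe_degamma_conflict(int fd,
                               uint32_t crtc_id, uint32_t crtc_degamma_prop,
                               uint32_t plane_id, uint32_t plane_degamma_prop,
                               uint32_t crtc_lut_blob, uint32_t plane_lut_blob)
    {
        drmModeAtomicReq *req = drmModeAtomicAlloc();
        if (!req)
            return -ENOMEM;

        /* Generic post-blending de-gamma on the CRTC... */
        drmModeAtomicAddProperty(req, crtc_id, crtc_degamma_prop, crtc_lut_blob);
        /* ...plus the driver-specific pre-blending de-gamma on the plane. */
        drmModeAtomicAddProperty(req, plane_id, plane_degamma_prop, plane_lut_blob);

        /* TEST_ONLY: the kernel validates the state but never applies it. */
        int ret = drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_TEST_ONLY, NULL);
        drmModeAtomicFree(req);

        if (ret)
            printf("combination rejected by the driver (%d), as described above\n", ret);
        return ret;
    }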

