
9 January 2025

Scarlett Gately Moore: KDE: Snaps 24.12.1 Release, Kubuntu Plasma 5.27.12 Call for testers

I have released more core24 snaps to edge for your testing pleasure. If you find any bugs please report them at bugs.kde.org and assign them to me. Thanks!
Kdenlive, our amazing video editor!
Haruna, a video player that also supports YouTube!
KDevelop, our feature-rich development IDE.
KDE Applications 24.12.1 release: https://kde.org/announcements/gear/24.12.1/ with new Qt6 ports.
Kubuntu: We have the Plasma 5.27.12 bugfix release in staging at https://launchpad.net/~kubuntu-ppa/+archive/ubuntu/staging-plasma for noble updates, please test! Do NOT do this on a production system. Thanks! I hate asking, but I am unemployable with this broken arm fiasco and 6 hours a day of hospital runs for treatment. If you could spare anything it would be appreciated! https://gofund.me/573cc38e
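For anyone unsure how to enable the staging archive, a typical test setup on a non-production noble machine would look roughly like this (the ppa:kubuntu-ppa/staging-plasma shorthand is inferred from the Launchpad URL above):
# Test setup only; do NOT do this on a production system.
sudo add-apt-repository ppa:kubuntu-ppa/staging-plasma
sudo apt update
sudo apt full-upgrade    # pulls in the Plasma 5.27.12 packages from staging
# When done testing, ppa-purge (from the ppa-purge package) can revert to the archive versions:
sudo ppa-purge ppa:kubuntu-ppa/staging-plasma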

4 January 2025

Scarlett Gately Moore: KDE: Snap hotfixes and updates

Fixed Okular PDF printing: https://bugs.kde.org/show_bug.cgi?id=498065
Fixed Kwave recording: https://bugs.kde.org/show_bug.cgi?id=442085 (please run sudo snap connect kwave:audio-record :audio-record until auto-connect gets approved here: https://forum.snapcraft.io/t/kde-auto-connect-our-two-recording-apps/44419)
New Qt6 snaps are in edge until the 24.12.1 release.
I have begun the process of moving to core24, currently in edge until the 24.12.1 release. Some major improvements come with core24!
Tokodon is our wonderful Mastodon client.
I hate asking, but I am unemployable with this broken arm fiasco and 6 hours a day of hospital runs for treatment. If you could spare anything it would be appreciated! https://gofund.me/573cc38e

31 December 2024

Chris Lamb: Favourites of 2024

Here are my favourite books and movies that I read and watched throughout 2024. It wasn't quite as stellar a year for books as previous years: there were few of those books that make you want to recommend and/or buy them for all your friends. In subconscious compensation, perhaps, I reread a few classics (e.g. True Grit, Solaris), and I'm almost finished with my second read of War and Peace.

Books

Elif Batuman: Either/Or (2022)
Stella Gibbons: Cold Comfort Farm (1932)
Michel Faber: Under The Skin (2000)
Wallace Stegner: Crossing to Safety (1987)
Gustave Flaubert: Madame Bovary (1857)
Rachel Cusk: Outline (2014)
Sara Gran: The Book of the Most Precious Substance (2022)
Anonymous: The Railway Traveller's Handy Book (1862)
Natalie Hodges: Uncommon Measure: A Journey Through Music, Performance, and the Science of Time (2022)
Gary K. Wolf: Who Censored Roger Rabbit? (1981)

Films

Recent releases

Seen at a 2023 festival. Disappointments this year included Blitz (Steve McQueen), Love Lies Bleeding (Rose Glass), The Room Next Door (Pedro Almodóvar) and Emilia Pérez (Jacques Audiard), whilst the worst new film this year was likely The Substance (Coralie Fargeat), followed by Megalopolis (Francis Ford Coppola), Unfrosted (Jerry Seinfeld) and Joker: Folie à Deux (Todd Phillips).
Older releases, i.e. films released before 2023, and not including rewatches from previous years. Distinctly unenjoyable watches included The Island of Dr. Moreau (John Frankenheimer, 1996), Southland Tales (Richard Kelly, 2006), Any Given Sunday (Oliver Stone, 1999) & The Hairdresser's Husband (Patrice Leconte, 1990). On the other hand, unforgettable cinema experiences this year included big-screen rewatches of Solaris (Andrei Tarkovsky, 1972), Blade Runner (Ridley Scott, 1982), Apocalypse Now (Francis Ford Coppola, 1979) and Die Hard (John McTiernan, 1988).

Scarlett Gately Moore: KDE: Application snaps 24.12.0 release and more

https://kde.org/announcements/gear/24.12.0 I hope everyone had a wonderful holiday! Your present from me is shiny new application snaps! There are several new Qt6 ports in this release. Please visit https://snapcraft.io/store?q=kde I have also fixed the bug where the Krita snap was unable to open/save files. Please test edge! I am continuing work on core24 support and hope to be done before the next release. I do look forward to 2025! Begone 2024! If you can help with gas, I still have 3 weeks of treatments to go. Thank you for your continued support. https://gofund.me/573cc38e
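If you would like to help test, switching an installed snap over to the edge channel is enough; for example, using Krita (the same pattern should work for the other snaps mentioned above):
sudo snap refresh krita --channel=edge     # switch to the edge build for testing
sudo snap refresh krita --channel=stable   # switch back afterwards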

23 December 2024

Simon Josefsson: OpenSSH and Git on a Post-Quantum SPHINCS+

Are you aware that Git commits and tags may be signed using OpenSSH? Git signatures may be used to improve integrity and authentication of our software supply-chain. Popular signature algorithms include Ed25519, ECDSA and RSA. Did you consider that these algorithms may not be safe if someone builds a quantum computer? As you may recall, I have earlier blogged about the efficient post-quantum key agreement mechanism called Streamlined NTRU Prime and its use in SSH, and I have attempted to promote the conservatively designed Classic McEliece in a similar way, although it remains to be adopted. What post-quantum signature algorithms are available? There is an effort by NIST to standardize post-quantum algorithms, and they have a category for signature algorithms. According to Wikipedia, after round three the selected algorithms are CRYSTALS-Dilithium, FALCON and SPHINCS+. Of these, SPHINCS+ appears to be a conservative choice suitable for long-term digital signatures. Can we get this to work? Recall that Git uses the ssh-keygen tool from OpenSSH to perform signing and verification. To refresh your memory, let's study the commands that Git uses under the hood for Ed25519. First generate an Ed25519 private key:
jas@kaka:~$ ssh-keygen -t ed25519 -f my_ed25519_key -P ""
Generating public/private ed25519 key pair.
Your identification has been saved in my_ed25519_key
Your public key has been saved in my_ed25519_key.pub
The key fingerprint is:
SHA256:fDa5+jmC2+/aiLhWeWA3IV8Wj6yMNTSuRzqUZlIGlXQ jas@kaka
The key's randomart image is:
+--[ED25519 256]--+
     .+=.E ..      
      oo=.ooo      
     . =o=+o .     
      =oO+o .      
      .=+S.=       
       oo.o o      
      . o  .       
     ...o.+..      
    .o.o.=**.      
+----[SHA256]-----+
jas@kaka:~$ cat my_ed25519_key
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW
QyNTUxOQAAACAWP/aZ8hzN0WNRMSpjzbgW1tJXNd2v6/dnbKaQt7iIBQAAAJCeDotOng6L
TgAAAAtzc2gtZWQyNTUxOQAAACAWP/aZ8hzN0WNRMSpjzbgW1tJXNd2v6/dnbKaQt7iIBQ
AAAEBFRvzgcD3YItl9AMmVK4xDKj8NTg4h2Sluj0/x7aSPlhY/9pnyHM3RY1ExKmPNuBbW
0lc13a/r92dsppC3uIgFAAAACGphc0BrYWthAQIDBAU=
-----END OPENSSH PRIVATE KEY-----
jas@kaka:~$ cat my_ed25519_key.pub 
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBY/9pnyHM3RY1ExKmPNuBbW0lc13a/r92dsppC3uIgF jas@kaka
jas@kaka:~$ 
Then let's sign something with this key:
jas@kaka:~$ echo "Hello world!" > msg
jas@kaka:~$ ssh-keygen -Y sign -f my_ed25519_key -n my-namespace msg
Signing file msg
Write signature to msg.sig
jas@kaka:~$ cat msg.sig 
-----BEGIN SSH SIGNATURE-----
U1NIU0lHAAAAAQAAADMAAAALc3NoLWVkMjU1MTkAAAAgFj/2mfIczdFjUTEqY824FtbSVz
Xdr+v3Z2ymkLe4iAUAAAAMbXktbmFtZXNwYWNlAAAAAAAAAAZzaGE1MTIAAABTAAAAC3Nz
aC1lZDI1NTE5AAAAQLmWsq05tqOOZIJqjxy5ZP/YRFoaX30lfIllmfyoeM5lpVnxJ3ZxU8
SF0KodDr8Rtukg2N3Xo80NGvZOzbG/9Aw=
-----END SSH SIGNATURE-----
jas@kaka:~$
Now let's create a list of trusted public keys and associated identities:
jas@kaka:~$ echo 'my.name@example.org ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBY/9pnyHM3RY1ExKmPNuBbW0lc13a/r92dsppC3uIgF' > allowed-signers
jas@kaka:~$ 
Then let's verify the message we just signed:
jas@kaka:~$ cat msg | ssh-keygen -Y verify -f allowed-signers -I my.name@example.org -n my-namespace -s msg.sig
Good "my-namespace" signature for my.name@example.org with ED25519 key SHA256:fDa5+jmC2+/aiLhWeWA3IV8Wj6yMNTSuRzqUZlIGlXQ
jas@kaka:~$ 
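To tie this back to Git itself: once a key and an allowed-signers file like the ones above exist, only a few configuration entries are needed for Git to sign and verify commits with them. A minimal sketch (the file paths are illustrative, not taken from the post):
$ git config --global gpg.format ssh
$ git config --global user.signingkey ~/my_ed25519_key.pub
$ git config --global gpg.ssh.allowedSignersFile ~/allowed-signers
$ git commit -S -m "some signed commit"
$ git log --show-signature -1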
I have implemented support for SPHINCS+ in OpenSSH. This is early work, but I wanted to announce it to get discussion of some of the details going and to make people aware of it. What better way to demonstrate SPHINCS+ support in OpenSSH than by validating the Git commit that implements it, using itself? Here is how to proceed: first get a suitable development environment up and running. I'm using a Debian container launched in a protected environment using podman.
jas@kaka:~$ podman run -it --rm debian:stable
Then install the necessary build dependencies for OpenSSH.
# apt-get update 
# apt-get install git build-essential autoconf libz-dev libssl-dev
Now clone my OpenSSH branch with the SPHINCS+ implementation and build it. You may browse the commit on GitHub first if you are curious.
# cd
# git clone https://github.com/jas4711/openssh-portable.git -b sphincsp
# cd openssh-portable
# autoreconf -fvi
# ./configure
# make
Configure a Git allowed signers list with my SPHINCS+ public key (make sure to keep the public key on one line with the whitespace being one ASCII SPC character):
# mkdir -pv ~/.ssh
# echo 'simon@josefsson.org ssh-sphincsplus@openssh.com AAAAG3NzaC1zcGhpbmNzcGx1c0BvcGVuc3NoLmNvbQAAAECI6eacTxjB36xcPtP0ZyxJNIGCN350GluLD5h0KjKDsZLNmNaPSFH2ynWyKZKOF5eRPIMMKSCIV75y+KP9d6w3' > ~/.ssh/allowed_signers
# git config gpg.ssh.allowedSignersFile ~/.ssh/allowed_signers
Then verify the commit using the newly built ssh-keygen binary:
# PATH=$PWD:$PATH
# git log -1 --show-signature
commit ce0b590071e2dc845373734655192241a4ace94b (HEAD -> sphincsp, origin/sphincsp)
Good "git" signature for simon@josefsson.org with SPHINCSPLUS key SHA256:rkAa0fX0lQf/7V7QmuJHSI44L/PAPPsdWpis4nML7EQ
Author: Simon Josefsson <simon@josefsson.org>
Date:   Tue Dec 3 18:44:25 2024 +0100
    Add SPHINCS+.
# git verify-commit ce0b590071e2dc845373734655192241a4ace94b
Good "git" signature for simon@josefsson.org with SPHINCSPLUS key SHA256:rkAa0fX0lQf/7V7QmuJHSI44L/PAPPsdWpis4nML7EQ
# 
Yay! So what are some considerations? SPHINCS+ comes in many different variants. First, it comes with three security levels approximately matching 128/192/256-bit symmetric key strengths. The second choice is between the SHA2-256, SHAKE256 (SHA-3) and Haraka hash algorithms. The final choice is between a robust and a simple variant with different security and performance characteristics. To get going, I picked the sphincss256sha256robust SPHINCS+ implementation from SUPERCOP 20241022. There is a good size comparison table in the sphincsplus implementation, if you want to consider alternative variants. SPHINCS+ public keys are really small, as you can see in the allowed signers file. This is really good because they are handled by humans and often by cut'n'paste. What about private keys? They are slightly longer than Ed25519 private keys but shorter than typical RSA private keys.
# ssh-keygen -t sphincsplus -f my_sphincsplus_key -P ""
Generating public/private sphincsplus key pair.
Your identification has been saved in my_sphincsplus_key
Your public key has been saved in my_sphincsplus_key.pub
The key fingerprint is:
SHA256:4rNfXdmLo/ySQiWYzsBhZIvgLu9sQQz7upG8clKziBg root@ad600ff56253
The key's randomart image is:
+[SPHINCSPLUS 256-+
  .  .o            
 o . oo.           
  = .o.. o         
 o o  o o . .   o  
 .+    = S o   o . 
 Eo=  . + . . .. . 
 =*.+  o . . oo .  
 B+=    o o.o. .   
 o*o   ... .oo.    
+----[SHA256]-----+
# cat my_sphincsplus_key.pub 
ssh-sphincsplus@openssh.com AAAAG3NzaC1zcGhpbmNzcGx1c0BvcGVuc3NoLmNvbQAAAEAltAX1VhZ8pdW9FgC+NdM6QfLxVXVaf1v2yW4v+tk2Oj5lxmVgZftfT37GOMOlK9iBm9SQHZZVYZddkEJ9F1D7 root@ad600ff56253
# cat my_sphincsplus_key 
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAYwAAABtzc2gtc3
BoaW5jc3BsdXNAb3BlbnNzaC5jb20AAABAJbQF9VYWfKXVvRYAvjXTOkHy8VV1Wn9b9slu
L/rZNjo+ZcZlYGX7X09+xjjDpSvYgZvUkB2WVWGXXZBCfRdQ+wAAAQidiIwanYiMGgAAAB
tzc2gtc3BoaW5jc3BsdXNAb3BlbnNzaC5jb20AAABAJbQF9VYWfKXVvRYAvjXTOkHy8VV1
Wn9b9sluL/rZNjo+ZcZlYGX7X09+xjjDpSvYgZvUkB2WVWGXXZBCfRdQ+wAAAIAbwBxEhA
NYzITN6VeCMqUyvw/59JM+WOLXBlRbu3R8qS7ljc4qFVWUtmhy8B3t9e4jrhdO6w0n5I4l
mnLnBi2hJbQF9VYWfKXVvRYAvjXTOkHy8VV1Wn9b9sluL/rZNjo+ZcZlYGX7X09+xjjDpS
vYgZvUkB2WVWGXXZBCfRdQ+wAAABFyb290QGFkNjAwZmY1NjI1MwECAwQ=
-----END OPENSSH PRIVATE KEY-----
# 
Signature size? Now here is the challenge: for this variant the size is around 29kb, or close to 600 lines of base64 data:
# git cat-file -p ce0b590071e2dc845373734655192241a4ace94b | head -10
tree ede42093e7d5acd37fde02065a4a19ac1f418703
parent 826483d51a9fee60703298bbf839d9ce37943474
author Simon Josefsson <simon@josefsson.org> 1733247865 +0100
committer Simon Josefsson <simon@josefsson.org> 1734907869 +0100
gpgsig -----BEGIN SSH SIGNATURE-----
 U1NIU0lHAAAAAQAAAGMAAAAbc3NoLXNwaGluY3NwbHVzQG9wZW5zc2guY29tAAAAQIjp5p
 xPGMHfrFw+0/RnLEk0gYI3fnQaW4sPmHQqMoOxks2Y1o9IUfbKdbIpko4Xl5E8gwwpIIhX
 vnL4o/13rDcAAAADZ2l0AAAAAAAAAAZzaGE1MTIAAHSDAAAAG3NzaC1zcGhpbmNzcGx1c0
 BvcGVuc3NoLmNvbQAAdGDHlobgfgkKKQBo3UHmnEnNXczCMNdzJmeYJau67QM6xZcAU+d+
 2mvhbksm5D34m75DWEngzBb3usJTqWJeeDdplHHRe3BKVCQ05LHqRYzcSdN6eoeZqoOBvR
# git cat-file -p ce0b590071e2dc845373734655192241a4ace94b | tail -5
 ChvXUk4jfiNp85RDZ1kljVecfdB2/6CHFRtxrKHJRDiIavYjucgHF1bjz0fqaOSGa90UYL
 RZjZ0OhdHOQjNP5QErlIOcZeqcnwi0+RtCJ1D1wH2psuXIQEyr1mCA==
 -----END SSH SIGNATURE-----
Add SPHINCS+.
# git cat-file -p ce0b590071e2dc845373734655192241a4ace94b | wc -l
579
# 
What about performance? Verification is really fast:
# time git verify-commit ce0b590071e2dc845373734655192241a4ace94b
Good "git" signature for simon@josefsson.org with SPHINCSPLUS key SHA256:rkAa0fX0lQf/7V7QmuJHSI44L/PAPPsdWpis4nML7EQ
real	0m0.010s
user	0m0.005s
sys	0m0.005s
# 
On this machine, verifying an Ed25519 signature is a couple of times slower, and needs around 0.07 seconds. Signing is slower; it takes a bit over 2 seconds on my laptop.
# echo "Hello world!" > msg
# time ssh-keygen -Y sign -f my_sphincsplus_key -n my-namespace msg
Signing file msg
Write signature to msg.sig
real	0m2.226s
user	0m2.226s
sys	0m0.000s
# echo 'my.name@example.org ssh-sphincsplus@openssh.com AAAAG3NzaC1zcGhpbmNzcGx1c0BvcGVuc3NoLmNvbQAAAEAltAX1VhZ8pdW9FgC+NdM6QfLxVXVaf1v2yW4v+tk2Oj5lxmVgZftfT37GOMOlK9iBm9SQHZZVYZddkEJ9F1D7' > allowed-signers
# cat msg | ssh-keygen -Y verify -f allowed-signers -I my.name@example.org -n my-namespace -s msg.sig
Good "my-namespace" signature for my.name@example.org with SPHINCSPLUS key SHA256:4rNfXdmLo/ySQiWYzsBhZIvgLu9sQQz7upG8clKziBg
# 
Welcome to our new world of Post-Quantum safe digital signatures of Git commits, and Happy Hacking!

20 December 2024

Michael Prokop: Grml 2024.12 codename Adventgrenze

[Picture: metrics of three user profiles on GitHub.com, with many contributions especially in the last quarter of the year.]
We did it again! Just in time, we're excited to announce the release of Grml stable version 2024.12, code-named "Adventgrenze"! (If you're not familiar with Grml, it's a Debian-based live system tailored for system administrators.) This new release is built on Debian trixie, and for the first time, we're introducing support for 64-bit ARM CPUs (arm64 architecture)! I'm incredibly proud of the hard work that went into this release. A significant amount of behind-the-scenes effort went into reworking our infrastructure and redesigning the build process. Special thanks to Chris and Darsha: our Grml developer days in November and December were a blast! For a detailed overview of the changes between releases 2024.02 and 2024.12, check out our official release announcement. And, as always, after a release comes the next one: exciting improvements are already in the works! BTW: recently we also celebrated 20(!) years of Grml releases. If you're a Grml and/or grml-zsh user, please join us in celebrating and send us a postcard!

19 December 2024

Gregory Colpart: MiniDebConf Toulouse 2024

After the MiniDebConf Marseille 2019, COVID-19 made it impossible or difficult to organize new MiniDebConfs for a few years. With the gradual resumption of in-person events (like FOSDEM, DebConf, etc.), the idea emerged to host another MiniDebConf in France, but with a lighter organizational load. In 2023, we decided to reach out to the organizers of Capitole du Libre to repeat the experience of 2017: hosting a MiniDebConf alongside their annual event in Toulouse in November. However, our request came too late for 2023. After discussions with Capitole du Libre in November 2023 in Toulouse and again in February 2024 in Brussels, we confirmed that a MiniDebConf Toulouse would take place in November 2024! We then assembled a small organizing team and got to work: a Call for Papers in May 2024, adding a two-day MiniDebCamp, coordinating with the DebConf video team, securing sponsors, creating a logo, ordering T-shirts and stickers, planning the schedule, and managing registrations. Even with lighter logistics (conference rooms, badges, and catering during the weekend were handled by Capitole du Libre), there was still quite a bit of preparation to do. On Thursday, November 14, and Friday, November 15, 2024, about forty developers arrived from around the world (France, Spain, Italy, Switzerland, Germany, England, Brazil, Uruguay, India, Brest, Marseille...) to spend two days at the MiniDebCamp in the beautiful collaborative spaces of Artilect in Toulouse city center.
Then, on Saturday, November 16, and Sunday, November 17, 2024, the MiniDebConf took place at ENSEEIHT as part of the Capitole du Libre event. The conference kicked off on Saturday morning with an opening session by Jérémy Lecour, which included a tribute to Lunar (Nicolas Dandrimont). This was followed by "Reproducible Builds: Rebuilding What is Distributed from ftp.debian.org" (Holger Levsen) and "Discussion on My Research Work on Sustainability of Debian OS" (Eda). After lunch at the Capitole du Libre food trucks, the intense afternoon schedule began: "What's New in the Linux Kernel (and What's Missing in Debian)" (Ben Hutchings), "Linux Live Patching in Debian" (Santiago Ruano Rincón), "Trixie on Mobile: Are We There Yet?" (Arnaud Ferraris), "PostgreSQL Container Groups, aka cgroups Down the Road" (Cédric Villemain), "Upgrading a Thousand Debian Hosts in Less Than an Hour" (Jérémy Lecour and myself), and "Using Debusine to Automate Your QA" (Stefano Rivera & co). Sunday marked the second day, starting with a presentation on DebConf 25 (Benjamin Somers), which will be held in Brest in July 2025. The morning continued with talks: "How LTS Goes Beyond LTS" (Santiago Ruano Rincón & Roberto C. Sánchez), "Cross-Building" (Helmut Grohne), and "State of JavaScript" (Bastien Roucariès). In the afternoon, there were Lightning Talks, "PyPI Security: Past, Present & Future" (Salvo 'LtWorf' Tomaselli), and the classic "Bits from DPL" (Andreas Tille), before closing with the final session led by Pierre-Elliott Bécue. All talks are available on video (a huge thanks to the amazing DebConf video team), and many thanks to our sponsors (Viridien, Freexian, Evolix, Collabora, and Data Bene). A big thank-you as well to the entire Capitole du Libre team for hosting and supporting us. See you in Brest in July 2025! Articles about (or mentioning) MiniDebConf Toulouse:

17 December 2024

Gunnar Wolf: The science of detecting LLM-generated text

This post is a review for Computing Reviews of "The science of detecting LLM-generated text", an article published in Communications of the ACM.
While artificial intelligence (AI) applications for natural language processing (NLP) are no longer something new or unexpected, nobody can deny the revolution and hype that started, in late 2022, with the announcement of the first public version of ChatGPT. By then, synthetic translation was well established and regularly used, many chatbots had started attending users' requests on different websites, voice recognition personal assistants such as Alexa and Siri had been widely deployed, and complaints of news sites filling their space with AI-generated articles were already commonplace. However, the ease of prompting ChatGPT or other large language models (LLMs) and getting extensive answers (their text generation quality is so high that it is often hard to discern whether a given text was written by an LLM or by a human) has sparked significant concern in many different fields. This article was written to present and compare the current approaches to detecting human- or LLM-authorship in texts. The article presents several different ways LLM-generated text can be detected. The first, and main, taxonomy followed by the authors is whether the detection can be done aided by the LLM's own functions ("white-box detection") or only by evaluating the generated text via a public application programming interface (API) ("black-box detection"). For black-box detection, the authors suggest training a classifier to discern the origin of a given text. Although this works at first, this task is doomed from its onset to be highly vulnerable to new LLMs generating text that will not follow the same patterns, and thus will probably evade recognition. The authors report that human evaluators find human-authored text to be more emotional and less objective, and use grammar to indicate the tone of the sentiment that should be used when reading the text, a trait that has not been picked up by LLMs yet. Human-authored text also tends to have higher sentence-level coherence, with less term repetition in a given paragraph. The frequency distribution for more and less common words is much more homogeneous in LLM-generated texts than in human-written ones. White-box detection includes strategies whereby the LLMs will cooperate in identifying themselves in ways that are not obvious to the casual reader. This can include watermarking, be it rule based or neural based; in this case, both processes become a case of steganography, as the involvement of an LLM is explicitly hidden and spread through the full generated text, aiming at having a low detectability and high recoverability even when parts of the text are edited. The article closes by listing the authors' concerns about all of the above-mentioned technologies. Detecting an LLM, be it with or without the collaboration of the LLM's designers, is more of an art than a science, and methods deemed robust today will not last forever. We also cannot assume that LLMs will continue to be dominated by the same core players; LLM technology has been deeply studied, and good LLM engines are available as free/open-source software, so users needing to do so can readily modify their behavior. This article presents itself as merely a survey of methods available today, while also acknowledging the rapid progress in the field. It is timely and interesting, and easy to follow for the informed reader coming from a different subfield.

13 December 2024

Emanuele Rocca: Murder Mystery: GCC Builds Failing After sbuild Refactoring

This is the story of an investigation conducted by Jochen Sprickerhof, Helmut Grohne, and myself. It was true teamwork, and we would not have reached the bottom of the issue working individually. We think you will find it as interesting and fun as we did, so here is a brief writeup. A few of the steps mentioned here took several days, others just a few minutes. What is described as a natural progression of events did not always look so obvious in the moment.
Let us go through the Six Stages of Debugging together.

Stage 1: That cannot happen
Official Debian GCC builds start failing on multiple architectures in late November.
The build error happens on the build servers when running the testsuite, but we know this cannot happen. GCC builds are not meant to fail in case of testsuite failures! Return codes are not making the build fail; make is being called with -k; it just cannot happen.
A lot of the GCC tests are in fact always failing, and an extensive log of the results is posted to the debian-gcc mailing list, but the packages always build fine regardless.
On the build daemons, build failures take several hours.

Stage 2: That does not happen on my machine
Building on my machine running Bookworm is just fine. The Build Daemons run Bookworm and use a Sid chroot for the build environment, just like I do. Same kernel.
The only obvious difference between my setup and the Debian buildds is that I am using sbuild 0.85.0 from bookworm, and the buildds have 0.86.3~bpo12+1 from bookworm-backports. Trying again with 0.86.3~bpo12+1, the build fails on my system too. The build daemons were updated to the bookworm-backports version of sbuild at some point in late November. Ha.

Stage 3: That should not happen
There are quite a few sbuild versions in between 0.85.0 and 0.86.3~bpo12+1, but looking at recent sbuild bugs shows that sbuild 0.86.0 was breaking "quite a number of packages". Indeed, with 0.86.0 the build still fails. Trying the version immediately before, 0.85.11, the build finishes correctly. This took more time than it sounds: one run including the tests takes several hours. We need a way to shorten this somehow.
The Debian packaging of GCC allows you to specify which languages to skip, and by default it builds Ada, Go, C, C++, D, Fortran, Objective C, Objective C++, M2, and Rust. When running the tests sequentially, the build logs stop roughly around the tests of a runtime library for D, libphobos. So can we still reproduce the failure by skipping everything except for D? With DEB_BUILD_OPTIONS=nolang=ada,go,c,c++,fortran,objc,obj-c++,m2,rust the build still fails, and it fails faster than before. Several minutes, not hours. This is progress, and time to file a bug. The report contains massive spoilers, so no link. :-)
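For reference, a shortened build along these lines can be launched roughly as follows from the gcc source tree (the dpkg-buildpackage invocation is illustrative; only the DEB_BUILD_OPTIONS value comes from the text above):
# Skip every language frontend except D to reproduce the failure in minutes rather than hours.
DEB_BUILD_OPTIONS='nolang=ada,go,c,c++,fortran,objc,obj-c++,m2,rust' dpkg-buildpackage -b -us -uc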

Stage 4: Why does that happen?
Something is causing the build to end prematurely. It's not the OOM killer, and the kernel does not have anything useful to say in the logs. Can it be that the D language tests are sending signals to some process, and that is what's killing make? We start tracing signals sent with bpftrace by writing the following script, signals.bt:
tracepoint:signal:signal_generate {
    printf("%s (PID: %d) sent signal %d to PID %d\n", comm, pid, args->sig, args->pid);
}
And executing it with sudo bpftrace signals.bt.
The build takes its sweet time, and it fails. Looking at the trace output there's a suspicious process.exe terminating stuff.
process.exe (PID: 2868133) sent signal 15 to PID 711826
That looks interesting, but we have no clue what PID 711826 may be. Let's change the script a bit, and trace signals received as well.
tracepoint:signal:signal_generate {
    printf("PID %d (%s) sent signal %d to %d\n", pid, comm, args->sig, args->pid);
}
tracepoint:signal:signal_deliver {
    printf("PID %d (%s) received signal %d\n", pid, comm, args->sig);
}
The working version of sbuild was using dumb-init, whereas the new one features a little init in perl. We patch the current version of sbuild by making it use dumb-init instead, and trace two builds: one with the perl init, one with dumb-init.
Here are the signals observed when building with dumb-init.
PID 3590011 (process.exe) sent signal 2 to 3590014
PID 3590014 (sleep) received signal 9
PID 3590011 (process.exe) sent signal 15 to 3590063
PID 3590063 (std.process tem) received signal 9
PID 3590011 (process.exe) sent signal 9 to 3590065
PID 3590065 (std.process tem) received signal 9
And this is what happens with the new init in perl:
PID 3589274 (process.exe) sent signal 2 to 3589291
PID 3589291 (sleep) received signal 9
PID 3589274 (process.exe) sent signal 15 to 3589338
PID 3589338 (std.process tem) received signal 9
PID 3589274 (process.exe) sent signal 9 to 3589340
PID 3589340 (std.process tem) received signal 9
PID 3589274 (process.exe) sent signal 15 to 3589341
PID 3589274 (process.exe) sent signal 15 to 3589323
PID 3589274 (process.exe) sent signal 15 to 3589320
PID 3589274 (process.exe) sent signal 15 to 3589274
PID 3589274 (process.exe) received signal 9
PID 3589341 (sleep) received signal 9
PID 3589273 (sbuild-usernsex) sent signal 9 to 3589320
PID 3589273 (sbuild-usernsex) sent signal 9 to 3589323
There are a few additional SIGTERMs being sent when using the perl init; that's helpful. At this point we are fairly convinced that process.exe is worth additional inspection. The source code of process.d shows something interesting:
1221 @system unittest
1222 {
[...]
1247     auto pid = spawnProcess(["sleep", "10000"],
[...]
1260     // kill the spawned process with SIGINT
1261     // and send its return code
1262     spawn((shared Pid pid) {
1263         auto p = cast() pid;
1264         kill(p, SIGINT);
So yes, there's our sleep and the SIGINT (signal 2) right in the unit tests of process.d, just like we have observed in the bpftrace output.
Can we study the behavior of process.exe in isolation, separately from the build? Indeed we can. Let's take the executable from a failed build, and try running it under /usr/libexec/sbuild-usernsexec.
First, we prepare a chroot inside a suitable user namespace:
unshare --map-auto --setuid 0 --setgid 0 mkdir /tmp/rootfs
cd /tmp/rootfs
cat /home/ema/.cache/sbuild/unstable-arm64.tar | unshare --map-auto --setuid 0 --setgid 0 tar xf -
unshare --map-auto --setuid 0 --setgid 0 mkdir /tmp/rootfs/whatever
unshare --map-auto --setuid 0 --setgid 0 cp process.exe /tmp/rootfs/
Now we can run process.exe on its own using the perl init, and trace signals at will:
/usr/libexec/sbuild-usernsexec --pivotroot --nonet u:0:100000:65536,g:0:100000:65536 /tmp/rootfs ema /whatever -- /process.exe
We can compare the behavior of the perl init vis-a-vis the one using dumb-init in milliseconds instead of minutes.

Stage 5: Oh, I see.
Why does process.exe send more SIGTERMs when using the perl init is now the big question. We have a simple reproducer, so this is where using strace becomes possible.
sudo strace --user ema --follow-forks -o sbuild-dumb-init.strace ./sbuild-usernsexec-dumb-init --pivotroot --nonet u:0:100000:65536,g:0:100000:65536 /tmp/dumbroot ema /whatever -- /process.exe
We start comparing the strace output of dumb-init with that of perl-init, looking in particular for different calls to kill.
Here is what process.exe does under dumb-init:
3593883 kill(-2, SIGTERM)               = -1 ESRCH (No such process)
No such process. Under perl-init instead:
3593777 kill(-2, SIGTERM <unfinished ...>
The process is there under perl-init!
That is a kill with negative pid. From the kill(2) man page:
If pid is less than -1, then sig is sent to every process in the process group whose ID is -pid.
It would have been very useful to see this kill with negative pid in the output of bpftrace, so why didn't we? The tracepoint used, tracepoint:signal:signal_generate, shows when signals are actually being sent, and not the syscall being called. To confirm, one can trace tracepoint:syscalls:sys_enter_kill and see the negative PIDs, for example:
PID 312719 (bash) sent signal 2 to -312728
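A throwaway one-liner along these lines (a sketch, not necessarily the exact command used during the investigation) is enough to see the raw kill() syscalls, including the negative target PIDs:
# Trace kill() at syscall entry, before the kernel expands process groups.
sudo bpftrace -e 'tracepoint:syscalls:sys_enter_kill { printf("PID %d (%s) sent signal %d to %d\n", pid, comm, args->sig, args->pid); }'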
The obvious question at this point is: why is there no process group 2 when using dumb-init?

Stage 6: How did that ever work?
We know that process.exe sends a SIGTERM to every process in the process group with ID 2. To find out what this process group may be, we spawn a shell with dumb-init and observe under /proc PIDs 1, 16, and 17. With perl-init we have 1, 2, and 17. When running dumb-init, there are a few forks before launching the program, explaining the difference. Looking at /proc/2/cmdline we see that it's bash, i.e. the program we are running under perl-init. When building a package, that is dpkg-buildpackage itself.
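Checking this by hand inside the perl-init environment only takes standard tools, for example (a sketch; /proc/2 is the PID observed above):
# argv is null-separated; translate the separators for readability.
tr '\0' ' ' < /proc/2/cmdline; echo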
The test is accidentally killing its own process group.
Now where does this -2 come from in the test?
2363     // Special values for _processID.
2364     enum invalid = -1, terminated = -2;
Oh. -2 is used as a special value for PID, meaning "terminated". And there's a call to kill() later on:
2694     do { s = tryWait(pid); } while (!s.terminated);
[...]
2697     assertThrown!ProcessException(kill(pid));
What sets pid to terminated you ask?
Here is tryWait:
2568 auto tryWait(Pid pid) @safe
2569 {
2570     import std.typecons : Tuple;
2571     assert(pid !is null, "Called tryWait on a null Pid.");
2572     auto code = pid.performWait(false);
And performWait:
2306         _processID = terminated;
The solution, dear reader, is not to kill.
PS: the bug report with spoilers for those interested is #1089007.

9 December 2024

Thorsten Alteholz: My Debian Activities in November 2024

Debian LTS This was my hundred-twenty-fifth month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on: I also continued to work on a fix for glewlwyd, which is more difficult than expected. Besides that, I started to work on ffmpeg and haproxy. Last but not least I did a week of FD this month and attended the monthly LTS/ELTS meeting. Debian ELTS This month was the seventy-sixth ELTS month. During my allocated time I uploaded or worked on: I also started to work on a fix for kmail-account-wizard. Unfortunately preparing a testing environment takes some time and I did not finish testing this month. Besides that, I started to work on ffmpeg and haproxy. Last but not least I did a week of FD this month and attended the monthly LTS/ELTS meeting. Debian Printing Unfortunately I didn't find any time to work on this topic. Debian Matomo Unfortunately I didn't find any time to work on this topic. Debian Astro This month I uploaded new packages or new upstream or bugfix versions of: I also sponsored an upload of calceph. Debian IoT This month I uploaded new upstream or bugfix versions of: Debian Mobcom This month I uploaded new packages or new upstream or bugfix versions of: misc This month I uploaded new upstream or bugfix versions of: I also did some NMUs of opensta, kdrill, glosstex, irsim, pagetools, afnix and cpm to fix some RC bugs. FTP master This month I accepted 266 and rejected 16 packages. The overall number of packages that got accepted was 269.

5 December 2024

Reproducible Builds: Reproducible Builds in November 2024

Welcome to the November 2024 report from the Reproducible Builds project! Our monthly reports outline what we've been up to over the past month and highlight items of news from elsewhere in the world of software supply-chain security where relevant. As ever, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. Table of contents:
  1. Reproducible Builds mourns the passing of Lunar
  2. Introducing reproduce.debian.net
  3. New landing page design
  4. SBOMs for Python packages
  5. Debian updates
  6. Reproducible builds by default in Maven 4
  7. PyPI now supports digital attestations
  8. Dependency Challenges in OSS Package Registries
  9. Zig programming language demonstrated reproducible
  10. Website updates
  11. Upstream patches
  12. Misc development news
  13. Reproducibility testing framework

Reproducible Builds mourns the passing of Lunar The Reproducible Builds community sadly announced it has lost its founding member, Lunar. Jérémy Bobbio aka Lunar passed away on Friday November 8th in palliative care in Rennes, France. Lunar was instrumental in starting the Reproducible Builds project in 2013 as a loose initiative within the Debian project. He was the author of our earliest status reports and many of our key tools in use today are based on his design. Lunar's creativity, insight and kindness were often noted. You can view our full tribute elsewhere on our website. He will be greatly missed.

Introducing reproduce.debian.net In happier news, this month saw the introduction of reproduce.debian.net. Announced at the recent Debian MiniDebConf in Toulouse, reproduce.debian.net is an instance of rebuilderd operated by the Reproducible Builds project. rebuilderd is our server designed to monitor the official package repositories of Linux distributions and attempt to reproduce the observed results there. In November, reproduce.debian.net began rebuilding Debian unstable on the amd64 architecture, but throughout the MiniDebConf, it had attempted to rebuild 66% of the official archive. From this, it could be determined that it is currently possible to bit-for-bit reproduce and corroborate approximately 78% of the actual binaries distributed by Debian, that is, using the .buildinfo files hosted by Debian itself. reproduce.debian.net also contains instructions on how to set up one's own rebuilderd instance, and we very much invite everyone with a machine to spare to set up their own version and to share the results. Whilst rebuilderd is still in development, it has been used to reproduce Arch Linux since 2019. We are especially looking for installations targeting Debian architectures other than i386 and amd64.

New landing page design As part of a very productive partnership with the Sovereign Tech Fund and Neighbourhoodie, we are pleased to unveil our new homepage/landing page. We are very happy with our collaboration with both STF and Neighbourhoodie (including many changes not directly related to the website), and look forward to working with them in the future.

SBOMs for Python packages The Python Software Foundation has announced a new cross-functional project for "SBOMs and Python packages". Seth Michael Larson writes that the project is specifically looking to solve these issues:
  • Enable Python users that require SBOM documents (likely due to regulations like CRA or SSDF) to self-serve using existing SBOM generation tools.
  • Solve the phantom dependency problem, where non-Python software is bundled in Python packages but not recorded in any metadata. This makes the job of software composition analysis (SCA) tools difficult or impossible.
  • Make the adoption work by relevant projects such as build backends, auditwheel-esque tools, as minimal as possible.
  • Empower users who are interested in having better SBOM data for the Python projects they are using to be able to contribute engineering time towards that goal.
A GitHub repository for the initiative is available, and there are a number of queries, comments and remarks on Seth's Discourse forum post.

Debian updates There was significant development within Debian this month. Firstly, at the recent MiniDebConf in Toulouse, France, Holger Levsen gave a Debian-specific talk on rebuilding packages distributed from ftp.debian.org, that is to say, how to reproduce the results from the official Debian build servers. Holger described the talk as follows:
For more than ten years, the Reproducible Builds project has worked towards reproducible builds of many projects, and for ten years now we have build Debian packages twice with maximal variations applied to see if they can be build reproducible still. Since about a month, we've also been rebuilding trying to exactly match the builds being distributed via ftp.debian.org. This talk will describe the setup and the lessons learned so far, and why the results currently are what they are (spoiler: they are less than 30% reproducible), and what we can do to fix that.
The Debian Project Leader, Andreas Tille, was present at the talk and remarked later in his Bits from the DPL update that:
It might be unfair to single out a specific talk from Toulouse, but I'd like to highlight the one on reproducible builds. Beyond its technical focus, the talk also addressed the recent loss of Lunar, whom we mourn deeply. It served as a tribute to Lunar's contributions and legacy. Personally, I've encountered packages maintained by Lunar and bugs he had filed. I believe that taking over his packages and addressing the bugs he reported is a meaningful way to honor his memory and acknowledge the value of his work.
Holger's slides and video in .webm format are available.
Next, rebuilderd is the server to monitor package repositories of Linux distributions and attempt to reproduce the observed results. This month, version 0.21.0 was released, most notably with improved support for binNMUs by Jochen Sprickerhof and an update of the rebuilderd-debian.sh integration to the latest debrebuild version by Holger Levsen. There has also been significant work to get the rebuilderd package into the Debian archive; in particular, both rust-rebuilderd-common version 0.20.0-1 and rust-rust-lzma version 0.6.0-1 were packaged by kpcyrd and uploaded by Holger Levsen. Related to this, Holger Levsen submitted three additional issues against rebuilderd as well:
  • rebuildctl should be more verbose when encountering issues. [ ]
  • Please add an option to use randomised queues. [ ]
  • Scheduling and re-scheduling multiple packages at once. [ ]
and lastly, Jochen Sprickerhof submitted an issue requesting that rebuilderd download the source package in addition to the .buildinfo file [ ], and kpcyrd also submitted and fixed an issue surrounding dependencies and clarifying the license [ ].
Separate to this, back in 2018, Chris Lamb filed a bug report against the sphinx-gallery package as it generates unreproducible content in various ways. This month, however, Dmitry Shachnev finally closed the bug, listing the multiple sub-issues that were part of the problem and how they were resolved.
Elsewhere, Roland Clobus posted to our mailing list this month, asking for input on a bug in Debian's ca-certificates-java package. The issue is that the Java key management tools embed timestamps in their output, and this output ends up in the /etc/ssl/certs/java/cacerts file on the generated ISO images. A discussion resulted from Roland's post suggesting some short- and medium-term solutions to the problem.
Holger Levsen uploaded some packages with reproducibility-related changes:
Lastly, 12 reviews of Debian packages were added, 5 were updated and 21 were removed this month, adding to our knowledge about identified issues in Debian.

Reproducible builds by default in Maven 4 On our mailing list this month, Hervé Boutemy reported that the latest release of Maven (4.0.0-beta-5) has reproducible builds enabled by default. In his mailing list post, Hervé mentions that this story started during our Reproducible Builds summit in Hamburg, where he created the upstream issue that builds on a multi-year effort to have Maven builds configured for reproducibility.
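A quick way to sanity-check the reproducibility of a Maven build yourself is simply to build twice and compare the artifacts; a rough sketch (the artifact name is illustrative, and diffoscope is optional):
# Build twice and compare the resulting jar bit-for-bit.
mvn -B clean package && cp target/myapp-1.0.jar /tmp/build1.jar
mvn -B clean package
cmp /tmp/build1.jar target/myapp-1.0.jar && echo "bit-for-bit identical"
# diffoscope /tmp/build1.jar target/myapp-1.0.jar   # detailed diff if they differ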

PyPI now supports digital attestations Elsewhere in the Python ecosystem and as reported on LWN and elsewhere, the Python Package Index (PyPI) has announced that it has finalised support for PEP 740 ("Index support for digital attestations"). Trail of Bits, who performed much of the development work, has an in-depth blog post about the work and its adoption, as well as what is left undone:
One thing is notably missing from all of this work: downstream verification. [ ] This isn't an acceptable end state (cryptographic attestations have defensive properties only insofar as they're actually verified), so we're looking into ways to bring verification to individual installing clients. In particular, we're currently working on a plugin architecture for pip that will enable users to load verification logic directly into their pip install flows.
There was an in-depth discussion on LWN's announcement page, as well as on Hacker News.

Dependency Challenges in OSS Package Registries At BENEVOL, the Belgium-Netherlands Software Evolution workshop in Namur, Belgium, Tom Mens and Alexandre Decan presented their paper, "An Overview and Catalogue of Dependency Challenges in Open Source Software Package Registries". The abstract of their paper is as follows:
While open-source software has enabled significant levels of reuse to speed up software development, it has also given rise to the dreadful dependency hell that all software practitioners face on a regular basis. This article provides a catalogue of dependency-related challenges that come with relying on OSS packages or libraries. The catalogue is based on the scientific literature on empirical research that has been conducted to understand, quantify and overcome these challenges. [ ]
A PDF of the paper is available online.

Zig programming language demonstrated reproducible Motiejus Jakštys posted an interesting and practical blog post on his successful attempt to reproduce the Zig programming language without using the pre-compiled binaries checked into the repository, and despite the circular dependency inherent in its bootstrapping process. As a summary, Motiejus concludes that:
I can now confidently say (and you can also check, you don't need to trust me) that there is nothing hiding in zig1.wasm [the checked-in binary] that hasn't been checked-in as a source file.
The full post is full of practical details, and includes a few open questions.

Website updates Notwithstanding the significant change to the landing page (screenshot above), there were an enormous number of changes made to our website this month. This included:
  • Alex Feyerke and Mariano Giménez:
    • Dramatically overhaul the website's landing page with new benefit cards tailored to the expected visitors to our website and a reworking of the visual hierarchy and design. [ ][ ][ ][ ][ ][ ][ ][ ][ ][ ]
  • Bernhard M. Wiedemann:
    • Update the System images page to document the e2fsprogs approach. [ ]
  • Chris Lamb:
  • FC (Fay) Stegerman:
    • Replace more inline markdown with HTML on the Success stories page. [ ]
    • Add some links, fix some other links and correct some spelling errors on the Tools page. [ ]
  • Holger Levsen:
    • Add a historical presentation ("Reproducible builds everywhere eg. in Debian, OpenWrt and LEDE") from October 2016. [ ]
    • Add jochensp and Oejet to the list of known contributors. [ ][ ]
  • Julia Krüger:
  • Ninette Adhikari & hulkoba:
    • Add/rework the list of success stories into a new page that clearly shows milestones in Reproducible Builds. [ ][ ][ ][ ][ ][ ]
  • Philip Rinn:
    • Import 47 historical weekly reports. [ ]
  • hulkoba:
    • Add alt text to almost all images (!). [ ][ ]
    • Fix a number of links on the Talks page. [ ][ ]
    • Avoid so-called ghost buttons by not using <button> elements as links, as the affordance of a <button> implies an action with (potentially) a side effect. [ ][ ]
    • Center the sponsor logos on the homepage. [ ]
    • Move publications and generate them instead from a data.yml file with an improved layout. [ ][ ]
    • Make a large number of small but impactful stylistic changes. [ ][ ][ ][ ]
    • Expand the Tools page to include a number of missing tools, fix some styling issues and fix a number of stale/broken links. [ ][ ][ ][ ][ ][ ]

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Misc development news

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In November, a number of changes were made by Holger Levsen, including:
  • reproduce.debian.net-related changes:
    • Create and introduce a new reproduce.debian.net service and subdomain [ ]
    • Make a large number of documentation changes relevant to rebuilderd. [ ][ ][ ][ ][ ]
    • Explain a temporary workaround for a specific issue in rebuilderd. [ ]
    • Setup another rebuilderd instance on the o4 node and update installation documentation to match. [ ][ ]
    • Make a number of helpful/cosmetic changes to the interface, such as clarifying terms and adding links. [ ][ ][ ][ ][ ]
    • Deploy configuration to the /opt and /var directories. [ ][ ]
    • Add an "infancy" (or "alpha") disclaimer. [ ][ ]
    • Add more notes to the temporary rebuilderd documentation. [ ]
    • Commit an nginx configuration file for reproduce.debian.net's Stats page. [ ]
    • Commit a rebuilder-worker.conf configuration for the o5 node. [ ]
  • Debian-related changes:
    • Grant jspricke and jochensp access to the o5 node. [ ][ ]
    • Build the qemu package with the nocheck build flag. [ ]
  • Misc changes:
    • Adapt the update_jdn.sh script for new Debian trixie systems. [ ]
    • Stop installing the PostgreSQL database engine on the o4 and o5 nodes. [ ]
    • Prevent accidental reboots of the o4 node because of a long-running job owned by josch. [ ][ ]
In addition, Mattia Rizzolo addressed a number of issues with reproduce.debian.net [ ][ ][ ][ ]. And lastly, both Holger Levsen [ ][ ][ ][ ] and Vagrant Cascadian [ ][ ][ ][ ] performed node maintenance.
If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

4 December 2024

Scarlett Gately Moore: Hacked and tis the season for surgeries

I am still here. Sadly, while I battle this insane infection from my broken arm I got back in July, the hackers got my blog. I am slowly building it back up. Further bad news is I have more surgeries, the first one tomorrow. Furthering my current struggles, I cannot start my job search due to hospitalization and recovery. Please consider a donation. https://gofund.me/6e99345d On the open source work front, I am still working on stuff, mostly snaps (Apps 24.08.3 released). Thank you everyone that voted me into the Ubuntu Community Council! I am trying to stay positive, but it seems I can't catch a break. I will have my computer in the hospital and will work on what I can. Have a blessed day and see you soon. Scarlett

1 December 2024

Sandro Knauß: QML Dependency tracking in Debian

Tracking library dependencies has worked in Debian for years: symbol usage is resolved to a library, which is then added to the list of dependencies. The KDE community nowadays creates more and more QML-based applications. Unfortunately QML is an interpreted language, which means missing QML dependencies will only become an issue at runtime. To fix this I created dh_qmldeps, which searches for QML dependencies at build time and will fail if it can't resolve a QML dependency. I didn't create my own QML interpreter; it just uses qmlimportscanner behind the scenes and processes the output further to resolve the QML modules to Debian packages. The workflow is as follows: the package compiles normally and is split into binary packages. Then dh_qmldeps scans through the package content to find QML content (.qml files, or qmldir for QML modules). All files found are scanned by qmlimportscanner; the output is a list of QML modules they depend on. As QML modules have a standardized file path, we can ask the Debian system which package ships this file path. We end up with a list of Debian packages in the variable ${qml6:Depends}. This variable can be attached to the list of dependencies of the scanned package. A maintainer can also lower some dependencies to Recommends or Suggests, if needed. You can find the source code on salsa and the usage documentation at https://qt-kde-team.pages.debian.net/dh_qmldeps.html. Over the last weeks I enabled dh_qmldeps for nearly every package that creates a QML6 module package. So the first bugs are solved and it should be usable for more packages. By scanning all code with qmlimportscanner, I found several non-existing QML modules: YEAH - the first milestone is reached. We are able to simply handle QML modules. But for QML applications there is still room for improvement. In apps the QML files are inside the executable. Additionally, applications create internal QML modules that are shipped directly in the same executable. I am still searching for a good way to analyse an executable to get a list of internal QML modules and a list of included QML files. Any ideas are welcome :) As a workaround, dh_qmldeps currently scans all QML files inside the application source code.
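To make the workflow a bit more concrete, here is roughly what dh_qmldeps does, reproduced by hand with only qmlimportscanner and dpkg -S (the paths and the example module are illustrative; the real helper does more bookkeeping):
# 1. Scan the installed package content for QML imports (JSON output).
qmlimportscanner -rootPath debian/myapp/usr/lib/*/qt6/qml > imports.json
# 2. Each import maps to a standardized path, e.g. org.kde.kirigami ->
#    /usr/lib/<triplet>/qt6/qml/org/kde/kirigami/qmldir
# 3. Ask Debian which package ships that path.
dpkg -S usr/lib/x86_64-linux-gnu/qt6/qml/org/kde/kirigami/qmldir
# 4. The resulting packages end up in the ${qml6:Depends} substvar,
#    which the maintainer references in debian/control:
#       Depends: ${misc:Depends}, ${qml6:Depends}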

19 November 2024

Melissa Wen: Display/KMS Meeting at XDC 2024: Detailed Report

XDC 2024 in Montreal was another fantastic gathering for the Linux Graphics community. It was again a great time to immerse in the world of graphics development, engage in stimulating conversations, and learn from inspiring developers. Many Igalia colleagues and I participated in the conference again, delivering multiple talks about our work on the Linux Graphics stack and also organizing the Display/KMS meeting. This blog post is a detailed report on the Display/KMS meeting held during this XDC edition. Short on Time?
  1. Catch the lightning talk summarizing the meeting here (you can even speed up 2x):
  2. For a quick written summary, scroll down to the TL;DR section.

TL;DR This meeting took 3 hours and tackled a variety of topics related to DRM/KMS (Linux/DRM Kernel Modesetting):
  • Sharing Drivers Between V4L2 and KMS: Brainstorming solutions for using a single driver for devices used in both camera capture and display pipelines.
  • Real-Time Scheduling: Addressing issues with non-blocking page flips encountering sigkills under real-time scheduling.
  • HDR/Color Management: Agreement on merging the current proposal, with NVIDIA implementing its special cases on VKMS and adding missing parts on top of Harry Wentland's (AMD) changes.
  • Display Mux: Collaborative design discussions focusing on compositor control and cross-sync considerations.
  • Better Commit Failure Feedback: Exploring ways to equip compositors with more detailed information for failure analysis.

Bringing together Linux display developers in the XDC 2024 While I didn't present a talk this year, I co-organized a Display/KMS meeting (with Rodrigo Siqueira of AMD) to build upon the momentum from the 2024 Linux Display Next hackfest. The meeting was attended by around 30 people in person and 4 remote participants. Speakers: Melissa Wen (Igalia) and Rodrigo Siqueira (AMD) Link: https://indico.freedesktop.org/event/6/contributions/383/ Topics: Similar to the hackfest, the meeting agenda was built over the first two days of the conference and mixed talk follow-ups with new ideas and ongoing community efforts. The final agenda covered five topics in the scheduled order:
  1. How to share drivers between V4L2 and DRM for bridge-like components (new topic);
  2. Real-time Scheduling (problems encountered after the Display Next hackfest);
  3. HDR/Color Management (ofc);
  4. Display Mux (from Display hackfest and XDC 2024 talk, bringing AMD and NVIDIA together);
  5. (Better) Commit Failure Feedback (continuing the last minute topic of the Display Next hackfest).

Unpacking the Topics Similar to the hackfest, the meeting agenda evolved over the conference. During the 3-hour meeting, I coordinated the room and the discussion rounds, and Rodrigo Siqueira took notes and also contacted key developers to provide a detailed report of the many topics discussed. From his notes, let's dive into the key discussions!

How to share drivers between V4L2 and KMS for bridge-like components. Led by Laurent Pinchart, we delved into the challenge of creating a unified driver for hardware devices (like scalers) that are used in both camera capture pipelines and display pipelines.
  • Problem Statement: How can we design a single kernel driver to handle devices that serve dual purposes in both V4L2 and DRM subsystems?
  • Potential Solutions:
    1. Multiple Compatible Strings: We could assign different compatible strings to the device tree node based on its usage in either the camera or display pipeline. However, this approach might raise concerns from device tree maintainers as it could be seen as a layer violation.
    2. Separate Abstractions: A single driver could expose the device to both DRM and V4L2 through separate abstractions: drm-bridge for DRM and V4L2 subdev for video. While simple, this approach requires maintaining two different abstractions for the same underlying device.
    3. Unified Kernel Abstraction: We could create a new, unified kernel abstraction that combines the best aspects of drm-bridge and V4L2 subdev. This approach offers a more elegant solution but requires significant design effort and potential migration challenges for existing hardware.

Real-Time Scheduling Challenges We discussed real-time scheduling during this year's Linux Display Next hackfest and, during XDC 2024, Jonas Ådahl brought up issues uncovered while progressing on this front.
  • Context: Non-blocking page-flips can, on rare occasions, take a long time and, for that reason, get a sigkill if the thread doing the atomic commit is under real-time scheduling.
  • Action items:
    • Explore alternative backtraces during the busy wait (e.g., ftrace).
    • Investigate the maximum thread time in busy wait to reproduce issues faced by compositors. Tools like RTKit (mutter) can be used for better control (Michel Dänzer can help with this setup).

HDR/Color Management This is a well-known topic with ongoing effort on all layers of the Linux Display stack and has been discussed online and in person in conferences and meetings over the last years. Here's a breakdown of the key points raised at this meeting:
  • Talk: Color operations for Linux color pipeline on AMD devices: In the previous day, Alex Hung (AMD) presented the implementation of this API on AMD display driver.
  • NVIDIA Integration: While they agree with the overall proposal, NVIDIA needs to add some missing parts. Importantly, they will implement these on top of Harry Wentland s (AMD) proposal. Their specific requirements will be implemented on VKMS (Virtual Kernel Mode Setting driver) for further discussion. This VKMS implementation can benefit compositor developers by providing insights into NVIDIA s specific needs.
  • Other vendors: There is a version of the KMS API applied on Intel color pipeline. Apart from that, other vendors appear to be comfortable with the current proposal but lacks the bandwidth to implement it right now.
  • Upstream Patches: The relevant upstream patches were can be found here. [As humorously notes, this series is eagerly awaiting your Acked-by (approval)]
  • Compositor Side: The compositor developers have also made significant progress.
    • KDE has already implemented and validated the API through an experimental implementation in Kwin.
    • Gamescope currently uses a driver-specific implementation but has a draft that utilizes the generic version. However, some work is still required to fully transition away from the driver-specific approach. AP: work on porting gamescope to KMS generic API
    • Weston has also begun exploring implementation, and we might see something from them by the end of the year.
  • Kernel and Testing: The kernel API proposal is well-refined and meets the DRM subsystem requirements. Thanks to Harry Wentland's effort, we already have the API wired up for two hardware vendors along with IGT tests and, thanks to Xaver Hugl, a compositor implementation in place.
Finally, there was a strong sense of agreement that the current proposal for HDR/Color Management is ready to be merged. In simpler terms, everything seems to be working well on the technical side - all signs point to merging and shipping the DRM/KMS plane color management API!
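For readers who want to poke at this from user space, a hedged sketch (the driver name is only an example, and the exact property names remain an assumption until the series lands): modetest from libdrm's test tools dumps the KMS properties currently exposed on planes, and the new color pipeline properties should eventually appear in the same listing.
sudo modetest -M amdgpu -p                        # list CRTCs and planes together with their KMS properties
sudo modetest -M amdgpu -p | grep -i -A3 color    # once the API lands, look for the new color pipeline properties here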

Display Mux During the meeting, Daniel Dadap led a brainstorming session on the design of the display mux switching sequence, in which the compositor would arm the switch via sysfs, then send a modeset to the outgoing driver, followed by a modeset to the incoming driver.
  • Context:
  • Key Considerations:
    • HPD Handling: There was a general consensus that disabling HPD can be part of the sequence for internal panels and we don't need to focus on it here.
    • Cross-Sync: Ensuring synchronization between the compositor and the drivers is crucial. The compositor should act as the drm-master to coordinate the entire sequence, but how can this be ensured?
    • Future-Proofing: The design should not assume the presence of a mux. In future scenarios, direct sharing over DP might be possible.
  • Action points:
    • Sharing DP AUX: Explore the idea of sharing DP AUX and its implications.
    • Backlight: The backlight definition represents a problem in the mux switch context, so we should explore some of the current specs available for that.

Towards Better Commit Failure Feedback In the last part of the meeting, Xaver Hugl asked for better commit failure feedback.
  • Problem description: Compositors currently face challenges in collecting detailed information from the kernel about commit failures. This lack of granular data hinders their ability to understand and address the root causes of these failures.
To address this issue, we discussed several potential improvements:
  • Direct Kernel Log Access: One idea is to directly load relevant kernel logs into the compositor. This would provide more detailed information about the failure and potentially aid in debugging.
  • Finer-Grained Failure Reporting: We also explored the possibility of separating atomic failures into more specific categories. Not all failures are critical, and understanding the nature of the failure can help compositors take appropriate action.
  • Enhanced Logging: Currently, the dmesg log doesn't provide enough information for user-space validation. Raising the log level to capture more detailed information during failures could be a viable solution (see the example below).
By implementing these improvements, we aim to equip compositors with the necessary tools to better understand and resolve commit failures, leading to a more robust and stable display system.
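For context on what is available today, the main knob is the drm.debug module parameter; the sketch below (bit values taken from the kernel's DRM debug categories, the grep pattern is only illustrative) shows the kind of kernel-log digging these proposals aim to make unnecessary.
echo 0x14 | sudo tee /sys/module/drm/parameters/debug   # 0x10 = atomic, 0x04 = KMS verbose logging
sudo dmesg --follow | grep -i -e atomic -e '\[drm'      # watch for the reason an atomic commit was rejected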

A Big Thank You! Huge thanks to Rodrigo Siqueira for these detailed meeting notes. Also, Laurent Pinchart, Jonas Ådahl, Daniel Dadap, Xaver Hugl, and Harry Wentland for bringing up interesting topics and leading discussions. Finally, thanks to all the participants who enriched the discussions with their experience, ideas, and inputs, especially Alex Goins, Antonino Maniscalco, Austin Shafer, Daniel Stone, Demi Obenour, Jessica Zhang, Joan Torres, Leo Li, Liviu Dudau, Mario Limonciello, Michel Dänzer, Rob Clark, Simon Ser and Teddy Li. This collaborative effort will undoubtedly contribute to the continued development of the Linux display stack. Stay tuned for future updates!

12 November 2024

Paul Tagliamonte: Complex for Whom?

In basically every engineering organization I've ever regarded as particularly high functioning, I've sat through one specific recurring conversation which is not a conversation about "complexity". Things are good or bad because they are or aren't complex, architectures need to be redone because they're too complex, some refactor of whatever it is won't work because it's too complex. You may have even been a part of some of these conversations, or even been the one advocating for simple light-weight solutions. I've done it. Many times. Rarely, if ever, do we talk about complexity within its rightful context: complexity for whom. Is a solution complex because it's complex for the end user? Is it complex if it's complex for an API consumer? Is it complex if it's complex for the person maintaining the API service? Is it complex if it's complex for someone outside the team maintaining it to understand? Complexity within a problem domain, I've come to believe, is fairly zero-sum: there's a fixed amount of complexity in the problem to be solved, and you can choose to either solve it, or leave it for those downstream of you to solve that problem on their own. That being said, while I believe there is a lower bound in complexity to contend with for a problem, I do not believe there is an upper bound to the complexity of solutions possible. It is always possible, and in fact very likely, that teams create problems for themselves while trying to solve a problem. The rest of this post is talking to the lower bound. When getting feedback on an early draft of this blog post, I've been informed that Fred Brooks coined a term for what I call lower bound complexity: "Essential Complexity", in the paper "No Silver Bullet - Essence and Accident in Software Engineering", which is a better term and can be used interchangeably.

Complexity Culture In a large enough organization, where the team is high functioning enough to have and maintain trust amongst peers, members of the team will specialize. People will begin to engage with subsets of the work to be done, and begin to have their efficacy measured against that part of the organization's problems. Incentives shift, and over time it becomes increasingly likely that two engineers may have two very different priorities when working on the same system together. Someone accountable for uptime and tasked with responding to outages will begin to resist changes. Someone accountable for rapidly delivering features will resist gates between them and their users. Companies (either wittingly or unwittingly) will deal with this by tasking engineers with both production (feature development) and operational tasks (maintenance), so the difference in incentives isn't usually as bad as it could be. When we get a bunch of folks from far-flung corners of an organization in a room, fire up a slide deck and throw up some aspirational to-be architecture diagram in order to get a sign-off to solve some problem (be it that someone needs a credible promotion packet, a new feature needs to get delivered, or the system has begun to fail and needs fixing), the initial reaction will, more often than I'd like, start to devolve into a discussion of how this is going to introduce a bunch of complexity, going to be hard to maintain, why can't you make it less complex? Right around here is when I start to try and contextualize the conversation happening around me: understand what complexity is being discussed, and understand who is taking on that burden. Think about who should be owning that problem, and work through the tradeoffs involved. Is it best solved here, or left to consumers (be they other systems, developers, or users)? Should something become an API call's optional param, taking on all the edge-cases and so on, or should users have to implement the logic using the data you return (leaving everyone else to take on all the edge-cases and maintenance)? Should you process the data, or require the user to preprocess it for you? Frequently it's right to make an active and explicit decision to simplify and leave problems to be solved downstream, since they may not actually need to be solved, or perhaps you expect consumers will want to own the specifics of how the problem is solved, in which case you leave lots of documentation and examples. Many other times, especially when it's something downstream consumers are likely to hit, it's best solved internal to the system, since the only things that can come of leaving it unsolved are bugs, frustration and half-correct solutions. This is a grey-space of tradeoffs, not a clear decision tree. No one wants the software manifestation of a katamari ball or a junk drawer, nor does anyone want a half-baked service unable to handle the simplest use-case.

Head-in-sand as a Service Popoffs about how complex something is are, to a first approximation, best understood as meaning "complicated for the person making comments". A lot of the #thoughtleadership believe that an AWS hosted EKS k8s cluster running images built by CI talking to an AWS hosted PostgreSQL RDS is not complex. They're right. Mostly right. This is less complex; less complex for them. It's not, however, without complexity and its own tradeoffs; it's just complexity that they do not have to deal with. Now they don't have to maintain machines that have pesky operating systems or hard drive failures. They don't have to deal with updating the version of k8s, nor ensuring the backups work. No one has to push some artifact to prod manually. Deployments happen unattended. You click a button and get a cluster. On the other hand, developers outside the ops function need to deal with troubleshooting CI, debugging access control rules encoded in Turing-complete YAML, permissions issues inside the cluster due to whatever the fuck a service mesh is, everyone needs to learn how to use some k8s tools they only actually use during a bad day, likely while doing some x.509 troubleshooting to connect to the cluster (an internal only endpoint; just port forward it), not to mention all sorts of rules to route packets to their project (a single repo's binary being run in 3 containers on a single vm host). Beyond that, there's the invisible complexity: complexity on the interior of a service you depend on. I think about the dozens of teams maintaining the EKS service (which is either run on EC2 instances, or alternately, EC2 instances in a trench coat, moustache and even more shell scripts), the RDS service (also EC2 and shell scripts, but this time accounting for redundancy, backups, availability zones), scores of hypervisors pulled off the shelf (xen, kvm) smashed together with the ones built in-house (firecracker, nitro, etc) running on hardware that has to be refreshed and maintained continuously. Every request is processed by network ACL rules, AWS IAM rules, security group rules, using IP space announced to the internet wired through IXPs directly into ISPs. I don't even want to begin to think about the complexity inherent in how those switches are designed. Shitloads of complexity to solve problems you may or may not have, or even know you had. What's more complex? An app running in an in-house 4u server racked in the office's telco closet in the back running off the office Verizon line, or an app running four hypervisors deep in an AWS datacenter? Which is more complex to you? What about to your organization? In total? Which is more prone to failure? Which is more secure? Is the complexity good or bad? What type of complexity can you manage effectively? Which threaten the system? Which threaten your users?

COMPLEXIVIBES This extends beyond engineering. Decisions regarding what tools we are able to use - be they existing contracts with cloud providers, CIO-mandated SaaS products, or a list of the only permissible open source projects - will incur costs in terms of expressed "complexity". Pinning open source projects to a fixed set makes SBOM production "less complex". Using only one SaaS provider's product suite (even if it's terrible, because it has all the types of tools you need) makes accreditation "less complex". If all you have is a contract with Pauly T's lowest price technically acceptable artisanal cloudary and haberdashery, the way you pay for your compute is "less complex" for the CIO shop, though you will find yourself building your own hosted database template, mechanism to spin up a k8s cluster, and all the operational and technical burden that comes with it. Or you won't, and make it everyone else's problem in the organization. Nothing you can do will solve for the fact that you must now deal with this problem somewhere, because it was less complicated for the business to put the workloads on the existing contract with a cut-rate vendor. Suddenly, the decision to reduce complexity because of an existing contract vehicle has resulted in a huge amount of technical risk and maintenance burden being onboarded. Complexity you would otherwise externalize has now been taken on internally. With a large enough organization (specifically, in this case, I'm talking about you, bureaucracies), this is largely ignored or accepted as normal, since the personnel cost is understood to be free to everyone involved. Doing it this way is more expensive, more work, less reliable and less maintainable, and yet, somehow, is, in a lot of ways, less complex to the organization. It's particularly bad with bureaucracies, since screwing up a contract will get you into much more trouble than delivering a broken product, leaving basically no reason for anyone to care to fix this. I can't shake the feeling that for every story of technical mandates gone awry, somewhere just out of sight there's a decisionmaker optimizing for what they believe to be the least amount of complexity - least hassle, fewest unique cases, most consistency - as they can. They freely offload complexity from their accreditation and risk acceptance functions through mandates. They will never have to deal with it. That does not change the fact that someone does.

TC;DR (TOO COMPLEX; DIDN'T REVIEW) We wish to rid ourselves of systemic Complexity; after all, complexity is bad, simplicity is good. Removing upper-bound own-goal complexity ("accidental complexity" in Brooks's terms) is important, but once you hit the lower bound of complexity, the tradeoffs become zero-sum. Removing complexity from one part of the system means that somewhere else - maybe outside your organization or in a non-engineering function - it must grow back. Sometimes, the opposite is the case, such as when a previously manual business process is automated. Maybe that's a good idea. Maybe it's not. All I know is that what doesn't help the situation is conflating "complexity" with everything we don't like: legacy code, maintenance burden or toil, cost, delivery velocity.
  • Complexity is not the same as proclivity to failure. The most reliable systems I've interacted with are unimaginably complex, with layers of internal protection to prevent complete failure. This has its own set of costs which other people have written about extensively.
  • Complexity is not cost. Sometimes the cost of taking all the complexity in-house is less, for whatever value of cost you choose to use.
  • Complexity is not absolute. Something simple from one perspective may be wildly complex from another. The impulse to burn down complex sections of code is helpful to have generally, but sometimes things are complicated for a reason, even if that reason exists outside your codebase or organization.
  • Complexity is not something you can remove without introducing complexity elsewhere. Just as not making a decision is a decision itself; choosing to require someone else to deal with a problem rather than dealing with it internally is a choice that needs to be considered in its full context.
Next time you're sitting through a discussion and someone starts to talk about all the complexity about to be introduced, I want to pop up in the back of your head, politely asking: what does complex mean in this context? Is it lower bound complexity? Is this complexity desirable? Does what they're saying mean something along the lines of "I don't understand the problems being solved", or something along the lines of "this problem should be solved elsewhere"? Do they believe this will result in more work for them, in a way that you don't see? Should this not be solved at all, by changing the bounds of what we accept or redefining the understood limits of this system? Is the perceived complexity a result of a decision elsewhere? Who's taking this complexity on, or, more to the point, is failing to address complexity required by the problem leaving it to others? Does it impact others? How specifically? What are you not seeing? What can change? What should change?

11 November 2024

Vincent Bernat: Customize Caddy's plugins with Nix

Caddy is an open-source web server written in Go. It handles TLS certificates automatically and comes with a simple configuration syntax. Users can extend its functionality through plugins1 to add features like rate limiting, caching, and Docker integration. While Caddy is available in Nixpkgs, adding extra plugins is not simple.2 The compilation process needs Internet access, which Nix denies during build to ensure reproducibility. When trying to build the following derivation using xcaddy, a tool for building Caddy with plugins, it fails with this error: dial tcp: lookup proxy.golang.org on [::1]:53: connection refused.
{ pkgs }:
pkgs.stdenv.mkDerivation {
  name = "caddy-with-xcaddy";
  nativeBuildInputs = with pkgs; [ go xcaddy cacert ];
  unpackPhase = "true";
  buildPhase =
    ''
      xcaddy build --with github.com/caddy-dns/powerdns@v1.0.1
    '';
  installPhase = ''
    mkdir -p $out/bin
    cp caddy $out/bin
  '';
}
Fixed-output derivations are an exception to this rule and get network access during build. They need to specify their output hash. For example, the fetchurl function produces a fixed-output derivation:
{ stdenv, fetchurl }:
stdenv.mkDerivation rec {
  pname = "hello";
  version = "2.12.1";
  src = fetchurl {
    url = "mirror://gnu/hello/hello-${version}.tar.gz";
    hash = "sha256-jZkUKv2SV28wsM18tCqNxoCZmLxdYH2Idh9RLibH2yA=";
  };
}
To create a fixed-output derivation, you need to set the outputHash attribute. The example below shows how to output Caddy's source code, with some plugins enabled, as a fixed-output derivation using xcaddy and go mod vendor.
pkgs.stdenvNoCC.mkDerivation rec {
  pname = "caddy-src-with-xcaddy";
  version = "2.8.4";
  nativeBuildInputs = with pkgs; [ go xcaddy cacert ];
  unpackPhase = "true";
  buildPhase =
    ''
      export GOCACHE=$TMPDIR/go-cache
      export GOPATH="$TMPDIR/go"
      XCADDY_SKIP_BUILD=1 TMPDIR="$PWD" \
        xcaddy build v${version} --with github.com/caddy-dns/powerdns@v1.0.1
      (cd buildenv* && go mod vendor)
    '';
  installPhase = ''
    mv buildenv* $out
  '';
  outputHash = "sha256-F/jqR4iEsklJFycTjSaW8B/V3iTGqqGOzwYBUXxRKrc=";
  outputHashAlgo = "sha256";
  outputHashMode = "recursive";
}
With a fixed-output derivation, it is up to us to ensure the output is always the same.3 You can use this derivation to override the src attribute in pkgs.caddy:
pkgs.caddy.overrideAttrs (prev: {
  src = pkgs.stdenvNoCC.mkDerivation { /* ... */ };
  vendorHash = null;
  subPackages = [ "." ];
});
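A hedged aside that is not part of the original recipe: a common way to obtain the outputHash used above is to build once with a placeholder (for example lib.fakeHash) and copy the correct value from the hash-mismatch error Nix prints; the file name below is only an example.
nix-build caddy-src.nix 2>&1 | grep -E 'specified:|got:'   # copy the "got:" hash into outputHash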
Check out the complete example in the GitHub repository. To integrate into a Flake, add github:vincentbernat/caddy-nix as an overlay:
{
  inputs = {
    nixpkgs.url = "nixpkgs";
    flake-utils.url = "github:numtide/flake-utils";
    caddy.url = "github:vincentbernat/caddy-nix";
  };
  outputs = { self, nixpkgs, flake-utils, caddy }:
    flake-utils.lib.eachDefaultSystem (system:
      let
        pkgs = import nixpkgs {
          inherit system;
          overlays = [ caddy.overlays.default ];
        };
      in
      {
        packages = {
          default = pkgs.caddy.withPlugins {
            plugins = [ "github.com/caddy-dns/powerdns@v1.0.1" ];
            hash = "sha256-F/jqR4iEsklJFycTjSaW8B/V3iTGqqGOzwYBUXxRKrc=";
          };
        };
      });
}

Update (2024-11) This flake won't work with Nixpkgs 24.05 or older because it relies on this commit to properly override the vendorHash attribute.
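Once the flake builds, a hedged way to verify that the plugin actually made it into the binary (assuming the flake above is in the current directory):
nix build .#default
./result/bin/caddy list-modules | grep powerdns    # the DNS plugin should show up in the module list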


  1. This article uses the term plugins, though Caddy documentation also refers to them as modules since they are implemented as Go modules.
  2. This has been a feature request for quite some time. A proposed solution has been rejected. The one described in this article is a bit different, and I have proposed it in another pull request.
  3. This is not perfect: if the source code produced by xcaddy changes, the hash would change and the build would fail.

8 November 2024

Freexian Collaborators: Debian Contributions: October s report (by Anupa Ann Joseph)

Debian Contributions: 2024-10 Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

rebootstrap, by Helmut Grohne After significant changes earlier this year, the state of architecture cross-bootstrap is normalizing again. More and more architectures manage to complete rebootstrap testing successfully again. Here are two examples of the kind of issues the bootstrap testing identifies. At some point, libpng1.6 would fail to cross build on musl architectures whereas it would succeed on other ones, failing to locate zlib. Adding --debug-find to the cmake invocation eventually revealed that it would fail to search in /usr/lib/<triplet>, which is the default library path. This turned out to be a bug in cmake assuming that all Linux systems use glibc. libpng1.6 also gained a baseline violation for powerpc and ppc64 by enabling the use of AltiVec there. The newt package would fail to cross build for many 32-bit architectures whereas it would succeed for armel and armhf due to -Wincompatible-pointer-types. It turns out that this flag was turned into -Werror and it was compiling with a warning earlier. The actual problem is a difference in signedness between wchar_t and FriBidiChar (aka uint32_t) and actually affects native building on i386.
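For readers unfamiliar with the trick mentioned above, a minimal sketch of the --debug-find approach (the invocation is illustrative, not the exact rebootstrap setup): CMake prints every directory each find_library call considered, so a missing /usr/lib/<triplet> entry stands out immediately.
cmake -S . -B build --debug-find 2>&1 | grep -A20 'find_library called'   # dump the search paths CMake actually used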

Miscellaneous contributions
  • Helmut sent 35 patches for cross build failures.
  • Stefano Rivera uploaded the Python 3.13.0 final release.
  • Stefano continued to rebuild Python packages with C extensions using Python 3.13, to catch compatibility issues before the 3.13-add transition starts.
  • Stefano uploaded new versions of a handful of Python packages, including: dh-python, objgraph, python-mitogen, python-truststore, and python-virtualenv.
  • Stefano packaged a new release of mkdocs-macros-plugin, which required packaging a new Python package for Debian, python-super-collections (now in NEW review).
  • Stefano helped the mini-DebConf Online Brazil get video infrastructure up and running for the event. Unfortunately, Debian's online-DebConf setup has bitrotted over the last couple of years, and it eventually required new temporary Jitsi and Jibri instances.
  • Colin Watson fixed a number of autopkgtest failures to get ansible back into testing.
  • Colin fixed an ssh client failure in certain cases when using GSS-API key exchange, and added an integration test to ensure this doesn t regress in future.
  • Colin worked on the Python 3.13 transition, fixing problems related to it in 15 packages. This included upstream work in a number of packages (postgresfixture, python-asyncssh, python-wadllib).
  • Colin upgraded 41 Python packages to new upstream versions.
  • Carles improved po-debconf-manager: now it can create merge requests to Salsa automatically (created 17, new batch coming this month), imported almost all the packages with debconf translation templates whose VCS is Salsa (currently 449 imported), added statistics per package and language, improved command line interface options. Performed user support fixing different issues. Also prepared an abstract for the talk at MiniDebConf Toulouse.
  • Santiago Ruano Rincón continued the organization work for the DebConf 25 conference, to be held in Brest, France. Part of the work relates to the initial edits of the sponsoring brochure. Thanks to Benjamin Somers who finalized the French and English versions.
  • Raphaël forwarded a couple of zim and hamster bugs to the upstream developers, and tried to diagnose a delayed startup of gdm on his laptop (cf. #1085633).
  • On behalf of the Debian Publicity Team, Anupa interviewed 7 women from the Debian community, old and new contributors. The interview was published in Bits from Debian.

1 November 2024

Colin Watson: Free software activity in October 2024

Almost all of my Debian contributions this month were sponsored by Freexian. You can also support my work directly via Liberapay. Ansible I noticed that Ansible had fallen out of Debian testing due to autopkgtest failures. This seemed like a problem worth fixing: in common with many other people, we use Ansible for configuration management at Freexian, and it probably wouldn't make our sysadmins too happy if they upgraded to trixie after its release and found that Ansible was gone. The problems here were really just slogging through test failures in both the ansible-core and ansible packages, but their test suites are large and take a while to run so this took some time. I was able to contribute a few small fixes to various upstreams in the process: This should now get back into testing tomorrow. OpenSSH Martin-Éric Racine reported that ssh-audit didn't list the ext-info-s feature as being available in Debian's OpenSSH 9.2 packaging in bookworm, contrary to what OpenSSH upstream said on their specifications page at the time. I spent some time looking into this and realized that upstream was mistakenly saying that implementations of ext-info-c and ext-info-s were added at the same time, while in fact ext-info-s was added rather later. ssh-audit now has clearer output, and the OpenSSH maintainers have corrected their specifications page. I looked into a report of an ssh failure in certain cases when using GSS-API key exchange (which is a Debian patch). Once again, having integration tests was a huge win here: the affected scenario is quite a fiddly one, but I was able to set it up in the test, and thereby make sure it doesn't regress in future. It still took me a couple of hours to get all the details right, but in the past this sort of thing took me much longer with a much lower degree of confidence that the fix was correct. On upstream's advice, I cherry-picked some key exchange fixes needed for big-endian architectures. Python team I packaged python-evalidate, needed for a new upstream version of buildbot. The Python 3.13 transition rolls on. I fixed problems related to it in htmlmin, humanfriendly, postgresfixture (contributed upstream), pylint, python-asyncssh (contributed upstream), python-oauthlib, python3-simpletal, quodlibet, zope.exceptions, and zope.interface. A trickier Python 3.13 issue involved the cgi module. Years ago I ported zope.publisher to the multipart module because cgi.FieldStorage was broken in some situations, and as a result I got a recommendation into Python's "dead batteries" PEP 594. Unfortunately there turns out to be a name conflict between multipart and python-multipart on PyPI; python-multipart upstream has been working to disentangle this, though we still need to work out what to do in Debian. All the same, I needed to fix python-wadllib, and multipart seemed like the best fit; I contributed a port upstream and temporarily copied multipart into Debian's python-wadllib source package to allow its tests to pass. I'll come back and fix this properly once we sort out the multipart vs. python-multipart packaging. tzdata moved some timezone definitions to tzdata-legacy, which has broken a number of packages. I added tzdata-legacy build-dependencies to alembic and python-icalendar to deal with this in those packages, though there are still some other instances of this left. I tracked down an nltk regression that caused build failures in many other packages.
I fixed Rust crate versioning issues in pydantic-core, python-bcrypt, and python-maturin (mostly fixed by Peter Michael Green and Jelmer Vernooĳ, but it needed a little extra work). I fixed other build failures in entrypoints, mayavi2, python-pyvmomi (mostly fixed by Alexandre Detiste, but it needed a little extra work), and python-testing.postgresql (ditto). I fixed python3-simpletal to tolerate future versions of dh-python that will drop their dependency on python3-setuptools. I fixed broken symlinks in python-treq. I removed (build-)depends on python3-pkg-resources from alembic, autopep8, buildbot, celery, flufl.enum, flufl.lock, python-public, python-wadllib (contributed upstream), pyvisa, routes, vulture, and zodbpickle (contributed upstream). I upgraded astroid, asyncpg (fixing a Python 3.13 failure and a build failure), buildbot (noticing an upstream test bug in the process), dnsdiag, frozenlist, netmiko (fixing a Python 3.13 failure), psycopg3, pydantic-settings, pylint, python-asyncssh, python-bleach, python-btrees, python-cytoolz, python-django-pgtrigger, python-django-test-migrations, python-gssapi, python-icalendar, python-json-log-formatter, python-pgbouncer, python-pkginfo, python-plumbum, python-stdlib-list, python-tokenize-rt, python-treq (fixing a Python 3.13 failure), python-typeguard, python-webargs (fixing a build failure), pyupgrade, pyvisa, pyvisa-py (fixing a Python 3.13 failure), toolz, twisted, vulture, waitress (fixing CVE-2024-49768 and CVE-2024-49769), wtf-peewee, wtforms, zodbpickle, zope.exceptions, zope.interface, zope.proxy, zope.security, and zope.testrunner to new upstream versions. I tried to fix a regression in python-scruffy, but I need testing feedback. I requested removal of python-testing.mysqld.

26 October 2024

Russell Coker: The CUPS Vulnerability

The Announcement Late last month there was an announcement of a severity 9.9 vulnerability allowing remote code execution that affects "all GNU/Linux systems (plus others)" [1]. For something to affect all Linux systems it would have to be either a kernel issue or an sshd issue. The announcement included complaints about the lack of response of vendors and "And YES: I LOVE hyping the sh1t out of this stuff because apparently sensationalism is the only language that forces these people to fix". He seems to have a different experience to me of reporting bugs; I have had plenty of success getting bugs fixed without hyping them. I just report the bug, wait a while, and it gets fixed. I have reported potential security bugs without even bothering to try and prove that they were exploitable (any situation where you can make a program crash is potentially exploitable), I just report it and it gets fixed. I was very dubious about his ability to determine how serious a bug is and to accurately report it, so this wasn't a situation where I was waiting for it to be disclosed to discover if it affected me. I was quite confident that my systems wouldn't be at any risk. Analysis: Not All Linux Systems Run CUPS When it was published my opinion was proven to be correct, it turned out to be a series of CUPS bugs [2]. To describe that as "all GNU/Linux systems (plus others)" seems like a vast overstatement, maybe a good thing to say if you want to be a TikTok influencer but not if you want to be known for computer security work. For the Debian distribution the cups-browsed package (which seems to be the main exploitable one) is recommended by cups-daemon; as I have my Debian systems configured to not install recommended packages by default, that means it wasn't installed on any of my systems. Also the vast majority of my systems don't do printing and therefore don't have any part of CUPS installed. CUPS vs NAT The next issue is that in Australia most home ISPs don't have IPv6 enabled and CUPS doesn't do the things needed to allow receiving connections from the outside world via NAT with IPv4. If inbound port 631 is blocked on both TCP and UDP, as is the default on Australian home Internet, or if there is a correctly configured firewall in place, then the network is safe from attack. There is a feature called uPnP port forwarding [3] to allow server programs to ask a router to send inbound connections to them; this is apparently usually turned off by default in router configuration. If it is enabled then there are Debian packages of software to manage this, the miniupnpc package has the client (which can request NAT changes on the router) [4]. That package is not installed on any of my systems and for my home network I don't use a router that runs uPnP. The only program I knowingly run that uses uPnP is Warzone2100 and as I don't play network games that doesn't happen. Also as an aside, in version 4.4.2-1 of warzone2100 in Debian and Ubuntu I made it use Bubblewrap to run the game in a container. So a Remote Code Execution bug in Warzone 2100 won't be an immediate win for an attacker (exploits via X11 or Wayland are another issue). MAC Systems Debian has had AppArmor enabled by default since Buster was released in 2019 [5]. There are claims that AppArmor will stop this exploit from doing anything bad. To check SE Linux access I first use the semanage fcontext command to check the context of the binary; cupsd_exec_t means that the daemon runs as cupsd_t.
Then I checked what file access is granted with the sesearch program: mostly just access to temporary files, cupsd config files, the faillog, the Kerberos cache files (not used on the Kerberos client systems I run), Samba run files (might be a possibility of exploiting something there), and the security_t used for interfacing with kernel security infrastructure. I then checked the access to the security class and found that it is permitted to check contexts and access-vectors, not access that can be harmful. The next test was to use sesearch to discover what capabilities are granted, which unfortunately includes the sys_admin capability, a capability that allows many sysadmin tasks that could be harmful (I just checked the Fedora source and Fedora 42 has the same access). Whether the sys_admin capability can be used to do bad things with the limited access cupsd_t has to device nodes etc is not clear. But this access is undesirable. So the SE Linux policy in Debian and Fedora will stop cupsd_t from writing SETUID programs that can be used by random users for root access and stop it from writing to /etc/shadow etc. But the sys_admin capability might allow it to do hostile things and I have already uploaded a changed policy to Debian/Unstable to remove that. The sys_rawio capability also looked concerning but it's apparently needed to probe for USB printers and as the domain has no access to block devices it is otherwise harmless. Below are the commands I used to discover what the policy allows and the output from them.
# semanage fcontext -l | grep bin/cups-browsed
/usr/bin/cups-browsed                              regular file       system_u:object_r:cupsd_exec_t:s0 
# sesearch -A -s cupsd_t -c file -p write
allow cupsd_t cupsd_interface_t:file { append create execute execute_no_trans getattr ioctl link lock map open read rename setattr unlink write };
allow cupsd_t cupsd_lock_t:file { append create getattr ioctl link lock open read rename setattr unlink write };
allow cupsd_t cupsd_log_t:file { append create getattr ioctl link lock open read rename setattr unlink write };
allow cupsd_t cupsd_runtime_t:file { append create getattr ioctl link lock open read rename setattr unlink write };
allow cupsd_t cupsd_rw_etc_t:file { append create getattr ioctl link lock open read rename setattr unlink write };
allow cupsd_t cupsd_t:file { append create getattr ioctl link lock open read rename setattr unlink write };
allow cupsd_t cupsd_tmp_t:file { append create getattr ioctl link lock open read rename setattr unlink write };
allow cupsd_t faillog_t:file { append getattr ioctl lock open read write };
allow cupsd_t init_tmpfs_t:file { append getattr ioctl lock read write };
allow cupsd_t krb5_host_rcache_t:file { append create getattr ioctl link lock open read rename setattr unlink write }; [ allow_kerberos ]:True
allow cupsd_t print_spool_t:file { append create getattr ioctl link lock open read relabelfrom relabelto rename setattr unlink write };
allow cupsd_t samba_var_t:file { append getattr ioctl lock open read write };
allow cupsd_t security_t:file { append getattr ioctl lock open read write };
allow cupsd_t security_t:file { append getattr ioctl lock open read write }; [ allow_kerberos ]:True
allow cupsd_t usbfs_t:file { append getattr ioctl lock open read write };
# sesearch -A -s cupsd_t -c security
allow cupsd_t security_t:security check_context; [ allow_kerberos ]:True
allow cupsd_t security_t:security { check_context compute_av };
# sesearch -A -s cupsd_t -c capability
allow cupsd_t cupsd_t:capability net_bind_service; [ allow_ypbind ]:True
allow cupsd_t cupsd_t:capability { audit_write chown dac_override dac_read_search fowner fsetid ipc_lock kill net_bind_service setgid setuid sys_admin sys_rawio sys_resource sys_tty_config };
# sesearch -A -s cupsd_t -c capability2
allow cupsd_t cupsd_t:capability2 { block_suspend wake_alarm };
# sesearch -A -s cupsd_t -c blk_file
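Policy details aside, a hedged quick check (generic commands, not from the original analysis) for whether a given Debian system is exposed to this class of attack at all: confirm whether cups-browsed is installed and whether anything is listening on port 631.
# dpkg -l cups-browsed
# ss -lnup '( sport = :631 )'
If the package is absent and nothing is listening there, this particular attack surface is simply not present.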
Conclusion This is an example of how not to handle security issues. Some degree of promotion is acceptable, but this is very excessive and will result in people not taking security announcements seriously in future. I wonder if this is even a good career move by the researcher in question; will enough people believe that they actually did something good here that it outweighs the number of people who think it's misleading at best?

24 October 2024

Emmanuel Kasper: back to blogging and running a feed reader as a containerized systemd service

After reading about Jonathan McDowell's feed reader install and the back to blogging initiative, I decided to install a feed reader to follow all those nice blog posts. With a feed reader you can compose your own feed of news based on blog posts, websites, mastodon toots. And then you are independent of the ad-oriented ranking algorithms of social networks. Since Jonathan used FreshRSS as a feed reader, I started with the same software. From a quick glance at its github page, it sounded like a good project:
  • active contributions
  • different channels for stable and latest version of the software
  • container images pointing to the stable release
  • support multiple databases for storage, including PostgreSQL
  • correct documentation mentioning security caveats
I prefer to do the container image installation using podman since:
  • upgrades from FreshRSS are easy to do and can be done separately from operating system upgrades
  • I do not mess up my base operating system with php (subjective), and in case of a compromised freshrss, the freshrss/apache install would still be restrained to its own Linux namespaces, separated from the rest of the system.
Podman is image compatible with Docker as they both implement the OCI runtime specification, and have a nearly identical command line interface. This installation will be done on a Debian server, but should work too on any Linux distribution. Initial setup
  • start a container image based on the start command provided by the FreshRSS project. The podman command line is nearly identical to the docker command line, except that podman expects the fully qualified domain name associated with the container image, and I chose to run the freshrss container on the localhost interface only. I also use a defined version tag, because using the latest tag makes it complicated to track which exact version I have installed.
# podman pull docker.io/freshrss/freshrss:1.20.1
# podman run --detach --restart unless-stopped --log-opt max-size=10m \
  --publish 127.0.0.1:8081:80 \
  --env TZ=Europe/Paris \
  --env 'CRON_MIN=1,31' \
  --volume freshrss_data:/var/www/FreshRSS/data \
  --volume freshrss_extensions:/var/www/FreshRSS/extensions \
  --name freshrss \
  docker.io/freshrss/freshrss:1.20.1
  • verify where the podman volumes have been created. This is where the user data of freshrss will be stored.
# podman volume ls
# podman volume inspect freshrss_data
  • now that freshrss is installed, you can start its configuration wizard at localhost:8081. You should keep the default sqlite choice
  • finally after running the wizard, you can login again and add some feeds
  • verify that your config has been stored outside the container, and inside the volume (so that it will not be erased in case of upgrades; see also the backup note below)
# ls -l /var/lib/containers/storage/volumes/freshrss_data/_data/users/
  • verify the state of sqlite database
echo '.tables' | sqlite3 /var/lib/containers/storage/volumes/freshrss_data/_data/users/<your freshrss user>/db.sqlite 
category  entry     entrytag  entrytmp  feed      tag
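A hedged extra step that is not part of the FreshRSS documentation: since all user data lives in the freshrss_data volume, podman can export that volume to a tarball, which makes a cheap backup before the upgrades described below.
# podman volume export freshrss_data --output freshrss_data-backup.tar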
Going with FreshRSS in Production Podman has this very nice feature that it can generate a systemd unit from a running container, and use systemd to start a container on boot. This is in contrast to docker, where the docker daemon does the stop/start of containers on boot. I prefer the systemd approach as it treats containers the same way as other system services. Once the freshrss container is running, we can generate a systemd unit for it with:
# podman generate systemd --new --name freshrss | tee /etc/systemd/system/container-freshrss.service
Let's stop the container we started previously, and use systemd to manage it:
# podman stop freshrss
# systemctl enable --now container-freshrss.service
We can verify that we have a listening socket on the localhost interface, on the source port 8081
# systemctl status container-freshrss.service
  ...
# ss --listening --numeric --process '( sport = 8081 )'
Netid         State           Recv-Q          Send-Q                   Local Address:Port                   Peer Address:Port         Process         
tcp           LISTEN          0               4096                         127.0.0.1:8081                        0.0.0.0:*             users:(("conmon",pid=4464,fd=5))
Nota Bene: conmon (8) is the process managing the network namespace in which fresh-rss is running, hence it is displayed as the process owning the listening socket. Exposing FreshRSS to the external world We now have a running service, but we need to make it reachable from the internet. The simplest, classical way is to create a subdomain and a VirtualHost configured as a reverse proxy to access the service at 127.0.0.1:8081. Fortunately the FreshRSS authors have documented this setup in https://github.com/FreshRSS/FreshRSS/tree/edge/Docker#alternative-reverse-proxy-using-apache and those steps are no different from a standard application behind a web reverse proxy. Upgrading the freshrss container to a newer version Documentation showing how to install a piece of software is worth little when it does not show how to upgrade that software. Installing is easy, upgrading is where the challenge is. Fortunately, thanks to the good stateless design of freshrss (everything is in the sqlite database, which is backed by a non-ephemeral volume in our setup), switching versions is a piece of cake.
# podman pull docker.io/freshrss/freshrss:1.20.2
# systemctl stop container-freshrss.service
# sed -i 's,docker.io/freshrss/freshrss:1.20.1,docker.io/freshrss/freshrss:1.20.2,' /etc/systemd/system/container-freshrss.service
# systemctl daemon-reload
# systemctl start container-freshrss.service
If you need to roll back, you just need to revert the version numbers in the instructions above. Enjoy your own feed reader! I will add the following feeds of blogs I like; let us see if I follow them better with a feed reader!
