
23 January 2025

Sergio Talens-Oliag: Ghostty Terminal Emulator

For a long time I've been using the Terminator terminal emulator on Linux machines, but last week I read an LWN article about a new emulator called Ghostty that looked interesting, so I decided to give it a try. The author sells it as a fast, feature-rich and cross-platform terminal emulator that follows the zero-configuration philosophy.

Installation and configuration

I installed the Debian package for Ubuntu 24.04 from the ghostty-ubuntu project and started playing with it. The first thing I noticed is that the zero-configuration part is true: I was able to use the terminal without a configuration file. I did create one to change the theme and the font size, but other than that it worked fine for me; my $HOME/.config/ghostty/config file is as simple as:
font-size=14
theme=/usr/share/ghostty/themes/iTerm2 Solarized Light

Starting the terminal maximized

After playing a little bit with the terminal I was turned off by the fact that there was no option to start it maximized, but it seemed to me that someone must already have asked for the feature, and if not, I could ask for it myself. A quick search on the project turned up a merged PR that added the option, so I downloaded the source code, installed Zig and built the program on my machine. As the change is going to be included in the next version of the package, I replaced the binary with my version and went back to playing with the terminal.

Accessing remote machines

The first thing I noticed was that when logging into remote machines using ssh the terminal type was not known, but the help section of the project documentation has an entry about fixing it by copying the terminfo configuration to remote machines; it is as simple as running the following:
infocmp -x | ssh YOUR-SERVER -- tic -x -

Dead keys on Ubuntu

With that sorted out everything looked good to me until I tried to type an accented character while editing a file and the terminal stopped working. Again, I looked at the project issues and found one that matched what was happening to me, and it reminded me of one of the best things about actively maintained open source software. It turns out that the issue is related to a bug in ibus, but other terminals were working right, so the ghostty developer was already working on a fix to the way the terminal handles keyboard input on GTK. I subscribed to the issue and stopped using ghostty until there was something new to try (I use a Spanish keyboard map and I can't use a terminal that does not support dead keys). Yesterday I saw some messages about things being almost fixed, so I pulled the latest changes in my cloned repository, compiled it, and writing accented characters works now. There is a small issue with the cursor (the pressed dead key is left on the block cursor unless you change the window focus), but that is manageable for me.

Conclusion

I think that ghostty is a good terminal emulator and I'm going to keep using it on my laptop unless I find something annoying that I can't work with (I hope that the cursor issue will be fixed soon, and I can live with it in the meantime, as the only thing I need to do to recover from it is changing the window focus, which can be done really quickly using keyboard shortcuts). As it is actively maintained and the developer seems to be quite active I don't expect problems, and it is nice to play with new things from time to time.

22 January 2025

Jonathan McDowell: Christmas Movies

I watch a lot of films. Since completing the IMDB Top 250 back in 2016 I've kept an eye on it, and while I don't go out of my way to watch the films that newly appear in it I generally sit at over 240 watched. I should note I don't consider myself a film buff/critic, however. I watch things for enjoyment, and a lot of the time that's kicking back and relaxing and disengaging my brain. So I don't get into writing reviews, just high level lists of things I've watched, sometimes with a few comments. With that in mind, let's talk about Christmas movies. Yes, I appreciate it's the end of January, but generally during December we watch things that have some sort of Christmas theme. New ones if we can find them, but also some of what we consider "classics". This almost always starts with Scrooged after we've put up the tree. I don't always like Bill Murray (I couldn't watch The Life Aquatic with Steve Zissou and I think Lost in Translation is overrated), but he's in a bunch of things I really like, and Scrooged is one of those. I don't care where you sit on whether Die Hard is a Christmas movie or not; it's a good movie and therefore regularly gets a December watch. Die Hard 2 also fits into that category of "sequel at least as good as the original", though Helen doesn't agree. We watched it anyway, and I finally made the connection between the William Sadler hotel scene and Michael Rooker's in Mallrats. It turns out I'm a Richard Curtis fan. Love Actually has not aged well; most times I watch it I find something new questionable about it, and I always end up hating Alan Rickman for cheating on Emma Thompson, but I do like watching it. Curtis had a new one, That Christmas, out this year, so we watched it as well. Another new-to-us film this year was Spirited. I generally like Ryan Reynolds, and Will Ferrell is good as long as he's not too overboard, so I had high hopes. I enjoyed it, but for some reason not as much as I'd expected, and I doubt it's getting added to the regular watch list. Larry doesn't generally like watching full length films, but he (and we) enjoyed The Grinch, which I actually hadn't seen before. He's not as fussed on The Muppet Christmas Carol, but we watch it every year, generally on Christmas or Boxing Day. Favourite thing I saw on the Fediverse in December was "Do you know there's a book of The Muppet Christmas Carol, and they don't mention that there's muppets in it once?" There are various other light hearted Christmas films we regularly watch. This year included The Holiday (I have so many issues with even just the practicalities of a short notice house swap), and Last Christmas (lots of George Michael music, what's not to love? Also it was only on this watch through that we realised the lead character is the Mother of Dragons). We started, but could not finish, Carry-On. I saw it described somewhere as copaganda, and that feels accurate. It does not accurately reflect any of my interactions with TSA at airports, especially during busy periods. Things we didn't watch this year, but are regularly in the mix, include Fatman, Violent Night (looking forward to the sequel, hopefully this year), and Lethal Weapon. Klaus is kinda at the other end of the spectrum, but very touching, and we've watched it a couple of years now. Given what we seem to like, any suggestions for other films to add? It's nice to have enough in the mix that we get some variety every year.

21 January 2025

Ravi Dwivedi: The Arduous Luxembourg Visa Process

In 2024, I was sponsored by The Document Foundation (TDF) to attend the LibreOffice annual conference in Luxembourg from the 10th to the 12th of October. Being an Indian passport holder, I needed a visa to visit Luxembourg. However, due to my Kenya trip coming up in September, I ran into a dilemma: whether to apply before or after the Kenya trip. To obtain a visa, I needed to submit my application with VFS Global (and not with the Luxembourg embassy directly). Therefore, I checked the VFS website for information on processing time, which says:
As a rule, the processing time of an admissible Schengen visa application should not exceed 15 calendar days (from the date the application is received at the Embassy).
It also mentions:
If the application is received less than 15 calendar days before the intended travel date, the Embassy can deem your application inadmissible. If so, your visa application will not be processed by the Embassy and the application will be sent back to VFS along with the passport.
If I applied for the Luxembourg visa before my trip, I would run the risk of not getting my passport back in time, and therefore missing my Kenya flight. On the other hand, if I waited until after returning from Kenya, I would run afoul of the aforementioned 15 working days needed by the embassy to process my application. I had previously applied for a Schengen visa for Austria, which was completed in 7 working days, and my friends who had been to France told me they got their visa decision within a week. So, I compared Luxembourg's application numbers with those of other Schengen countries. In 2023, Luxembourg received 3,090 applications from India, while Austria received 39,558, Italy received 52,332 and France received 176,237. Since Luxembourg receives far fewer applications, I expected the process to be quick. Therefore, I submitted my visa application with VFS Global in Delhi on the 5th of August, giving the embassy a month with 18 working days before my Kenya trip. However, I didn't mention my Kenya trip in the Luxembourg visa application. For reference, here is a list of documents I submitted: I submitted "flight reservations" instead of "flight tickets", because in case of visa rejection I would have lost a significant amount of money if I had booked confirmed flight tickets; the embassy also recommends the same. After the submission of documents, my fingerprints were taken. The expenses for the visa application were as follows:
Service Description    Amount (INR)
Visa Fee                      8,114
VFS Global Fee                1,763
Courier                         800
Total                        10,677
Going by the emails sent by VFS, my application reached the Luxembourg embassy the next day. Fast-forward to the 27th of August, the 14th day of my visa application. I had already booked my flight ticket to Nairobi for the 4th of September, but my passport was still with the Luxembourg embassy, and I hadn't heard back. In addition, I had also obtained Kenya's eTA and got vaccinated for Yellow Fever, a requirement to travel to Kenya. In order to check on my application status, I gave the embassy a phone call, but missed their calling window, which was easy to miss since it was only one hour, from 12:00 to 1:00 PM. So, I dropped them an email explaining my situation. At this point, I was already wondering whether to cancel the Kenya trip or the Luxembourg one, if I had to choose. After not getting a response to my email, I called them again the next day. The embassy told me they would look into it and asked me to send my flight tickets over email. One week to go before my flight now. I followed up with the embassy on the 30th by phone, and the person who picked up told me that my request had already been forwarded to the concerned department and was under process. They asked me to follow up on Monday, the 2nd of September. During the visa process, I was in touch with three other Indian attendees.1 In the meantime, I got to know that all of them had applied for a Luxembourg visa by the end of August. Back to our story: over the next two days, the embassy was closed for the weekend. I began weighing my options. On one hand, I could cancel the Kenya trip and hope that Luxembourg would go through. Even then, Luxembourg wasn't guaranteed, as the visa could get rejected, so I might have ended up missing both trips. On the other hand, I could cancel the Luxembourg visa application and at least be sure of going to Kenya. However, I thought this would make Luxembourg very unlikely, because it didn't leave 15 working days for the embassy to process my visa after returning from Kenya. I also badly wanted to attend the LibreOffice conference because I couldn't make it two years ago. Therefore, I chose not to cancel my Luxembourg visa application. I checked with my travel agent and learned that I could cancel my Nairobi flight before September 4th for a cancelation fee of approximately 7,000 INR. On the 2nd of September, I was a bit frustrated because I hadn't heard anything from the embassy regarding my request, so I called the embassy again. They assured me that they would arrange a call for me from the concerned department that day, which I did receive later that evening. During the call, they offered to return my passport via VFS the next day and asked me to resubmit it after returning from Kenya. I immediately accepted the offer and was overjoyed, as it would enable me to take my flight to Nairobi without canceling my Luxembourg visa application. However, I didn't have the offer in writing, so it wasn't clear to me how I would collect my passport from VFS. I received the answer the next day, when I was already on my way to VFS, in the form of an email from the embassy which read:
Dear Mr. Dwivedi, We acknowledge the receipt of your email. As you requested, we are returning your passport exceptionally through VFS, you can collect it directly from VFS Delhi Center between 14:00-17:00 hrs, 03 Sep 2024. Kindly bring the printout of this email along with your VFS deposit receipt and Original ID proof. Once you are back from your trip, you can redeposit the passport with VFS Luxembourg for our processing. With best regards,
Consular Section GRAND DUCHY OF LUXEMBOURG
Embassy in New Delhi
I took a printout of the email and submitted it to VFS to get my passport. This seemed like a miracle: just when I had lost all hope of making my Kenya flight and was mentally preparing to miss it, I got my passport back exceptionally, and now I had to prepare mentally for Kenya again. I had never heard of an embassy returning a passport before completing the visa process. The next day, I took my flight to Nairobi as planned. In case you are interested, I have written two blog posts on my Kenya trip: one on the OpenStreetMap conference in Nairobi and the other on my travel experience in Kenya. After returning from Kenya, I resubmitted my passport on the 17th of September. Fast-forward to the 25th of September: I hadn't heard anything from the embassy about my application. So, I checked with TDF to see whether the embassy had reached out to them. They told me they had confirmed my participation and my hotel booking to the visa authorities on the 19th of September (6 days earlier). I was wondering what was taking so long after the verification. On the 1st of October, I received a phone call from the Luxembourg embassy, which turned out to be a surprise interview. They asked me about my work, my income, how I came to know about the conference, whether I had been to Europe before, etc. The call lasted around 10 minutes. At this point, my travel date, the 8th of October, was just two working days away, as the 2nd of October was off due to Gandhi Jayanti and the 5th and 6th of October were the weekend, leaving only the 3rd and the 4th. I am not sure why the embassy saved this for the last moment, even though I had submitted my application two months earlier. I also got to know that one of the other Indian attendees missed the call due to being in their college lab, where they were not allowed to take phone calls. Therefore, I recommend that the embassy agree on a time slot for the interview call beforehand. Visa decisions for all the above-mentioned Indian attendees were sent by the embassy on the 4th of October, and I received mine on the 5th. For my travel date of the 8th of October, this was literally the last moment the embassy could send my visa. The parcel contained my passport and a letter; the visa was attached to a page in the passport. I was happy that my visa had been approved, but the timing made my task challenging. The enclosed letter stated:
Subject: Your Visa Application for Luxembourg
Dear Applicant, We would like to inform you that a Schengen visa has been granted for the 8-day duration from 08/10/2024 to 30/10/2024 for conference purposes in Luxembourg. You are requested to report back to the Embassy of Luxembourg in New Delhi through an email (email address redacted) after your return with the following documents:
  • Immigration Stamps (Entry and Exit of Schengen Area)
  • Restaurant Bills
  • Shopping/Hotel/Accommodation bills
Failure to report to the Embassy after your return will be taken into consideration for any further visa applications.
I understand the embassy wanting to ensure my entry into and exit from the Schengen area during the visa validity period, but I found the demand for shopping bills excessive. Further, not everyone was as lucky as I was: it took a couple of days more for one of the Indian attendees to receive their visa, delaying their plans. Another attendee had to send their father to the VFS center to collect their visa in time, rather than wait for the courier to arrive at their home. Foreign travel is complicated, especially for citizens of countries whose passports and currencies are weak. Embassies issuing visas a day before the travel date doesn't help. For starters, a last-minute visa does not leave enough time for obtaining a forex card, as banks ask for the visa. Further, getting foreign currency (Euros in our case) in cash at a good exchange rate becomes difficult. As an example, for the Kenya trip I had to get US Dollars at the airport because the plan was finalized at the last moment, worsening the exchange rate. Back to the current case: flight prices went up significantly compared to September, almost doubling, and the choice of airlines narrowed, as most of the flights were booked by the time I received my visa. With all that said, I think it was still better than an arbitrary rejection. Credits: Contrapunctus, Badri, Fletcher, Benson, and Anirudh for helping with the draft of this post.

  1. Thanks to Sophie, our point of contact for the conference, for putting me in touch with them.

19 January 2025

François Marier: Blocking comment spammers on an Ikiwiki blog

Despite comments on my ikiwiki blog being fully moderated, spammers have been increasingly posting link spam comments on my blog. While I used to use the blogspam plugin, the underlying service was likely retired circa 2017 and its public repositories are all archived. It turns out that there is a relatively simple way to drastically reduce the amount of spam submitted to the moderation queue: ban the datacentre IP addresses that spammers are using.

Looking up AS numbers

It all starts by looking at the IP address of a submitted comment. From there, we can look it up using whois:
$ whois -r 2a0b:7140:1:1:5054:ff:fe66:85c5
% This is the RIPE Database query service.
% The objects are in RPSL format.
%
% The RIPE Database is subject to Terms and Conditions.
% See https://docs.db.ripe.net/terms-conditions.html
% Note: this output has been filtered.
%       To receive output for a database update, use the "-B" flag.
% Information related to '2a0b:7140:1::/48'
% Abuse contact for '2a0b:7140:1::/48' is 'abuse@servinga.com'
inet6num:       2a0b:7140:1::/48
netname:        EE-SERVINGA-2022083002
descr:          servinga.com - Estonia
geoloc:         59.4424455 24.7442221
country:        EE
org:            ORG-SG262-RIPE
mnt-domains:    HANNASKE-MNT
admin-c:        CL8090-RIPE
tech-c:         CL8090-RIPE
status:         ASSIGNED
mnt-by:         MNT-SERVINGA
created:        2020-02-18T11:12:49Z
last-modified:  2024-12-04T12:07:26Z
source:         RIPE
% Information related to '2a0b:7140:1::/48AS207408'
route6:         2a0b:7140:1::/48
descr:          servinga.com - Estonia
origin:         AS207408
mnt-by:         MNT-SERVINGA
created:        2020-02-18T11:18:11Z
last-modified:  2024-12-11T23:09:19Z
source:         RIPE
% This query was served by the RIPE Database Query Service version 1.114 (SHETLAND)
The important bit here is this line:
origin:         AS207408
which refers to Autonomous System 207408, owned by a hosting company in Germany called Servinga.

Looking up IP blocks

Autonomous Systems are essentially organizations to which IPv4 and IPv6 blocks have been allocated. These allocations can be looked up easily on the command line, either using a third-party service:
$ curl -sL https://ip.guide/as207408 | jq .routes.v4 >> servinga
$ curl -sL https://ip.guide/as207408 | jq .routes.v6 >> servinga
or a local database downloaded from IPtoASN. This is what I ended up with in the case of Servinga:
[
  "45.11.183.0/24",
  "80.77.25.0/24",
  "194.76.227.0/24"
]
[
  "2a0b:7140:1::/48"
]

Preventing comment submission

While I do want to eliminate this source of spam, I don't want to block these datacentre IP addresses outright since legitimate users could be using these servers as VPN endpoints or crawlers. I therefore added the following to my Apache config to restrict the CGI endpoint (used only for write operations such as commenting):
<Location /blog.cgi>
        Include /etc/apache2/spammers.include
        Options +ExecCGI
        AddHandler cgi-script .cgi
</Location>
and then put the following in /etc/apache2/spammers.include:
<RequireAll>
    Require all granted
    # https://ipinfo.io/AS207408
    Require not ip 45.11.183.0/24
    Require not ip 80.77.25.0/24
    Require not ip 194.76.227.0/24
    Require not ip 2a0b:7140:1::/48
</RequireAll>
Finally, I can restart the website and commit my changes:
$ apache2ctl configtest && systemctl restart apache2.service
$ git commit -a -m "Ban all IP blocks from Servinga"

Future improvements

I will likely automate this process in the future, but at the moment my blog can go for a week without a single spam message (down from dozens every day). It's possible that I've already cut off the worst offenders. I have published the list I am currently using.
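For what it's worth, a minimal sketch of what that automation could look like, reusing the ip.guide endpoint and jq queries from above (the AS list, output path and restart step are assumptions to adapt):

#!/bin/sh
# regenerate the Apache include from a list of AS numbers
set -e
OUT=/etc/apache2/spammers.include
{
    echo "<RequireAll>"
    echo "    Require all granted"
    for asn in 207408; do
        echo "    # https://ipinfo.io/AS$asn"
        curl -sL "https://ip.guide/as$asn" \
          | jq -r '(.routes.v4 + .routes.v6)[] | "    Require not ip " + .'
    done
    echo "</RequireAll>"
} > "$OUT"
apache2ctl configtest && systemctl restart apache2.service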

Petter Reinholdtsen: 121 packages in Debian mapped to hardware for automatic recommendation

For some years now, I have been working on an automatic hardware-based package recommendation system for Debian and other Linux distributions. The isenkram system I started on back in 2013 now consists of two subsystems: one locating firmware files using the information provided by apt-file, and one matching hardware to packages using information provided by AppStream. The former is very similar to the mechanism implemented in debian-installer to pick the right firmware packages to install. This post is about the latter system. Thanks to steady progress and good help from both other Debian and upstream developers, I am happy to report that the isenkram system is now able to recommend 121 packages using information provided via AppStream. The mapping is done using modalias information provided by the kernel, the same information used by udev when creating device files, and by the kernel when deciding which kernel modules to load. To get all the modalias identifiers relevant for your machine, you can run the following command on the command line:
find /sys/devices -name modalias -print0 | xargs -0 sort -u
The modalias identifiers can look something like this:
acpi:PNP0000
cpu:type:x86,ven0000fam0006mod003F:feature:,0000,0001,0002,0003,0004,0005,0006,0007,0008,0009,000B,000C,000D,000E,000F,0010,0011,0013,0015,0016,0017,0018,0019,001A,001B,001C,001D,001F,002B,0034,003A,003B,003D,0068,006B,006C,006D,006F,0070,0072,0074,0075,0076,0078,0079,007C,0080,0081,0082,0083,0084,0085,0086,0087,0088,0089,008B,008C,008D,008E,008F,0091,0092,0093,0094,0095,0096,0097,0098,0099,009A,009B,009C,009D,009E,00C0,00C5,00E1,00E3,00EB,00ED,00F0,00F1,00F3,00F5,00F6,00F9,00FA,00FB,00FD,00FF,0100,0101,0102,0103,0111,0120,0121,0123,0125,0127,0128,0129,012A,012C,012D,0140,0160,0161,0165,016C,017B,01C0,01C1,01C2,01C4,01C5,01C6,01F9,024A,025A,025B,025C,025F,0282
dmi:bvnDellInc.:bvr2.18.1:bd08/14/2023:br2.18:svnDellInc.:pnPowerEdgeR730:pvr:rvnDellInc.:rn0H21J3:rvrA09:cvnDellInc.:ct23:cvr:skuSKU=NotProvided
pci:v00008086d00008D3Bsv00001028sd00000600bc07sc80i00
platform:serial8250
scsi:t-0x05
usb:v413CpA001d0000dc09dsc00dp00ic09isc00ip00in00
The entries above are a selection of the complete set available on a Dell PowerEdge R730 machine I have access to, to give an idea about the various styles of hardware identifiers presented in the modalias format. When looking up relevant packages in a Debian Testing installation on the same R730, I get this list of packages proposed:
% sudo isenkram-lookup
firmware-bnx2x
firmware-nvidia-graphics
firmware-qlogic
megactl
wsl
%
The list consists of firmware packages requested by kernel modules, as well as packages with programs to get the status of the RAID controller and to maintain the LAN console. When the edac-utils package, which provides tools to check the ECC RAM status, enters testing in a few days, it will also show up as a proposal from isenkram. In addition, once the mfiutil package we uploaded in October gets past NEW processing, it will also propose a tool to configure the RAID controller. Another example is the trusty old Lenovo Thinkpad X230, which has hardware handled by several packages in the archive. This one is running Debian Stable:
% isenkram-lookup 
beignet-opencl-icd
bluez
cheese
ethtool
firmware-iwlwifi
firmware-misc-nonfree
fprintd
fprintd-demo
gkrellm-thinkbat
hdapsd
libpam-fprintd
pidgin-blinklight
thinkfan
tlp
tp-smapi-dkms
tpb
%
Here the proposal consists of software to handle the camera, bluetooth, network card, wifi card, GPU, fan, fingerprint reader and acceleration sensor on the machine. Here is the complete set of packages currently providing hardware mapping via AppStream in Debian Unstable: air-quality-sensor, alsa-firmware-loaders, antpm, array-info, avarice, avrdude, bmusb-v4l2proxy, brltty, calibre, colorhug-client, concordance-common, consolekit, dahdi-firmware-nonfree, dahdi-linux, edac-utils, eegdev-plugins-free, ekeyd, elogind, firmware-amd-graphics, firmware-ath9k-htc, firmware-atheros, firmware-b43-installer, firmware-b43legacy-installer, firmware-bnx2, firmware-bnx2x, firmware-brcm80211, firmware-carl9170, firmware-cavium, firmware-intel-graphics, firmware-intel-misc, firmware-ipw2x00, firmware-ivtv, firmware-iwlwifi, firmware-libertas, firmware-linux-free, firmware-mediatek, firmware-misc-nonfree, firmware-myricom, firmware-netronome, firmware-netxen, firmware-nvidia-graphics, firmware-qcom-soc, firmware-qlogic, firmware-realtek, firmware-ti-connectivity, fpga-icestorm, g810-led, galileo, garmin-forerunner-tools, gkrellm-thinkbat, goldencheetah, gpsman, gpstrans, gqrx-sdr, i8kutils, imsprog, ledger-wallets-udev, libairspy0, libam7xxx0.1, libbladerf2, libgphoto2-6t64, libhamlib-utils, libm2k0.9.0, libmirisdr4, libnxt, libopenxr1-monado, libosmosdr0, librem5-flash-image, librtlsdr0, libticables2-8, libx52pro0, libykpers-1-1, libyubikey-udev, limesuite, linuxcnc-uspace, lomoco, madwimax, media-player-info, megactl, mixxx, mkgmap, msi-keyboard, mu-editor, mustang-plug, nbc, nitrokey-app, nqc, ola, openfpgaloader, openocd, openrazer-driver-dkms, pcmciautils, pcscd, pidgin-blinklight, ponyprog, printer-driver-splix, python-yubico-tools, python3-btchip, qlcplus, rosegarden, scdaemon, sispmctl, solaar, spectools, sunxi-tools, t2n, thinkfan, tlp, tp-smapi-dkms, trezor, tucnak, ubertooth, usbrelay, uuu, viking, w1retap, wsl, xawtv, xinput-calibrator, xserver-xorg-input-wacom and xtrx-dkms. In addition to these, there are several with patches pending in the Debian bug tracking system, and even more where no one has written patches yet. Good candidates for the latter are packages with udev rules but no AppStream hardware information. The isenkram system consists of two packages: isenkram-cli with the command line tools, and isenkram with a GUI background process. The latter will listen for D-Bus events from udev, emitted when new hardware becomes available (like when inserting a USB dongle or discovering a new bluetooth device), look up the modalias entry for this piece of hardware in AppStream (and a hard-coded list of mappings from isenkram; I am currently working hard to move this list to AppStream), and pop up a dialog proposing to install any not-already-installed packages supporting this hardware. It works very well today when inserting the LEGO Mindstorms RCX, NXT and EV3 controllers. :) If you want to make sure more hardware related packages get recommended, please help out fixing the remaining packages in Debian to provide AppStream metadata with hardware mappings, as in the sketch below. As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
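To give an idea of what such a mapping looks like, here is a trimmed, hypothetical metainfo sketch (the component id, names and modalias pattern are made up for illustration); AppStream accepts modalias entries, including wildcards, in the provides element:

<?xml version="1.0" encoding="UTF-8"?>
<component>
  <id>org.example.fancy-tool</id>
  <name>Fancy Tool</name>
  <summary>Userspace tool for the Fancy USB gadget</summary>
  <provides>
    <!-- matched against the kernel-provided modalias strings shown above -->
    <modalias>usb:v413CpA001d*</modalias>
  </provides>
</component>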

18 January 2025

Petter Reinholdtsen: What is the most supported MIME type in Debian in 2025?

Seven and twelve years ago, I measured what the most supported MIME type in Debian was, first by analysing the desktop files in all packages in the archive, then by analysing the DEP-11 AppStream data set. I guess it is time to repeat the measurement, only for unstable as last time. Debian Unstable:
  count MIME type
  ----- -----------------------
     63 image/png
     63 image/jpeg
     57 image/tiff
     54 image/gif
     51 image/bmp
     50 audio/mpeg
     48 text/plain
     42 audio/x-mp3
     40 application/ogg
     39 audio/x-wav
     39 audio/x-flac
     36 audio/x-vorbis+ogg
     35 audio/x-mpeg
     34 audio/x-mpegurl
     34 audio/ogg
     33 application/x-ogg
     32 audio/mp4
     31 audio/x-scpls
     31 application/pdf
     29 audio/x-ms-wma
The list was created like this using a sid chroot:
cat /var/lib/apt/lists/*sid*_dep11_Components-amd64.yml.gz | \
  zcat | awk '/^  - \S+\/\S+$/ { print $2 }' | sort | \
  uniq -c | sort -nr | head -20
It is nice to see that the same number of packages now support PNG and JPEG. Last time JPEG had more support than PNG. Most of the MIME types are known to me, but I have no idea what the 'audio/x-scpls' one represents, except that it is an audio format. To find the packages claiming support for this format, the appstreamcli command from the appstream package can be used:
% appstreamcli what-provides mediatype audio/x-scpls | grep Package: | sort -u
Package: alsaplayer-common
Package: amarok
Package: audacious
Package: brasero
Package: celluloid
Package: clapper
Package: clementine
Package: cynthiune.app
Package: elisa
Package: gtranscribe
Package: kaffeine
Package: kmplayer
Package: kylin-burner
Package: lollypop
Package: mediaconch-gui
Package: mediainfo-gui
Package: mplayer-gui
Package: mpv
Package: mystiq
Package: parlatype
Package: parole
Package: pragha
Package: qmmp
Package: rhythmbox
Package: sayonara
Package: shotcut
Package: smplayer
Package: soundconverter
Package: strawberry
Package: syncplay
Package: vlc
%
Looks like several video and audio tools understand the format. Similarly, one can check the number of packages supporting the STL format commonly used for 3D printing:
% appstreamcli what-provides mediatype model/stl | grep Package: | sort -u
Package: cura
Package: freecad
Package: open3d-viewer
%
How strange that the slic3r and prusa-slicer packages do not support STL. Perhaps they are just missing package metadata? Luckily the amount of package metadata in Debian is getting better, and hopefully this way of locating relevant packages for any file format will soon be the preferred one. As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Dominique Dumont: How we solved storage API throttling on our Azure Kubernetes clusters

Hi. This issue was quite puzzling, so I'm sharing how we investigated it; I hope it can be useful for you. My client informed me that he was no longer able to install new instances of his application. k9s showed that only some pods could not be created: the ones that created physical volumes (PV). The description of these pods showed an HTTP error 429 when creating pods: new PVCs could not be created because we were being throttled by the Azure storage API. This was confirmed by the Azure diagnostic console on Kubernetes (menu Diagnose and solve problems → Cluster and Control Plane Availability and Performance → Azure Resource Request Throttling). We had a lot of throttling:
[Graph: Azure resource request throttles on the cluster]
Which were explained by the high call rate:
[Graph: API call rate on the cluster]
The first clue was found at the bottom of Azure diagnostic page:
[Screenshot: throttles broken down by user agent]
According to this page, the throttling applies to services whose user agent is:
Go/go1.23.1 (amd64-linux) go-autorest/v14.2.1 Azure-SDK-For-Go/v68.0.0
storage/2021-09-01microsoft.com/aks-operat azsdk-go-armcompute/v1.0.0 (go1.22.3; linux)
The main information is Azure-SDK-For-Go, which means the program making all these calls to the storage API is written in Go. All our services are written in Typescript or Rust, so they are not suspects. That leaves the controllers running in the kube-system namespace. I could not find anything suspicious in the logs of these services. At that point I was convinced that a component in the Kubernetes control plane was making all those calls. Unfortunately, AKS is managed by Microsoft and I don't have access to the control plane logs. However, we realized that we had quite a lot of volumesnapshots created in our clusters by k8s-scheduled-volume-snapshotter. We suspected that the Kubernetes reconciliation loop was being throttled when checking the status of all these snapshots. Maybe so, but we also had the same issues and throttle rates on preprod and prod, where the numbers of snapshots were quite different. We tried to get more information using the Azure console on our snapshot account, but it was also broken by the throttling issue. We were so puzzled that we decided to try Léodagan's advice ("tout cramer pour repartir sur des bases saines", loosely translated as "burn everything down to start from scratch") and destroyed our dev cluster piece by piece while checking whether the throttling stopped. First, we removed all our applications: no change. Then all ancillary components like rabbitmq and cert-manager were removed: no change. Then we tried to remove the namespace containing our applications, but we faced another issue: Kubernetes was unable to remove the namespace because it could not destroy some PVCs and volumesnapshots. That was actually good news, because it meant that we were close to the actual issue. We managed to destroy the PVCs and volumesnapshots by removing their finalizers. Finalizers are a kind of marker that tells Kubernetes that something needs to be done before actually deleting a resource. The finalizers were removed with a command like:
kubectl patch volumesnapshots ${volumesnapshot} \
  -p '{"metadata":{"finalizers":null}}' --type merge
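A sketch for doing this in bulk over every volumesnapshot in a namespace (the namespace name is an assumption to adapt):

kubectl get volumesnapshots -n my-namespace -o name \
  | xargs -r -I{} kubectl patch {} -n my-namespace \
      -p '{"metadata":{"finalizers":null}}' --type merge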
Then we made our first progress: the throttling and high call rate stopped on our dev cluster. To make sure that the snapshots were the issue, we re-installed the ancillary components and our applications. Everything was copacetic. So the problem was indeed with PVCs and snapshots. Even though we have backups outside of Azure, we weren't really thrilled about trying Léodagan's method on our prod cluster, so we looked for a better fix to try on our preprod cluster. Poking around in PVCs and volumesnapshots, I finally found this error message in the description of a volumesnapshotcontent:
Code="ShareSnapshotCountExceeded" Message="The total number of snapshots
for the share is over the limit."
The number of snapshots found in our cluster was not that high, so I wanted to check the snapshots present in our storage account using the Azure console, which was still broken. Fortunately, the Azure CLI is able to retry HTTP calls when getting 429 errors. I managed to get a list of snapshots with:
az storage share list --account-name [redacted] --include-snapshots \
    | tee preprod-list.json
There, I found a lot of snapshots dating back to 2024. These were no longer managed by Kubernetes and should have been cleaned up. That was our smoking gun. I guess that we had a chain of events like the following. To make things worse, k8s-scheduled-volume-snapshotter creates new snapshots when it cannot list the old ones, so we had 4 new snapshots per day instead of one. Since we had identified the chain of events, fixing the issue was not too difficult (but quite long):
  1. stop k8s-scheduled-volume-snapshotter by disabling its cron job
  2. delete all volumesnapshots and volumesnapshotcontents from k8s.
  3. since the Azure API was throttled, we also had to remove their finalizers
  4. delete all snapshots from Azure using the az command and a Perl script (this step took several hours; a sketch of the idea follows this list)
  5. re-enable k8s-scheduled-volume-snapshotter
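Here is a sketch of the idea behind step 4, in shell rather than Perl. It assumes the list entries carry a snapshot timestamp field and that az storage share delete accepts it via --snapshot; verify both against your CLI version before relying on it:

az storage share list --account-name [redacted] --include-snapshots \
  | jq -r '.[] | select(.snapshot != null) | [.name, .snapshot] | @tsv' \
  | while IFS="$(printf '\t')" read -r share snap; do
      # delete one specific snapshot of the share, not the live share itself
      az storage share delete --account-name [redacted] \
        --name "$share" --snapshot "$snap"
    done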
After these steps, preprod was back to normal. I'm now applying the same recipe on prod. We still don't know why we had all these stale snapshots; it may have been a human error or a bug in k8s-scheduled-volume-snapshotter. Anyway, to avoid this problem in the future, we will: My name is Dominique Dumont, I'm a freelance devops. You can find the devops and audit services I offer on my website, or reach out to me on LinkedIn. All the best.

17 January 2025

Russell Coker: Systemd Hardening and Sending Mail

A feature of systemd is the ability to reduce the access that daemons have to the system. The restrictions include access to certain directories, system calls, capabilities, and more. The systemd.exec(5) man page describes them all [1]. To see an overview of the security of daemons, run "systemd-analyze security"; to get details of one particular daemon, run a command like "systemd-analyze security mon.service". I created a Debian wiki page for a systemd-analyze security goal [2]. At this time release goals aren't a serious thing for Debian so this won't result in release critical bug reports, but it is still something we can aim for. For a simple daemon (e.g. BIND, dhcpd, and syslogd) this isn't difficult to do. It might be difficult to understand the implications of some changes (especially when restricting system calls) but you can do some quick tests. The functionality of such programs has a limited scope and once you get it basically working it's done. For some daemons it's harder. Network-Manager is one of the well known slightly more difficult cases as it can do things like starting a VPN connection. The larger scope and the use of plugins make it difficult to test the combinations. The systemd restrictions apply to child processes too, unlike restrictions by SE Linux and AppArmor which permit a child process to run in a different security context. The messages when a daemon fails due to systemd restrictions are usually unclear, which makes things harder to set up and makes it more important to get it right. My mon package (which I forked upstream as etbe-mon [3]) is one of the difficult daemons, as local tests can involve probing large parts of the system. But I have got that working reasonably well for most cases. I have a bug report about running mon with Exim [4]. The problem with this is that Exim has a single process model, which means that the process doing local delivery can be a child of the process that initially received the message. So the main mon process needs all the access for delivering mail (writing to /home etc). This also means that every other child of mon will get such access, including programs that receive untrusted data from the Internet. Most of the extra access needed by Exim is not a problem, but /home access is a potential risk. It also means that more effort is needed when reviewing the access control. The problem with this Exim design is that it applies to many daemons: every daemon that sends email, or that potentially could send email in some configuration, needs extra access to be granted. Can Exim be configured to have its sendmail -T type operation just write a file in a spool directory for another program to process? Do we need to grant permissions to most of the system just for Exim?
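To make this concrete, here is a sketch of a drop-in override for a hypothetical service; every directive below is documented in systemd.exec(5), but which ones a given daemon tolerates has to be found by testing, as described above:

# /etc/systemd/system/mon.service.d/hardening.conf (illustrative sketch only)
[Service]
NoNewPrivileges=yes
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
# keep raw sockets for ping-style probes, drop all other capabilities
CapabilityBoundingSet=CAP_NET_RAW
SystemCallFilter=@system-service

Note that these restrictions apply to children too, so with Exim's single-process model the ProtectHome=yes line above is exactly the sort of setting that local mail delivery would force one to relax.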

15 January 2025

Thomas Lange: FAI 6.2.5 and new ISO available

The new year starts with an FAI release. FAI 6.2.5 is available and contains many small improvements. A new feature is that the command fai-cd can now create ISOs for the ARM64 architecture. The FAIme service uses the newest FAI version and the most recent Debian point release, 12.9. The FAI CD images were also updated. The Debian packages of FAI 6.2.5 are available for Debian stable (aka bookworm) via the FAI repository by adding this line to sources.list:
deb https://fai-project.org/download bookworm koeln
Using the tool extrepo, you can also add the FAI repository to your host:
# extrepo enable fai
FAI 6.2.5 will soon be available in Debian testing via the official Debian mirrors.

13 January 2025

Kentaro Hayashi: Stick to boot from 6.11 linux-image

Since Dec 2024, there has been a compatibility issue between linux-image 6.12 and nvidia-driver 535.216.03. #1089513 - nvidia-driver: crash in drm_open_helper on Linux 6.12.3 - Debian Bug report logs. It seems that upstream has already fixed this issue in a newer release, but it is not available yet on Debian sid. If you keep linux-image-amd64 or linux-headers-amd64 installed, the machine will boot from 6.12 by default. Surely it will boot, but it still has the resolution issue, so it can't be your daily driver. Thus the workaround is to stick to booting from the 6.11 linux-image. As usual, the older image is listed in the "Advanced options for ..." submenu, so you need to explicitly choose the 6.11 image during booting. In such a case, changing the default boot image is useful. (Another option is to just purge all 6.12 linux-image packages, linux-image-amd64 and linux-headers-amd64, but that is out of scope for this article.) Look at /boot/grub/grub.cfg and collect each menuentry_id_option: you need the id of the "Advanced options" submenu [1] and the id of the 6.11 menuentry [2]. Then edit the GRUB_DEFAULT entry in /etc/default/grub. GRUB_DEFAULT should be the combination of [1] and [2], concatenated with ">", e.g. (NOTE: the actual value might vary on your environment):
GRUB_DEFAULT="gnulinux-advanced-2e0e7dd0-7e10-4b58-92f7-518dabb5b547>gnulinux-6.11.10-amd64-advanced-2e0e7dd0-7e10-4b58-92f7-518dabb5b547"
After that, update the grub configuration with sudo update-grub; /boot/grub/grub.cfg will then be updated correctly.
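To collect the menuentry ids mentioned above, a one-liner like the following can help (a sketch assuming GNU grep, for its -P option):

grep -oP "menuentry_id_option '\K[^']+" /boot/grub/grub.cfg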

Freexian Collaborators: Monthly report about Debian Long Term Support, December 2024 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian's Debian LTS offering.

Debian LTS contributors

In December, 19 contributors were paid to work on Debian LTS; their reports are available:
  • Abhijith PA did 14.0h (out of 14.0h assigned).
  • Adrian Bunk did 47.75h (out of 53.0h assigned and 47.0h from previous period), thus carrying over 52.25h to the next month.
  • Andrej Shadura did 6.0h (out of 17.0h assigned and -7.0h from previous period after hours given back), thus carrying over 4.0h to the next month.
  • Bastien Roucariès did 22.0h (out of 22.0h assigned).
  • Ben Hutchings did 15.0h (out of 0.0h assigned and 18.0h from previous period), thus carrying over 3.0h to the next month.
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Daniel Leidert did 23.0h (out of 17.0h assigned and 9.0h from previous period), thus carrying over 3.0h to the next month.
  • Emilio Pozuelo Monfort did 32.25h (out of 40.5h assigned and 19.5h from previous period), thus carrying over 27.75h to the next month.
  • Guilhem Moulin did 22.5h (out of 9.75h assigned and 12.75h from previous period).
  • Jochen Sprickerhof did 2.0h (out of 3.5h assigned and 6.5h from previous period), thus carrying over 8.0h to the next month.
  • Lee Garrett did 8.5h (out of 14.75h assigned and 45.25h from previous period), thus carrying over 51.5h to the next month.
  • Lucas Kanashiro did 32.0h (out of 10.0h assigned and 54.0h from previous period), thus carrying over 32.0h to the next month.
  • Markus Koschany did 40.0h (out of 20.0h assigned and 20.0h from previous period).
  • Roberto C. Sánchez did 13.5h (out of 6.75h assigned and 17.25h from previous period), thus carrying over 10.5h to the next month.
  • Santiago Ruano Rincón did 18.75h (out of 24.75h assigned and 0.25h from previous period), thus carrying over 6.25h to the next month.
  • Sean Whitton did 6.0h (out of 2.0h assigned and 4.0h from previous period).
  • Sylvain Beucler did 10.5h (out of 21.5h assigned and 38.5h from previous period), thus carrying over 49.5h to the next month.
  • Thorsten Alteholz did 11.0h (out of 11.0h assigned).
  • Tobias Frost did 12.0h (out of 12.0h assigned).

Evolution of the situation

In December, we released 29 DLAs. The LTS Team has published updates to several notable packages. Contributor Guilhem Moulin published an update of php7.4, a widely-used open source general purpose scripting language, which addressed denial of service, authorization bypass, and information disclosure vulnerabilities. Contributor Lucas Kanashiro published an update of clamav, an antivirus toolkit for Unix and Linux, which addressed denial of service and authorization bypass vulnerabilities. Finally, contributor Tobias Frost published an update of intel-microcode, the microcode for Intel microprocessors, which will help to ensure that processor hardware is protected against several local privilege escalation and local denial of service vulnerabilities. Beyond our customary LTS package updates, the LTS Team has made contributions to Debian's stable bookworm release and its experimental section. Notably, contributor Lee Garrett published a stable update of dnsmasq. The LTS update was previously published in November, and in December Lee continued working to bring the same fixes (addressing the high profile KeyTrap and NSEC3 vulnerabilities) to the dnsmasq package in Debian bookworm. This package was accepted for inclusion in the Debian 12.9 point release scheduled for January 2025. Additionally, contributor Sean Whitton provided assistance, via upload sponsorships, to the Debian maintainers of xen. This assistance resulted in two uploads of xen into Debian's experimental section, which will contribute to the next Debian stable release having a version of xen with better long-term support from the upstream development team.

Thanks to our sponsors

Sponsors that joined recently are in bold.

12 January 2025

Divine Attah-Ohiemi: My 30-Day Outreachy Experience with the Debian Community

Hey everyone! It's Divine Attah-Ohiemi here, and I'm excited to share what I've been up to in my internship with the Debian community. It's been a month since I began this journey, and if you're thinking about applying for Outreachy, let me give you a glimpse into my project and the amazing people I get to work with. So, what's it like in the Debian community? It's a fantastic mix of folks from all walks of life: seasoned developers, curious newbies, and everyone in between. What really stands out is how welcoming everyone is. I'm especially thankful to my mentors, Thomas Lange, Carsten Schoenert, and Subin Siby, for their guidance and for always clocking in whenever I have questions. It feels like a big family where you can share your ideas and learn from each other. The commitment to diversity and merit is palpable, making it a great place for anyone eager to jump in and contribute. Now, onto the project! We're working on improving the Debian website by switching from WML (Web Meta Language) to Hugo, a modern static site generator. This change doesn't just make the site faster; it significantly reduces the time it takes to build compared to WML. Plus, it makes it way easier for non-developers to contribute and add pages since the content is built from Markdown files. It's all about enhancing the experience for both new and existing users. My role involves developing a proof of concept for this transition. I'm migrating existing pages while ensuring that old links still work, so users won't run into dead ends. It's a bit of a juggling act, but knowing that my work is helping to make Debian more accessible is incredibly rewarding. What gets me most excited is the chance to contribute to a project that's been around for over 20 years! It's an honor to be part of something so significant and to help shape its future. How cool is it to know that what I'm doing will impact users around the globe? In the past month, I've learned a bunch of new things. For instance, I've been diving into Apache's mod_rewrite to automatically map old multilingual URLs to new ones. This is important since Hugo handles localization differently than WML. I've also been figuring out how to set up 301 redirects to prevent dead links, which is crucial for a smooth user experience. One of the more confusing parts has been using GNU Make to manage Perl scripts for dynamic pages. It's a bit of a learning curve, but I'm tackling it head-on. Each challenge is a chance to grow, and I'm here for it! If you're considering applying to the Debian community through Outreachy, I say go for it! There's so much to learn and experience, and you'll be welcomed with open arms. Happy coding, everyone!
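To illustrate the mod_rewrite idea described above, here is a hypothetical rule (the URL patterns are invented for the example and are not taken from the actual Debian site configuration):

# map an old WML-style per-language page to a language-prefixed Hugo path
# e.g. /intro/about.de.html -> /de/intro/about/ with a permanent redirect
RewriteEngine on
RewriteRule ^/intro/about\.([a-z]{2})\.html$ /$1/intro/about/ [R=301,L]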

Dirk Eddelbuettel: Rcpp 1.0.14 on CRAN: Regular Semi-Annual Update

The Rcpp Core Team is once again thrilled, pleased, and chuffed (am I doing this right for LinkedIn?) to announce a new release (now at 1.0.14) of the Rcpp package. It arrived on CRAN earlier today, and has since been uploaded to Debian. Windows and macOS builds should appear at CRAN in the next few days, as will builds in different Linux distributions, and of course r2u should catch up tomorrow too. The release was only uploaded yesterday, and as always got flagged because of the grandfathered .Call(symbol) as well as for the URL to the Rcpp book (which has remained unchanged for years) "failing". My email reply was promptly dealt with under European morning hours, and by the time I got up the submission was in state "waiting over" a single reverse-dependency failure, which is also spurious, appears on some systems and not others, and is also not new. Imagine that: nearly 3000 reverse dependencies and only one (spurious) change for the worse. Solid testing seems to help. My thanks as always to CRAN for responding promptly. This release continues with the six-month January-July cycle started with release 1.0.5 in July 2020. This time we also needed a one-off hotfix release 1.0.13-1: we had (accidentally) conditioned an upcoming R change on 4.5.0, but it already came with 4.4.2 so we needed to adjust our code. As a reminder, we do of course make interim snapshot dev or rc releases available via the Rcpp drat repo as well as the r-universe page and repo, and strongly encourage their use and testing; I run my systems with these versions, which tend to work just as well, and are also fully tested against all reverse dependencies. Rcpp has long established itself as the most popular way of enhancing R with C or C++ code. Right now, 2977 packages on CRAN depend on Rcpp for making analytical code go faster and further. On CRAN, 13.6% of all packages depend (directly) on Rcpp, and 60.8% of all compiled packages do. From the cloud mirror of CRAN (which is but a subset of all CRAN downloads), Rcpp has been downloaded 93.7 million times. The two published papers (also included in the package as preprint vignettes) have, respectively, 1947 (JSS, 2011) and 354 (TAS, 2018) citations, while the book (Springer useR!, 2013) has another 676. This release is primarily incremental as usual, generally preserving existing capabilities faithfully while smoothing our corners and / or extending slightly, sometimes in response to changing and tightened demands from CRAN or R standards. The move towards a more standardized approach for the C API of R once again led to a few changes; Kevin once again did most of these PRs. Other contributed PRs include Gábor permitting builds on yet another BSD variant, Simon Guest correcting sourceCpp() to work on read-only files, Marco Colombo correcting a (surprisingly large) number of vignette typos, and Iñaki rebuilding some documentation files that tickled (false) alerts; I took care of a number of other maintenance items along the way. The full list below details all changes, their respective PRs and, if applicable, issue tickets. Big thanks from all of us to all contributors!

Changes in Rcpp release version 1.0.14 (2025-01-11)
  • Changes in Rcpp API:
    • Support for user-defined databases has been removed (Kevin in #1314 fixing #1313)
    • The SET_TYPEOF function and macro is no longer used (Kevin in #1315 fixing #1312)
    • An erroneous cast to int affecting large return objects has been removed (Dirk in #1335 fixing #1334)
    • Compilation on DragonFlyBSD is now supported (Gábor Csárdi in #1338)
    • Use read-only VECTOR_PTR and STRING_PTR only with R 4.5.0 or later (Kevin in #1342 fixing #1341)
  • Changes in Rcpp Attributes:
    • The sourceCpp() function can now handle input files with read-only modes (Simon Guest in #1346 fixing #1345)
  • Changes in Rcpp Deployment:
    • One unit test for arm64 macOS has been adjusted; a macOS continuous integration runner was added (Dirk in #1324)
    • Authors@R is now used in DESCRIPTION as mandated by CRAN, the Rcpp.package.skeleton() function also creates it (Dirk in #1325 and #1327)
    • A single datetime format test has been adjusted to match a change in R-devel (Dirk in #1348 fixing #1347)
  • Changes in Rcpp Documentation:
    • The Rcpp Modules vignette was extended slightly following #1322 (Dirk)
    • Pdf vignettes have been regenerated under Ghostscript 10.03.1 to avoid a false positive by a Windows virus scanner (Iñaki in #1331)
    • A (large) number of (old) typos have been corrected in the vignettes (Marco Colombo in #1344)

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments etc. should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues).

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Bastian Venthur: Investigating the popularity of Python build backends over time (II)

Last year, I analyzed the popularity of build backends used in pyproject.toml files over time. This post is the update for 2024.

Analysis

Like last year, I'm using Tom Forbes' fantastic dataset containing information about every file within every release uploaded to PyPI. To get the current dataset, I followed the same process as in last year's analysis, so I won't repeat all the details here and will just highlight the main steps. Downloading all the parquet files took roughly a week due to GitHub's rate limiting. Tom suggested leveraging the Git v2 protocol to fetch the data directly. This approach could bypass rate limiting and complete the download of all pyproject.toml files in just 20 minutes(!). However, I couldn't find sufficient documentation that would help me implement this method, so it will have to wait until next year's analysis. Once all the data is downloaded, I perform some preprocessing.

Results

I modified the plots a bit from last year to make them easier to read. Most notably, I binned the data into quarters to make the plots less noisy, and secondly, I stopped stacking the relative distribution plots to make the percentages directly readable. The first plot shows the absolute number of uploads (in thousands) by quarter and build backend. [Plot: absolute distribution of build backends by quarter] The second plot shows the relative distribution of build backends by quarter. [Plot: relative distribution of build backends by quarter] In 2024, we observe that: The script for downloading and analyzing the data is available in my GitHub repository. If someone has insights or examples on implementing the Git v2 protocol to download the pyproject.toml file given the repository URL and its hash, I'd love to hear from you!
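For readers wondering where the backend is declared: it is the build-backend key in the [build-system] table of pyproject.toml. A minimal sketch for inspecting a single file, assuming Python 3.11+ for the stdlib tomllib module (this is not the analysis script itself):

python3 - pyproject.toml <<'EOF'
import sys, tomllib
# read the declared build backend, if any
with open(sys.argv[1], "rb") as f:
    data = tomllib.load(f)
print(data.get("build-system", {}).get("build-backend", "(none declared)"))
EOF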

10 January 2025

Valhalla's Things: Winter

Posted on January 10, 2025
Tags: madeof:atoms, craft:painting, medium:acrylic
An acrylic painting in blue and greenish grey, vaguely resembling rocky slopes at the bottom right and a cloudy sky at the top left. A few days ago1 I wanted to paint, but I didn't know what to paint, so I did a few more colour tests to find out which green combinations I can get out of the available yellows and blues (and greens) acrylics I have (from a cheap student grade line). A sheet of colour tests with mixes of various yellows and blues. I liked the cool grey tones in the second to last line, from 200 Naples yellow (PY3 PY83 PW6) and 410 Ultramarine blue (PB29), so I went up a step on the scale of "things to do when having a desire to paint, but the utter inability to actually paint something" and did a bit of a study with those two colours plus titanium white. The study above over a sheet of scrap paper: at one of the edges the excess paint has connected the two together. And btw, painting with acrylics on watercolour paper taped to a sheet of scrap paper works just fine for stuff like this that is not supposed to last, but the work will get glued to the paper below when (not if) the paint overflows :D

  1. i.e. almost three months ago; I then wrote 95% of this post and forgot to finish and publish it.

9 January 2025

Reproducible Builds: Reproducible Builds in December 2024

Welcome to the December 2024 report from the Reproducible Builds project! Our monthly reports outline what we've been up to over the past month and highlight items of news from elsewhere in the world of software supply-chain security when relevant. As ever, however, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. Table of contents:
  1. reproduce.debian.net
  2. debian-repro-status
  3. On our mailing list
  4. Enhancing the Security of Software Supply Chains
  5. diffoscope
  6. Supply-chain attack in the Solana ecosystem
  7. Website updates
  8. Debian changes
  9. Other development news
  10. Upstream patches
  11. Reproducibility testing framework

reproduce.debian.net Last month saw the introduction of reproduce.debian.net. Announced at the recent Debian MiniDebConf in Toulouse, reproduce.debian.net is an instance of rebuilderd operated by the Reproducible Builds project. rebuilderd is our server designed to monitor the official package repositories of Linux distributions and attempt to reproduce the observed results there. This month, however, we are pleased to announce that not only does the service now produce graphs, but the reproduce.debian.net homepage itself has become a start page of sorts, and the amd64.reproduce.debian.net and i386.reproduce.debian.net pages have emerged. The first of these rebuilds the amd64 architecture, naturally, but it is also building Debian packages that are marked with the no architecture label, all. The second builder is, however, only rebuilding the i386 architecture. Both of these services were also switched to reproduce the Debian trixie distribution instead of unstable, which started with 43% of the archive rebuilt, with 79.3% reproduced successfully. This is very much a work in progress, and we'll start reproducing Debian unstable soon. Our i386 hosts are very kindly sponsored by Infomaniak whilst the amd64 node is sponsored by OSUOSL; thank you! Indeed, we are looking for more workers for more Debian architectures; please contact us if you are able to help.

debian-repro-status Reproducible builds developer kpcyrd has published a client program for reproduce.debian.net (see above) that queries the status of the locally installed packages and rates the system with a percentage score. This tool works analogously to arch-repro-status for the Arch Linux Reproducible Builds setup. The tool was packaged for Debian and is currently available in Debian trixie: it can be installed with apt install debian-repro-status.
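A usage sketch (the apt invocation is from the announcement above; that the bare command prints the per-package status and the overall score is an assumption):
# Install the client and rate the local system's reproducibility.
apt install debian-repro-status
debian-repro-status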

On our mailing list On our mailing list this month:
  • Bernhard M. Wiedemann wrote a detailed post on his long journey towards a bit-reproducible Emacs package. In his interesting message, Bernhard goes into depth about the tools that they used and the lower-level technical details of, for instance, compatibility with the version of glibc within openSUSE.
  • Shivanand Kunijadar posed a question pertaining to the reproducibility issues with encrypted images. Shivanand explains that they must use a random IV for encryption with AES CBC, and the resulting artifact is not reproducible due to the random IV used (see the sketch after this list). The message resulted in a handful of replies, hopefully helpful!
  • User Danilo posted an interesting question related to their attempts to achieve reproducible builds for Threema Desktop 2.0. The question resulted in a number of replies attempting to find the right combination of compiler and linker flags (for example).
  • Longstanding contributor David A. Wheeler wrote to our list announcing the release of the Census III of Free and Open Source Software: Application Libraries report written by Frank Nagle, Kate Powell, Richie Zitomer and David himself. As David writes in his message, the report attempts to answer the question "what is the most popular Free and Open Source Software (FOSS)?".
  • Lastly, kpcyrd followed up on a post from September 2024 which mentioned their desire for someone to implement a "hashset of allowed module hashes that is generated during the kernel build and then embedded in the kernel image", thus enabling a deterministic and reproducible build. However, they are now reporting that "somebody implemented the hash-based allow list feature and submitted it to the Linux kernel mailing list". Like kpcyrd, we hope it gets merged.
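On the IV question above, one commonly suggested workaround, sketched here rather than taken from the thread itself, is to derive the IV deterministically from the plaintext so that identical inputs yield identical ciphertexts; note that this sacrifices the semantic-security benefit of random IVs, since equal plaintexts become detectable:
# Hypothetical sketch: a deterministic IV derived from the plaintext hash
# makes the AES-CBC output bit-for-bit reproducible across builds.
KEY=...   # 256-bit hex key from your secret store (placeholder)
IV=$(sha256sum disk.img | cut -c1-32)   # first 16 bytes of the hash as IV
openssl enc -aes-256-cbc -K "$KEY" -iv "$IV" -in disk.img -out disk.img.enc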

Enhancing the Security of Software Supply Chains: Methods and Practices Mehdi Keshani of the Delft University of Technology in the Netherlands has published their thesis on "Enhancing the Security of Software Supply Chains: Methods and Practices". Their introductory summary begins with an outline of software supply chains and the importance of the Maven ecosystem before outlining the issues that it faces "that threaten its security and effectiveness". To address these:
First, we propose an automated approach for library reproducibility to enhance library security during the deployment phase. We then develop a scalable call graph generation technique to support various use cases, such as method-level vulnerability analysis and change impact analysis, which help mitigate security challenges within the ecosystem. Utilizing the generated call graphs, we explore the impact of libraries on their users. Finally, through empirical research and mining techniques, we investigate the current state of the Maven ecosystem, identify harmful practices, and propose recommendations to address them.
A PDF of Mehdi's entire thesis is available to download.

diffoscope diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 283 and 284 to Debian (a usage sketch follows this list):
  • Update copyright years. [ ]
  • Update tests to support file 5.46. [ ][ ]
  • Simplify tests_quines.py::test_{differences,differences_deb} to simply use assert_diff and not mangle the test fixture. [ ]
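As a quick usage sketch (the file paths are placeholders), diffoscope can compare two builds of a package and write an HTML report:
# Compare two builds of the same package; write the differences as HTML.
diffoscope --html report.html build1/hello_1.0_amd64.deb build2/hello_1.0_amd64.deb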

Supply-chain attack in the Solana ecosystem A significant supply-chain attack impacted Solana, an ecosystem for decentralised applications running on a blockchain. Hackers targeted the @solana/web3.js JavaScript library and embedded malicious code that extracted private keys and drained funds from cryptocurrency wallets. According to some reports, about $160,000 worth of assets were stolen, not including SOL tokens and other crypto assets.

Website updates Similar to last month, there was a large number of changes made to our website this month, including:
  • Chris Lamb:
    • Make the landing page hero look nicer when the vertical height component of the viewport is restricted, not just the horizontal width.
    • Rename the Buy-in page to Why Reproducible Builds? [ ]
    • Remove the top black border. [ ][ ]
  • Holger Levsen:
  • hulkoba:
    • Remove the sidebar-type layout and move to a static navigation element. [ ][ ][ ][ ]
    • Create and merge a new Success stories page, which "highlights the success stories of Reproducible Builds, showcasing real-world examples of projects shipping with verifiable, reproducible builds. These stories aim to enhance the technical resilience of the initiative by encouraging community involvement and inspiring new contributions." [ ]
    • Further changes to the homepage. [ ]
    • Remove the translation icon from the navigation bar. [ ]
    • Remove unused CSS styles pertaining to the sidebar. [ ]
    • Add sponsors to the global footer. [ ]
    • Add extra space on large screens on the Who page. [ ]
    • Hide the side navigation on small screens on the Documentation pages. [ ]

Debian changes There were a significant number of reproducibility-related changes within Debian this month, including:
  • Santiago Vila uploaded version 0.11+nmu4 of the dh-buildinfo package. In this release, dh_buildinfo becomes a no-op, i.e. it no longer does anything beyond warning the developer that the dh-buildinfo package is now obsolete. In his upload, Santiago wrote that "We still want packages to drop their [dependency] on dh-buildinfo, but now they will immediately benefit from this change after a simple rebuild."
  • Holger Levsen filed Debian bug #1091550 requesting a rebuild of a number of packages that were built with a very old version of dpkg.
  • Fay Stegerman contributed to an extensive thread on the debian-devel development mailing list on the topic of Supporting alternative zlib implementations . In particular, Fay wrote about her results experimenting whether zlib-ng produces identical results or not.
  • kpcyrd uploaded a new rust-rebuilderd-worker, rust-derp, rust-in-toto and debian-repro-status to Debian, which passed successfully through the so-called NEW queue.
  • Gioele Barabucci filed a number of bugs against the debrebuild component/script of the devscripts package, including:
    • #1089087: Address a spurious extra subdirectory in the build path.
    • #1089201: Extra zero bytes added to .dynstr when rebuilding CMake projects.
    • #1089088: Some binNMUs have a 1-second offset in some timestamps.
  • Gioele Barabucci also filed a bug against the dh-r package to report that the Recommends and Suggests fields are missing from rebuilt R packages. At the time of writing, this bug has no patch and needs some help to make over 350 binary packages reproducible.
  • Lastly, 8 reviews of Debian packages were added, 11 were updated and 11 were removed this month, adding to our knowledge about identified issues.

Other development news In other ecosystem and distribution news:
  • Lastly, in openSUSE, Bernhard M. Wiedemann published another report for the distribution. There, Bernhard reports on the success of building "R-B-OS", a partial fork of openSUSE with only 100% bit-reproducible packages. This effort was sponsored by the NLNet NGI0 initiative.

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In November, a number of changes were made by Holger Levsen, including:
  • reproduce.debian.net-related:
    • Add a new i386.reproduce.debian.net rebuilder. [ ][ ][ ][ ][ ][ ]
    • Make a number of updates to the documentation. [ ][ ][ ][ ][ ]
    • Run i386.reproduce.debian.net on a public port to allow external workers. [ ]
    • Add a link to the /api/v0/pkgs/list endpoint. [ ]
    • Add support for a statistics page. [ ][ ][ ][ ][ ][ ]
    • Limit build logs to 20 MiB and diffoscope output to 10 MiB. [ ]
    • Improve the frontpage. [ ][ ]
    • Explain that we re testing arch:any and arch:all on the amd64 architecture, but only arch:any on i386. [ ]
  • Misc:
    • Remove code for testing Arch Linux, which has moved to reproduce.archlinux.org. [ ][ ]
    • Don't install dstat on Jenkins nodes anymore as it's been removed from Debian trixie. [ ]
    • Prepare the infom08-i386 node to become another rebuilder. [ ]
    • Add debug date output for benchmarking the reproducible_pool_buildinfos.sh script. [ ]
    • Install installation-birthday everywhere. [ ]
    • Temporarily disable automatic updates of pool links on buildinfos.debian.net. [ ]
    • Install Recommends by default on Jenkins nodes. [ ]
    • Rename rebuilder_stats.py to rebuilderd_stats.py. [ ]
    • r.d.n/stats: minor formatting changes. [ ]
    • Install files under /etc/cron.d/ with the correct permissions. [ ]
and Jochen Sprickerhof made a number of changes as well. Lastly, Gioele Barabucci also classified packages affected by the 1-second offset issue filed as Debian bug #1089088 [ ][ ][ ][ ], Chris Hofstaedtler updated the URL for Grml's dpkg.selections file [ ], Roland Clobus updated the Jenkins log parser to parse warnings from diffoscope [ ] and Mattia Rizzolo banned a number of bots and crawlers from the service [ ][ ].
If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

Freexian Collaborators: Debian Contributions: Tracker.debian.org updates, Salsa CI improvements, Coinstallable build-essential, Python 3.13 transition, Ruby 3.3 transition and more! (by Anupa Ann Joseph, Stefano Rivera)

Debian Contributions: 2024-12 Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Tracker.debian.org updates, by Raphaël Hertzog Profiting from end-of-year vacations, Raphaël prepared for tracker.debian.org to be upgraded to Debian 12 bookworm by getting rid of the remnants of python3-django-jsonfield in the code (it was superseded by a Django-native field). Thanks to Philipp Kern from the Debian System Administrators team, the upgrade happened on December 23rd. Raphaël also improved distro-tracker to better deal with invalid Maintainer fields, which recently caused multiple issues in the regular data updates (#1089985, MR 105). While working on this, he filed #1089648 asking dpkg tools to error out early when maintainers make such mistakes. Finally, he provided feedback on multiple issues and merge requests (MR 106, issues #21, #76, #77); there seems to be a surge of interest in distro-tracker lately. It would be nice if those new contributors could stick around and help out with the significant backlog of issues (in the Debian BTS, in Salsa).

Salsa CI improvements, by Santiago Ruano Rincón Given that the Debian buildd network now relies on sbuild using the unshare backend, and that Salsa CI's reproducibility testing needs to be reworked (#399), Santiago resumed the work on moving the build job to use sbuild. There was some related work a few months ago that was focused on sbuild with the schroot and the sudo backends, but those attempts were stalled for different reasons, including discussions around the convenience of the move (#296). However, using sbuild and unshare avoids all of the drawbacks that have been identified so far. Santiago is preparing two merge requests: !568 to introduce a new build image, and !569 that moves all the extract-source related tasks to the build job. As mentioned in the previous reports, this change will make it possible for more projects to use the pipeline to build their packages (see #195). Additional advantages of this change include a more optimal way to test whether a package builds twice in a row: instead of actually building it twice, the Salsa CI pipeline will configure sbuild to check if the clean target of debian/rules correctly restores the source tree, saving some CPU cycles by avoiding one build. Also, the images related to Ubuntu won't be needed anymore, since the build job will create chroots for different distributions and vendors from a single common build image. This will save space in the container registry. More changes are to come, especially those related to handling projects that customize the pipeline and make use of the extract-source job.
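For reference, a minimal sketch of the kind of invocation the new job would run (the package name and distribution are placeholders, not Salsa CI's actual configuration):
# Build a source package inside an unprivileged user-namespace chroot.
sbuild --chroot-mode=unshare --dist=unstable hello_1.0-1.dsc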

Coinstallable build-essential, by Helmut Grohne Building on the gcc-for-host work of last December, a notable patch turning build-essential Multi-Arch: same became feasible. Whilst the change is small, its implications and foundations are not. We still install crossbuild-essential-$ARCH for cross building and, due to a britney2 limitation, we cannot have it depend on the host's C library. As a result, there are workarounds in place for sbuild and pbuilder. In turning build-essential Multi-Arch: same, we may actually express these dependencies directly as we install build-essential:$ARCH instead. The crossbuild-essential-$ARCH packages will continue to be available as transitional dummy packages.

Python 3.13 transition, by Colin Watson and Stefano Rivera Building on last month's work, Colin, Stefano, and other members of the Debian Python team fixed 3.13 compatibility bugs in many more packages, allowing 3.13 to now be a supported but non-default version in testing. The next stage will be to switch to it as the default version, which will start soon. Stefano did some test-rebuilds of packages that only build for the default Python 3 version, to find issues that will block the transition. The default version transition typically shakes out some more issues in applications that (unlike libraries) only test with the default Python version. Colin also fixed Sphinx 8.0 compatibility issues in many packages, which otherwise threatened to get in the way of this transition.

Ruby 3.3 transition, by Lucas Kanashiro The Debian Ruby team decided to ship Ruby 3.3 in the next Debian release, and Lucas took the lead of the interpreter transition with the assistance of the rest of the team. In order to understand the impact of the new interpreter on the Ruby ecosystem, ruby-defaults was uploaded to experimental adding ruby3.3 as an alternative interpreter, and a mass rebuild of reverse dependencies was done here. Initially, a couple of hundred packages were failing to build; after many rounds of rebuilds, adjustments, and many uploads, we are down to 30 package build failures. Of those, 21 packages were asked to be removed from testing and, for the other 9, bugs were filed. All the information to track this transition can be found here. Now, we are waiting for PHP 8.4 to finish to avoid any collision. Once it is done, the Ruby 3.3 transition will start in unstable.

Miscellaneous contributions
  • Enrico Zini redesigned the way nm.debian.org stores historical audit logs and personal data backups.
  • Carles Pina submitted a new package (python-firebase-messaging) and prepared updates for python3-ring-doorbell.
  • Carles Pina further developed po-debconf-manager: better state transitions, automated assigning of translators and reviewers on edit, automatic updating of po header files, bug fixes, etc.
  • Carles Pina reviewed, submitted and followed up on debconf template translations (more than 20 packages) and translated some packages (about 5).
  • Santiago continued to work on DebConf 25 organization related tasks, including handling the logo survey and results. Stefano spent time on DebConf 25 too.
  • Santiago continued the exploratory work about linux livepatching with Emmanuel Arias. Santiago and Emmanuel found a challenge since kpatch won't fully support linux in trixie and newer, so they are exploring alternatives such as klp-build.
  • Helmut maintained the /usr-move transition filing bugs in e.g. bubblewrap, e2fsprogs, libvpd-2.2-3, and pam-tmpdir and corresponding on related issues such as kexec-tools and live-build. The removal of the usrmerge package unfortunately broke debootstrap and was quickly reverted. Continued fallout is expected and will continue until trixie is released.
  • Helmut sent patches for 10 cross build failures and worked with Sandro Knauß on stuck Qt/KDE patches related to cross building.
  • Helmut continued to maintain rebootstrap removing the need to build gnu-efi in the process.
  • Helmut collaborated with Emanuele Rocca and Jochen Sprickerhof on an interesting adventure in diagnosing why gcc would FTBFS in recent sbuild.
  • Helmut proposed supporting build concurrency limits in coreutils' nproc. As it turns out, nproc is not a good place for this functionality.
  • Colin worked with Sandro Tosi and Andrej Shadura to finish resolving the multipart vs. python-multipart name conflict, as mentioned last month.
  • Colin upgraded 48 Python packages to new upstream versions, fixing four CVEs and a number of compatibility bugs with recent Python versions.
  • Colin issued an openssh bookworm update with a number of fixes that had accumulated over the last year, especially fixing GSS-API key exchange which had been quite broken in bookworm.
  • Stefano fixed a minor bug in debian-reimbursements that was disallowing combination PDFs containing JAL tickets, encoded in UTF-16.
  • Stefano uploaded a stable update to PyPy3 in bookworm, catching up with security issues resolved in cPython.
  • Stefano fixed a regression in eventlet caused by his Python 3.13 porting patch.
  • Stefano continued discussing a forwarded patch (renaming the sysconfigdata module) with cPython upstream, ending in a decision to drop the patch from Debian. This will need some continued work.
  • Anupa participated in the Debian Publicity team meeting in December, which discussed the team activities done in 2024 and projects for 2025.

8 January 2025

John Goerzen: Censorship Is Complicated: What Internet History Says about Meta/Facebook

In light of this week's announcement by Meta (Facebook, Instagram, Threads, etc.), I have been pondering this question: Why am I, a person that has long been a staunch advocate of free speech and encryption, leery of sites that talk about being free speech-oriented? And, more to the point, why am I, a person that has been censored by Facebook for mentioning the Open Source social network Mastodon, not cheering a "lighter touch"? The answers are complicated, and take me back to the early days of social networking. Yes, I mean the 1980s and 1990s. Before digital communications, there were barriers to reaching a lot of people. Especially money. This led to a sort of self-censorship: it may be legal to write certain things, but would a newspaper publish a letter to the editor containing expletives? Probably not. As digital communications started to happen, suddenly people could have their own communities. Not just free from the same kinds of monetary pressures, but free from outside oversight (parents, teachers, peers, community, etc.) When you have a community that the majority of people lack the equipment to access (and wouldn't understand how to access even if they had the equipment), you have a place where self-expression can be unleashed. And, as J. C. Herz covers in what is now an unintentional history (her book Surfing on the Internet was published in 1995), self-expression WAS unleashed. She enjoyed the wit and expression of everything from odd corners of Usenet to the text-based open world of MOOs and MUDs. She even talks about groups dedicated to insults (flaming) in positive terms. But as I've seen time and again, if there are absolutely no rules, then whenever a group gets big enough (more than a few dozen people, say), there are troublemakers that ruin it for everyone. Maybe it's trolling, maybe it's vicious attacks, you name it: it will arrive and it will be poisonous. I remember the debates within the Debian community about this. Debian is one of the pillars of the Internet today, a nonprofit project with free speech in its DNA. And yet there were inevitably the poisonous people. Debian took too long to learn that allowing those people to run rampant was causing more harm than good, because having a well-worn Delete key and a tolerance for insults became a requirement for being a Debian developer, and that drove away people that had no desire to deal with such things. (I should note that Debian strikes a much better balance today.) But in reality, there were never absolutely no rules. If you joined a BBS, you used it at the whim of the owner (the sysop or system operator). The sysop may be a 16-yr-old running it from their bedroom, or a retired programmer, but in any case they were letting you use their resources for free and they could kick you off for any or no reason at all. So if you caused trouble, or perhaps insulted their cat, you're banned. But, in all but the smallest towns, there were other options you could try. On the other hand, sysops enjoyed having people call their BBSs and didn't want to drive everyone off, so there was a natural balance at play. As networks like Fidonet developed, a sort of uneasy approach kicked in: don't be excessively annoying, and don't be easily annoyed. Like it or not, it seemed to generally work. A BBS that repeatedly failed to deal with troublemakers could risk removal from Fidonet. On the more institutional Usenet, you generally got access through your university (or, in a few cases, employer). 
Most universities didn't really even know they were running a Usenet server, and you were generally left alone. Until you did something that annoyed somebody enough that they tracked down the phone number for your dean, in which case real-world consequences would kick in. A site may face the Usenet Death Penalty (delinking from the network) if it repeatedly failed to prevent malicious content from flowing through. Some BBSs let people from minority communities such as LGBTQ+ thrive in a place of peace from tormentors. A lot of them let people be themselves in a way they couldn't be in "real life". And yes, some harbored trolls and flamers. The point I am trying to make here is that each BBS, or Usenet site, set their own policies about what their own users could do. These had to be harmonized to a certain extent with the global community, but in a certain sense, with BBSs especially, you could just use a different one if you didn't like what the vibe was at a certain place. That this free speech ethos survived was never inevitable. There were many attempts to regulate the Internet, and it was thanks to the advocacy of groups like the EFF that we have things like strong encryption and a degree of freedom online. With the rise of the very large platforms (and here I mean CompuServe and AOL at first, and then Facebook, Twitter, and the like later), the low-friction option of just choosing a different place started to decline. You could participate on a Fidonet forum from any of thousands of BBSs, but you could only participate in an AOL forum from AOL. The same goes for Facebook, Twitter, and so forth. Not only that, but as social media became conceived of as very large sites, it became impossible for a person with enough skill, funds, and time to just start a site themselves. Instead of needing a few thousand dollars of equipment, you'd need tens or hundreds of millions of dollars of equipment and employees. All that means you can't really run Facebook as a nonprofit. It is a business. It should be absolutely clear to everyone that Facebook's mission is not the one they say it is: "[to] give people the power to build community and bring the world closer together." If that was their goal, they wouldn't be creating AI users and AI spam and all the rest. Zuck isn't showing courage; he's sucking up to Trump, and those that will pay the price are those that always do: women and minorities. Really, the point of any large social network isn't to build community. It's to make the owners their next billion. They do that by convincing people to look at ads on their site. Zuck is as much a windsock as anyone else; he will adjust policies in whichever direction he thinks the wind is blowing so as to let him keep putting ads in front of eyeballs, and stomp all over principles (even free speech) doing it. Don't expect anything different from any large commercial social network either. Bluesky is going to follow the same trajectory as all the others. The problem with a one-size-fits-all content policy is that the world isn't that kind of place. For instance, I am a pacifist. There is a place for a group where pacifists can hang out with each other, free from the noise of the debate about pacifism. And there is a place for the debate. Forcing everyone that signs up for the conversation to sign up for the debate is harmful. Preventing the debate is often also harmful. One company can't square this circle. Beyond that, the fact that we care so much about one company is a problem on two levels. 
First, it indicates how susceptible people are to misinformation and such. I don't have much to offer on that point. Secondly, it indicates that we are too centralized. We have a solution there: Mastodon. Mastodon is a modern, open source, decentralized social network. You can join any instance, easily migrate your account from one server to another, and so forth. You pick an instance that suits you. There are thousands of others you can choose from. Some aggressively defederate with instances known to harbor poisonous people; some don't. And, to harken back to the BBS era, if you have some time, some skill, and a few bucks, you can run your own Mastodon instance. Personally, I still visit Facebook on occasion because some people I care about are mainly there. But it is such a terrible experience that I rarely do. Meta is becoming irrelevant to me. They are on a path to becoming irrelevant to many more as well. Maybe this is the moment to go "shrug, this sucks" and try something better. (And when you do, feel free to say hi to me at @jgoerzen@floss.social on Mastodon.)

7 January 2025

Jonathan Wiltshire: Using TPM for Automatic Disk Decryption in Debian 12

These days it's straightforward to have reasonably secure, automatic decryption of your root filesystem at boot time on Debian 12. Here's how I did it on an existing system which already had a stock kernel, secure boot enabled, grub2 and an encrypted root filesystem with the passphrase in key slot 0. There's no need to switch to systemd-boot for this setup, but you will use systemd-cryptenroll to manage the TPM-sealed key. If that offends you, there are other ways of doing this.

Caveat The parameters I'll seal a key against in the TPM include a hash of the initial ramdisk. This is essential to prevent an attacker from swapping the image for one which discloses the key. However, it also means the key has to be re-sealed every time the image is rebuilt. This can be frequent, for example when installing/upgrading/removing packages which include a kernel module. You won't get locked out (as long as you still have a passphrase in another slot), but will need to re-seal the key to restore the automation, as sketched below. You can also choose not to include this parameter for the seal, but that opens the door to such an attack.
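A minimal sketch of that re-seal step, assuming the same device path and PCR selection used in the enrollment step later in this post (adjust both for your system):
# Drop the stale TPM2-bound key slot and enroll a fresh one against
# the current PCR values, in a single invocation.
systemd-cryptenroll --wipe-slot=tpm2 --tpm2-device=auto \
    --tpm2-pcrs=7+8+9+14 /dev/nvme0n1p5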

Caution: these are the steps I took on my own system. You may need to adjust them to avoid ending up with a non-booting system.

Check for a usable TPM device We'll bind the secure boot state, kernel parameters, and other boot measurements to a decryption key. Then, we'll seal it using the TPM. This prevents the disk being moved to another system, the boot chain being tampered with, and various other attacks.
# apt install tpm2-tools
# systemd-cryptenroll --tpm2-device list
PATH        DEVICE     DRIVER 
/dev/tpmrm0 STM0125:00 tpm_tis

Clean up older kernels including leftover configurations I found that previously-removed (but not purged) kernel packages sometimes cause dracut to try installing files to the wrong paths. Identify them with:
# apt install aptitude
# aptitude search '~c'
Change search to purge or be more selective; this part is an exercise for the reader, though a blunt version is sketched below.
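For example, the blunt version purges every package in that state (destructive; review the search output above first):
# Purge all packages whose configuration files remain after removal.
aptitude purge '~c'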

Switch to dracut for initramfs images Unless you have a particular requirement for the default initramfs-tools, replace it with dracut and customise:
# mkdir /etc/dracut.conf.d
# echo 'add_dracutmodules+=" tpm2-tss crypt "' > /etc/dracut.conf.d/crypt.conf
# apt install dracut

Remove root device from crypttab, configure grub Remove (or comment out) the root device from /etc/crypttab and rebuild the initial ramdisk with dracut -f. Edit /etc/default/grub and add rd.auto rd.luks=1 to GRUB_CMDLINE_LINUX. Re-generate the config with update-grub. At this point it's a good idea to sanity-check the initrd contents with lsinitrd. Then, reboot using the new image to ensure there are no issues. This will also have up-to-date TPM measurements ready for the next step. The commands are collected below.
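Collected as a sketch (the GRUB_CMDLINE_LINUX line is illustrative; merge the new options into your existing value rather than replacing it):
# In /etc/default/grub:
#   GRUB_CMDLINE_LINUX="... rd.auto rd.luks=1"
dracut -f        # rebuild the initial ramdisk
update-grub      # regenerate the grub configuration
lsinitrd | less  # sanity-check the initrd contents, then reboot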

Identify device and seal a decryption key
# lsblk -ip -o NAME,TYPE,MOUNTPOINTS
NAME                                                    TYPE  MOUNTPOINTS
/dev/nvme0n1p4                                          part  /boot
/dev/nvme0n1p5                                          part  
`-/dev/mapper/luks-deff56a9-8f00-4337-b34a-0dcda772e326 crypt 
  |-/dev/mapper/lv-var                                  lvm   /var
  |-/dev/mapper/lv-root                                 lvm   /
  `-/dev/mapper/lv-home                                 lvm   /home
In this example my root filesystem is in a container on /dev/nvme0n1p5. The existing passphrase key is in slot 0.
# systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7+8+9+14 /dev/nvme0n1p5
Please enter current passphrase for disk /dev/nvme0n1p5: ********
New TPM2 token enrolled as key slot 1.
The PCRs I chose (7, 8, 9 and 14) correspond to the secure boot policy, kernel command line (to prevent init=/bin/bash-style attacks), files read by grub including that crucial initrd measurement, and secure boot MOK certificates and hashes. You could also include PCR 5 for the partition table state, and any others appropriate for your setup.
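To inspect the values the key will be sealed against, the tpm2-tools package installed earlier can read the selected PCRs (a sketch; the selection string mirrors the PCR list above):
# Read the current values of the PCRs used for sealing.
tpm2_pcrread sha256:7,8,9,14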

Reboot You should now be able to reboot and the root device will be unlocked automatically, provided the secure boot measurements remain consistent. The key slot protected by a passphrase (mine is slot 0) is now your recovery key. Do not remove it!
Please consider supporting my work in Debian and elsewhere through Liberapay.

Thorsten Alteholz: My Debian Activities in December 2024

Debian LTS This was my hundred-twenty-sixth month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. I worked on updates for ffmpeg and haproxy in all releases. Along the way I marked more CVEs as not-affected than I had to fix, so in the end no upload was needed for haproxy anymore. Unfortunately testing ffmpeg was not as easy, as the recommended "just look whether mpv can play random videos" approach is not really satisfying. So the upload will happen only in January. I also wonder whether fixing glewlwyd is really worth the effort, as the software is already EOL upstream. Debian ELTS This month was the seventy-seventh ELTS month. During my allocated time I worked on ffmpeg, haproxy, amanda and kmail-account-wizard. Like LTS, all CVEs of haproxy and some of ffmpeg could be marked as not-affected, and testing of the other packages was/is not really straightforward. So the final upload will only happen in January as well. Debian Printing Unfortunately I didn't find any time to work on this topic. Debian Matomo Thanks a lot to William Desportes for all the fixes of my bad PHP packaging. Debian Astro This month I uploaded new packages or new upstream or bugfix versions of: I again sponsored an upload of calceph. Debian IoT This month I uploaded new upstream or bugfix versions of: Debian Mobcom This month I uploaded new packages or new upstream or bugfix versions of: misc This month I uploaded new upstream or bugfix versions of: I also sponsored uploads of emacs-lsp-docker, emacs-dape, emacs-oauth2, gpgmngr, libjs-jush. FTP master This month I accepted 330 and rejected 13 packages. The overall number of packages that got accepted was 335.
