Search Results: "ed"

8 January 2025

Dirk Eddelbuettel: RcppGetconf 0.0.4 on CRAN: Updates

A minor package update, the first in over six years, for the RcppGetconf package for reading system configuration, not unlike getconf from the libc library, is now on CRAN. The changes are all minor package maintenance items: keeping URLs, continuous integration, and best practices current. We had two helper scripts use bash in their shebangs, and we just got dinged in one of them. Tedious as this can at times seem, it ensures CRAN packages do in fact compile just about anywhere, which is a Good Thing (TM), so we obliged and updated the package with that change and all the others that had accumulated over six years. No interface or behaviour changes, just maintenance as one does at times. The short list of changes in this release follows:

Changes in RcppGetconf version 0.0.4 (2025-01-07)
  • Dynamically linked compiled code is now registered in NAMESPACE
  • The continuous integration setup was updated several times
  • The README was updated with current badges and URLs
  • The DESCRIPTION file now uses Authors@R
  • The configure and cleanup scripts use /bin/sh

Courtesy of my CRANberries, there is also a diffstat report of changes relative to the previous release. More about the package is at the local RcppGetconf page and the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

John Goerzen: Censorship Is Complicated: What Internet History Says about Meta/Facebook

In light of this week's announcement by Meta (Facebook, Instagram, Threads, etc.), I have been pondering this question: Why am I, a person that has long been a staunch advocate of free speech and encryption, leery of sites that talk about being free speech-oriented? And, more to the point, why am I, a person that has been censored by Facebook for mentioning the Open Source social network Mastodon, not cheering a "lighter touch"? The answers are complicated, and take me back to the early days of social networking. Yes, I mean the 1980s and 1990s.

Before digital communications, there were barriers to reaching a lot of people. Especially money. This led to a sort of self-censorship: it may be legal to write certain things, but would a newspaper publish a letter to the editor containing expletives? Probably not. As digital communications started to happen, suddenly people could have their own communities. Not just free from the same kinds of monetary pressures, but free from outside oversight (parents, teachers, peers, community, etc.). When you have a community that the majority of people lack the equipment to access, and wouldn't understand how to access even if they had the equipment, you have a place where self-expression can be unleashed. And, as J. C. Herz covers in what is now an unintentional history (her book Surfing on the Internet was published in 1995), self-expression WAS unleashed. She enjoyed the wit and expression of everything from odd corners of Usenet to the text-based open world of MOOs and MUDs. She even talks about groups dedicated to insults (flaming) in positive terms.

But as I've seen time and again, if there are absolutely no rules, then whenever a group gets big enough (more than a few dozen people, say) there are troublemakers that ruin it for everyone. Maybe it's trolling, maybe it's vicious attacks, you name it: it will arrive and it will be poisonous. I remember the debates within the Debian community about this. Debian is one of the pillars of the Internet today, a nonprofit project with free speech in its DNA. And yet there were inevitably the poisonous people. Debian took too long to learn that allowing those people to run rampant was causing more harm than good, because having a well-worn Delete key and a tolerance for insults became a requirement for being a Debian developer, and that drove away people that had no desire to deal with such things. (I should note that Debian strikes a much better balance today.)

But in reality, there were never absolutely no rules. If you joined a BBS, you used it at the whim of the owner (the sysop or system operator). The sysop may be a 16-year-old running it from their bedroom, or a retired programmer, but in any case they were letting you use their resources for free and they could kick you off for any or no reason at all. So if you caused trouble, or perhaps insulted their cat, you're banned. But, in all but the smallest towns, there were other options you could try. On the other hand, sysops enjoyed having people call their BBSs and didn't want to drive everyone off, so there was a natural balance at play. As networks like Fidonet developed, a sort of uneasy approach kicked in: don't be excessively annoying, and don't be easily annoyed. Like it or not, it seemed to generally work. A BBS that repeatedly failed to deal with troublemakers could risk removal from Fidonet.

On the more institutional Usenet, you generally got access through your university (or, in a few cases, employer). Most universities didn't really even know they were running a Usenet server, and you were generally left alone. Until you did something that annoyed somebody enough that they tracked down the phone number for your dean, in which case real-world consequences would kick in. A site may face the Usenet Death Penalty (delinking from the network) if they repeatedly failed to prevent malicious content from flowing through their site.

Some BBSs let people from minority communities such as LGBTQ+ thrive in a place of peace from tormentors. A lot of them let people be themselves in a way they couldn't be in "real life". And yes, some harbored trolls and flamers. The point I am trying to make here is that each BBS, or Usenet site, set their own policies about what their own users could do. These had to be harmonized to a certain extent with the global community, but in a certain sense, with BBSs especially, you could just use a different one if you didn't like what the vibe was at a certain place. That this free speech ethos survived was never inevitable. There were many attempts to regulate the Internet, and it was thanks to the advocacy of groups like the EFF that we have things like strong encryption and a degree of freedom online.

With the rise of the very large platforms (and here I mean CompuServe and AOL at first, and then Facebook, Twitter, and the like later), the low-friction option of just choosing a different place started to decline. You could participate on a Fidonet forum from any of thousands of BBSs, but you could only participate in an AOL forum from AOL. The same goes for Facebook, Twitter, and so forth. Not only that, but as social media became conceived of as very large sites, it became impossible for a person with enough skill, funds, and time to just start a site themselves. Instead of needing a few thousand dollars of equipment, you'd need tens or hundreds of millions of dollars of equipment and employees.

All that means you can't really run Facebook as a nonprofit. It is a business. It should be absolutely clear to everyone that Facebook's mission is not the one they say it is: "[to] give people the power to build community and bring the world closer together." If that was their goal, they wouldn't be creating AI users and AI spam and all the rest. Zuck isn't showing courage; he's sucking up to Trump, and those that will pay the price are those that always do: women and minorities. Really, the point of any large social network isn't to build community. It's to make the owners their next billion. They do that by convincing people to look at ads on their site. Zuck is as much a windsock as anyone else; he will adjust policies in whichever direction he thinks the wind is blowing so as to let him keep putting ads in front of eyeballs, and stomp all over principles, even free speech, doing it. Don't expect anything different from any large commercial social network either. Bluesky is going to follow the same trajectory as all the others.

The problem with a one-size-fits-all content policy is that the world isn't that kind of place. For instance, I am a pacifist. There is a place for a group where pacifists can hang out with each other, free from the noise of the debate about pacifism. And there is a place for the debate. Forcing everyone that signs up for the conversation to sign up for the debate is harmful. Preventing the debate is often also harmful. One company can't square this circle.

Beyond that, the fact that we care so much about one company is a problem on two levels. First, it indicates how susceptible people are to misinformation and such. I don't have much to offer on that point. Secondly, it indicates that we are too centralized. We have a solution there: Mastodon. Mastodon is a modern, open source, decentralized social network. You can join any instance, easily migrate your account from one server to another, and so forth. You pick an instance that suits you. There are thousands of others you can choose from. Some aggressively defederate with instances known to harbor poisonous people; some don't. And, to harken back to the BBS era, if you have some time, some skill, and a few bucks, you can run your own Mastodon instance.

Personally, I still visit Facebook on occasion because some people I care about are mainly there. But it is such a terrible experience that I rarely do. Meta is becoming irrelevant to me. They are on a path to becoming irrelevant to many more as well. Maybe this is the moment to go "shrug, this sucks" and try something better. (And when you do, feel free to say hi to me at @jgoerzen@floss.social on Mastodon.)

Sandro Tosi: HOWTO remove Reddit (web) "Recent" list of communities

If you go on reddit.com via browser, on the left column you can see a section called "RECENT" with the list of the last 5 communities recently visited. If you want to remove them, say for privacy reasons (shared device, etc.), there's no simple way to do so: there's no "X" button next to it, and your profile page doesn't offer a way to clear that out. You could clear all the data from the website, but that seems too extreme, no?

Enter Chrome's "Developer Tools". While on reddit.com, open Menu > More Tools > Developer Tools, go to the Application tab, Storage > Local storage and select reddit.com; on the center panel you see a list of key-value pairs; look for the key "recent-subreddits-store"; you can see the list of the 5 communities in the JSON below. If you want to get rid of the recently viewed communities list, simply delete that key, refresh reddit.com and voila, empty list.

Note: I'm fairly sure I read about this method somewhere, I simply can't remember where, but it's definitely not me who came up with it. I just needed to use it recently and had to backtrack memories to figure it out again, so it's time to write it down.

7 January 2025

Jonathan Wiltshire: Using TPM for Automatic Disk Decryption in Debian 12

These days it's straightforward to have reasonably secure, automatic decryption of your root filesystem at boot time on Debian 12. Here's how I did it on an existing system which already had a stock kernel, secure boot enabled, grub2 and an encrypted root filesystem with the passphrase in key slot 0. There's no need to switch to systemd-boot for this setup but you will use systemd-cryptenroll to manage the TPM-sealed key. If that offends you, there are other ways of doing this.

Caveat
The parameters I'll seal a key against in the TPM include a hash of the initial ramdisk. This is essential to prevent an attacker from swapping the image for one which discloses the key. However, it also means the key has to be re-sealed every time the image is rebuilt. This can be frequent, for example when installing/upgrading/removing packages which include a kernel module. You won't get locked out (as long as you still have a passphrase in another slot), but will need to re-seal the key to restore the automation. You can also choose not to include this parameter for the seal, but that opens the door to such an attack.
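One possible way to re-seal after an initramfs rebuild, as a sketch only, assuming the same LUKS device and PCR selection used later in this post (adjust both to your own setup):
# systemd-cryptenroll --wipe-slot=tpm2 /dev/nvme0n1p5
# systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7+8+9+14 /dev/nvme0n1p5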

Caution: these are the steps I took on my own system. You may need to adjust them to avoid ending up with a non-booting system.

Check for a usable TPM device
We'll bind the secure boot state, kernel parameters, and other boot measurements to a decryption key. Then, we'll seal it using the TPM. This prevents the disk being moved to another system, the boot chain being tampered with, and various other attacks.
# apt install tpm2-tools
# systemd-cryptenroll --tpm2-device list
PATH        DEVICE     DRIVER 
/dev/tpmrm0 STM0125:00 tpm_tis

Clean up older kernels including leftover configurations
I found that previously-removed (but not purged) kernel packages sometimes cause dracut to try installing files to the wrong paths. Identify them with:
# apt install aptitude
# aptitude search '~c'
Change search to purge, or be more selective; this part is left as an exercise for the reader.
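For example, once you have reviewed the list and are happy to drop all the leftover configuration, something like this purges the matches in one go:
# aptitude purge '~c'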

Switch to dracut for initramfs images
Unless you have a particular requirement for the default initramfs-tools, replace it with dracut and customise:
# mkdir /etc/dracut.conf.d
# echo 'add_dracutmodules+=" tpm2-tss crypt "' > /etc/dracut.conf.d/crypt.conf
# apt install dracut

Remove root device from crypttab, configure grub
Remove (or comment) the root device from /etc/crypttab and rebuild the initial ramdisk with dracut -f. Edit /etc/default/grub and add rd.auto rd.luks=1 to GRUB_CMDLINE_LINUX. Re-generate the config with update-grub. At this point it's a good idea to sanity-check the initrd contents with lsinitrd. Then, reboot using the new image to ensure there are no issues. This will also have up-to-date TPM measurements ready for the next step.
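As an illustration only (not taken from the author's system), the resulting line in /etc/default/grub might look like the following, possibly alongside existing options, and lsinitrd can confirm the tpm2-tss and crypt modules made it into the new image:
GRUB_CMDLINE_LINUX="rd.auto rd.luks=1"
# lsinitrd | grep -iE 'tpm|crypt'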

Identify device and seal a decryption key
# lsblk -ip -o NAME,TYPE,MOUNTPOINTS
NAME                                                    TYPE  MOUNTPOINTS
/dev/nvme0n1p4                                          part  /boot
/dev/nvme0n1p5                                          part  
`-/dev/mapper/luks-deff56a9-8f00-4337-b34a-0dcda772e326 crypt 
  |-/dev/mapper/lv-var                                  lvm   /var
  |-/dev/mapper/lv-root                                 lvm   /
  `-/dev/mapper/lv-home                                 lvm   /home
In this example my root filesystem is in a container on /dev/nvme0n1p5. The existing passphrase key is in slot 0.
# systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7+8+9+14 /dev/nvme0n1p5
Please enter current passphrase for disk /dev/nvme0n1p5: ********
New TPM2 token enrolled as key slot 1.
The PCRs I chose (7, 8, 9 and 14) correspond to the secure boot policy, kernel command line (to prevent init=/bin/bash-style attacks), files read by grub including that crucial initrd measurement, and secure boot MOK certificates and hashes. You could also include PCR 5 for the partition table state, and any others appropriate for your setup.
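If you want to inspect the current values of the selected PCRs before or after enrolling, the tpm2-tools package installed earlier can read them, for example:
# tpm2_pcrread sha256:7,8,9,14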

Reboot
You should now be able to reboot and the root device will be unlocked automatically, provided the secure boot measurements remain consistent. The key slot protected by a passphrase (mine is slot 0) is now your recovery key. Do not remove it!
Please consider supporting my work in Debian and elsewhere through Liberapay.

Thorsten Alteholz: My Debian Activities in December 2024

Debian LTS
This was my hundred-twenty-sixth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. I worked on updates for ffmpeg and haproxy in all releases. Along the way I marked more CVEs as not-affected than I had to fix, so in the end there was no upload needed for haproxy anymore. Unfortunately testing ffmpeg was not as easy, as the recommended "just look whether mpv can play random videos" is not really satisfying. So the upload will happen only in January. I also wonder whether fixing glewlwyd is really worth the effort, as the software is already EOL upstream.

Debian ELTS
This month was the seventy-seventh ELTS month. During my allocated time I worked on ffmpeg, haproxy, amanda and kmail-account-wizard. Like LTS, all CVEs of haproxy and some of ffmpeg could be marked as not-affected, and testing of the other packages was/is not really straightforward. So the final upload will only happen in January as well.

Debian Printing
Unfortunately I didn't find any time to work on this topic.

Debian Matomo
Thanks a lot to William Desportes for all fixes of my bad PHP packaging.

Debian Astro
This month I uploaded new packages or new upstream or bugfix versions of: I again sponsored an upload of calceph.

Debian IoT
This month I uploaded new upstream or bugfix versions of:

Debian Mobcom
This month I uploaded new packages or new upstream or bugfix versions of:

misc
This month I uploaded new upstream or bugfix versions of: I also sponsored uploads of emacs-lsp-docker, emacs-dape, emacs-oauth2, gpgmngr, libjs-jush.

FTP master
This month I accepted 330 and rejected 13 packages. The overall number of packages that got accepted was 335.

Enrico Zini: Debugging printing to a remote printer

I upgraded to Debian testing/trixie, and my network printer stopped appearing in print dialogs. These are notes from the debugging session.

Check firewall configuration
I tried out KDE, which installed plasma-firewall, which installed firewalld, which by default closed the ports used for printing. For extra fun, appindicators are not working in GNOME and so firewall-applet is currently useless, although one can run firewall-config manually, or use the command line, which might be more user-friendly than the UI. Step 1: change the zone for the home wifi to "Home":
firewall-cmd  --zone home --list-interfaces
firewall-cmd  --zone home --add-interface wlp1s0
Step 2: make sure the home zone can print:
firewall-cmd --zone home --list-services
firewall-cmd --zone home --add-service=ipp
firewall-cmd --zone home --add-service=ipp-client
firewall-cmd --zone home --add-service=mdns
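If these runtime changes turn out to be the fix, note that they are lost on reload or reboot unless made permanent; one way to persist the now-working runtime configuration is:
firewall-cmd --runtime-to-permanent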
I searched and searched but I could not find out whether ipp is needed, ipp-client is needed, or both are needed.

Check if avahi can see the printer
Is the printer advertised correctly over mdns? When it didn't work:
$ avahi-browse -avrt
= wlp1s0 IPv6 Brother HL-2030 series @ server                UNIX Printer         local
   hostname = [server.local]
   address = [...ipv6 address...]
   port = [0]
   txt = []
= wlp1s0 IPv4 Brother HL-2030 series @ server                UNIX Printer         local
   hostname = [server.local]
   address = [...ipv4 address...]
   port = [0]
   txt = []
$ avahi-browse -rt _ipp._tcp
[empty]
When it works:
$ avahi-browse -avrt
= wlp1s0 IPv6 Brother HL-2030 series @ server                Secure Internet Printer local
   hostname = [server.local]
   address = [...ipv6 address...]
   port = [631]
   txt = ["printer-type=0x1046" "printer-state=3" "Copies=T" "TLS=1.2" "UUID= " "URF=DM3" "pdl=application/octet-stream,application/pdf,application/postscript,image/jpeg,image/png,image/pwg-raster,image/urf" "product=(HL-2030 series)" "priority=0" "note=" "adminurl=https://server.local.:631/printers/Brother_HL-2030_series" "ty=Brother HL-2030 series, using brlaser v6" "rp=printers/Brother_HL-2030_series" "qtotal=1" "txtvers=1"]
= wlp1s0 IPv6 Brother HL-2030 series @ server                UNIX Printer         local
   hostname = [server.local]
   address = [...ipv6 address...]
   port = [0]
   txt = []
= wlp1s0 IPv4 Brother HL-2030 series @ server                Secure Internet Printer local
   hostname = [server.local]
   address = [...ipv4 address...]
   port = [631]
   txt = ["printer-type=0x1046" "printer-state=3" "Copies=T" "TLS=1.2" "UUID= " "URF=DM3" "pdl=application/octet-stream,application/pdf,application/postscript,image/jpeg,image/png,image/pwg-raster,image/urf" "product=(HL-2030 series)" "priority=0" "note=" "adminurl=https://server.local.:631/printers/Brother_HL-2030_series" "ty=Brother HL-2030 series, using brlaser v6" "rp=printers/Brother_HL-2030_series" "qtotal=1" "txtvers=1"]
= wlp1s0 IPv4 Brother HL-2030 series @ server                UNIX Printer         local
   hostname = [server.local]
   address = [...ipv4 address...]
   port = [0]
   txt = []
$ avahi-browse -rt _ipp._tcp
+ wlp1s0 IPv6 Brother HL-2030 series @ server                Internet Printer     local
+ wlp1s0 IPv4 Brother HL-2030 series @ server                Internet Printer     local
= wlp1s0 IPv4 Brother HL-2030 series @ server                Internet Printer     local
   hostname = [server.local]
   address = [...ipv4 address...]
   port = [631]
   txt = ["printer-type=0x1046" "printer-state=3" "Copies=T" "TLS=1.2" "UUID= " "URF=DM3" "pdl=application/octet-stream,application/pdf,application/postscript,image/jpeg,image/png,image/pwg-raster,image/urf" "product=(HL-2030 series)" "priority=0" "note=" "adminurl=https://server.local.:631/printers/Brother_HL-2030_series" "ty=Brother HL-2030 series, using brlaser v6" "rp=printers/Brother_HL-2030_series" "qtotal=1" "txtvers=1"]
= wlp1s0 IPv6 Brother HL-2030 series @ server                Internet Printer     local
   hostname = [server.local]
   address = [...ipv6 address...]
   port = [631]
   txt = ["printer-type=0x1046" "printer-state=3" "Copies=T" "TLS=1.2" "UUID= " "URF=DM3" "pdl=application/octet-stream,application/pdf,application/postscript,image/jpeg,image/png,image/pwg-raster,image/urf" "product=(HL-2030 series)" "priority=0" "note=" "adminurl=https://server.local.:631/printers/Brother_HL-2030_series" "ty=Brother HL-2030 series, using brlaser v6" "rp=printers/Brother_HL-2030_series" "qtotal=1" "txtvers=1"]
Check if cups can see the printer
From CUPS' Using Network Printers:
$ /usr/sbin/lpinfo --include-schemes dnssd -v
network dnssd://Brother%20HL-2030%20series%20%40%20server._ipp._tcp.local/cups?uuid= 
Debugging session interrupted
At this point, the printer appeared. It could be that: In the end, debugging failed successfully, and this log now remains as a reference for possible further issues.

5 January 2025

Dominique Dumont: cme: new field in fill.copyright.blanks.yml for Debian copyright file

Hi

The file fill.copyright.blanks.yml is used to fill missing copyright information when running cme update dpkg-copyright. This file can contain a comment field that is used for book-keeping. Here's an example from libuv1:
README.md:
  comment: >-
    the license from this file is used as a main license and tends to
    apply expat or CC to all files. Which is wrong. Let's skip this file
    and let cme retrieve data from files.
  skip: true
You may ask: why not use YAML comments? The problem is that YAML comments are dropped by cme edit dpkg, so you should not use them in fill.copyright.blanks.yml. It occurred to me that it may be interesting to copy the content of this comment into debian/copyright file entries. But not in all cases, as some comments make sense in fill.copyright.blanks.yml but not in debian/copyright. So I've added a new forwarded-comment parameter in fill.copyright.blanks.yml. The content of this field is copied verbatim into debian/copyright. This way, you can add comments for book-keeping and comments for debian/copyright entries. For instance:
pan/gui/*:
  forwarded-comment: some comment about gui
  comment: this is an example from cme test files
yields:
Files: pan/gui/*
Copyright: 1989, 1991, Free Software Foundation, Inc.
License: GPL-2
Comment: some comment about gui
This new functionality is available in libconfig-model-dpkg-perl >= 3.008. All the best

Jonathan McDowell: Free Software Activities for 2024

I tailed off on blog posts towards the end of the year; I blame a bunch of travel (personal + business), catching the flu, then December being its usual busy self. Anyway, to try and start off the year a bit better I thought I'd do my annual recap of my Free Software activities. For previous years see 2019, 2020, 2021, 2022 + 2023.

Conferences
In 2024 I managed to make it to FOSDEM again. It's a hectic conference, and I know there are legitimate concerns about it being a super spreader event, but it has the advantage of being relatively close and having a lot of different groups of people I want to talk to / see talk at it. I'm already booked to go this year as well. I spoke at All Systems Go in Berlin about Using TPMs at scale for protecting keys. It was nice to actually be able to talk publicly about some of the work stuff my team and I have been working on. I'd a talk submission in for FOSDEM about our use of attestation and why it's not necessarily the evil some folk claim, but there were a lot of good talks submitted and I wasn't selected. Maybe I'll find somewhere else suitable to do it. BSides Belfast may or may not count - it's a security conference, but there's a lot of overlap with various bits of Free software, so I feel it deserves a mention. I skipped DebConf for 2024 for a variety of reasons, but I'm expecting to make DebConf25 in Brest, France in July.

Debian
Most of my contributions to Free software continue to happen within Debian. In 2023 I'd done a bunch of work on retrogaming with Kodi on Debian, so I made an effort to try and keep those bits more up to date, even if I'm not actually regularly using them at present. RetroArch got 1.18.0+dfsg-1 and 1.19.1+dfsg-1 uploads. libretro-core-info got associated 1.18.0-1 and 1.19.0-1 uploads too. I note 1.20.0 has been released recently, so I'll have to find some time to build the appropriate DFSG tarball and update it. rcheevos saw 11.2.0-1, 11.5.0-1 + 11.6.0-1 uploaded. kodi-game-libretro itself had 20.2.7-1 uploaded, then 21.0.7-1. Latest upstream is 22.1.0, but that's tracking Kodi 22 and we're still on Kodi 21 so I plan to follow the Omega branch for now. Which I've just noticed had a 21.0.8 release this week. Finally in the games space I uploaded mgba 0.10.3+dfsg-1 and 0.10.3+dfsg-2 for Ryan Tandy, before realising he was already a Debian Maintainer and granting him the appropriate ACL access so he can upload it himself; I've had zero concerns about any of his packaging.

The Debian Electronics Packaging Team continues to be home for a bunch of packages I care about. There was nothing big there, for me, in 2024, but a few bits of cleanup here and there. I seem to have become one of the main uploaders for sdcc - I have some interest in the space, and the sigrok firmware requires it to build, so I at least like to ensure it's in half decent state. I uploaded 4.4.0+dfsg-1, 4.4.0+dfsg-2, and, just in time to count for 2024, 4.4.0+dfsg-3. The sdcc 4.4 upload led to some compilation issues for sigrok-firmware-fx2lafw so I uploaded 0.1.7-2 fixing that, then 0.1.7-3 doing some further cleanups. OpenOCD had 0.12.0-2 uploaded to disable the libgpiod backend thanks to incompatible changes upstream. There were some in-discussion patches with OpenOCD upstream at the time, but they didn't seem to be ready yet so I held off on pulling them in. 0.12.0-3 fixed builds with more recent versions of jimtcl. It looks like the next upstream release is about a year away, so Trixie will in all probability ship with 0.12.0 as well. libjaylink had a new upstream release, so 0.4.0-1 was uploaded. libserialport also had a new upstream release, leading to 0.1.2-1. I finally cracked and uploaded sg3-utils 1.48-1 into experimental. I'm not the primary maintainer, but 1.46 is nearly 4 years old now and I wanted to get it updated in enough time to shake out any problems before we get to a Trixie freeze.

Outside of team owned packages, libcli had compilation issues with GCC 14, leading to 1.10.7-2. I also added a new package, sedutil 1.20.0-2, back in April; it looks fairly unmaintained upstream (there's been some recent activity, but it doesn't seem to be release quality), but there was an outstanding ITP and I've some familiarity with the space as we've been using it at work as part of investigating TCG OPAL encryption.

I continue to keep an eye on Debian New Members, even though I'm mostly inactive as an application manager - we generally seem to have enough available recently. Mostly my involvement is via Front Desk activities, helping out with queries to the team alias, and contributing to internal discussions. Finally the 3 month rotation for Debian Keyring continues to operate smoothly. I dealt with 2023.03.24, 2023.06.24, 2023.09.22 + 2023.11.24.

Linux
I'd a single kernel contribution this year, to Clean up TPM space after command failure. That was based on some issues we saw at work. I've another fix in progress that I hope to submit in 2025, but it's for an intermittent failure so confirming the fix is necessary + sufficient is taking a little while.

Personal projects
I didn't end up doing much in the way of externally published personal project work in 2024. Despite the release of OpenPGP v6 in RFC 9580 I did not manage to really work on onak. I started on the v6 support, but have not had sufficient time to complete anything worth pushing external yet. listadmin3 got some minor updates based on external feedback / MRs. It's nice to know it's useful to other folk even in its basic state.

That wraps up 2024. I've got no particular goals for this year at present. Ideally I'd get v6 support into onak, and it would be nice to implement some of the wishlist items people have provided for listadmin3, but I'll settle for making sure all my Debian packages are in reasonable state for Trixie.

Enrico Zini: ncdu on files to back up

I use borg and restic to backup files in my system. Sometimes I run a huge download or clone a large git repo and forget to mark it with CACHEDIR.TAG, and it gets picked up slowing the backup process and wasting backup space uselessly. I would like to occasionally audit the system to have an idea of what is a candidate for backup. ncdu would be great for this, but it doesn't know about backup exclusion filters. Let's teach it then. Here's a script that simulates a backup and feeds the results to ncdu:
#!/usr/bin/python3
import argparse
import os
import sys
import time
import stat
import json
import subprocess
import tempfile
from pathlib import Path
from typing import Any
FILTER_ARGS = [
    "--one-file-system",
    "--exclude-caches",
    "--exclude",
    "*/.cache",
]
BACKUP_PATHS = [
    "/home",
]
class Dir:
    """
    Dispatch borg output into a hierarchical directory structure.
    borg prints a flat file list, ncdu needs a hierarchical JSON.
    """
    def __init__(self, path: Path, name: str):
        self.path = path
        self.name = name
        self.subdirs: dict[str, "Dir"] = {}
        self.files: list[str] = []
    def print(self, indent: str = "") -> None:
        for name, subdir in self.subdirs.items():
            print(f" indent name: /")
            subdir.print(indent + " ")
        for name in self.files:
            print(f" indent name ")
    def add(self, parts: tuple[str, ...]) -> None:
        if len(parts) == 1:
            self.files.append(parts[0])
            return
        subdir = self.subdirs.get(parts[0])
        if subdir is None:
            subdir = Dir(self.path / parts[0], parts[0])
            self.subdirs[parts[0]] = subdir
        subdir.add(parts[1:])
    def to_data(self) -> list[Any]:
        res: list[Any] = []
        st = self.path.stat()
        res.append(self.collect_stat(self.name, st))
        for name, subdir in self.subdirs.items():
            res.append(subdir.to_data())
        dir_fd = os.open(self.path, os.O_DIRECTORY)
        try:
            for name in self.files:
                try:
                    st = os.lstat(name, dir_fd=dir_fd)
                except FileNotFoundError:
                    print(
                        "Possibly broken encoding:",
                        self.path,
                        repr(name),
                        file=sys.stderr,
                    )
                    continue
                if stat.S_ISDIR(st.st_mode):
                    continue
                res.append(self.collect_stat(name, st))
        finally:
            os.close(dir_fd)
        return res
    def collect_stat(self, fname: str, st) -> dict[str, Any]:
        res = {
            "name": fname,
            "ino": st.st_ino,
            "asize": st.st_size,
            "dsize": st.st_blocks * 512,
        }
        if stat.S_ISDIR(st.st_mode):
            res["dev"] = st.st_dev
        return res
class Scanner:
    def __init__(self) -> None:
        self.root = Dir(Path("/"), "/")
        self.data = None
    def scan(self) -> None:
        with tempfile.TemporaryDirectory() as tmpdir_name:
            mock_backup_dir = Path(tmpdir_name) / "backup"
            subprocess.run(
                ["borg", "init", mock_backup_dir.as_posix(), "--encryption", "none"],
                cwd=Path.home(),
                check=True,
            )
            proc = subprocess.Popen(
                [
                    "borg",
                    "create",
                    "--list",
                    "--dry-run",
                ]
                + FILTER_ARGS
                + [
                    f" mock_backup_dir ::test",
                ]
                + BACKUP_PATHS,
                cwd=Path.home(),
                stderr=subprocess.PIPE,
            )
            assert proc.stderr is not None
            for line in proc.stderr:
                match line[0:2]:
                    case b"- ":
                        path = Path(line[2:].strip().decode())
                    case b"x ":
                        continue
                    case _:
                        raise RuntimeError(f"Unparsable borg output:  line!r ")
                if path.parts[0] != "/":
                    raise RuntimeError(f"Unsupported path:  path.parts!r ")
                self.root.add(path.parts[1:])
    def to_json(self) -> list[Any]:
        return [
            1,
            0,
            {
                "progname": "backup-ncdu",
                "progver": "0.1",
                "timestamp": int(time.time()),
            },
            self.root.to_data(),
        ]
    def export(self):
        return json.dumps(self.to_json()).encode()
def main():
    parser = argparse.ArgumentParser(
        description="Run ncdu to estimate sizes of files to backup."
    )
    parser.parse_args()
    scanner = Scanner()
    scanner.scan()
    # scanner.root.print()
    res = subprocess.run(["ncdu", "-f-"], input=scanner.export())
    sys.exit(res.returncode)
if __name__ == "__main__":
    main()

4 January 2025

Scarlett Gately Moore: KDE: Snap hotfixes and updates

Fixed Okular PDF printing: https://bugs.kde.org/show_bug.cgi?id=498065
Fixed Kwave recording: https://bugs.kde.org/show_bug.cgi?id=442085 (please run sudo snap connect kwave:audio-record :audio-record until auto-connect gets approved here: https://forum.snapcraft.io/t/kde-auto-connect-our-two-recording-apps/44419)
New Qt6 snaps in edge until the 24.12.1 release.
I have begun the process of moving to core24, currently in edge until the 24.12.1 release. Some major improvements come with core24!
Tokodon is our wonderful Mastodon client. I hate asking, but I am unemployable with this broken arm fiasco and 6-hours-a-day hospital runs for treatment. If you could spare anything it would be appreciated! https://gofund.me/573cc38e

Kentaro Hayashi: Tips when building debian-installer

Recently, I've been trying to fix the d-i Han-unification issue for Japanese. This issue has gone unfixed for a long time, since Debian 9 (stretch): #1037256 - debian-installer: GUI font for Japanese was incorrectly rendered - Debian Bug report logs
To learn how Han-unification is harmful for Japanese in some cases, see "Your Code Displays Japanese Wrong" (heistak.github.io).
When building d-i (the GUI installer), you need to build the build_netboot-gtk target. Note that you need a recent master branch, because older versions have a nitpick issue with GNU Make 4.4.x. bugs.debian.org After that, you need to install the required packages; see the README for details.
apt-get update
apt-get install -y myrepos git libgtk2.0-dev fakeroot
apt-get build-dep -y debian-installer
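With the build dependencies in place, the build itself might look roughly like this from the build directory of a debian-installer checkout (a sketch only; the README has the authoritative steps):
make reallyclean
make build_netboot-gtk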
It seems that Bug#1037256 will be fixed by supporting compressed fonts. I don't know how to take it further, but I'm sure that Mr. Cyril Brulebois will handle this issue better. :-) (I thought that creating a fake fontconfig cache when building the image, then decompressing the compressed font dynamically, might work as just an idea, but it didn't work.) If you would like to tackle fixing d-i issues as a newbie, it might be better to execute "make reallyclean" before rebuilding the image, so as not to fall into pitfalls.

Louis-Philippe Véronneau: Montreal's Debian & Stuff - December 2024

Our Debian User Group met on December 22nd for our last meeting of 2024. I wasn't sure at first it was a good idea, but many people showed up and it was great! Here's what we did: pollo: anarcat: lelutin: lavamind: tvaz: mjeanson and joeDoe: Some of us ended up grabbing a drink after the event at l'Isle de Garde, a pub right next to the venue. Pictures This time around, we were hosted by l'Espace des possibles, at their new location (they moved since our last visit). It was great! People liked the space so much we actually discussed going back there more often :) Group photo at l'Espace des possibles

3 January 2025

Bits from Debian: Bits from the DPL

Dear Debian community, this is bits from DPL for December. Happy New Year 2025! Wishing everyone health, productivity, and a successful Debian release later in this year.

Strict ownership of packages
I'm glad my last bits sparked discussions about barriers between packages and contributors, summarized temporarily in some post on the debian-devel list. As one participant aptly put it, we need a way to visibly say, "I'll do the job until someone else steps up". Based on my experience with the Bug of the Day initiative, simplifying the process for engaging with packages would significantly help. Currently we have:
  1. NMU: The Developers Reference outlines several preconditions for NMUs, explicitly stating, "Fixing cosmetic issues or changing the packaging style in NMUs is discouraged." This makes NMUs unsuitable for addressing package smells. However, I've seen NMUs used for tasks like switching to source format 3.0 or bumping the debhelper compat level. While it's technically possible to file a bug and then address it in an NMU, the process inherently limits the NMUer's flexibility to reduce package smells.
  2. Package Salvaging: This is another approach for working on someone else's packages, aligning with the process we often follow in the Bug of the Day initiative. The criteria for selecting packages typically indicate that the maintainer either lacks time to address open bugs, has lost interest, or is generally MIA.
Both options have drawbacks, so I'd welcome continued discussion on criteria for lowering the barriers to moving packages to Salsa and modernizing their packaging. These steps could enhance Debian overall and are generally welcomed by active maintainers. The discussion also highlighted that packages on Salsa are often maintained collaboratively, fostering the team-oriented atmosphere already established in several Debian teams.

Salsa Continuous Integration
As part of the ongoing discussion about package maintenance, I'm considering the suggestion to switch from the current opt-in model for Salsa CI to an opt-out approach. While I fully agree that human verification is necessary when the pipeline is activated, I believe the current option to enable CI is less visible than it should be. I'd welcome a more straightforward approach to improve access to better testing for what we push to Salsa.

Number of packages not on Salsa
In my campaign, I stated that I aimed to reduce the number of packages maintained outside Salsa to below 2,000. As of March 28, 2024, the count was 2,368. As of this writing, the count stands at 1,928 [1], so I consider this promise fulfilled. My thanks go out to everyone who contributed to this effort. Moving forward, I'd like to set a more ambitious goal for the remainder of my term and hope we can reduce the number to below 1,800.

[1] UDD query: SELECT DISTINCT count(*) FROM sources WHERE release = 'sid' and vcs_url not like '%salsa%';

Past and future events

Talk at MRI Together
In early December, I gave a short online talk, primarily focusing on my work with the Debian Med team. I also used my position as DPL to advocate for attracting more users and developers from the scientific research community.

FOSSASIA
I originally planned to attend FOSDEM this year. However, given the strong Debian presence there and the need for better representation at the FOSSASIA Summit, I decided to prioritize the latter. This aligns with my goal of improving geographic diversity. I also look forward to opportunities for inter-distribution discussions.

Debian team sprints

Debian Ruby Sprint
I approved the budget for the Debian Ruby Sprint, scheduled for January 2025 in Paris. If you're interested in contributing to the Ruby team, whether in person or online, consider reaching out to them. I'm sure any helping hand would be appreciated.

Debian Med sprint
There will also be a Debian Med sprint in Berlin in mid-February. As usual, you don't need to be an expert in biology or medicine; basic bug squashing skills are enough to contribute and enjoy the friendly atmosphere the Debian Med team fosters at their sprints. For those working in biology and medicine, we typically offer packaging support. Anyone interested in spending a weekend focused on impactful scientific work with Debian is warmly invited.

Again, all the best for 2025
Andreas.

Taavi Väänänen: Automatically updating reverse DNS entries for my Hetzner servers

Some parts of my infrastructure run on Hetzner dedicated servers. Hetzner's management console has an interface to update reverse DNS entries, and I wanted to automate that. Unfortunately there's no option to just delegate the zones to my own authoritative DNS servers. So I did the next best thing, which is updating the Hetzner-managed records with data from my own authoritative DNS servers.

Generating DNS zones the hard way
The first step of automating DNS record provisioning is, well, figuring out which records need to be provisioned. I wanted to re-use my existing automation for generating the record data, instead of coming up with a new system for these records. The basic summary is that there's a Go program creatively named dnsgen that's in charge of generating zone file snippets from various sources (these include Netbox, Kubernetes, PuppetDB and my custom reverse web proxy setup). Those snippets are combined with Jinja templates to generate full zone files to be loaded to a hidden primary running Bind9 (like all other DNS servers I run). The zone files are then transferred to a fleet of internal authoritative servers as well as my public authoritative DNS server, which in turn transfers them to various other authoritative DNS servers (like ns-global and Traficom anycast) for redundancy. There's also a bunch of other smaller features, like using Bind views to serve different data to internal and external clients, and resolving external records during record generation time to be used on apex records that would use CNAME records if they could. (The latter is a workaround for Masto.host, the hosting provider we use for Wikis World, not having a stable IPv6 address.) Overall it's a really nice system, and I've spent quite a bit of time on it.

Updating records on Hetzner-managed space
As mentioned above, Hetzner unfortunately does not support custom DNS servers for reverse records on IP space rented from them. But I wanted to keep using my existing, perfectly working DNS record generation setup. So the obvious answer is to (ab)use DNS zone file transfers. I quickly wrote a few hundred lines of Go to request the zone data and then use the Hetzner robot API to ensure the reverse entries are in sync. The main obstacle I hit here was the Hetzner API somehow requiring an "update" call (instead of a "create" one) to create a new record, as the create endpoint was returning an HTTP 400 response no matter what. Once I sorted that out, the script started working fine and created the few dozen missing records. Finally I added a CronJob in my Kubernetes cluster to run the script once in a while. Overall this is a big improvement over doing things by hand and didn't require that much effort. The obvious next step would be to expand the script to a tiny DNS server capable of receiving zone update NOTIFYs to make the updates happen in real time. Unfortunately there's now no hiding of the records revealing my ugly hacks clever networking solutions :(

2 January 2025

Paul Wise: FLOSS Activities December 2024

Focus This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Sponsors
The SWH work was sponsored. All other work was done on a volunteer basis.

Martin-Éric Racine: On the future of i386 on Debian

Before we proceed, let's emphasize a few things: This being said, I still think that the current approach of keeping i386 among the supported architectures, all while no longer shipping kernels, is entirely the wrong decision. What should instead be done is to keep on shipping i386 kernels for Trixie, but clearly indicate in the Trixie Release Notes that i386 is supported for the last time and thereafter fully demoted to Ports.

Matthew Garrett: The GPU, not the TPM, is the root of hardware DRM

As part of their "Defective by Design" anti-DRM campaign, the FSF recently made the following claim:
Today, most of the major streaming media platforms utilize the TPM to decrypt media streams, forcefully placing the decryption out of the user's control (from here).
This is part of an overall argument that Microsoft's insistence that only hardware with a TPM can run Windows 11 is with the goal of aiding streaming companies in their attempt to ensure media can only be played in tightly constrained environments.

I'm going to be honest here and say that I don't know what Microsoft's actual motivation for requiring a TPM in Windows 11 is. I've been talking about TPM stuff for a long time. My job involves writing a lot of TPM code. I think having a TPM enables a number of worthwhile security features. Given the choice, I'd certainly pick a computer with a TPM. But in terms of whether it's of sufficient value to lock out Windows 11 on hardware with no TPM that would otherwise be able to run it? I'm not sure that's a worthwhile tradeoff.

What I can say is that the FSF's claim is just 100% wrong, and since this seems to be the sole basis of their overall claim about Microsoft's strategy here, the argument is pretty significantly undermined. I'm not aware of any streaming media platforms making use of TPMs in any way whatsoever. There is hardware DRM that the media companies use to restrict users, but it's not in the TPM - it's in the GPU.

Let's back up for a moment. There's multiple different DRM implementations, but the big three are Widevine (owned by Google, used on Android, Chromebooks, and some other embedded devices), Fairplay (Apple implementation, used for Mac and iOS), and Playready (Microsoft's implementation, used in Windows and some other hardware streaming devices and TVs). These generally implement several levels of functionality, depending on the capabilities of the device they're running on - this will range from all the DRM functionality being implemented in software up to the hardware path that will be discussed shortly. Streaming providers can choose what level of functionality and quality to provide based on the level implemented on the client device, and it's common for 4K and HDR content to be tied to hardware DRM. In any scenario, they stream encrypted content to the client and the DRM stack decrypts it before the compressed data can be decoded and played.

The "problem" with software DRM implementations is that the decrypted material is going to exist somewhere the OS can get at it at some point, making it possible for users to simply grab the decrypted stream, somewhat defeating the entire point. Vendors try to make this difficult by obfuscating their code as much as possible (and in some cases putting some of it in-kernel), but pretty much all software DRM is at least somewhat broken and copies of any new streaming media end up being available via Bittorrent pretty quickly after release. This is why higher quality media tends to be restricted to clients that implement hardware-based DRM.

The implementation of hardware-based DRM varies. On devices in the ARM world this is usually handled by performing the cryptography in a Trusted Execution Environment, or TEE. A TEE is an area where code can be executed without the OS having any insight into it at all, with ARM's TrustZone being an example of this. By putting the DRM code in TrustZone, the cryptography can be performed in RAM that the OS has no access to, making the scraping described earlier impossible. x86 has no well-specified TEE (Intel's SGX is an example, but is no longer implemented in consumer parts), so instead this tends to be handed off to the GPU. The exact details of this implementation are somewhat opaque - of the previously mentioned DRM implementations, only Playready does hardware DRM on x86, and I haven't found any public documentation of what drivers need to expose for this to work.

In any case, as part of the DRM handshake between the client and the streaming platform, encryption keys are negotiated with the key material being stored in the GPU or the TEE, inaccessible from the OS. Once decrypted, the material is decoded (again either on the GPU or in the TEE - even in implementations that use the TEE for the cryptography, the actual media decoding may happen on the GPU) and displayed. One key point is that the decoded video material is still stored in RAM that the OS has no access to, and the GPU composites it onto the outbound video stream (which is why if you take a screenshot of a browser playing a stream using hardware-based DRM you'll just see a black window - as far as the OS can see, there is only a black window there).

Now, TPMs are sometimes referred to as a TEE, and in a way they are. However, they're fixed function - you can't run arbitrary code on the TPM, you only have whatever functionality it provides. But TPMs do have the ability to decrypt data using keys that are tied to the TPM, so isn't this sufficient? Well, no. First, the TPM can't communicate with the GPU. The OS could push encrypted material to it, and it would get plaintext material back. But the entire point of this exercise was to avoid the decrypted version of the stream from ever being visible to the OS, so this would be pointless. And rather more fundamentally, TPMs are slow. I don't think there's a TPM on the market that could decrypt a 1080p stream in realtime, let alone a 4K one.

The FSF's focus on TPMs here is not only technically wrong, it's indicative of a failure to understand what's actually happening in the industry. While the FSF has been focusing on TPMs, GPU vendors have quietly deployed all of this technology without the FSF complaining at all. Microsoft has enthusiastically participated in making hardware DRM on Windows possible, and user freedoms have suffered as a result, but Playready hardware-based DRM works just fine on hardware that doesn't have a TPM and will continue to do so.


Colin Watson: Free software activity in December 2024

Most of my Debian contributions this month were sponsored by Freexian, as well as one direct donation via Liberapay (thanks!).

OpenSSH
I issued a bookworm update with a number of fixes that had accumulated over the last year, especially fixing GSS-API key exchange which was quite broken in bookworm.

base-passwd
A few months ago, the adduser maintainer started a discussion with me (as the base-passwd maintainer) and the shadow maintainer about bringing all three source packages under one team, since they often need to cooperate on things like user and group names. I agreed, but hadn't got round to doing anything about it until recently. I've now officially moved it under team maintenance.

debconf
Gioele Barabucci has been working on eliminating duplicated code between debconf and cdebconf, ultimately with the goal of migrating to cdebconf (which I'm not sure I'm convinced of as a goal, but if we can make improvements to both packages as part of working towards it then there's no harm in that). I finally got round to reviewing and merging confmodule changes in each of debconf and cdebconf. This caused an installer regression due to a weirdness in cdebconf-udeb's packaging, which I fixed - sorry about that! I've also been dealing with a few patch submissions that had been in my queue for a long time, but more on that next month if all goes well.

CI issues
I noticed and fixed a problem with Restrictions: needs-sudo in autopkgtest. I fixed broken aptly images in the Salsa CI pipeline.

Python team
Last month, I mentioned some progress on sorting out the multipart vs. python-multipart name conflict in Debian (#1085728), and said that I thought we'd be able to finish it soon. I was right! We got it all done this month: The Python 3.13 transition continues, and last month we were able to add it to the supported Python versions in testing. (The next step will be to make it the default.) I fixed lots of problems in aid of this, including: Sphinx 8.0 removed some old intersphinx_mapping syntax which turned out to still be in use by many packages in Debian. The fixes for this were individually trivial, but there were a lot of them: I found that twisted 24.11.0 broke tests in buildbot and wokkel, and fixed those. I packaged python-flatdict, needed for a new upstream version of python-semantic-release. I tracked down a test failure in vdirsyncer (which I've been using for some years, but had never previously needed to modify) and contributed a fix upstream. I fixed some packages to tolerate future versions of dh-python that will drop their dependency on python3-setuptools: I fixed django-cte to remove a build-dependency on the obsolete python3-nose package. I added Django 5.1 support to django-polymorphic. (There are a number of other packages that still need work here.) I fixed various other build/test failures: I upgraded these packages to new upstream versions: I updated the team's library style guide to remove material related to Python 2 and early versions of Python 3, which is no longer relevant to any current Python packaging work.

Other Python upstream work
I happened to notice a Twisted upstream issue requesting the removal of the deprecated twisted.internet.defer.returnValue, realized it was still used in many places in Debian, and went on a PR-filing spree informed by codesearch to try to reduce the future impact of such a change on Debian:

Other small fixes
Santiago Vila has been building the archive with make --shuffle (also see its author's explanation).
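For context, --shuffle is a GNU Make 4.4 feature that randomizes the order in which prerequisites are built, which tends to expose missing or incorrect dependency declarations. A minimal sketch of trying it on your own build:
make --shuffle
make --shuffle=reverse
The first form uses a random order; the second is a deterministic reverse order that can help reproduce a failure.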
I fixed associated bugs in cccc (contributed upstream), groff, and spectemu. I backported an upstream patch to putty to fix undefined behaviour that affected use of the "small keypad". I removed groff's Recommends: libpaper1 (#1091375, #1091376), since it isn't currently all that useful and was getting in the way of a transition to libpaper2. I filed an upstream bug suggesting better integration in this area.

1 January 2025

Tim Retout: Strauss as Pop Music

While watching the Vienna New Year's Concert today, reading about its perhaps somewhat problematic origins, I was struck by the observation that the Strauss family's polkas were seen as pop music during their lifetime, not as serious as proper classical composers, and so it took some time before the Vienna Philharmonic would actually play their work. (Perhaps the space-themed interval today and the ballet dancers pretending to be a steam train were a continuation of the true spirit of this? It felt very Eurovision.) I can't decide if it's remarkable that this year was the first time a female composer (Constanze Geiger) was represented at this concert, or if that is what you get when you set up a tradition of playing mainly Strauss?

Russ Allbery: 2024 Book Reading in Review

In 2024, I finished and reviewed 46 books, not counting another three books I've finished but not yet reviewed and which will therefore roll over to 2025. This is slightly fewer books than the last couple of years, but more books than 2021. Reading was particularly spotty this year, with much of the year's reading packed into late November and December. This was a year in which I figured out I was trying to do too much, but did not finish figuring out what to do about it. Reading and particularly reviewing reflected that, with long silent periods and then attempts to catch up. One of the goals for next year is to find a more sustainable balance for the hobbies in my life, including reading.

My favorite books I read this year were Ashley Herring Blake's Bright Falls sapphic romance trilogy: Delilah Green Doesn't Care, Astrid Parker Doesn't Fail, and Iris Kelly Doesn't Date. These are not perfect books, but they made me laugh, made me cry, and were impossible to put down. My thanks to a video from BookTuber Georgia Marie for the recommendation.

I Shall Wear Midnight was the best of the remaining Pratchett novels. It's the penultimate Tiffany Aching book and, in my opinion, the best. All of the elements of the previous books come together in snarky competence porn that was a delight to read.

The best book I read last year was Mark Lawrence's The Book That Wouldn't Burn, which much to my surprise did not make a single award list for its publication year of 2023. It was a tour de force of world-building that surprised me multiple times. Unfortunately, the sequel was not as good and I fear the series may be heading in the wrong direction. I am attempting to stay hopeful about the upcoming third and concluding book.

I didn't read much non-fiction this year, but the best of what I did read was Zeke Faux's Number Go Up about the cryptocurrency bubble. This book will not change anyone's mind, but it's a readable and entertaining summary of some of the more obvious cryptocurrency scams. I also had enough quibbles with it to write an extended review, which is a compliment of sorts.

The Discworld read-through is done, so I may either start or return to another series re-read in 2025. I have a huge backlog of all sorts of books, though, so we will see how the year goes. As always, I have no specific numeric goals, just a hope that I can make time for regular and varied reading and maintain a rhythm with writing reviews. The full analysis includes some additional personal reading statistics, probably only of interest to me.
