
18 April 2025

Sven Hoexter: Trixie Upgrade and X11 Clipboard Manager Madness

Due to my own laziness and a few functionality issues my "for work laptop" is still using a 15+ year old setup with X11 and awesome. Since trixie is now starting its freeze, it's time to update that odd machine as well and look at the fallout. Good news: it's mostly my own resistance to change which required some kick in the back to move on.

Clipboard Manager Madness

For the past decade or so I used parcellite, which served me well. Now that is no longer available in trixie, and I started to look into one of the dead-end streets of X11-related tooling, searching for an alternative.

Parcellite

It seems upstream is doing sporadic fixes, but holds GTK2 tight. The Debian package was patched to be GTK3 compatible, but has unfixed FTBFS issues with GCC 14.

clipit

Next I checked a parcellite fork named clipit, and that's when it started to get funky. It's packaged in Debian, QA-maintained, and recently received at least two uploads to keep it working. I installed it and found it greeting me with a nag screen telling me I should migrate to diodon. The real clipit tool is still shipped as a binary named clipit.real, so if you know about it you can still use it. To produce the nag screen it depends on zenity, and to ease the migration it depends on diodon: two things I do not really need. Also, the package description prominently mentions that you should not use the package.

diodon

The nag screen of clipit made me look at diodon. It claims it was written for the Ubuntu Unity desktop, something where I've no idea how alive and relevant it still is. While there is still something on Launchpad, it seems to receive sporadic commits on GitHub. Not sure if it's dead or just feature complete.

Interim Solution: clipit

I settled on clipit for now, but decided to fork the Debian package to remove the nag screen and the dependency on diodon and zenity (package build). My hope is to convert this last X11 setup to Wayland within the lifetime of trixie. I also contacted the last uploader regarding removal of the nag screen, who then brought in the last maintainer, who had added the nag screen. While I first thought clipit was somewhat maintained upstream, Andrej quickly pointed out that this is not really the case. Still, that leaves us in trixie with a rather odd situation: for the second stable release in a row, we ship a package that recommends moving to a different tool while still shipping the original tool. Plus it's getting patched by some of its users who refuse to migrate to the alternative envisioned by the former maintainer.

VirtualBox and moving to libvirt

I always liked the GUI of VirtualBox, and it really made desktop virtualization easy. But with Linux 6.12, which enables KVM by default, it seems to be getting even more painful to get it up and running. In the past I just took the latest release from unstable and rebuilt it on the current stable. Currently the last release in unstable is 7.0.20, while the Linux 6.12 fixes only started to appear in VirtualBox 7.1.4 and later. The good thing is that with virt-manager and the whole libvirt ecosystem there is a good-enough replacement available, and it works fine with related tooling like vagrant. There are instructions available on how to set it up. I can only add that it makes sense to export VAGRANT_DEFAULT_PROVIDER=libvirt in your .bashrc to make that provider change permanent.

17 April 2025

Simon Josefsson: Verified Reproducible Tarballs

Remember the XZ Utils backdoor? One factor that enabled the attack was poor auditing of the release tarballs for differences compared to the Git version-controlled source code. This proved to be a useful place to distribute malicious data.

The differences between release tarballs and upstream Git sources are typically vendored and generated files. Lots of them. Auditing all source tarballs in a distribution for similar issues is hard and boring work for humans. Wouldn't it be better if that human auditing time could be spent auditing the actual source code stored in upstream version control instead? That's where auditing time would help the most.

Are there better ways to address the concern about differences between version control sources and tarball artifacts? Let's consider some approaches: While I like the properties of the first solution, and have made effort to support that approach, I don't think normal source tarballs are going away any time soon. I am concerned that it may not even be a desirable complete solution to this problem. We may need tarballs with pre-generated content in them for various reasons that aren't entirely clear to us today.

So let's consider the second approach. It could help while waiting for more experience with the first approach, to see if there are any fundamental problems with it. How do you know that the XZ release tarballs were actually derived from its version control sources? The same for Gzip? Coreutils? Tar? Sed? Bash? GCC? We don't know this! I am not aware of any automated or collaborative effort to perform this independent confirmation. Nor am I aware of anyone attempting to do this on a regular basis. We would want to be able to do this in the year 2042 too. I think the best way to reach that is to do the verification continuously in a pipeline, fixing bugs as time passes. The current state of the art seems to be that people audit the differences manually and hope to find something. I suspect many package maintainers ignore the problem, take the release source tarballs, and trust upstream about this. We can do better.

I have launched a project to set up a GitLab pipeline that invokes per-release scripts to rebuild that release artifact from git sources. Currently it only contains recipes for projects that I released myself: releases which were done in a controlled way with considerable care to make reproducing the tarballs possible. The project homepage is here: https://gitlab.com/debdistutils/verify-reproducible-releases

The project is able to reproduce the release tarballs for Libtasn1 v4.20.0, InetUtils v2.6, Libidn2 v2.3.8, Libidn v1.43, and GNU SASL v2.2.2. You can see this in a recent successful pipeline. All of those releases were prepared using Guix, and I'm hoping the Guix time-machine will make it possible to keep re-generating these tarballs for many years to come.

I spent some time trying to reproduce the current XZ release tarball for version 5.8.1. That would have been a nice example, wouldn't it? First I had to somehow mimic upstream's build environment. The XZ release tarball contains GNU Libtool files that are identified with version 2.5.4.1-baa1-dirty. I initially assumed this was due to the maintainer having installed libtool from git locally (after making some modifications) and made the XZ release using it. Later I learned that it may actually be coming from ArchLinux, which ships this particular libtool version.

It seems weird for a distribution to use libtool built from a non-release tag, and furthermore to apply patches to it, but things are what they are. I made some effort to set up an ArchLinux build environment, however the now-current Gettext version in ArchLinux seems to be more recent than the one that was used to prepare the XZ release. I don't know enough ArchLinux to set up an environment corresponding to an earlier version of ArchLinux, which would be required to finish this. I gave up; maybe the XZ release wasn't prepared on ArchLinux after all. Actually XZ became a good example for this writeup anyway: while you would think this should be trivial, the fact is that it isn't! (There is another aspect here: fingerprinting the versions used to prepare release tarballs allows you to infer what kind of OS maintainers are using to make releases on, which is interesting on its own.) I made some small attempts to reproduce the tarball for GNU Shepherd version 1.0.4 too, but I still haven't managed to complete it.

Do you want a supply-chain challenge for the Easter weekend? Pick some well-known software and try to re-create the official release tarballs from the corresponding Git checkout. Is anyone able to reproduce anything these days? Bonus points for wrapping it up as a merge request to my project. Happy Supply-Chain Security Hacking!

Scarlett Gately Moore: KDE Applications 25.04 Snaps and Kubuntu Plucky Puffin 25.04 Released!

Very busy releasetastic week! The versions being the same is a complete coincidence: KDE Applications 25.04 is out (https://kde.org/announcements/gear/25.04.0), and the snaps can be downloaded here: https://snapcraft.io/publisher/kde
In addition to all the regular testing, I am testing our snaps in a non-KDE environment; so far it is not looking good in Xubuntu. We have kernel/glibc crashes on startup for some, and on file open for others. I am working on what I hope is a fix. Next week I will have (I hope) my final surgery. If you can spare any change to help bring me over the finish line, I will be forever grateful.

Arturo Borrero González: My experience in the Debian LTS and ELTS projects

Last year, I decided to start participating in the Debian LTS and ELTS projects. It was a great opportunity to engage in something new within the Debian community. I had been following these projects for many years, observing their evolution and how they gained traction both within the ecosystem and across the industry. I was curious to explore how contributors were working internally, especially how they managed security patching and remediation for older software. I've always felt this was a particularly challenging area, and I was fortunate to experience it firsthand.

As of April 2025, the Debian LTS project was primarily focused on providing security maintenance for Debian 11 Bullseye. Meanwhile, the Debian ELTS project was targeting Debian 8 Jessie, Debian 9 Stretch, and Debian 10 Buster. During my time with the projects, I worked on a variety of packages and CVEs. Some of the most notable ones include: There are several technical highlights I'd like to share, things I learned or had to apply while participating:

In March 2025, I decided to scale back my involvement in the projects due to some changes in my personal life. Still, this experience has been one of the highlights of my career, and I would definitely recommend it to others. I'm very grateful for the warm welcome I received from the LTS/ELTS community, and I don't rule out the possibility of rejoining the LTS/ELTS efforts in the future. The Debian LTS/ELTS projects are currently coordinated by folks at Freexian. Many thanks to Freexian and sponsors for providing this opportunity!

15 April 2025

Russell Coker: What Desktop PCs Need

It seems to me that we haven't had much change in the overall design of desktop PCs since floppy drives were removed, and modern PCs still have bays the size of 5.25" floppy drives despite having nothing modern that can fit in such spaces other than DVD drives (which aren't really modern) and carriers for 4*2.5" drives, both of which most people don't use. We had the PC System Design Guide [1], which was last updated in 2001 and should have been updated more recently to address some of these issues; the thing most people will find familiar in that standard is the colours for audio ports. Microsoft developed the Legacy Free PC [2] concept, which was a good one. There's a lot that could be added to the list of legacy stuff to avoid: TPM 1.2, 5.25" drive bays, inefficient PSUs, hardware that doesn't sleep when idle or which prevents the CPU from sleeping, VGA and DVI ports, Ethernet slower than 2.5Gbit, and video that doesn't include HDMI 2.1 or DisplayPort 2.1 for 8K support. There are recently released high-end PCs on sale right now with 1Gbit Ethernet as standard, and hardly any PCs support resolutions above 4K properly. Here are some of the things that I think should be in a modern PC System Design Guide.

Power Supply

The power supply is a core part of the computer, and its central location dictates the layout of the rest of the PC. GaN PSUs are more power efficient and therefore require less cooling. A 400W USB power supply is about 1/4 the size of a standard PC PSU and doesn't have a cooling fan. A new PC standard should include less space for the PSU except for systems with multiple CPUs or that are designed for multiple GPUs. A Dell T630 server has an option of a 1600W PSU that is 20*8.5*4cm = 680cc. The typical dimensions of an ATX PSU are 15*8.6*14cm = 1806cc. The SFX (small form factor variant of ATX) PSU is 12.5*6.3*10cm = 787cc. There is a reason for the ATX and SFX PSUs having a much worse ratio of power to size, and that is airflow. Server-class systems are designed for good airflow and can efficiently cool the PSU with less space, and they are also designed for uses where people are less concerned about fan noise. But the 680cc used for a 1600W Dell server PSU that predates GaN technology could be used for a modern GaN PSU that supplies the ~600W needed for a modern PC while being quiet. There are several different smaller-size PSUs for name-brand PCs (where compatibility with other systems isn't needed) that have been around for ~20 years, but there hasn't been a standard, so all white-box PC systems have had really large PSUs.

PCs need USB-C PD ports that can charge a laptop etc. There are phones that can draw 80W for fast charging, and it's not unreasonable to expect a PC to be able to charge a phone at its maximum speed. GPUs should have USB-C alternate mode output and support full USB functionality over the cable as well as PD that can power the monitor. Having a monitor with a separate PSU, a HDMI or DP cable to the PC, and a USB cable between PC and monitor is an annoyance. There should be one cable between PC and monitor, and then keyboard, mouse, etc should connect to the monitor. All devices that are connected to a PC should use USB-C for power connection. That includes monitors that are using HDMI or DisplayPort for video, desktop switches, home WiFi APs, printers, and speakers (even when using line-in for the audio signal). The European Commission Common Charger Directive is really good, but it only covers portable devices, keyboards, and mice.
Motherboard Features

The latest versions of WiFi and Bluetooth on the motherboard (this is becoming a standard feature). On-motherboard video that supports 8K resolution. An option of a PCIe GPU is a good thing to have, but it would be nice if the motherboard had enough video capabilities to satisfy most users. There are several options for video that have a higher resolution than 4K, and making things just work at 8K means that there will be less e-waste in future. ECC RAM should be a standard feature on all motherboards; having a single-bit error cause a system crash is an MS-DOS thing, we need to move past that. There should be built-in hardware for monitoring the system status that is better than BIOS beeps on boot. Lenovo laptops have a feature for having the BIOS play a tune on a serious error, with an Android app to decode the meaning of the tune; we could have a standard for this. For desktop PCs there should be a standard for LCD status displays similar to the ones on servers; this would be cheap if everyone did it.

Case Features

The way the Framework Laptop can be expanded with modules is really good [3]. There should be something similar for PC cases. While you can buy USB devices for these things, they are messy and risk getting knocked out of their sockets when moving cables around. While the Framework laptop expansion cards are much more expensive than other devices with similar functions that are aimed at a mass market, if there was a standard for PCs then the devices to fit them would become cheap. The PC System Design Guide specifies colors for ports (which is good) but not the feel of them. While some ports like Ethernet ports allow someone to feel which way the connector should go, it isn't possible to easily feel which way a HDMI or DisplayPort connector should go. It would be good if there was a standard that required plastic spikes on one side, or some other way of feeling which way a connector should go.

GPU Placement

In modern systems it's fairly common to have a high heatsink on the CPU with a fan to blow air in at the front and out the back of the PC. The GPU (which often dissipates twice as much heat as the CPU) has fans blowing air in sideways and not out the back. This gives some sort of compromise between poor cooling and excessive noise. What we need is to have air blown directly through a GPU heatsink and out of the case. One option for a tower case that needs minimal changes is to have the PCIe slot nearest the bottom of the case used for the GPU and have a grille in the bottom to allow air to go out; the case could have feet to keep it a few cm above the floor or desk. Another possibility is to have a PCIe slot parallel to the rear surface of the case (at right angles to the other PCIe slots). A common case with desktop PCs is to have the GPU use more than half the total power of the PC. The placement of the GPU shouldn't be an afterthought; it should be central to the design. Is a PCIe card even a good way of installing a GPU? Could we have a standard GPU socket on the motherboard next to the CPU socket and use the same type of heatsink and fan for GPU and CPU?

External Cooling

There are a range of aftermarket cooling devices for laptops that push cool air in the bottom or suck it out the side. We need to have similar options for desktop PCs. I think it would be ideal to have standard attachments for airflow on the front and back of tower PCs. The larger a fan is, the slower it can spin to give the same airflow, and therefore the less noise it will produce.
Instead of just relying on 10cm fans at the front and back of a PC to push air in and suck it out, you could have a conical rubber duct connected to a 30cm diameter fan. That would allow quieter fans to do most of the work in pushing air through the PC and also allow the hot air to be directed somewhere suitable. When doing computer work in summer it's not great to have a PC sending 300+W of waste heat into the room you are in. If it could be directed out a window that would be good.

Noise

For restricting the noise of PCs we have industrial relations legislation that seems to basically require that workers not be exposed to noise louder than a blender, so if a PC is quieter than that then it's OK. For name-brand PCs there are specs about how much noise is produced, but there are usually caveats like "under typical load" or "with a typical feature set" that excuse them from liability if the noise is louder than expected. It doesn't seem possible for someone to own a PC, determine that the noise from it is acceptable, and then buy another that is close to the same. We need regulations about this, and the EU seems the best jurisdiction for it as they cover the purchase of a lot of computer equipment that is also sold without change in other countries. The regulations need to also cover updates; for example, I have a Dell T630 which is unreasonably loud and Dell support doesn't have much incentive to be particularly helpful about it. BIOS updates routinely tweak things like fan speeds without the developers having an incentive to keep it as quiet as it was when it was sold.

What Else?

Please comment about other things you think should be standard PC features.

Russell Coker: Storage Trends 2025

It's been almost 15 months since I blogged about Storage Trends 2024 [1]. There hasn't been much change in this time (in Australia at least; I'm not tracking prices in other countries). The change was so small I had to check how the Australian dollar has performed against other currencies to see if currency changes had countered changes to storage prices, but there has been little overall change when compared to the Chinese Yuan, and the Australian dollar is only about 11% worse against the US dollar compared to a year ago. Generally there's a trend of computer parts decreasing in price by significantly more than 11% per annum.

Small Storage

The cheapest storage device from MSY now is a Patriot P210 128G SATA SSD for $19, cheaper than the $24 last year and the same price as the year before. So over the last 2 years there has been no change to the cheapest storage device on sale. It would almost never make sense to buy that, as a 256G SATA SSD (also Patriot P210) is $25 and has twice the lifetime (120TBW vs 60TBW). There are also 256G NVMe devices for $29 and $30 which would be better options if the system has an NVMe socket built in. The cheapest 500G devices are $42.50 for a 512G SATA SSD and $45 for a 500G NVMe. Last year the prices were $33 for SATA and $36 for NVMe in that size, so there's been a significant increase in price there. The difference is enough that if someone was on a tight budget they might reasonably decide to use smaller storage than they might have used last year!

2TB hard drives are still $89, the same price as last year! Last year a 2TB SATA SSD was $118 and a 2TB NVMe was $145; now a 2TB SATA SSD is $157 and a 2TB NVMe is $127. So NVMe has become cheaper than SATA in that segment, but overall prices are higher than last year. Again, for business use 2TB seems a sensible minimum for most systems if you are paying MSY rates (or similar rates from Amazon etc).

Medium Storage

Last year 4TB HDDs were $135, now they are $148. Last year the cheapest 4TB SSD was $299, now the cheapest is a $309 NVMe. While the prices have all gone up, the price difference between hard drives and SSDs has decreased in that size range. So for a small server (a lot of home servers and small business servers) 4TB of RAID-1 storage is all that's needed, and for that SSDs are the best option. The price difference between $296 for 4TB of RAID-1 HDDs and $618 for RAID-1 NVMe is small enough to be justified by the benefits of speed and being quiet for most small server uses.

In 2023 an 8TB hard drive cost $179 and an 8TB SSD cost $739. Last year an 8TB hard drive cost $239 and an 8TB SATA SSD cost $899. Now an 8TB HDD costs $229 and MSY doesn't sell 8TB SSDs, but for comparison Amazon has a Samsung 8TB SATA SSD for $919. So for storing 8TB+ there are benefits to hard drives, as SSDs are difficult to get in that size range and more expensive than they were before. It seems that 8TB SSDs aren't used by enough people to have a large market in the home and small office space, so those of us who want the larger storage sizes will have to get second-hand enterprise gear. It will probably be another few years before 8TB enterprise SSDs start appearing on the second-hand market.

Serious Storage

Last year I wrote about the affordability of U.2 devices. I regret not buying some then, as there are fewer on sale now and prices are higher. As for hard drives, they still aren't a good choice for most users because most users don't have more than 4TB of data.
For large quantities of data hard drives are still a good option; a 22TB disk costs $899. For companies this is a good option in many situations. For home users there is the additional problem of determining whether a drive uses Shingled Magnetic Recording (SMR), which has some serious performance issues for some uses, and it's very difficult to determine which drives use it.

Conclusion

For corporate purchases the options for serious storage are probably decent. But for small companies and home users things definitely don't seem to have improved as much as we expect from the computer industry. I had expected 8TB SSDs to go for $450 by now and SSDs less than 500G to not even be sold new any more. The prices on 8TB SSDs have gone up more in the last 2 years than the ASX 200 (the index of the 200 biggest companies on the Australian stock market). I would never recommend using SSDs as an investment, but in retrospect 8TB SSDs could have been a good one.

$20 seems to be about the minimum cost that SSDs approach, while hard drives have a higher minimum price of a bit under $100 because they are larger, heavier, and more fragile. It seems that the market is likely to move to most SSDs being close to $20; if they can make 2TB SSDs cheaply enough to sell for about that price then that would cover the majority of the market. I've created a table of the prices below. I should have done this before, but I initially didn't plan an ongoing series of posts on this topic.
           Jun 2020  Apr 2021  Apr 2023  Jan 2024  Apr 2025
128G SSD   $49                 $19       $24       $19
500G SSD   $97       $73       $32       $33       $42.50
2TB HDD    $95       $72       $75       $89       $89
2TB SSD    $335      $245      $149
4TB HDD                        $115      $135      $148
4TB SSD              $895      $349      $299      $309
8TB HDD                        $179      $239      $229
8TB SSD              $949      $739      $899      $919
10TB HDD   $549      $395

13 April 2025

Keith Packard: sanitizer-fun

Fun with -fsanitize=undefined and Picolibc

Both GCC and Clang support the -fsanitize=undefined flag, which instruments the generated code to detect places where the program wanders into parts of the C language specification which are either undefined or implementation defined. Many of these are also common programming errors. It would be great if there were sanitizers for other easily detected bugs, but for now, at least the undefined sanitizer does catch several useful problems.

Supporting the sanitizer

The sanitizer can be built to either trap on any error or call handlers. In both modes, the same problems are identified, but when trap mode is enabled, the compiler inserts a trap instruction and doesn't expect the program to continue running. When handlers are in use, each identified issue is tagged with a bunch of useful data and then a specific sanitizer handling function is called. The specific functions are not all that well documented, nor are the parameters they receive. Maybe this is because both compilers provide an implementation of all of the functions they use and don't really expect external implementations to exist? However, to make these useful in an embedded environment, picolibc needs to provide a complete set of handlers that support all versions of both gcc and clang, as the compiler-provided versions depend upon specific C (and C++) libraries. Of course, programs can be built in trap-on-error mode, but that makes it much more difficult to figure out what went wrong.

Fixing Sanitizer Issues

Once the sanitizer handlers were implemented, picolibc could be built with them enabled and all of the picolibc tests run to uncover issues within the library. As with the static analyzer adventure from last year, the vast bulk of sanitizer complaints came from invoking undefined or implementation-defined behavior in harmless ways:

Signed integer shifts

This is one area where the C language spec is just wrong. For left shift, before C99, it worked on signed integers as a bit-wise operator, equivalent to the operator on unsigned integers. After that, left shift of negative integers became undefined. Fortunately, it's straightforward (if tedious) to work around this issue by just casting the operand to unsigned, performing the shift and casting it back to the original type. Picolibc now has an internal macro, lsl, which does this:
    #define lsl(__x,__s) ((sizeof(__x) == sizeof(char)) ?                   \
                          (__typeof(__x)) ((unsigned char) (__x) << (__s)) :  \
                          (sizeof(__x) == sizeof(short)) ?                  \
                          (__typeof(__x)) ((unsigned short) (__x) << (__s)) : \
                          (sizeof(__x) == sizeof(int)) ?                    \
                          (__typeof(__x)) ((unsigned int) (__x) << (__s)) :   \
                          (sizeof(__x) == sizeof(long)) ?                   \
                          (__typeof(__x)) ((unsigned long) (__x) << (__s)) :  \
                          (sizeof(__x) == sizeof(long long)) ?              \
                          (__typeof(__x)) ((unsigned long long) (__x) << (__s)) : \
                          __undefined_shift_size(__x, __s))
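To make the difference concrete, here is a small illustration of my own (not from the original post): the plain shift of a negative value is what the sanitizer flags, while the cast-through-unsigned form that lsl expands to is well defined and yields the same bit pattern on two's-complement targets.

    int x = -1;
    int bad  = x << 4;                       /* left shift of a negative value: undefined since C99, flagged by UBSan */
    int good = (int)((unsigned int)x << 4);  /* defined: shift the bit pattern, then convert back */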
Right shift is significantly more complicated to implement. What we want is an arithmetic shift with the sign bit being replicated as the value is shifted rightwards. C defines no such operator. Instead, right shift of negative integers is implementation defined. Fortunately, both gcc and clang define the >> operator on signed integers as arithmetic shift. Also fortunately, C hasn't made this undefined, so the program itself doesn't end up undefined. The trouble with arithmetic right shift is that it is not equivalent to right shift of unsigned values. Here's what Per Vognsen came up with using standard C operators:
    int
    __asr_int(int x, int s)
    {
        return x < 0 ? ~(~x >> s) : x >> s;
    }
When the value is negative, we invert all of the bits (making it positive), shift right, then flip all of the bits back. Both GCC and Clang seem to compile this to a single asr instruction. This function is replicated for each of the five standard integer types and then the set of them wrapped in another sizeof-selecting macro:
    #define asr(__x,__s) ((sizeof(__x) == sizeof(char)) ?           \
                          (__typeof(__x))__asr_char(__x, __s) :       \
                          (sizeof(__x) == sizeof(short)) ?          \
                          (__typeof(__x))__asr_short(__x, __s) :      \
                          (sizeof(__x) == sizeof(int)) ?            \
                          (__typeof(__x))__asr_int(__x, __s) :        \
                          (sizeof(__x) == sizeof(long)) ?           \
                          (__typeof(__x))__asr_long(__x, __s) :       \
                          (sizeof(__x) == sizeof(long long)) ?      \
                          (__typeof(__x))__asr_long_long(__x, __s):   \
                          __undefined_shift_size(__x, __s))
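As a usage sketch of my own (with made-up values, not taken from picolibc): both macros preserve the operand's type and, on two's-complement targets, produce the results the plain operators would have.

    long v = -1024L;
    long up   = lsl(v, 2);   /* -4096, without the signed left-shift UB */
    long down = asr(v, 2);   /* -256, since ~(~(-1024) >> 2) = ~(1023 >> 2) = ~255 = -256 */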
The lsl and asr macros use sizeof instead of the type-generic mechanism to remain compatible with compilers that lack type-generic support. Once these macros were written, they needed to be applied where required. To preserve the benefits of detecting programming errors, they were only applied where required, not blindly across the whole codebase. There are a couple of common patterns in the math code using shift operators. One is when computing the exponent value for subnormal numbers.
for (ix = -1022, i = hx << 11; i > 0; i <<= 1)
    ix -= 1;
This code computes the exponent by shifting the significand left by 11 bits (the width of the exponent field) and then incrementally shifting it one bit at a time until the sign flips, which indicates that the most-significant bit is set. Use of the pre-C99 definition of the left shift operator is intentional here, so both shifts are replaced with our lsl operator. In the implementation of pow, the final exponent is computed as the sum of the two exponents, both of which are in the allowed range. The resulting sum is then tested to see whether it is zero or negative, which would mean the final value is sub-normal:
hx += n << 20;
if (hx >> 20 <= 0)
    /* do sub-normal things */
In this case, the exponent adjustment, n, is a signed value and so that shift is replaced with the lsl macro. The test value needs to compute the correct sign bit, so we replace this with the asr macro. Because the right shift operation is not undefined, we only use our fancy macro above when the undefined behavior sanitizer is enabled. On the other hand, the lsl macro should have zero cost and covers undefined behavior, so it is always used.

Actual Bugs Found!

The goal of this little adventure was both to make using the undefined behavior sanitizer with picolibc possible as well as to use the sanitizer to identify bugs in the library code. I fully expected that most of the effort would be spent masking harmless undefined behavior instances, but was hopeful that the effort would also uncover real bugs in the code. I was not disappointed. Through this work, I found (and fixed) eight bugs in the code:
  1. setlocale/newlocale didn't check for NULL locale names
  2. qsort was using uintptr_t to swap data around. On MSP430 in 'large' mode, that's a 20-bit type inside a 32-bit representation.
  3. random() was returning values in int range rather than long.
  4. m68k assembly for memcpy was broken for sizes > 64kB.
  5. freopen returned NULL, even on success
  6. The optimized version of memrchr was always performing unaligned accesses.
  7. String to float conversion had a table missing four values. This caused an array access overflow which resulted in imprecise values in some cases.
  8. vfwscanf mis-parsed floating point values by assuming that wchar_t was unsigned.
Sanitizer Wishes

While it's great to have a way to detect places in your C code which evoke undefined and implementation-defined behaviors, it seems like this tooling could easily be extended to detect other common programming mistakes, even where the code is well defined according to the language spec. An obvious example is in unsigned arithmetic. How many bugs come from this seemingly innocuous line of code?
    p = malloc(sizeof(*p) * c);
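The problem is that sizeof(*p) * c can silently wrap around. As a sketch of my own (not from the post), GCC and Clang's __builtin_mul_overflow makes the check explicit:

    size_t bytes;
    if (__builtin_mul_overflow(c, sizeof(*p), &bytes))
        return NULL;         /* c elements would not fit in size_t */
    p = malloc(bytes);

Calling calloc(c, sizeof(*p)) has a similar effect, since decent implementations check the multiplication internally, at the cost of zero-filling the allocation.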
Because sizeof returns an unsigned value, the resulting computation never results in undefined behavior, even when the multiplication wraps around, so even with the undefined behavior sanitizer enabled, this bug will not be caught. Clang seems to have an unsigned integer overflow sanitizer which should do this, but I couldn't find anything like this in gcc.

Summary

The undefined behavior sanitizers present in clang and gcc both provide useful diagnostics which uncover some common programming errors. In most cases, replacing undefined behavior with defined behavior is straightforward, although the lack of an arithmetic right shift operator in standard C is irksome. I recommend anyone using C to give it a try.

Ben Hutchings: FOSS activity in March 2025

Ben Hutchings: FOSS activity in February 2025

Ben Hutchings: FOSS activity in January 2025

Ben Hutchings: FOSS activity in November 2024

11 April 2025

Reproducible Builds: Reproducible Builds in March 2025

Welcome to the third report in 2025 from the Reproducible Builds project. Our monthly reports outline what we've been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. As usual, however, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. Table of contents:
  1. Debian bookworm live images now fully reproducible from their binary packages
  2. How NixOS and reproducible builds could have detected the xz backdoor
  3. LWN: Fedora change aims for 99% package reproducibility
  4. Python adopts PEP standard for specifying package dependencies
  5. OSS Rebuild real-time validation and tooling improvements
  6. SimpleX Chat server components now reproducible
  7. Three new scholarly papers
  8. Distribution roundup
  9. An overview of Supply Chain Attacks on Linux distributions
  10. diffoscope & strip-nondeterminism
  11. Website updates
  12. Reproducibility testing framework
  13. Upstream patches

Debian bookworm live images now fully reproducible from their binary packages

Roland Clobus announced on our mailing list this month that all the major desktop variants (i.e. GNOME, KDE, etc.) can be reproducibly created for Debian bullseye, bookworm and trixie from their (pre-compiled) binary packages. Building reproducible Debian live images does not require building from reproducible source code, but this is still a remarkable achievement. Some large proportion of the binary packages that comprise these live images can be (and were) built reproducibly, but live image generation works at a higher level. (By contrast, full or end-to-end reproducibility of a bootable OS image will, in time, require both the compile-the-packages and the build-the-bootable-image stages to be reproducible.) Nevertheless, in response, Roland's announcement generated significant congratulations as well as some discussion regarding the finer points of the terms employed: a full outline of the replies can be found here. The news was also picked up by Linux Weekly News (LWN) as well as by Hacker News.

How NixOS and reproducible builds could have detected the xz backdoor

Julien Malka aka luj published an in-depth blog post this month with the highly-stimulating title "How NixOS and reproducible builds could have detected the xz backdoor for the benefit of all". Starting with a dive into the relevant technical details of the XZ Utils backdoor, Julien's article goes on to describe how we might avoid the xz catastrophe in the future by building software from trusted sources and building trust into untrusted release tarballs by way of comparing sources and leveraging bitwise reproducibility, i.e. applying the practices of Reproducible Builds. The article generated significant discussion on Hacker News as well as on Linux Weekly News (LWN).

LWN: Fedora change aims for 99% package reproducibility

Linux Weekly News (LWN) contributor Joe Brockmeier has published a detailed round-up on how a Fedora change aims for 99% package reproducibility. The article opens by mentioning that although Debian "has been working toward reproducible builds for more than a decade", the Fedora project has now:
progressed far enough that the project is now considering a change proposal for the Fedora 43 development cycle, expected to be released in October, with a goal of making 99% of Fedora's package builds reproducible. So far, reaction to the proposal seems favorable and focused primarily on how to achieve the goal with minimal pain for packagers rather than whether to attempt it.
The Change Proposal itself is worth reading:
Over the last few releases, we [Fedora] changed our build infrastructure to make package builds reproducible. This is enough to reach 90%. The remaining issues need to be fixed in individual packages. After this Change, package builds are expected to be reproducible. Bugs will be filed against packages when an irreproducibility is detected. The goal is to have no fewer than 99% of package builds reproducible.
Further discussion can be found on the Fedora mailing list as well as on Fedora's Discourse instance.

Python adopts PEP standard for specifying package dependencies

Python developer Brett Cannon reported on Fosstodon that PEP 751 was recently accepted. This design document has the purpose of describing "a file format to record Python dependencies for installation reproducibility". As the abstract of the proposal states:
This PEP proposes a new file format for specifying dependencies to enable reproducible installation in a Python environment. The format is designed to be human-readable and machine-generated. Installers consuming the file should be able to calculate what to install without the need for dependency resolution at install-time.
The PEP, which itself supersedes PEP 665, mentions that "there are at least five well-known solutions to this problem in the community".

OSS Rebuild real-time validation and tooling improvements

OSS Rebuild aims to automate rebuilding upstream language packages (e.g. from the PyPI, crates.io and npm registries) and publish signed attestations and build definitions for public use. OSS Rebuild is now attempting rebuilds as packages are published, shortening the time to validating rebuilds and publishing attestations. Aman Sharma contributed classifiers and fixes for common sources of non-determinism in JAR packages. Improvements were also made to some of the core tools in the project:
  • timewarp for simulating the registry responses from sometime in the past.
  • proxy for transparent interception and logging of network activity.
  • and stabilize, yet another nondeterminism fixer.

SimpleX Chat server components now reproducible

SimpleX Chat is a privacy-oriented decentralised messaging platform that eliminates user identifiers and metadata, offers end-to-end encryption and has a unique approach to decentralised identity. Starting from version 6.3, SimpleX has implemented reproducible builds for its server components. This advancement allows anyone to verify that the binaries distributed by SimpleX match the source code, improving transparency and trustworthiness.

Three new scholarly papers

Aman Sharma of the KTH Royal Institute of Technology in Stockholm, Sweden published a paper on Build and Runtime Integrity for Java (PDF). The paper's abstract notes that "Software Supply Chain attacks are increasingly threatening the security of software systems" and goes on to compare build- and run-time integrity:
Build-time integrity ensures that the software artifact creation process, from source code to compiled binaries, remains untampered. Runtime integrity, on the other hand, guarantees that the executing application loads and runs only trusted code, preventing dynamic injection of malicious components.
Aman's paper explores solutions to safeguard Java applications and proposes some novel techniques to detect malicious code injection. A full PDF of the paper is available.
In addition, Hamed Okhravi and Nathan Burow of Massachusetts Institute of Technology (MIT) Lincoln Laboratory along with Fred B. Schneider of Cornell University published a paper in the most recent edition of IEEE Security & Privacy on Software Bill of Materials as a Proactive Defense:
The recently mandated software bill of materials (SBOM) is intended to help mitigate software supply-chain risk. We discuss extensions that would enable an SBOM to serve as a basis for making trust assessments, thus also serving as a proactive defense.
A full PDF of the paper is available.
Lastly, congratulations to Giacomo Benedetti of the University of Genoa for publishing their PhD thesis. Titled Improving Transparency, Trust, and Automation in the Software Supply Chain, Giacomo's thesis:
addresses three critical aspects of the software supply chain to enhance security: transparency, trust, and automation. First, it investigates transparency as a mechanism to empower developers with accurate and complete insights into the software components integrated into their applications. To this end, the thesis introduces SUNSET and PIP-SBOM, leveraging modeling and SBOMs (Software Bill of Materials) as foundational tools for transparency and security. Second, it examines software trust, focusing on the effectiveness of reproducible builds in major ecosystems and proposing solutions to bolster their adoption. Finally, it emphasizes the role of automation in modern software management, particularly in ensuring user safety and application reliability. This includes developing a tool for automated security testing of GitHub Actions and analyzing the permission models of prominent platforms like GitHub, GitLab, and BitBucket.

Distribution roundup

In Debian this month:
The IzzyOnDroid Android APK repository reached another milestone in March, crossing the 40% coverage mark: specifically, more than 42% of the apps in the repository are now reproducible. Thanks to funding by NLnet/Mobifree, the project was also able to put more time into their tooling. For instance, developers can now easily run their own verification builder in "less than 5 minutes". This currently supports Debian-based systems, but support for RPM-based systems is incoming. Future work is in the pipeline, including documentation, guidelines and helpers for debugging.
Fedora developer Zbigniew Jędrzejewski-Szmek announced a work-in-progress script called fedora-repro-build which attempts to reproduce an existing package within a Koji build environment. Although the project's README file lists a number of fields that will always or almost always vary (and there is a non-zero list of other known issues), this is an excellent first step towards full Fedora reproducibility (see above for more information).
Lastly, in openSUSE news, Bernhard M. Wiedemann posted another monthly update for his work there.

An overview of Supply Chain Attacks on Linux distributions

Fenrisk, a cybersecurity risk-management company, has published a lengthy overview of Supply Chain Attacks on Linux distributions. Authored by Maxime Rinaudo, the article asks:
[What] would it take to compromise an entire Linux distribution directly through their public infrastructure? Is it possible to perform such a compromise as simple security researchers with no available resources but time?

diffoscope & strip-nondeterminism diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 290, 291, 292 and 293 and 293 to Debian:
  • Bug fixes:
    • file(1) version 5.46 now returns XHTML document for .xhtml files such as those found nested within our .epub tests.
    • Also consider .aar files as APK files, at least for the sake of diffoscope.
    • Require the new, upcoming, version of file(1) and update our quine-related testcase.
  • Codebase improvements:
    • Ensure all calls to our_check_output in the ELF comparator have the potential CalledProcessError exception caught.
    • Correct an import masking issue.
    • Add a missing subprocess import.
    • Reformat openssl.py.
    • Update copyright years.
In addition, Ivan Trubach contributed a change to ignore the st_size metadata entry for directories as it is essentially arbitrary and introduces unnecessary or even spurious changes.

Website updates

Once again, there were a number of improvements made to our website this month, including:

Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In March, a number of changes were made by Holger Levsen, including:
  • reproduce.debian.net-related:
    • Add links to two related bugs about buildinfos.debian.net.
    • Add an extra sync to the database backup.
    • Overhaul the description of what the service is about.
    • Improve the documentation to indicate the need to fix synchronisation pipes.
    • Improve the statistics page by breaking down output by architecture.
    • Add a copyright statement.
    • Add a space after the package name so one can search for specific packages more easily.
    • Add a script to work around/implement a missing feature of debrebuild.
  • Misc:
    • Run debian-repro-status at the end of the chroot-install tests.
    • Document that we have unused disk space at Ionos.
In addition:
  • James Addison made a number of changes to the reproduce.debian.net homepage.
  • Jochen Sprickerhof updated the statistics generation to catch "No space left on device" issues.
  • Mattia Rizzolo added a better command to stop the builders and fixed the reStructuredText syntax in the README.infrastructure file.
And finally, node maintenance was performed by Holger Levsen and Mattia Rizzolo.

Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:
Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

Bits from Debian: Bits from the DPL

Dear Debian community, this is bits from DPL for March (sorry for the delay, I was waiting for some additional input).

Conferences

In March, I attended two conferences, each with a distinct motivation. I joined FOSSASIA to address the imbalance in geographical developer representation. Encouraging more developers from Asia to contribute to Free Software is an important goal for me, and FOSSASIA provided a valuable opportunity to work towards this. I also attended Chemnitzer Linux-Tage, a conference I have been part of for over 20 years. To me, it remains a key gathering for the German Free Software community: a place where contributors meet, collaborate, and exchange ideas.

I have a remark about submitting an event proposal to both FOSDEM and FOSSASIA:

Cross distribution experience exchange
As Debian Project Leader, I have often reflected on how other Free Software distributions address challenges we all face. I am interested in discussing how we can learn from each other to improve our work and better serve our users. Recognizing my limited understanding of other distributions, I aim to bridge this gap through open knowledge exchange. My hope is to foster a constructive dialogue that benefits the broader Free Software ecosystem. Representatives of other distributions are encouraged to participate in this BoF whether as contributors or official co-speakers. My intention is not to drive the discussion from a Debian-centric perspective but to ensure that all distributions have an equal voice in the conversation.
This event proposal was part of my commitment from my 2024 DPL platform, specifically under the section "Reaching Out to Learn". Had it been accepted, I would have also attended FOSDEM. However, both FOSDEM and FOSSASIA rejected the proposal. In hindsight, reaching out to other distribution contributors beforehand might have improved its chances. I may take this approach in the future if a similar opportunity arises. That said, rejecting an interdistribution discussion without any feedback is, in my view, a missed opportunity for collaboration.

FOSSASIA Summit

The 14th FOSSASIA Summit took place in Bangkok. As a leading open-source technology conference in Asia, it brings together developers, startups, and tech enthusiasts to collaborate on projects in AI, cloud computing, IoT, and more. With a strong focus on open innovation, the event features hands-on workshops, keynote speeches, and community-driven discussions, emphasizing open-source software, hardware, and digital freedom. It fosters a diverse, inclusive environment and highlights Asia's growing role in the global FOSS ecosystem.

I presented a talk on Debian as a Global Project and led a packaging workshop. Additionally, to further support attendees interested in packaging, I hosted an extra self-organized workshop at a hacker café, initiated by participants eager to deepen their skills. There was another Debian-related talk given by Ananthu titled "The Herculean Task of OS Maintenance - The Debian Way!"

To further my goal of increasing diversity within Debian, particularly by encouraging more non-male contributors, I actively engaged with attendees, seeking opportunities to involve new people in the project. Whether through discussions, mentoring, or hands-on sessions, I aimed to make Debian more approachable for those who might not yet see themselves as contributors. I was fortunate to have the support of Debian enthusiasts from India and China, who ran the Debian booth and helped create a welcoming environment for these conversations. Strengthening diversity in Free Software is a collective effort, and I hope these interactions will inspire more people to get involved.

Chemnitzer Linuxtage

The Chemnitzer Linux-Tage (CLT) is one of Germany's largest and longest-running community-driven Linux and open-source conferences, held annually in Chemnitz since 2000. It has been my favorite conference in Germany, and I have tried to attend every year. Focusing on Free Software, Linux, and digital sovereignty, CLT offers a mix of expert talks, workshops, and exhibitions, attracting hobbyists, professionals, and businesses alike. With a strong grassroots ethos, it emphasizes hands-on learning, privacy, and open-source advocacy while fostering a welcoming environment for both newcomers and experienced Linux users.

Despite my appreciation for the diverse and high-quality talks at CLT, my main focus was on connecting with people who share the goal of attracting more newcomers to Debian. Engaging with both longtime contributors and potential new participants remains one of the most valuable aspects of the event for me. I was fortunate to be joined by Debian enthusiasts staffing the Debian booth, where I found myself among both experienced booth volunteers who have attended many previous CLT events and young newcomers. This was particularly reassuring, as I certainly can't answer every detailed question at the booth. I greatly appreciate the knowledgeable people who represent Debian at this event and help make it more accessible to visitors.
As a small point of comparison (while FOSSASIA and CLT are fundamentally different events), the gender ratio stood out: FOSSASIA had a noticeably higher proportion of women compared to Chemnitz. This contrast highlighted the ongoing need to foster more diversity within Free Software communities in Europe.

At CLT, I gave a talk titled "Tausend Freiwillige, ein Ziel" (Thousand Volunteers, One Goal), which was video recorded. It took place in the grand auditorium and attracted a mix of long-term contributors and newcomers, making for an engaging and rewarding experience.

Kind regards,
Andreas.

Gunnar Wolf: Culture as a positive freedom

This post is an unpublished review for La cultura libre como libertad positiva (Free culture as a positive freedom)
Please note: This review is not meant to be part of my usual contributions to ACM's Computing Reviews. I do want, though, to share it with people that follow my general interests and such stuff.
This article was published almost a year ago, and I read it just after relocating from Argentina back to Mexico. I came from a country starting to realize the shock it meant to be ruled by an autocratic, extreme right-wing president willing to overrun its Legislative branch and bent on destroying the State itself, not too different from what we are now witnessing on a global level.

I have been a strong proponent and defender of Free Software and of Free Culture throughout my adult life. And I have been a Socialist since my early teenage years. I cannot say there is a strict correlation between them, but there is a big intersection of people and organizations who align with both, and Ártica (and Mariana Fossatti) are clearly among them.

Freedom is a word that has brought us many misunderstandings throughout the past many decades. We will say that Freedom can only be brought hand-by-hand with Equality, Fairness and Tolerance. But the extreme right wing (is it still bordering on Fascism, or has it finally embraced it as its true self?) that has grown so much in many countries over the last years also seems to have appropriated the term, even taking it as their definition. In English (particularly, in USA English), liberty is a more patriotic term, and freedom is more personal (although the term used for the market is free market); in Spanish, we conflate them both under libre.

Mariana refers to a third blog, by Rolando Astarita, where the author introduces the concepts of positive and negative freedom/liberties. Astarita characterizes negative freedom as an individual's possibility to act without interference or coercion, limited by other people's freedom, while positive freedom is the real capacity to exercise one's autonomy and achieve self-realization; this does not depend on a person on their own, but on different social conditions. Astarita understands the Marxist tradition to emphasize positive freedom.

Mariana brings this definition to our usual discussion on licensing: if we follow negative freedom, we will understand free licenses as the idea of access without interference to cultural or information goods, as long as it's legal (in order not to infringe others' property rights). Licensing is seen as a private contract, and each individual can grant access and use of their works at will.

The previous definition might be enough for many, but, she says, it is missing something important. The practical effect of many individuals renouncing a bit of control over their property rights produces, collectively, the common goods. They constitute a pool of knowledge or culture that is no longer an individual, contractual issue, but grows and becomes social, collective. Negative freedom does not go further, but positive liberty allows broadening the horizon, and takes us to a notion of free culture that, by strengthening the commons, widens social rights.

She closes the article by stating (and I'll happily sign as if they were my own words) that we are Free Culture militants "not only because it affirms the individual sovereignty to deliver and receive cultural resources, in an intellectual property framework guaranteed by the state. Our militancy is of widening the cultural enjoyment and participation to the collective through the defense of common cultural goods (…) We want to build Free Culture for a Free Society. But a Free Society is not a society of free owners, but a society emancipated from the structures of economic power and social privilege that block this potential collective".

9 April 2025

Freexian Collaborators: Debian Contributions: Preparations for Trixie, Updated debvm, DebConf 25 registration website updates and more! (by Anupa Ann Joseph)

Debian Contributions: 2025-03 Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Preparing for Trixie, by Raphaël Hertzog As we are approaching the trixie freeze, it is customary for Debian developers to review their packages and clean them up in preparation for the next stable release. That's precisely what Raphaël did with publican, a package that had not seen any change since the last Debian release and that partially stopped working along the way due to a major Perl upgrade. While upstream's activity is close to zero, hope is not yet entirely gone as the git repository moved to a new location a couple of months ago and contained the required fix. Raphaël also developed another fix to avoid an annoying warning that was seen at runtime. Raphaël also ensured that the last upstream version of zim was uploaded to Debian unstable, and developed a fix for gnome-shell-extension-hamster to make it work with GNOME 48 and thus ensure that the package does not get removed from trixie.

Abseil and re2 transition in Debian, by Stefano Rivera One of the last transitions to happen for trixie was an update to abseil, bringing it up to 202407. This library is a dependency for one of Freexian's customers, as well as blocking newer versions of re2, a package maintained by Stefano. The transition had been stalled for several months while some issues with reverse dependencies were investigated and dealt with. It took a final push to make the transition happen, including fixing a few newly discovered problems downstream. The abseil package's autopkgtests were (trivially) broken by newer cmake versions, and some tests started failing on PPC64 (a known issue upstream).

debvm uploaded, by Helmut Grohne debvm is a command-line tool for quickly creating a Debian-based virtual machine for testing purposes. Over time, it accumulated quite a few minor issues as well as CI failures. The most notorious one was an ARM32 failure present since August. It was diagnosed down to a glibc bug by Tj and Chris Hofstaedtler, and little has happened since then. To keep debvm somewhat working, it now contains a workaround for this situation. Few changes are expected to be noticeable, but related tools such as apt, file, linux, passwd, and qemu required quite a few adaptations all over the place. Much of the necessary debugging was contributed by others.
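For readers who have not used the tool, a minimal usage sketch follows; the exact flag spellings are recalled from debvm's documentation and may differ:
debvm-create --release=unstable --output=sid.ext4   # build a small Debian sid disk image
debvm-run --image=sid.ext4                          # boot it in qemu for interactive testing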

DebConf 25 Registration website, by Stefano Rivera and Santiago Ruano Rincón DebConf 25, the annual Debian developer conference, is now open for registration. Other than preparing the conference website, getting there always requires some last-minute changes to the software behind the registration interface, and this year was no exception. Every year, the conference is a little different to previous years, and has some different details that need to be captured from attendees. And every year we make minor incremental improvements to fix long-standing problems. New concepts this year included: brunch, the closing talks on the departure day, venue security clearance, partial contributions towards food and accommodation bursaries, and attendee-selected bursary budgets.

Miscellaneous contributions
  • Helmut uploaded guess-concurrency incorporating feedback from others.
  • Helmut reacted to rebootstrap CI results and adapted it to cope with changes in unstable.
  • Helmut researched real-world /usr-move fallout, though little was actually attributable. He also NMUed systemd, unsuccessfully.
  • Helmut sent 12 cross build patches.
  • Helmut looked into undeclared file conflicts in Debian more systematically and filed quite a few bugs.
  • Helmut attended the cross/bootstrap sprint in Würzburg. A report of the event is pending.
  • Lucas worked on the CFP and tracks definition for DebConf 25.
  • Lucas worked on some bits involving Rails 7 transition.
  • Carles investigated why the piuparts job on salsa-ci/pipeline was passing while piuparts.debian.org was failing for the simplemonitor package, and created an issue and an MR with a suggested fix, currently under discussion.
  • Carles improved the documentation of salsa-ci/pipeline: added documentation for different variables.
  • Carles made debian-history package reproducible (with help from Chris Lamb).
  • Carles updated simplemonitor package (new upstream version), prepared a new qdacco version (fixed bugs in qdacco, packaged with the upgrade from Qt 5 to Qt 6).
  • Carles reviewed and submitted translations to Catalan for adduser, apt, shadow, apt-listchanges.
  • Carles reviewed and created merge requests for translations to Catalan of 38 packages (using po-debconf-manager tooling). Created 40 bug reports for some merge requests that haven't been actioned for some time.
  • Colin Watson fixed 59 RC bugs (including 26 packages broken by the long-overdue removal of dh-python's dependency on python3-setuptools), and upgraded 38 packages (mostly Python-related) to new upstream versions.
  • Colin worked with Pranav P to track down and fix a dnspython autopkgtest regression on s390x caused by an endianness bug in pylsqpack.
  • Colin fixed a time-based test failure in python-dateutil that would have triggered in 2027, and contributed the fix upstream.
  • Colin fixed debconf to automatically use the noninteractive frontend if stdin is not a terminal (a short sketch of the idea follows after this list).
  • Stefano bisected and fixed a pypy translation regression on Debian stable and older on 32-bit ARM.
  • Emilio coordinated and helped finish various transitions in light of the transition freeze.
  • Thorsten Alteholz uploaded cups-filters to fix an FTBFS with a new upstream version of qpdf.
  • With the aim of enhancing the support for packages related to Software Bills of Materials (SBOMs) in recent industrial standards, Santiago finished the packaging of, and uploaded, the CycloneDX Python library. There is ongoing work on the SPDX Python tools, but it requires (build-)dependencies currently not shipped in Debian, such as owlrl and pyshacl.
  • Anupa worked with the Publicity team to announce the Debian 12.10 point release.
  • Anupa, with the support of Santiago, prepared an announcement and announced the opening of the CfP and registration for DebConf 25.
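To illustrate the debconf fix mentioned above: the underlying idea is a plain terminal check. A hedged sketch in shell follows (the real change lives in debconf's Perl code; this only shows the shape of the logic):
# Pick the debconf frontend based on whether stdin is a terminal.
if [ -t 0 ]; then
    FRONTEND=dialog          # interactive: stdin is a TTY
else
    FRONTEND=noninteractive  # piped or redirected stdin: never prompt
fi
DEBIAN_FRONTEND=$FRONTEND apt-get install -y hello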

7 April 2025

Scarlett Gately Moore: KDE Snap Updates, Kubuntu Updates, More life updates!

Icy morning, Witch Wells, AZ
Life: Last week we were enjoying springtime; this week winter has made a comeback! Good news on the broken-arm front: the infection is gone, so they can finally deal with the break itself again. I will have a less invasive surgery April 25th to pull the bones back together so they can properly knit. If you can spare any change, please consider a donation to my continued healing and recovery, or just support my work.

Kubuntu: While testing the Beta I came across some crashy apps (namely PIM) due to AppArmor. I have uploaded fixed profiles for kmail, akregator, akonadiconsole, konqueror, and tellico.

KDE Snaps: Added SCTP support in Qt: https://invent.kde.org/neon/snap-packaging/kde-qt6-core-sdk/-/commit/bbcb1dc39044b930ab718c8ffabfa20ccd2b0f75 This will allow me to finish a pyside6 snap and fix the FreeCAD build. Changed the build type to Release in the kf6-core24-sdk, which will reduce the size of kf6-core24 significantly. Fixed a few startup errors in the kf5-core24 and kf6-core24 snapcraft-desktop-integration. Soumyadeep fixed Wayland icons in https://invent.kde.org/neon/snap-packaging/kf6-core-sdk/-/merge_requests/3 KDE Applications 25.03.90 RC released to candidate (I know it says 24.12.3; the version won't be updated until the 25.04.0 release). Kasts core24 fixed in candidate. Kate now core24 with Breeze theme, in candidate! Neochat: fixed missing QML and 25.04 dependencies in candidate. Kdenlive now with Glaxnimate animations, in candidate! Digikam 8.6.0 now with scanner support in stable. Kstars 3.7.6 released to stable for realz; removed store-rejected plugs. Thanks for stopping by!

4 April 2025

Gunnar Wolf: Naming things revisited

How long has it been since you last saw a conversation across different blogs syndicated on the same planet? Well, it's one of the good memories of the early 2010s. And there is an opportunity to re-engage! I came across Evgeni's post "naming things is hard" in Planet Debian. So, what names have I given my computers? I have had many since the mid-1990s. I also had several during the decade before that but, before Linux, my computers didn't have a formal name; naming my computers is something nice Linux gave me. I have forgotten many. Some of the names I have used:

Guido Günther: Booting an Android custom kernel on a Pixel 3a for QMI debugging

As you might know I'm not much of an Android user (let alone developer), but in order to figure out how something low-level works you sometimes need to peek at how vendor kernels handle it. For that it is often useful to add additional debugging. One such case is the QMI communication going on in Qualcomm SOCs. Joel Selvaraj wrote some nice tooling for this. To make use of it, a rooted device and a small kernel patch are needed, and what would be a no-brainer with Linux Mobile took me a moment to get working on Android. Here are the steps I took on a Pixel 3a to first root the device via Magisk, then build the patched kernel and put it into a boot.img to boot it. Flashing the factory image If you still have Android on the device you can skip this step. You can get Android 12 from developers.google.com. I've downloaded sargo-sp2a.220505.008-factory-071e368a.zip. Then put the device into Fastboot mode (Power + Vol-Down), connect it to your PC via USB, unzip the archive and reflash the phone:
unzip sargo-sp2a.220505.008-factory-071e368a.zip
cd sargo-sp2a.220505.008/   # the factory zip unpacks into a directory containing flash-all.sh
./flash-all.sh
This wipes your device! I had to run it twice since it would time out on the first run. Note that this unpacked zip contains another zip (image-sargo-sp2a.220505.008.zip) which will become useful below. Enabling USB debugging Now boot Android and enable Developer mode by going to Settings → About, then touching Build Number (at the very bottom) 7 times. Go back one level, then go to System → Developer Options and enable "USB Debugging". Obtaining boot.img There are several ways to get boot.img. If you just flashed Android above then you can fetch boot.img from the already mentioned image-sargo-sp2a.220505.008.zip:
unzip image-sargo-sp2a.220505.008.zip boot.img
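Not part of the original steps, but before continuing it is worth confirming that adb can actually talk to the phone:
adb devices   # the phone should be listed as "device" once the USB debugging prompt is accepted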
If you want to fetch the exact boot.img from your device you can use TWRP (see the very end of this post). Becoming root with Magisk Being able to su via adb will later be useful to fetch kernel logs. For that we first download Magisk as an APK. At the time of writing, v28.1 is current. Once downloaded, we upload the APK and the boot.img from the previous step onto the phone (which needs to have Android booted):
adb push Magisk-v28.1.apk /sdcard/Download
adb push boot.img /sdcard/Download
In Android, open the Files app, navigate to /sdcard/Download and install the Magisk APK by opening it. We now want to patch boot.img to get su via adb to work (so we can run dmesg). This happens by hitting Install in the Magisk app, then "Select a file to patch". You then select the boot.img we just uploaded. The installation process will create a magisk_patched-<random>.img in /sdcard/Download. We can pull that file via adb back to our PC:
adb pull /sdcard/Download/magisk_patched-28100_3ucVs.img
Then reboot the phone into fastboot (adb reboot bootloader) and flash it (this is optional, see below):
fastboot flash boot magisk_patched-28100_3ucVs.img
Now boot the phone again, open the Magisk app, go to SuperUser at the bottom and enable Shell. If you now connect to the phone via adb again, su should work:
adb shell
su
As noted above, if you want to keep your Android installation pristine you don't even need to flash this Magisk-enabled boot.img. I've flashed it so I have su access for other operations too. If you don't want to flash it you can still test-boot it via:
fastboot boot magisk_patched-28100_3ucVs.img
and then perform the same adb shell su check as above. Building the custom kernel For our QMI debugging to work we need to patch the kernel a bit and place that in boot.img too. So let's build the kernel first. For that we install the necessary tools (which are thankfully packaged in Debian) and fetch the Android kernel sources:
sudo apt install repo android-platform-tools-base kmod ccache build-essential mkbootimg
mkdir aosp-kernel && cd aosp-kernel
repo init -u https://android.googlesource.com/kernel/manifest -b android-msm-bonito-4.9-android12L
repo sync
With that we can apply Joel's kernel patches and also compile in the touch controller driver, so we don't need to worry about whether the modules in the initramfs match the kernel. The kernel sources are in private/msm-google. I've just applied the diffs on top with patch, modified the defconfig and committed the changes (a sketch of this step follows the build command below). The resulting tree is here. We then build the kernel:
PATH=/usr/sbin:$PATH ./build_bonito.sh
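For the patch step described above, a hedged sketch of what it can look like; the patch file name and commit message here are hypothetical:
cd private/msm-google
patch -p1 < ~/qmi-debug-logging.diff    # hypothetical file name for Joel's diffs
git commit -am "Add QMI debug logging"  # commit so the working tree stays clean
cd ../..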
The resulting kernel is at ./out/android-msm-pixel-4.9/private/msm-google/arch/arm64/boot/Image.lz4-dtb. In order to boot that kernel I found it simplest to just replace the kernel in the Magisk-patched boot.img, as we have that already. In case you have already deleted it for any reason, we can always fetch the current boot.img from the phone via TWRP (see below). Preparing a new boot.img To replace the kernel in our Magisk-enabled magisk_patched-28100_3ucVs.img from above with the just-built kernel, we can use mkbootimg. I basically copied the steps we're using when building the boot.img on the Linux Mobile side:
ARGS=$(unpack_bootimg --format mkbootimg --out tmp --boot_img magisk_patched-28100_3ucVs.img)
CLEAN_PARAMS="$(echo "${ARGS}" | sed -e "s/ --cmdline '.*'//" -e "s/ --board '.*'//")"
cp android-kernel/out/android-msm-pixel-4.9/private/msm-google/arch/arm64/boot/Image.lz4-dtb tmp/kernel
mkbootimg -o "boot.patched.img" ${CLEAN_PARAMS} --cmdline "${ARGS}"
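As a sanity check (my addition, not in the original recipe) you can unpack the freshly built image again and verify that the kernel really was swapped:
unpack_bootimg --boot_img boot.patched.img --out check
cmp check/kernel tmp/kernel && echo "kernel replaced"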
This will give you a boot.patched.img with the just-built kernel. Boot the new kernel via fastboot We can now boot the new boot.patched.img; no need to flash it onto the device for that:
fastboot boot boot.patched.img
Fetching the kernel logs With that we can fetch the kernel logs with the debug output via adb:
adb shell su -c 'dmesg -t' > dmesg_dump.xml
or already filtering out the QMI commands:
adb shell su -c 'dmesg -t' | grep "@QMI@" | sed -e "s/@QMI@//g" &> sargo_qmi_dump.xml
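A variant I find handy (not from the original post): stream the log live instead of dumping it once, assuming the device's dmesg supports the follow flag:
adb shell su -c 'dmesg -w' | grep --line-buffered "@QMI@"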
That's it. You can apply this method for testing out other kernel patches as well. If you want to apply the above to other devices you basically need to make sure you patch the right kernel sources; the other steps should be very similar. In case you just need a rooted boot.img for sargo you can find a patched one here. If this procedure can be improved or streamlined somehow, please let me know. Appendix: Fetching boot.img from the phone If, for some reason, you lost boot.img somewhere along the way, you can always use TWRP to fetch the boot.img currently in use on your phone. First get TWRP for the Pixel 3a. You can boot it directly by putting your device into fastboot mode, then running:
fastboot boot twrp-3.7.1_12-1-sargo.img
Within TWRP select Backup → Boot and back up the file. You can then use adb shell to locate the backup in /sdcard/TWRP/BACKUPS/ and pull it:
adb pull /sdcard/TWRP/BACKUPS/97GAY10PWS/2025-04-02--09-24-24_SP2A220505008/boot.emmc.win
You now have the device's boot.img on your PC and can e.g. replace the kernel or make modifications to the initramfs.

Johannes Schauer Marin Rodrigues: To boldly build what no one has built before

Last week, we (Helmut, Jochen, Holger, Gioele and josch) met in Würzburg for a Debian crossbuilding & bootstrap sprint. We would like to thank Angestöpselt e. V. for generously providing us with their hacker space, which we were able to use exclusively during the four-day sprint. We'd further like to thank Debian for sponsoring Helmut's and Jochen's accommodation. The most important topics that we worked on together were: Our TODO items for after the sprint are: In addition to what was already listed above, people worked on the following tasks specifically: Thank you all for attending this sprint, for making it so productive, and for the amazing atmosphere and enlightening discussions!

1 April 2025

Colin Watson: Free software activity in March 2025

Most of my Debian contributions this month were sponsored by Freexian. You can also support my work directly via Liberapay. OpenSSH Changes in dropbear 2025.87 broke OpenSSH's regression tests. I cherry-picked the fix. I reviewed and merged patches from Luca Boccassi to send and accept the COLORTERM and NO_COLOR environment variables. Python team Following up on last month, I fixed some more uscan errors: I upgraded these packages to new upstream versions: In bookworm-backports, I updated python-django to 3:4.2.19-1. Although Debian's upgrade to python-click 8.2.0 was reverted for the time being, I fixed a number of related problems anyway, since we're going to have to deal with it eventually: dh-python dropped its dependency on python3-setuptools in 6.20250306, which was long overdue, but it had quite a bit of fallout; in most cases this was simply a question of adding build-dependencies on python3-setuptools, but in a few cases there was a missing build-dependency on python3-typing-extensions which had previously been pulled in as a dependency of python3-setuptools. I fixed these bugs resulting from this: We agreed to remove python-pytest-flake8. In support of this, I removed unnecessary build-dependencies from pytest-pylint, python-proton-core, python-pyzipper, python-tatsu, python-tatsu-lts, and python-tinycss, and filed #1101178 on eccodes-python and #1101179 on rpmlint. There was a dnspython autopkgtest regression on s390x. I independently tracked that down to a pylsqpack bug and came up with a reduced test case before realizing that Pranav P had already been working on it; we then worked together on it and I uploaded their patch to Debian. I fixed various other build/test failures: I enabled more tests in python-moto and contributed a supporting fix upstream. I sponsored Maximilian Engelhardt to reintroduce zope.sqlalchemy. I fixed various odds and ends of bugs: I contributed a small documentation improvement to pybuild-autopkgtest(1). Rust team I upgraded rust-asn1 to 0.20.0. Science team I finally gave in and joined the Debian Science Team this month, since it often has a lot of overlap with the Python team, and Freexian maintains several packages under it. I fixed a uscan error in hdf5-blosc (maintained by Freexian), and upgraded it to a new upstream version. I fixed "python-vispy: missing dependency on numpy abi". Other bits and pieces I fixed "debconf should automatically be noninteractive if input is /dev/null". I fixed a build failure with GCC 15 in yubihsm-shell (maintained by Freexian). Prompted by a CI failure in debusine, I submitted a large batch of spelling fixes and some improved static analysis to incus (#1777, #1778) and distrobuilder. After regaining access to the repository, I fixed "telegnome: missing app icon in About dialogue" and made a new 0.3.7 release.
