Search Results: "otto"

9 February 2025

Dave Hibberd: Radio Activity 10-16 Feb 2025

It's been quite the week of radio-related nonsense for me, where I've been channelling my time and brainspace for radio into activity on air and system refinements, not working on Debian.

POTA, Antennas and why do my toys not work? Having had my interest piqued by Ian at mastodon.radio, I looked online and spotted a couple of parks within stumbling distance of my house, that's good news! It looks like the list has been refactored and expanded since I last looked at it, so there are now more entities to activate and explore. My concerns about antennas noted last week rumbled on. There was a second strand to this concern too: my end fed 64:1 (or 49:1?!) transformer from MM0OPX sits in my mind as not having worked very well in Spain last year, and I want to get to the bottom of why. As with most things in my life, it's probably a me problem. I came up with a cunning plan - firstly, buy a new mast to replace the one I broke a few weeks back on Cat Law. Secondly, buy a couple of new connectors and some heatshrink to reterminate my cable that I'm sure is broken. Spending more money on a problem never hurt anyone, right? Come Wednesday, the new toys arrived and I figured combining everything into one convenient night-time walk and radio session was a good plan. So I walk out to the nearest park with my LoRa APRS doofer going and see what happens: APRS-Map. After circling a bit to find somewhere suitable (there appear to be construction works in the park!) I set up my gear in 2°C with frost on the ground, called CQ, spotted and got nothing on either the end fed half wave or the cheap vertical. As it was too late for 20m, I tried 40 and a bit of 80 using the inbuilt tuner, but wasn't heard by stations I called or when calling independently. I packed everything up and lora-doofered my way home, mildly deflated.

Try it at home It still didn't sit right with me that the end fed wasn't working, so come Friday night I set it up in the back garden/woods behind the house to try and diagnose why it wasn't working. Up it went, I worked some Irish stations pretty effortlessly, and down everything came. No complaints - the only things I did differently were to have the feedpoint a little higher and to check my power, limiting it to 10W. The G90 can do 20W; I wonder if running at that was saturating the core in the 64:1. At some point in the evening I stepped in some dog's shit too, and spent some time cleaning my boots outside to avoid further tramping the smell through the house. Win some, lose some.

Take it to the Hills On Friday, some of the other GM-ES SOTA-ists had been out for an activity day. On account of me being busy in work, I couldn't go outside to play, but I figured a weekend of activity was on the books.

Saturday - A day above the clouds On Saturday I took myself up Tap O' Noth, a favourite of mine for some reason, and Lord Arthur's Hill. Before I hit the hills, I took myself to the hackerspace and printed myself a K6ARK Winder and a guy ring for the mast, cut string, tied it together and wound the string on to the winder. I also took time to buzz out my wonky coax and it showed great continuity. Hmm, that can be continued later. I didn't quite get to crimping the radial network of the Aliexpress whip with a 12mm stud crimp, that can also be put on the TODO list.

Tap O' Noth Once finally out, the weather was a bit cloudy with passing snow showers, but in between the showers I was above the clouds and the air was clear: After a mild struggle on 2m, I set up the end fed on the first hill and got to work from the old hill fort: The end fed worked flawlessly. Exactly as promised, switching between 7MHz, 14MHz, 21MHz and 28MHz without a tuner was perfect; I chased hills on all the bands and had a great time. Apart from 40m, where there was absolutely no space due to a contest. That wasn't such a fun time! My fingers were bitterly cold, so on went the big gloves for the descent and I felt like I was warm by the time I made it back to the car. It worked so well, in fact, that I took the 1/4 wave cheap vertical out of my bag and decided to brave it on the next activation.

Lord Arthur's Hill GM5ALX has posted a .gpx to sotlas which is shorter than the other ascent, but much sharper - I figured this would be a fun new way to try up the hill! It takes you right through the heart of the Littlewood Park estate, and I felt a bit uncomfortable walking straight past the estate cottages, especially when there were vehicles moving and active work happening. Presumably this is where Lord Arthur lived, at the foot of his hill. I cut through the woods to the west of the cottages, disturbing some deer and many, many pheasants, but I met the path fairly quickly. From there it was a 2km walk, 300m vertical ascent. Short and sharp! At the top, I was treated to a view of the hill I had activated only an hour or so before, which is a view that always makes me smile: To get some height for the feedpoint, I wrapped the coax around my winder a couple of turns and trapped it with the elastic while draping the coax over the trig. This bought me some more height and I felt clever because of it. Maybe a pole would be easier? From here, I worked inter-G on 40m and had a wee pile-up, eventually working 15 or so European stations on 20m. Pleased with that! I had been considering a third hill, but home was the call in the failing light. Back to the car I walked, to find my key didn't have any battery, so out came the Audi App and I used the Internet of Things to unlock my car. The modern world is bizarre.

Sunday - Cloudy Head // Head in the Clouds Sunday started off migraney, so I stayed within the confines of my house until I felt safe driving! After some back and forth in my cloudy head, I opted for the easier option of Ladylea Hill as I wasn't feeling up for major physical exertion. It was a long drive, after which I felt more wonky, but I hit the path eventually - I run to Hibby Standard Time, a few hours to a few days behind the rest of GM/ES. I was ready to bail if my head didn't improve, but it turns out fresh cold air, silence and bloodflow helped. Ladylea Hill was incredibly quiet, a feature I really appreciated. It feels incredibly remote, with a long winding drive down Glenbuchat, which still has ice on the surface of the lochs and standing water. A brooding summit crowned with grey cloud in fantastic scenery that only revealed itself upon the clouds blowing through: I set up at the cairn and picked up 30 contacts overall, split between 40m and 20m, with some inter-G on 40 and a couple of continental surprises. 20 had longer skip today, so I saw Spain, Finland, Slovenia and Poland. On teardown, I managed to snap the top segment of my brand new mast with my cold, clumsy fingers, but thankfully SOTAbeams stock replacements. More money at the problem, again. Back to the car, no app needed, and homeward bound as the light faded. At the end of the weekend, I find myself finally over 100 activator points and over 400 chaser points. Somehow I've collected more points this year already than last year; the winter bonuses really do stack up!

Addendum - OSMAnd & Open Street Map I've been using OSMAnd on my iPhone quite extensively recently; I think offline mapping is super important if you're going out to get mildly lost in the hills. On more than one occasion, I have confidently set off in the wrong direction in the mist, and maps have saved my bacon! As you can download .gpx files, it's great to have them on the device and available for guidance in case you get lost, coupled with an offline map. Plus, as I drive around I love to have the dark red of a hill I've walked appear on the map in my car dash or in my hand: This weekend I discovered it's possible to have height maps for nice 3D maps and contours marked on the map - you just need to download some additions for the maps. This is a really nice feature; it makes maps prettier and more useful when you're in the middle of nowhere. Open Street Map also has designators for SOTA summits here, and similar for POTA here. GM5ALX has set about adding the summits around Scotland here. While the benefits aren't immediately obvious, it allows developers of mapping applications access to more data at no extra cost, really. It helps add depth to an already rich set of information, and allows us as radio amateurs to do more interesting things with maps and not be shackled to Apple/Google. Because it's open data, we can also fix things we find wrong as users. I like to fix road surfaces after I've been cycling, as that will feed forward to route planning through Komoot and data on my Wahoo too, which can be modified with OSM maps. In the future, it's possible to have an OSMAnd plugin highlighting local SOTA summits or mimicking features of sotl.as but offline. It's cool to be able to put open technologies to use like this in the field and it really is the convergence point of all my favourite things!

29 January 2025

Keith Packard: picolibc-i18n

Internationalization support in Picolibc There are two major internationalization APIs in the C library: locales and iconv. Iconv is an isolated component which only performs charset conversion in ways that don't interact with anything else in the library. Locales affect pretty much every API that deals with strings and cover charset conversion along with a huge range of localized information from character classification to formatting of time, money, people's names, addresses and even standard paper sizes. Picolibc inherits its implementation of both of these from newlib. Given that embedded applications rarely need advanced functionality from either of these APIs, I hadn't spent much time exploring this space.

Newlib locale code When run on Cygwin, Newlib's locale support is quite complete as it leverages the underlying Windows locale support. Without Windows support, everything aside from charset conversion and character classification data is stubbed out at the bottom of the stack. Because the implementation can support full locale functionality, the implementation is designed for that, with large data structures and lots of code. Charset conversion and character classification data for locales is all built-in; none of that can be loaded at runtime. There is support for all of the ISO-8859 charsets, three JIS variants, a bunch of Windows code pages and a few other single-byte encodings. One oddity in this code is that when using a JIS locale, wide characters are stored in EUC-JP rather than Unicode. Every other locale uses Unicode. This means APIs like wctype are implemented by mapping the JIS-encoded character to Unicode and then using the underlying Unicode character classification tables. One consequence of this is that there isn't any Unicode to JIS mapping provided as it isn't necessary. When testing the charset conversion and Unicode character classification data, I found numerous minor errors and a couple of pretty significant ones. The JIS conversion code had the most serious issue I found; most of the conversions are in a 2d array which is manually indexed with the wrong value for the length of each row. This led to nearly every translated value being incorrect. The charset conversion tables and Unicode classification data are now generated using Python charset support and the standard Unicode data files. In addition, tests have been added which compare Picolibc to the system C library for every supported charset.

Newlib iconv code The iconv charset support is completely separate from the locale charset support with a much wider range of supported targets. It also supports loading charset data from files at runtime, which reduces the size of application images. Because the iconv and locale implementations are completely separate, the charset support isn't the same. Iconv supports a lot more charsets, but it doesn't support all of those available to locales. For example, Iconv has Big5 support which locale lacks. Conversely, locale has Shift-JIS support which iconv does not. There's also a difference in how charset names are mapped in the two APIs. The locale code has a small fixed set of aliases, which doesn't include things like US-ASCII or ANSI X3.4. In contrast, the iconv code has an extensive database of charset aliases which are compiled into the library. Picolibc has a few tests for the iconv API which verify charset names and perform some translations. Without an external reference, it's hard to know if the results are correct.
POSIX vs C internationalization In addition to including the iconv API, POSIX extends locale support in a couple of ways:
  1. Exposing locale objects via the newlocale, uselocale, duplocale and freelocale APIs.
  2. uselocale sets a per-thread locale, rather than the process-wide locale.
Goals for Picolibc internationalization support For charsets, supporting UTF-8 should cover the bulk of embedded application needs, and even that is probably more than what most applications require. Most (all?) compilers use Unicode for wide character and string constants. That means wchar_t needs to be Unicode in every locale. Aside from charset support, the rest of the locale infrastructure is heavily focused on creating human-consumable strings. I don't think it's a stretch to say that none of this is very useful these days, even for systems with sophisticated user interactions. For picolibc, the cost to provide any of this would be high. Having two completely separate charset conversion datasets makes for a confusing and error-prone experience for developers. Replacing iconv with code that leverages the existing locale support for translating between multi-byte and wide-character representations will save a bunch of source code and improve consistency. Embedded systems can be very sensitive to memory usage, both read-only and read-write. Applications not using internationalization capabilities shouldn't pay a heavy premium even when the library binary is built with support. For the most sensitive targets, the library should be configurable to remove unnecessary functionality. Picolibc needs to conform to at least the C language standard, and as much of POSIX as makes sense. Fortunately, the requirements for C are modest as it only includes a few locale-related APIs and doesn't include iconv. Finally, picolibc should test these APIs to make sure they conform with relevant standards, especially character set translation and character classification. The easiest way to do this is to reference another implementation of the same API and compare results.

Switching to Unicode for JIS wchar_t This involved ripping the JIS to Unicode translations out of all of the wide character APIs and inserting them into the translations between multi-byte and wide-char representations. The missing Unicode to JIS translation was kludged by iterating over all JIS code points until a matching Unicode value was found. That's an obvious place for a performance improvement, but at least it works.

Tiny locale This is a minimal implementation of locales which conforms with the C language standard while providing only charset translation and character classification data. It handles all of the existing charsets, but splits things into three levels:
  1. ASCII
  2. UTF-8
  3. Extended, including any or all of: a. ISO 8859 b. Windows code pages and other 8-bit encodings c. JIS (JIS, EUC-JP and Shift-JIS)
When built for ASCII-only, all of the locale support is short-circuited, except for error checking. In addition, support in printf and scanf for wide characters is removed by default (it can be re-enabled with the -Dio-wchar=true meson option). This offers the smallest code size. Because the wctype APIs (e.g. iswupper) are all locale-specific, this mode restricts them to ASCII-only, which means they become wrappers on top of the ctype APIs with added range checking.

When built for UTF-8, character classification for wide characters uses tables that provide the full Unicode range. Setlocale now selects between two locales, "C" and "C.UTF-8". Any locale name other than "C" selects the UTF-8 version. If the locale name contains "." or "-", then the rest of the locale name is taken to be a charset name and matched against the list of supported charsets. In this mode, only "us_ascii", "ascii" and "utf-8" are recognized. Because a single byte of a utf-8 string with the high-bit set is not a complete character, all of the ctype APIs in this mode can use the same implementation as the ASCII-only mode. This means the small ctype implementation is available. Calling setlocale(LC_ALL, "C.UTF-8") will allow the application to use the APIs which translate between multi-byte and wide-characters to deal with UTF-8 encoded strings. In addition, scanf and printf can read and write UTF-8 strings into wchar_t strings. Locale names are converted into locale IDs, an enumeration which lists the available locales. Each ID implies a specific charset as that's the only thing which differs between them. This means a locale can be encoded in a few bytes rather than an array of strings.

In terms of memory usage, applications not using locales and not using the wctype APIs should see only a small increase in code space. That's due to the wchar_t support added to printf and scanf which need to translate between multi-byte and wide-character representations. There aren't any tables required as ASCII and UTF-8 are directly convertible to Unicode. On ARM-v7m, the added code in printf and scanf adds up to about 1kB, and another 32 bytes of RAM are used.

The big difference when enabling extended charset support is that all of the charset conversion and character classification operations become table driven and dependent on the locale. Depending on the extended charsets supported, these can be quite large. With all of the extended charsets included, this adds an additional 30kB of code and static data and uses another 56 bytes of RAM. There are two known gaps in functionality compared with the newlib code:
  1. Locale strings that encode different locales for different categories. That's nominally required by POSIX as LC_ALL is supposed to return a string sufficient to restore the locale, but the only category which actually matters is LC_CTYPE.
  2. No nl_langinfo support. This would be fairly easy to add, returning appropriate constant values for each parameter.
Tiny locale was merged to picolibc main in this PR.

Tiny iconv Replacing the bulky newlib iconv code was far easier than swapping locale implementations. Essentially all that iconv does is compute two functions, one which maps from multi-byte to wide-char in one locale and another which maps from wide-char to multi-byte in another locale. Once the JIS locales were fixed to use Unicode, the new iconv implementation was straightforward. POSIX doesn't provide any _l version of mbrtowc or wcrtomb, so using standard C APIs would have been clunky. Instead, the implementation uses the internal APIs to compute the correct charset conversion functions. The entire implementation fits in under 200 lines of code. Tiny iconv is in process in this PR.

Future directions Right now, both of these new bits of code sit in the source tree parallel to the old versions. I'm not seeing any particular reason to keep the old versions around; they have provided a useful point of comparison in developing the new code, but I don't think they offer any compelling benefits going forward.

26 January 2025

Otto Kekäläinen: 10 habits to help becoming a Debian Maintainer

Becoming a Debian maintainer is a journey that combines technical expertise, community collaboration, and continuous learning. In this post, I'll share 10 key habits that will both help you navigate the complexities of Debian packaging without getting lost and enable you to contribute more effectively to one of the world's largest open source projects.

1. Read and re-read the Debian Policy, the Developer's Reference and the git-buildpackage manual Anyone learning Debian packaging and aspiring to become a Debian maintainer is likely to wade through a lot of documentation, only to realize that much of it is outdated or sometimes outright incorrect. Therefore, it is important to learn right from the start which sources are the most reliable and truly worth reading and re-reading. I recommend these documents, in order of importance:
  • The Debian Policy Manual: Describes the structure of the operating system, the package archive, and requirements for packages to be included in the Debian archive.
  • The Developer's Reference: A collection of best practices and process descriptions Debian packagers are expected to follow while interacting with one another.
  • The git-buildpackage man pages: While the Policy focuses on the end result and is intentionally void of practical instructions on creating or maintaining Debian packages, the Developer's Reference goes into greater detail. However, it too lacks step-by-step instructions. For the exact commands, consult the man pages of git-buildpackage and its subcommands (e.g., gbp clone, gbp import-orig, gbp pq, gbp dch, gbp push). See also my post on Debian source package git branch and tags for easy-to-understand diagrams.
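As a concrete starting point, a minimal sketch of a typical update cycle with git-buildpackage might look like this (the repository URL and package name are placeholders, and some teams use different branch layouts or build wrappers):
gbp clone https://salsa.debian.org/debian/hello.git
cd hello
gbp import-orig --uscan    # import the latest upstream release
gbp pq rebase              # refresh debian/patches on top of the new version
gbp dch --auto             # draft a changelog entry from the git history
gbp buildpackage           # build the package (often via a wrapper such as sbuild)
gbp push                   # push the packaging branches and tags to Salsa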

2. Make reading man pages a habit In addition to the above, try to make a habit of checking out the man page of every new tool you use to ensure you are using it as intended. The best place to read accurate and up-to-date documentation is manpages.debian.org. The manual pages are maintained alongside the tools by their developers, ensuring greater accuracy than any third-party documentation. If you are using a tool in the way the tool author documented, you can be confident you are doing the right thing, even if it wasn't explicitly mentioned in some third-party guide about Debian packaging best practices.

3. Read and write emails While members of the Debian community have many channels of communication, the mailing lists are by far the most prominent. Asking questions on the appropriate list is a good way to get current advice from other people doing Debian packaging. Staying subscribed to lists of interest is also a good way to read about new developments as they happen. Note that every post is public and archived permanently, so the discussions on the mailing lists also form a body of documentation that can later be searched and referred to. Regularly writing short and well-structured emails on the mailing lists is great practice for improving technical communication skills, a useful ability in general. For Debian specifically, being active on mailing lists helps build a reputation that can later attract collaborators and supporters for more complex initiatives.

4. Create and use an OpenPGP key Related to reputation and identity, OpenPGP keys play a central role in the Debian community. OpenPGP is used to various degrees to sign git commits and tags, sign and encrypt email, and most importantly to sign Debian packages so their origin can be verified. The process of becoming a Debian Maintainer and eventually a Debian Developer culminates in getting your OpenPGP key included in the Debian keyring, which is used to control who can upload packages into the Debian archive. The earlier you create a key and start using it to sign your work and build a reputation for that specific key, the better. Note that due to a recent schism in the OpenPGP standards working group, it is safest to create an OpenPGP key using GnuPG version 2.2.x (not 2.4.x), or using Sequoia-PGP.
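For example, with a GnuPG 2.2.x installation this could look roughly like the following (the e-mail address is a placeholder and the key parameters are just one reasonable choice, not a recommendation specific to Debian):
gpg --version                                      # confirm you are running a 2.2.x release
gpg --full-generate-key                            # interactive; pick a key type and size, e.g. RSA 4096
gpg --armor --export you@example.org > mykey.asc   # export the public key to share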

5. Integrate Salsa CI in all work One reason Debian remains popular, even 30 years after its inception, is its culture of maintaining high standards. For a newcomer, learning all the quality assurance tools such as Lintian, Piuparts, Adequate, various build variations, and reproducible builds may be overwhelming. However, these tasks are easier to manage thanks to Salsa CI, the continuous integration pipeline in Debian that runs tests on every commit at salsa.debian.org. The earlier you activate Salsa CI in the package repository you are working on, the faster you will achieve high quality in your package with fewer missteps. You can also further customize a package-specific salsa-ci.yml to have more testing coverage. (Example: a Salsa CI pipeline with customizations.)
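A hedged sketch of the usual way to enable it is to commit a debian/salsa-ci.yml that includes the shared pipeline definition and then point the project's CI/CD configuration file setting at that path in the Salsa web UI (double-check the include URL against the current Salsa CI documentation):
cat > debian/salsa-ci.yml <<'EOF'
---
include:
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/recipes/debian.yml
EOF
git add debian/salsa-ci.yml
git commit -m "Enable Salsa CI"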

6. Fork on Salsa and use draft Merge Requests to solicit feedback All modern Debian packages are hosted on salsa.debian.org. If you want to make a change to any package, it is easy to fork it, make an initial attempt at the change, and publish it as a draft Merge Request (MR) on Salsa to solicit feedback. People might have surprising reasons to object to the change you propose, or they might need time to get used to the idea before agreeing to it. Also, some people might object to a vague idea out of suspicion but agree once they see the exact implementation. There may also be a surprising number of people supporting your idea, and if there is an MR, they have a place to show their support. Don't expect every Merge Request to be accepted. However, proposing an idea as running code in an MR is far more effective than raising the idea on a mailing list or in a bug report. Get into the habit of publishing plenty of merge requests to solicit feedback and drive discussions toward consensus.
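A minimal sketch of that flow, with placeholder project and branch names (the MR itself can then be opened as a draft from the Salsa web UI):
git clone https://salsa.debian.org/<your-username>/somepackage.git
cd somepackage
git checkout -b fix-watch-file
# ... edit files, build, test, commit ...
git push -u origin fix-watch-file   # then open a draft Merge Request against the team repository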

7. Use git rebase frequently Linear git history is much easier to read. The ease of reading git log and git blame output is vital in Debian, where packages often have updates from multiple people spanning many years, even decades. Debian packagers likely spend more time than the average software developer reading git history. Make sure you master git commands such as gitk --all, git citool --amend, git commit -a --fixup <commit id>, git rebase -i --autosquash <target branch>, git cherry-pick <commit id 1> <id 2> <id 3>, and git pull --rebase. If rebasing is not done on your initiative, rest assured others will ask you to do it. Thus, if the commands above are familiar, rebasing will be quick and easy for you.
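For instance, a hypothetical cleanup of a feature branch before asking for review could go like this (the commit id, branch names and remote are placeholders):
git commit -a --fixup 1a2b3c4              # record a fix targeting an earlier commit
git rebase -i --autosquash debian/master   # fold the fixup commits into their targets
git push --force-with-lease origin my-feature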

8. Reviews: give some, get some In open source, the larger a project becomes, the more contributions it attracts, and the bottleneck for its growth isn't how much code developers can create but how many code submissions can be properly reviewed. At the time of writing, the main Salsa group "Debian" has over 800 open merge requests pending reviews and approvals. Feel free to read and comment on any merge request you find. You don't have to be a subject matter expert to provide valuable feedback. Even if you don't have specific feedback, your comment as another human acknowledging that you read the MR and found no issues is viewed positively by the author. Besides, if you spend enough time reviewing MRs in a specific domain, you will eventually become an expert in it. Code reviews are not just about providing feedback to the submitter; they are also great learning opportunities for the reviewer. As a rule of thumb, you should review at least twice as many merge requests as you submit yourself.

9. Improve Debian by improving upstream It is common that while packaging software for Debian, bugs are uncovered and patched in Debian. Do not forget to submit the fixes upstream, and add a Forwarded field to the file in debian/patches! As the person building and packaging something in Debian, you automatically become an authority on that software, and the upstream is likely glad to receive your improvements. While submitting patches upstream is a bit of work initially, getting improvements merged upstream eventually saves time for everyone and makes packaging in Debian easier, as there will be fewer patches to maintain with each new upstream release.
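For illustration, the DEP-3 header at the top of a file in debian/patches might look like this (the names and URL below are made up):
Description: Fix crash when the configuration file is empty
Author: Jane Maintainer <jane@example.org>
Forwarded: https://example.org/upstream/project/pull/42
Last-Update: 2025-01-26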

10. Don't hold any habits too firmly Last but not least: once people learn a specific way of working, they tend to stick to it for decades. Learning how to create and maintain Debian packages requires significant effort, and people tend to stop learning once they feel they've reached a sufficient level. This tendency to get stuck in a local optimum is understandable and natural, but try to resist it. It is likely that better techniques will evolve over time, so stay humble and re-evaluate your beliefs and practices every few years. Mastering these habits takes time, but each small step brings you closer to making a meaningful impact on Debian. By staying curious, collaborative, and adaptable, you can ensure your contributions stand the test of time, just like Debian itself. Good luck on your journey toward becoming a Debian Maintainer!

18 January 2025

Dominique Dumont: How we solved storage API throttling on our Azure Kubernetes clusters

Hi. This issue was quite puzzling, so I'm sharing how we investigated it. I hope it can be useful for you. My client informed me that he was no longer able to install new instances of his application. k9s showed that some pods could not be created, specifically the ones that create a persistent volume (PV). The description of these pods showed an HTTP error 429: new PVCs could not be created because we were throttled by the Azure storage API. This issue was confirmed by the Azure diagnostic console on Kubernetes (menu Diagnose and solve problems → Cluster and Control Plane Availability and Performance → Azure Resource Request Throttling). We had a lot of throttling:
2025-01-18_11-01-k8s-throttles.png
Which were explained by the high call rate:
2025-01-18_11-01-k8s-calls.png
The first clue was found at the bottom of the Azure diagnostic page:
2025-01-18_11-27-throttles-by-user-agent.png
According to this page, throttling is done by services whose user agent is:
Go/go1.23.1 (amd64-linux) go-autorest/v14.2.1 Azure-SDK-For-Go/v68.0.0
storage/2021-09-01microsoft.com/aks-operat azsdk-go-armcompute/v1.0.0 (go1.22.3; linux)
The main information is Azure-SDK-For-Go, which means the program making all these calls to the storage API is written in Go. All our services are written in Typescript or Rust, so they are not suspect. That leaves the controllers running in the kube-system namespace. I could not find anything suspect in the logs of these services. At that point I was convinced that a component in the Kubernetes control plane was making all those calls. Unfortunately, AKS is managed by Microsoft and I don't have access to the control plane logs. However, we realized that we had quite a lot of volumesnapshots that are created in our clusters using k8s-scheduled-volume-snapshotter: We suspected that the Kubernetes reconciliation loop was being throttled when checking the status of all these snapshots. Maybe so, but we also had the same issues and throttle rates on preprod and prod, where the numbers of snapshots were quite different. We tried to get more information using the Azure console on our snapshot account, but it was also broken by the throttling issue. We were so puzzled that we decided to try Léodagan's advice (tout cramer pour repartir sur des bases saines, loosely translated as "burn everything down to start from scratch") and we destroyed our dev cluster piece by piece while checking if the throttling stopped. First, we removed all our applications: no change. Then, all ancillary components like rabbitmq and cert-manager were removed: no change. Then, we tried to remove the namespace containing our applications. But we faced another issue: Kubernetes was unable to remove the namespace because it could not destroy some PVC and volumesnapshots. That was actually good news, because it meant that we were close to the actual issue. We managed to destroy the PVC and volumesnapshots by removing their finalizers. Finalizers are a kind of marker that tells Kubernetes that something needs to be done before actually deleting a resource. The finalizers were removed with a command like:
kubectl patch volumesnapshots $volumesnapshot \
  -p '{"metadata":{"finalizers":null}}' --type merge
Then, we got the first progress: the throttling and high call rate stopped on our dev cluster. To make sure that the snapshots were the issue, we re-installed the ancillary components and our applications. Everything was copacetic. So, the problem was indeed with PVC and snapshots. Even though we have backups outside of Azure, we weren't really thrilled at trying Léodagan's method on our prod cluster, so we looked for a better fix to try on our preprod cluster. Poking around in PVC and volumesnapshots, I finally found this error message in the description of a volumesnapshotcontents:
Code="ShareSnapshotCountExceeded" Message="The total number of snapshots
for the share is over the limit."
The number of snapshots found in our cluster was not that high. So I wanted to check the snapshots present in our storage account using the Azure console, which was still broken. Fortunately, the Azure CLI is able to retry HTTP calls when getting 429 errors. I managed to get a list of snapshots with
az storage share list --account-name [redacted] --include-snapshots \
    | tee preprod-list.json
There, I found a lot of snapshots dating back to 2024. These were no longer managed by Kubernetes and should have been cleaned up. That was our smoking gun. I guess that we had a chain of events like: To make things worse, k8s-scheduled-volume-snapshotter creates new snapshots when it cannot list the old ones, so we had 4 new snapshots per day instead of one. Since we knew the chain of events, fixing the issue was not too difficult (but quite long):
  1. stop k8s-scheduled-volume-snapshotter by disabling its cron job
  2. delete all volumesnapshots and volume snapshots contents from k8s.
  3. since the Azure API was throttled, we also had to remove their finalizers (see the sketch after this list)
  4. delete all snapshots from Azure using the az command and a Perl script (this step took several hours)
  5. re-enable k8s-scheduled-volume-snapshotter
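For step 3, a minimal sketch of removing the finalizers in bulk with kubectl (assuming the resources live in the current namespace; the exact loop we used is not shown here):
for vs in $(kubectl get volumesnapshots -o name); do
  kubectl patch "$vs" -p '{"metadata":{"finalizers":null}}' --type merge
done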
After these steps, preprod was back to normal. I'm now applying the same recipe on prod. We still don't know why we had all these stale snapshots. It may have been a human error or a bug in k8s-scheduled-volume-snapshotter. Anyway, to avoid this problem in the future, we will: My name is Dominique Dumont, I'm a devops freelancer. You can find the devops and audit services I propose on my website or reach out to me on LinkedIn. All the best

10 January 2025

Valhalla's Things: Winter

Posted on January 10, 2025
Tags: madeof:atoms, craft:painting, medium:acrylic
An acrylic painting in blue and greenish grey vaguely resembling rocky slopes at the bottom right and a cloudy sky at the top left. A few days ago1 I wanted to paint, but I didn't know what to paint, so I did a few more colour tests to find out which green combinations I can get out of the available yellow and blue (and green) acrylics I have (from a cheap student grade line). A sheet of colour tests with mixes of various yellows and blues. I liked the cool grey tones in the second to last line, from 200 Naples yellow (PY3 PY83 PW6) and 410 Ultramarine blue (PB29), so I went up a step on the scale of things to do when having a desire to paint, but the utter inability to actually paint something, and did a bit of a study with those two colours plus titanium white. The study above over a sheet of scrap paper: at one of the edges the excess paint has connected the two together. And btw, painting with acrylics on watercolour paper taped to a sheet of scrap paper works just fine for stuff like this that is not supposed to last, but the work will get glued to the paper below when (not if) the paint overflows :D

  1. i.e. almost three months ago, and then I wrote 95% of this post and forgot to finish and publish it.

1 January 2025

Louis-Philippe Véronneau: 2024 - A Musical Retrospective

Another musical retrospective. If you enjoy this, I also did a 2022 and a 2023 one. Albums In 2024, I added 88 new albums to my collection - that's a lot! This year again, I bought the vast majority of my music on Bandcamp. To be honest, I'm quite distraught by what's become of that website. Although it remains a wonderful place to buy underground music, Songtradr, the new owner of the platform, has been shown to be viciously anti-union. Money continues to ruin the world, I guess. Concerts I continued to go to a lot of concerts in 2024 (25!). Over the past 3 years, I have been going to more and more concerts, and I think I've reached my "peak". An average of a concert every two weeks is quite a lot :) If you also like music and concerts, but find yourself not going to as many as you would like, the real secret is not to be afraid to go to concerts alone. Going with friends is always fun, but if I restricted myself to only going to concerts in a group, I'd barely see a few each year. Another good piece of advice is to bring a book or something else1 to pass the time between sets. It can often take 30-45 minutes between sets for the artists to get their instruments ready, which can get quite boring if you just stand there and wait. Anyway, here are the concerts I went to in 2024: Shout out to the Gancio project and to the folks running the Montreal instance. It continues to be a smash hit and most of the interesting concerts end up being advertised there. See you all in 2025!

  1. I bought a Miyoo Mini Plus, a handheld Linux console running OnionOS, for that express reason. So far it's been great and I've been very happy to revisit some childhood classics.

31 December 2024

Chris Lamb: Favourites of 2024

Here are my favourite books and movies that I read and watched throughout 2024. It wasn't quite as stellar a year for books as previous years: few of those books that make you want to recommend and/or buy them for all your friends. In subconscious compensation, perhaps, I reread a few classics (e.g. True Grit, Solaris), and I'm almost finished with my second read of War and Peace.

Books

  • Elif Batuman: Either/Or (2022)
  • Stella Gibbons: Cold Comfort Farm (1932)
  • Michel Faber: Under The Skin (2000)
  • Wallace Stegner: Crossing to Safety (1987)
  • Gustave Flaubert: Madame Bovary (1857)
  • Rachel Cusk: Outline (2014)
  • Sara Gran: The Book of the Most Precious Substance (2022)
  • Anonymous: The Railway Traveller's Handy Book (1862)
  • Natalie Hodges: Uncommon Measure: A Journey Through Music, Performance, and the Science of Time (2022)
  • Gary K. Wolf: Who Censored Roger Rabbit? (1981)

Films Recent releases

Seen at a 2023 festival. Disappointments this year included Blitz (Steve McQueen), Love Lies Bleeding (Rose Glass), The Room Next Door (Pedro Almodóvar) and Emilia Pérez (Jacques Audiard), whilst the worst new film this year was likely The Substance (Coralie Fargeat), followed by Megalopolis (Francis Ford Coppola), Unfrosted (Jerry Seinfeld) and Joker: Folie à Deux (Todd Phillips).
Older releases i.e. films released before 2023, and not including rewatches from previous years. Distinctly unenjoyable watches included The Island of Dr. Moreau (John Frankenheimer, 1996), Southland Tales (Richard Kelly, 2006), Any Given Sunday (Oliver Stone, 1999) & The Hairdresser's Husband (Patrice Leconte, 1990). On the other hand, unforgettable cinema experiences this year included big-screen rewatches of Solaris (Andrei Tarkovsky, 1972), Blade Runner (Ridley Scott, 1982), Apocalypse Now (Francis Ford Coppola, 1979) and Die Hard (John McTiernan, 1988).

13 December 2024

Emanuele Rocca: Murder Mystery: GCC Builds Failing After sbuild Refactoring

This is the story of an investigation conducted by Jochen Sprickerhof, Helmut Grohne, and myself. It was true teamwork, and we would not have reached the bottom of the issue working individually. We think you will find it as interesting and fun as we did, so here is a brief writeup. A few of the steps mentioned here took several days, others just a few minutes. What is described as a natural progression of events did not always look very obvious at the moment at all.
Let us go through the Six Stages of Debugging together.

Stage 1: That cannot happen
Official Debian GCC builds start failing on multiple architectures in late November.
The build error happens on the build servers when running the testsuite, but we know this cannot happen. GCC builds are not meant to fail in case of testsuite failures! Return codes are not making the build fail, make is being called with -k, it just cannot happen.
A lot of the GCC tests are always failing in fact, and an extensive log of the results is posted to the debian-gcc mailing list, but the packages always build fine regardless.
On the build daemons, build failures take several hours.

Stage 2: That does not happen on my machine
Building on my machine running Bookworm is just fine. The Build Daemons run Bookworm and use a Sid chroot for the build environment, just like I am. Same kernel.
The only obvious difference between my setup and the Debian buildds is that I am using sbuild 0.85.0 from bookworm, and the buildds have 0.86.3~bpo12+1 from bookworm-backports. Trying again with 0.86.3~bpo12+1, the build fails on my system too. The build daemons were updated to the bookworm-backports version of sbuild at some point in late November. Ha.

Stage 3: That should not happen
There are quite a few sbuild versions in between 0.85.0 and 0.86.3~bpo12+1, but looking at recent sbuild bugs shows that sbuild 0.86.0 was breaking "quite a number of packages". Indeed, with 0.86.0 the build still fails. Trying the version immediately before, 0.85.11, the build finishes correctly. This took more time than it sounds: one run including the tests takes several hours. We need a way to shorten this somehow.
The Debian packaging of GCC allows specifying which languages you may want to skip, and by default it builds Ada, Go, C, C++, D, Fortran, Objective C, Objective C++, M2, and Rust. When running the tests sequentially, the build logs stop roughly around the tests of a runtime library for D, libphobos. So can we still reproduce the failure by skipping everything except for D? With DEB_BUILD_OPTIONS=nolang=ada,go,c,c++,fortran,objc,obj-c++,m2,rust the build still fails, and it fails faster than before. Several minutes, not hours. This is progress, and time to file a bug. The report contains massive spoilers, so no link. :-)

Stage 4: Why does that happen?
Something is causing the build to end prematurely. It's not the OOM killer, and the kernel does not have anything useful to say in the logs. Can it be that the D language tests are sending signals to some process, and that is what's killing make? We start tracing signals sent with bpftrace by writing the following script, signals.bt:
tracepoint:signal:signal_generate {
    printf("%s (PID: %d) sent signal %d to PID %d\n", comm, pid, args->sig, args->pid);
}
And executing it with sudo bpftrace signals.bt.
The build takes its sweet time, and it fails. Looking at the trace output there's a suspicious process.exe terminating stuff.
process.exe (PID: 2868133) sent signal 15 to PID 711826
That looks interesting, but we have no clue what PID 711826 may be. Let's change the script a bit, and trace signals received as well.
tracepoint:signal:signal_generate {
    printf("PID %d (%s) sent signal %d to %d\n", pid, comm, args->sig, args->pid);
}

tracepoint:signal:signal_deliver {
    printf("PID %d (%s) received signal %d\n", pid, comm, args->sig);
}
The working version of sbuild was using dumb-init, whereas the new one features a little init in perl. We patch the current version of sbuild by making it use dumb-init instead, and trace two builds: one with the perl init, one with dumb-init.
Here are the signals observed when building with dumb-init.
PID 3590011 (process.exe) sent signal 2 to 3590014
PID 3590014 (sleep) received signal 9
PID 3590011 (process.exe) sent signal 15 to 3590063
PID 3590063 (std.process tem) received signal 9
PID 3590011 (process.exe) sent signal 9 to 3590065
PID 3590065 (std.process tem) received signal 9
And this is what happens with the new init in perl:
PID 3589274 (process.exe) sent signal 2 to 3589291
PID 3589291 (sleep) received signal 9
PID 3589274 (process.exe) sent signal 15 to 3589338
PID 3589338 (std.process tem) received signal 9
PID 3589274 (process.exe) sent signal 9 to 3589340
PID 3589340 (std.process tem) received signal 9
PID 3589274 (process.exe) sent signal 15 to 3589341
PID 3589274 (process.exe) sent signal 15 to 3589323
PID 3589274 (process.exe) sent signal 15 to 3589320
PID 3589274 (process.exe) sent signal 15 to 3589274
PID 3589274 (process.exe) received signal 9
PID 3589341 (sleep) received signal 9
PID 3589273 (sbuild-usernsex) sent signal 9 to 3589320
PID 3589273 (sbuild-usernsex) sent signal 9 to 3589323
There are a few additional SIGTERMs being sent when using the perl init; that's helpful. At this point we are fairly convinced that process.exe is worth additional inspection. The source code of process.d shows something interesting:
1221 @system unittest
1222 {
[...]
1247     auto pid = spawnProcess(["sleep", "10000"],
[...]
1260     // kill the spawned process with SIGINT
1261     // and send its return code
1262     spawn((shared Pid pid) {
1263         auto p = cast() pid;
1264         kill(p, SIGINT);
So yes, there's our sleep and the SIGINT (signal 2) right in the unit tests of process.d, just like we have observed in the bpftrace output.
Can we study the behavior of process.exe in isolation, separately from the build? Indeed we can. Let's take the executable from a failed build, and try running it under /usr/libexec/sbuild-usernsexec.
First, we prepare a chroot inside a suitable user namespace:
unshare --map-auto --setuid 0 --setgid 0 mkdir /tmp/rootfs
cd /tmp/rootfs
cat /home/ema/.cache/sbuild/unstable-arm64.tar | unshare --map-auto --setuid 0 --setgid 0 tar xf -
unshare --map-auto --setuid 0 --setgid 0 mkdir /tmp/rootfs/whatever
unshare --map-auto --setuid 0 --setgid 0 cp process.exe /tmp/rootfs/
Now we can run process.exe on its own using the perl init, and trace signals at will:
/usr/libexec/sbuild-usernsexec --pivotroot --nonet u:0:100000:65536  g:0:100000:65536 /tmp/rootfs ema /whatever -- /process.exe
We can compare the behavior of the perl init vis-a-vis the one using dumb-init in milliseconds instead of minutes.

Stage 5: Oh, I see.
Why does process.exe send more SIGTERMs when using the perl init is now the big question. We have a simple reproducer, so this is where using strace becomes possible.
sudo strace --user ema --follow-forks -o sbuild-dumb-init.strace ./sbuild-usernsexec-dumb-init --pivotroot --nonet u:0:100000:65536  g:0:100000:65536 /tmp/dumbroot ema /whatever -- /process.exe
We start comparing the strace output of dumb-init with that of perl-init, looking in particular for different calls to kill.
Here is what process.exe does under dumb-init:
3593883 kill(-2, SIGTERM)               = -1 ESRCH (No such process)
No such process. Under perl-init instead:
3593777 kill(-2, SIGTERM <unfinished ...>
The process is there under perl-init!
That is a kill with negative pid. From the kill(2) man page:
If pid is less than -1, then sig is sent to every process in the process group whose ID is -pid.
It would have been very useful to see this kill with negative pid in the output of bpftrace, why didn't we? The tracepoint used, tracepoint:signal:signal_generate, shows when signals are actually being sent, and not the syscall being called. To confirm, one can trace tracepoint:syscalls:sys_enter_kill and see the negative PIDs, for example:
PID 312719 (bash) sent signal 2 to -312728
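Output like that can be produced with a bpftrace one-liner along these lines (a sketch; the exact invocation we used is not shown here):
sudo bpftrace -e 'tracepoint:syscalls:sys_enter_kill {
    printf("PID %d (%s) sent signal %d to %d\n", pid, comm, args->sig, args->pid);
}'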
The obvious question at this point is: why is there no process group 2 when using dumb-init?

Stage 6: How did that ever work?
We know that process.exe sends a SIGTERM to every process in the process group with ID 2. To find out what this process group may be, we spawn a shell with dumb-init and observe under /proc PIDs 1, 16, and 17. With perl-init we have 1, 2, and 17. When running dumb-init, there are a few forks before launching the program, explaining the difference. Looking at /proc/2/cmdline we see that it's bash, i.e. the program we are running under perl-init. When building a package, that is dpkg-buildpackage itself.
The test is accidentally killing its own process group.
Now where does this -2 come from in the test?
2363     // Special values for _processID.
2364     enum invalid = -1, terminated = -2;
Oh. -2 is used as a special value for PID, meaning "terminated". And there's a call to kill() later on:
2694     do { s = tryWait(pid); } while (!s.terminated);
[...]
2697     assertThrown!ProcessException(kill(pid));
What sets pid to terminated, you ask?
Here is tryWait:
2568 auto tryWait(Pid pid) @safe
2569 {
2570     import std.typecons : Tuple;
2571     assert(pid !is null, "Called tryWait on a null Pid.");
2572     auto code = pid.performWait(false);
And performWait:
2306         _processID = terminated;
The solution, dear reader, is not to kill.
PS: the bug report with spoilers for those interested is #1089007.

7 December 2024

Louis-Philippe Véronneau: lintian.debian.org: Episode IV - A New Hope

After weeks, dare I say months, of work, it is finally done. lintian.debian.org is back online! Screenshot of the new lintian.debian.org website Many, many thanks to everyone who worked hard to make this possible: All in all, I did very little (mostly coordinating these fine folks) and they should get the credit for this very useful service being back.

4 December 2024

Enrico Zini: How to right click

I climbed on top of a mountain with a beautiful view, and when I started readying my new laptop for a work call (as one does on top of mountains), I realised that I couldn't right click and it kind of spoiled the mood. Clicking on the bottom right corner of my touchpad left-clicked. Clicking with two fingers left-clicked. Alt-clicking, Super-clicking, Control-clicking, left clicked. There are two ways to simulate mouse buttons with touchpads in Wayland: Skippable digression:
I'm not sure why Gnome insists in following Macs for defaults, which is what people with non-Mac hardware are less likely to be used to. In my experience, Macs are as arbitrarily awkward to use as anything else, but they managed to build a community where if you don't understand how it works you get told you're stupid. All other systems (including Gnome) have communities where instead you get told (as is generally the case) that the system design is stupid, which at least gives you some amount of validation in your suffering. Oh well.
How to configure right click Surprisingly, this is not available in Gnome Shell settings. It can be found in gnome-tweaks: under "Keyboard & Mouse", "Mouse Click Emulation", one can choose between "Fingers" or "Area". I tried both and went for "Area": I use right-drag a lot to resize windows, and I couldn't find a way, at least with this touchpad, to make it work consistently in "Fingers" mode.

22 November 2024

Matthew Palmer: Invalid Excuses for Why Your Release Process Sucks

In my companion article, I made the bold claim that your release process should consist of no more than two steps:
  1. Create an annotated Git tag;
  2. Run a single command to trigger the release pipeline.
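Assuming the pipeline is triggered by pushing a tag that matches the release pattern, those two steps can be as short as (version number and message are placeholders):
git tag -a v1.4.2 -m "Release 1.4.2: fix the frobnication timeout"
git push origin v1.4.2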
As I have been on the Internet for more than five minutes, I'm aware that a great many people will have a great many objections to this simple and straightforward idea. In the interests of saving them a lot of wear and tear on their keyboards, I present this list of common reasons why these objections are invalid. If you have an objection I don't cover here, the comment box is down the bottom of the article. If you think you've got a real stumper, I'm available for consulting engagements, and if you turn out to have a release process which cannot feasibly be reduced to the above two steps for legitimate technical reasons, I'll waive my fees.

But I automatically generate my release notes from commit messages! This one is really easy to solve: have the release note generation tool feed directly into the annotation. Boom! Headshot.
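One hedged sketch of what that wiring can look like, assuming a hypothetical generate-release-notes tool that renders notes from the commit messages since the previous tag:
generate-release-notes "$(git describe --tags --abbrev=0)..HEAD" > /tmp/notes
git tag -a v1.5.0 -F /tmp/notes
git push origin v1.5.0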

But all these files need to be edited to make a release! No, they absolutely don t. But I can see why you might think you do, given how inflexible some packaging environments can seem, and since that s how we ve always done it .

Language Packages Most languages require you to encode the version of the library or binary in a file that you want to revision control. This is teh suck, but I'm yet to encounter a situation that can't be worked around some way or another. In Ruby, for instance, gemspec files are actually executable Ruby code, so I call code (that's part of git-version-bump, as an aside) to calculate the version number from the git tags. The Rust build tool, Cargo, uses a TOML file, which isn't as easy, but a small amount of release automation is used to take care of that.
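The common shape of these workarounds is to derive the version from the most recent release tag at build time instead of hard-coding it in a tracked file; a rough sketch:
version="$(git describe --tags --abbrev=0 | sed 's/^v//')"
echo "building version ${version}"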

Distribution Packages If you're building Linux distribution packages, you can easily apply similar automation faffery. For example, Debian packages take their metadata from the debian/changelog file in the build directory. Don't keep that file in revision control, though: build it at release time. Everything you need to construct a Debian (or RPM) changelog is in the tag: version numbers, dates, times, authors, release notes. Use it for much good.
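A rough sketch of building debian/changelog from an annotated tag at release time (package name, distribution, uploader and tag are placeholders):
tag="v1.4.2"
version="${tag#v}"
date="$(git for-each-ref "refs/tags/${tag}" --format='%(taggerdate:rfc)')"
notes="$(git for-each-ref "refs/tags/${tag}" --format='%(contents)')"
cat > debian/changelog <<EOF
mypackage (${version}-1) unstable; urgency=medium

$(printf '%s\n' "${notes}" | sed 's/^/  /')

 -- Release Bot <releases@example.org>  ${date}
EOF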

The Dreaded Changelog Finally, there's the CHANGELOG file. If it's maintained during the development process, it typically has an archive of all the release notes, under version numbers, with an "Unreleased" heading at the top. It's one more place to remember to have to edit when making that "preparing release X.Y.Z" commit, and it is a gift to the Demon of Spurious Merge Conflicts if you follow the policy of "every commit must add a changelog entry". My solution: just burn it to the ground. Add a line to the top with a link to wherever the contents of annotated tags get published (such as GitHub Releases, if that's your bag) and never open it ever again.

But I need to know other things about my release, too! For some reason, you might think you need some other metadata about your releases. You're probably wrong: it's amazing how much information you can obtain or derive from the humble tag, so think creatively about your situation before you start making unnecessary complexity for yourself. But, on the off chance you're in a situation that legitimately needs some extra release-related information, here's the secret: structured annotation. The annotation on a tag can be literally any sequence of octets you like. How that data is interpreted is up to you. So, require that annotations on release tags use some sort of structured data format (say YAML or TOML or even XML if you hate your release manager), and mandate that it contain whatever information you need. You can make sure that the annotation has a valid structure and contains all the information you need with an update hook, which can reject the tag push if it doesn't meet the requirements, and you're sorted.
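For example, a release tag whose annotation is YAML, plus a fragment of a server-side update hook that rejects release tags missing a required field (the field names here are invented for illustration):
git tag -a v2.0.0 -F - <<'EOF'
release_notes: |
  Assorted bug fixes and a new frobnicator.
approved_by: jane@example.org
artifact_channel: stable
EOF

# hooks/update (fragment): refuse release tags without an approved_by field
refname="$1"; newrev="$3"
case "$refname" in
  refs/tags/v*)
    git cat-file -p "$newrev" | grep -q '^approved_by:' || {
      echo "release tags must carry an approved_by field" >&2
      exit 1
    } ;;
esac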

But I have multiple packages in my repo, with different release cadences and versions! This one is common enough that I just refer to it as "the monorepo drama". Personally, I'm not a huge fan of monorepos, but you do you, boo. Annotated tags can still handle it just fine. The trick is to include the package name being released in the tag name. So rather than a release tag being named vX.Y.Z, you use foo/vX.Y.Z, bar/vX.Y.Z, and baz/vX.Y.Z. The release automation for each package just triggers on tags that match the pattern for that particular package, and limits itself to those tags when figuring out what the version number is.
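A sketch of extracting the package and version from such a tag in the release automation (here using a generic TAG variable; in GitLab CI, for instance, this would come from CI_COMMIT_TAG):
TAG="bar/v2.3.1"
package="${TAG%%/v*}"    # -> bar
version="${TAG##*/v}"    # -> 2.3.1
echo "releasing ${package} ${version}"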

But we don't semver our releases! Oh, that's easy. The tag pattern that marks a release doesn't have to be vX.Y.Z. It can be anything you want. Relatedly, there is a (rare, but existent) need for packages that don't really have a conception of releases in the traditional sense. The example I've hit most often is automatically generated bindings packages, such as protobuf definitions. The source of truth for these is a bunch of .proto files, but to be useful, they need to be packaged into code for the various language(s) you're using. But those packages need versions, and while someone could manually make releases, the best option is to build new per-language packages automatically every time any of those definitions change. The versions of those packages, then, can be datestamps (I like something like YYYY.MM.DD.N, where N starts at 0 each day and increments if there are multiple releases in a single day). This process allows all the code that needs the definitions to declare the minimum version of the definitions that it relies on, and everything is kept in sync and tracked almost like magic.
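A sketch of computing the next datestamp version from existing tags for the current day (the bindings package prefix is a placeholder):
today="$(date +%Y.%m.%d)"
n="$(git tag -l "protobuf-bindings/${today}.*" | wc -l)"
echo "next version: ${today}.${n}"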

Th-th-th-th-that's all, folks! I hope you've enjoyed this bit of mild debunking. Show your gratitude by buying me a refreshing beverage, or purchase my professional expertise and I'll answer all of your questions and write all your CI jobs.

10 October 2024

Freexian Collaborators: Debian Contributions: Packaging Pydantic v2, Reworking of glib2.0 for cross bootstrap, Python archive rebuilds and more! (by Anupa Ann Joseph)

Debian Contributions: 2024-09 Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Pydantic v2, by Colin Watson Pydantic is a useful library for validating data in Python using type hints: Freexian uses it in a number of projects, including Debusine. Its Debian packaging had been stalled at 1.10.17 in testing for some time, partly due to needing to make sure everything else could cope with the breaking changes introduced in 2.x, but mostly due to needing to sort out packaging of its new Rust dependencies. Several other people (notably Alexandre Detiste, Andreas Tille, Drew Parsons, and Timo Röhling) had made some good progress on this, but nobody had quite got it over the line and it seemed a bit stuck. Colin upgraded a few Rust libraries to new upstream versions, packaged rust-jiter, and chased various failures in other packages. This eventually allowed getting current versions of both pydantic-core and pydantic into testing. It should now be much easier for us to stay up to date routinely.

Reworking of glib2.0 for cross bootstrap, by Helmut Grohne Simon McVittie (not affiliated with Freexian) earlier restructured the libglib2.0-dev package such that it would absorb more functionality and in particular provide tools for working with .gir files. Those tools practically require being run for their host architecture (in practice this means running under qemu-user), which is at odds with the requirements of architecture cross bootstrap. The qemu requirement was expressed in package dependencies and also made people unhappy who attempted to use libglib2.0-dev for i386 on amd64 without resorting to qemu. The use of qemu in architecture bootstrap is particularly problematic as it tends not to be ready at the time bootstrapping is needed. As a result, Simon proposed and implemented the introduction of a libgio-2.0-dev package providing a subset of libglib2.0-dev that does not require qemu. Packages should continue to use libglib2.0-dev in their Build-Depends unless involved in architecture bootstrap. Helmut reviewed and tested the implementation and integrated the necessary changes into rebootstrap. He also prepared a patch for libverto to use the new package and proposed adding forward compatibility to glib2.0. Helmut continued working on adding cross-exe-wrapper to architecture-properties and implemented autopkgtests later improved by Simon. The cross-exe-wrapper package now provides a generic mechanism to run a program for a different architecture, using qemu only when needed. For instance, a dependency on cross-exe-wrapper:i386 provides an i686-linux-gnu-cross-exe-wrapper program that can be used to wrap an ELF executable for the i386 architecture. When installed on amd64 or i386 it will skip installing or running qemu, but for other architectures qemu will be used automatically. This facility can be used to support cross building with targeted use of qemu in cases where running host code is unavoidable, as is the case for GObject introspection. This concludes the joint work with Simon and Niels Thykier on glib2.0 and architecture-properties, resolving known architecture bootstrap regressions arising from the glib2.0 refactoring earlier this year.

Analyzing binary package metadata, by Helmut Grohne As Guillem Jover (not affiliated with Freexian) continues to work on adding metadata tracking to dpkg, the question arises how this affects existing packages. The dedup.debian.net infrastructure provides an easy playground to answer such questions, so Helmut gathered file metadata from all binary packages in unstable and performed an explorative analysis. Some results include: Guillem also performed a cursory analysis and reported other problem categories such as mismatching directory permissions for directories installed by multiple packages and thus gained a better understanding of what consistency checks dpkg can enforce.

Python archive rebuilds, by Stefano Rivera Last month Stefano started to write some tooling to do large-scale rebuilds in debusine, starting with finding packages that had already started to fail to build from source (FTBFS) due to the removal of setup.py test. This month, Stefano did some more rebuilds, starting with experimental versions of dh-python. During the Python 3.12 transition, we had added a dependency on python3-setuptools to dh-python, to ease the transition. Python 3.12 removed distutils from the stdlib, but many packages were expecting it to still be available. Setuptools contains a version of distutils, and dh-python was a convenient place to depend on setuptools for most package builds. This dependency was never meant to be permanent. A rebuild without it resulted in mass-filing about 340 bugs (and around 80 more by mistake). A new feature in Python 3.12 was to have unittest's test runner exit with a non-zero return code if no tests were run. We added this feature to be able to detect tests that are, by mistake, not being discovered. We are ignoring this failure, as we wouldn't want to suddenly cause hundreds of packages to fail to build if they have no tests. Stefano did a rebuild to see how many packages were affected, and found that around 1000 were. The Debian Python community has not come to a conclusion on how to move forward with this. As soon as Python 3.13 release candidate 2 was available, Stefano did a rebuild of the Python packages in the archive against it. This was a more complex rebuild than the others, as it had to be done in stages. Many packages need other Python packages at build time, typically to run tests. So transitions like this involve some manual bootstrapping, followed by several rounds of builds. Not all packages could be tested, as not all their dependencies support 3.13 yet. The result was around 100 bugs in packages that need work to support Python 3.13. Many other packages will need additional work to properly support Python 3.13, but being able to build (and run tests) is an important first step.

Miscellaneous contributions
  • Carles prepared the update of the python-pyaarlo package to a new upstream release.
  • Carles worked on updating python-ring-doorbell to a new upstream release. This is unfinished, pending the packaging of a new dependency, python3-firebase-messaging (RFP #1082958), and its dependency python3-http-ece (RFP #1083020).
  • Carles improved po-debconf-manager. The main new feature is that it can open Salsa merge requests. The aim is to have it functional end to end for a lightning talk at MiniDebConf Toulouse (November) and to get feedback from the wider public on this proof of concept.
  • Carles helped one translator to use po-debconf-manager (added compatibility for bullseye, fixed other issues) and reviewed 17 package templates.
  • Colin upgraded the OpenSSH packaging to 9.9p1.
  • Colin upgraded the various YubiHSM packages to new upstream versions, enabled more tests, fixed yubihsm-shell build failures on some 32-bit architectures, made yubihsm-shell build reproducibly, and fixed yubihsm-connector to apply udev rules to existing devices when the package is installed. As usual, bookworm-backports is up to date with all these changes.
  • Colin fixed quite a bit of fallout from setuptools 72.0.0 removing setup.py test, backported a large upstream patch set to make buildbot work with SQLAlchemy 2.0, and upgraded 25 other Python packages to new upstream versions.
  • Enrico worked with Jakob Haufe to get him up to speed for managing sso.debian.org.
  • Raphaël removed spam entries from the list of teams on tracker.debian.org (see #1080446), and he applied a few external contributions, fixing a rendering issue and replacing the DDPO link with a more useful alternative. He also gave feedback on a couple of merge requests that required more work. As part of the analysis of the underlying problem, he suggested to the ftpmasters (via #1083068) to auto-reject packages having the too-many-contacts lintian error, and he raised the severity of #1076048 to serious to actually have that 4-year-old bug fixed.
  • Raphaël uploaded zim and hamster-time-tracker to fix issues with Python 3.12 getting rid of setuptools. He also uploaded a new gnome-shell-extension-hamster to cope with the upcoming transition to GNOME 47.
  • Helmut sent seven patches and sponsored one upload for cross build failures.
  • Helmut uploaded a Nagios/Icinga plugin check-smart-attributes for monitoring the health of physical disks.
  • Helmut collaborated on sbuild, reviewing and improving an MR for refactoring the unshare backend.
  • Helmut sent a patch fixing coinstallability of gcc-defaults.
  • Helmut continued to monitor the evolution of the /usr-move. With more and more key packages such as libvirt or fuse3 fixed, we're moving into the boring long tail of the transition.
  • Helmut proposed updating the meson buildsystem in debhelper to use env2mfile.
  • Helmut continued to update patches maintained in rebootstrap. Due to the work on glib2.0 above, rebootstrap moves a lot further, but still fails for any architecture.
  • Santiago reviewed some merge requests in Salsa CI, such as !478, proposed by Otto to extend the information about how to use additional runners in the pipeline, and !518, proposed by Ahmed to add support for Ubuntu images, which will help test how some Debian packages, including the complex MariaDB, are built on Ubuntu. Santiago also prepared !545, which will make the reprotest job more consistent with the results seen on reproducible-builds.
  • Santiago worked on different tasks related to DebConf 25. In particular, he drafted the fundraising brochure (which is almost ready).
  • Thorsten Alteholz uploaded the package libcupsfilter to fix its autopkgtest and a dependency problem. After the package splix was abandoned by upstream and OpenPrinting.org adopted its maintenance, Thorsten uploaded their first release.
  • Anupa published posts on the Debian Administrators group on LinkedIn and moderated the group, one of the tasks of the Debian Publicity Team.
  • Anupa helped organize DebUtsav 2024. It had over 100 attendees with hands-on sessions on making initial contributions to the Linux kernel, Debian packaging, submitting documentation to the Debian wiki, and assisting with Debian installations.

1 October 2024

Guido Günther: Free Software Activities September 2024

Another short status update of what happened on my side last month. Besides the usual amount of housekeeping, last month was a lot about getting old issues resolved by finishing some stale merge requests and work-in-progress MRs. I also pushed out the Phosh 0.42.0 release. Projects touched: phosh, phoc, phosh-mobile-settings, libphosh-rs, phosh-osk-stub, phosh-wallpapers, meta-phosh, Debian, ModemManager, Calls, bluez, gnome-text-editor, feedbackd, Chatty, libcall-ui, glib, wlr-protocols, git-buildpackage, iio-sensor-proxy, Fotema. Help Development: If you want to support my work, see donations. This includes a list of hardware we want to improve support for. Thanks a lot to all current and past donors.

5 September 2024

Sandro Tosi: TL;DR belongs at the top of an article

TL;DR
It has happened to probably everyone: you read an article, reach the end of it only to see a TL;DR section right at the bottom, and think: "eeh, I wish this had been at the top so I didn't have to read (DR) this long article (TL) to gather its core ideas".

If the reason for "Too Long; Didn't Read" to exist is to save the reader from going through the whole article to get its main points, then the natural place to present it is at the very top of said article.

So if you're planning on writing something and adding a TL;DR section (you don't have to, of course, but if you do that work too), then please position it at the very beginning of your work.

1 September 2024

Bits from Debian: Bits from the DPL

Dear Debian community, these are my bits from DPL for August. Happy Birthday Debian On 16th of August Debian celebrated its 31st birthday. Since I'm unable to write a better text than our great publicity team I'm simply linking to their article for those who might have missed it: https://bits.debian.org/2024/08/debian-turns-31.html Removing more packages from unstable Helmut Grohne argued for more aggressive package removal and sought consensus on a way forward. He provided six examples of processes where packages that are candidates for removal are consuming valuable person-power. I'd like to add that the Bug of the Day initiative (see below) also frequently encounters long-unmaintained packages with popcon votes sometimes as low as zero, and often fewer than ten. Helmut's email included a list of packages that would meet the suggested removal criteria. There was some discussion about whether a popcon vote should be included in these criteria, with arguments both for and against it. Although I support including popcon, I acknowledge that Helmut has a valid point in suggesting it be left out. While I've read several emails in agreement, Scott Kitterman made a valid point: "I don't think we need more process. We just need someone to do the work of finding the packages and filing the bugs." I agree that this is crucial to ensure an automated process doesn't lead to unwanted removals. However, I don't see "someone" stepping up to file RM bugs against other maintainers' packages. As long as we have strict ownership of packages, many people are hesitant to touch a package, even for fixing it. Asking for its removal might be even less well-received. Therefore, if an automated procedure were to create RM bugs based on defined criteria, it could help reduce some of the social pressure. In this respect the opinion of Niels Thykier is interesting: "As much as I want automation, I do not mind the prototype starting as a semi-automatic process if that is what it takes to get started." The urgency of the problem of removing packages was put into words by Charles Plessy: "So as of today, it is much less work to keep a package rotting than removing it." My observation when trying to fix the Bug of the Day exactly fits this statement. I would love for this discussion to lead to more aggressive removals that we can agree upon, whether they are automated, semi-automated, or managed by a person processing an automatically generated list (supported by an objective procedure). To use an analogy: I've found that every image collection improves with aggressive pruning. Similarly, I'm convinced that Debian will improve if we remove packages that no longer serve our users well. DEP14 / DEP18 There are two DEPs that affect our workflow for maintaining packages, particularly for those who agree on using Git for Debian packages. DEP-14 recommends a standardized layout for Git packaging repositories, which benefits maintainers working across teams and makes it easier for newcomers to learn a consistent repository structure. DEP-14 stalled for various reasons. Sam Hartman suspected it might be because 'it doesn't bring sufficient value.' However, the assumption that git-buildpackage is incompatible with DEP-14 is incorrect, as confirmed by its author, Guido Günther. git-buildpackage, as one of the two key tools for Debian Git repositories (besides dgit), fully supports DEP-14, though the migration from the previous default is somewhat complex.
Some investigation into mass-converting older formats to DEP-14 was conducted by the Perl team, as Gregor Hermann pointed out. The discussion about DEP-14 resurfaced with the suggestion of DEP-18. Guido Günther proposed the title "Encourage Continuous Integration and Merge Request-Based Collaboration for Debian Packages", which more accurately reflects the DEP's technical intent. Otto Kekäläinen, who initiated DEP-18 (thank you, Otto), provided a good summary of the current status. He also assembled a very helpful overview of Git and GitLab usage in other Linux distros. More Salsa CI As a result of the DEP-18 discussion, Otto Kekäläinen suggested implementing Salsa CI for our top popcon packages. I believe it would be a good idea to enable CI by default across Salsa whenever a new repository is created. Progress in Salsa migration In my campaign, I stated that I aim to reduce the number of packages maintained outside Salsa to below 2,000. As of March 28, 2024, the count was 2,368. Today, it stands at 2,187 (UDD query: SELECT DISTINCT count(*) FROM sources WHERE release = 'sid' and vcs_url not like '%salsa%';). After a third of my DPL term (OMG), we've made significant progress, reducing the amount in question (369 packages) by nearly half. I'm pleased with the support from the DDs who moved their packages to Salsa. Some packages were transferred as part of the Bug of the Day initiative (see below). Bug of the Day As announced in my 'Bits from the DPL' talk at DebConf, I started an initiative called Bug of the Day. The goal is to train newcomers in bug triaging by enabling them to tackle small, self-contained QA tasks. We have consistently identified target packages and resolved at least one bug per day, often addressing multiple bugs in a single package. In several cases, we followed the Package Salvaging procedure outlined in the Developer's Reference. Most instances were either welcomed by the maintainer or did not elicit a response. Unfortunately, there was one exception where the recipient of the Package Salvage bug expressed significant dissatisfaction. The takeaway is to balance formal procedures with consideration for the recipient's perspective. I'm pleased to confirm that the Matrix channel has seen an increase in active contributors. This aligns with my hope that our efforts would attract individuals interested in QA work. I'm particularly pleased that, within just one month, we have had help with both fixing bugs and improving the code that aids in bug selection. As I aim to introduce newcomers to various teams within Debian, I also take the opportunity to learn about each team's specific policies myself. I rely on team members' assistance to adapt to these policies. I find that gaining this practical insight into team dynamics is an effective way to understand the different teams within Debian as DPL. Another finding from this initiative, which aligns with my goal as DPL, is that many of the packages we addressed are already on Salsa but have not been uploaded, meaning their VCS fields are not published. This suggests that maintainers are generally open to managing their packages on Salsa. For packages that were not yet on Salsa, the move was generally welcomed. Publicity team wants you The publicity team has decided to resume regular meetings to coordinate their efforts. Given my high regard for their work, I plan to attend their meetings as frequently as possible, which I began doing with the first IRC meeting.
During discussions with some team members, I learned that the team could use additional help. If anyone interested in supporting Debian with non-packaging tasks reads this, please consider introducing yourself to debian-publicity@lists.debian.org. Note that this is a publicly archived mailing list, so it's not the best place for sharing private information. Kind regards, Andreas.

22 August 2024

Thomas Goirand: Packaging Home Assistant

During Debconf, Edward Betts and myself started packaging Home Assistant for Debian. It consists of hundreds of Python packages. So far, we counted at least 675 packages. That's a lot, though most packages are just libraries to talk with some IoT devices and some APIs. It's fairly easy to create a new package: it takes me about 15 to 20 minutes, probably half that time for Edward. And it's a lot of fun. So far, in one month of time, we managed to package about a third of the list (probably 200+ Python packages already). Once we've done all the dependencies, we may start to have fun with the core of the application! At the current speed, hopefully we'll be done before the end of the year. Edward and myself have sworn to make at least one package a day, which I've been doing so far, and Edward did way more. We also received contributions from Silton0506, Tianyu, piotr, EiPi Fun, sourabhtk37, and Count-Dracula, as per the very bottom of the TODO list in the wiki (see link below). If you have a bit of free time, we'd love to have more contributors. Here's where to get the needed information: We created a team in Salsa: https://salsa.debian.org/homeassistant-team/ Our TODO list: https://wiki.debian.org/Python/HomeAssistant Our DDPO Q/A page: https://qa.debian.org/developer.php?login=team%2Bhomeassistant%40tracker.debian.org Feel free to join us on IRC: #debian-homeassistant Discussing with a lot of people about it, I realized that A LOT of DDs are actually using Home Assistant. Wouldn't you like it better if it was just an apt install away? Any DD can simply take a package in the wiki, open an ITP, upload its debianized source on Salsa, and upload to the Debian archive. Most are very easy, simple packages to make.

2 August 2024

Bits from Debian: Bits from the DPL

Dear Debian community, these are my bits from DPL, written on my last day at another great DebConf. DebConf attendance At the beginning of July, there was some discussion with the bursary and content team about sponsoring attendees. The discussion continued at DebConf. I do not have much experience with these discussions. My summary is that while there is an honest attempt to be fair to everyone, it did not seem to work for all, and some critical points for future discussion remained. In any case, I'm thankful to the bursary team for doing such a time-draining and tedious job. Popular packages not yet on Salsa at all Otto Kekäläinen did some interesting investigation about Popular packages not yet on Salsa at all. I think I might soon provide a more up-to-date list via some UDD query which considers more recent uploads than the trends data. For instance, wget was meanwhile moved to Salsa (thanks to Noël Köthe for this). Keep on contacting more teams I kept on contacting teams in July. Although I managed to contact far fewer teams than I was hoping to, I was able to present some conclusions in the Debian Teams exchange BoF and on slide 16/23 of my Bits from the DPL talk. I intend to make further contacts in the coming months. Nominating Jeremy Bícha for GNOME Advisory Board I've nominated Jeremy Bícha to the GNOME Advisory Board. Jeremy has volunteered to represent Debian at GUADEC in Denver. DebCamp / DebConf I attended DebCamp starting from the evening of 22 July and had a lot of fun with other attendees. As always, DebConf is an important event nearly every year for me. I enjoyed Korean food, a Korean bath, nature at the coastline and other things. I had a small event without video coverage, "Creating web galleries including maps from a geo-tagged photo collection". At least two attendees of this workshop confirmed success in creating their own web galleries. I used DebCamp and DebConf for several discussions. My main focus was on discussions with FTP master team members Luke Faraone, Sean Whitton, and Utkarsh Gupta. I'm really happy that the four of us absolutely agree on some proposed changes to the structure of the FTP master team, as well as changes that might be fruitful for the work of the FTP master team itself and for Debian developers regarding the processing of new packages. My explicit thanks go to Luke Faraone, who gave a great introduction to FTP master work in their BoF. It was very instructive for the attending developers to understand how the FTP master team checks licenses and copyright and what workflow is used for accepting new packages. In the first days of DebConf, I talked to representatives of DebConf platinum sponsor WindRiver, who announced the derivative eLxr. I warmly welcome this new derivative and look forward to some great cooperation. I also talked to the representative of our gold sponsor, Microsoft. My first own event was the Debian Med BoF. I'd like to repeat that it might not only be interesting for people working in medicine and microbiology but always contains some hints on how to work together in a team. As said above, I was trying to summarise some first results of my team contacts and got some further input from other teams in the Debian Teams exchange BoF. Finally, I had my Bits from DPL talk. I received positive responses from attendees as well as from remote participants, which makes me quite happy. For those who were not able to join the events on-site or remotely, the videos of all events will be available on the DebConf site soon.
I'd like to repeat the explicit need for some volunteers to join the Lintian team. I'd also like to point out the "Tiny tasks" initiative I'd like to start (see below). BTW, if someone happens to want to solve my quiz for the background images, there is a summary page in my slides which might help to assign every slide to some DebConf. I could assume that if you pool your knowledge you can solve more than just the simple ones. Just let me know if you have some solution. You can add numbers to the rows and letters to the columns and send me:
 2000/2001:  Uv + Wx
 2002: not attended
 2003: Yz
 2004: not attended
 2005:
 2006: not attended
 2007:
 ...
 2024: A1
This list provides some additional information for DebConfs I did not attend and when no video stream was available. It also reminds you about the one I uncovered this year and that I used two images from 2001 since I did not have one from 2000. Have fun reassembling good memories. Tiny tasks: Bug of the day As I mentioned in my Bits from DPL talk, I'd like to start a "Tiny tasks" effort within Debian. The first type of tasks will be the Bug of the day initiative. For those who would like to join, please join the corresponding Matrix channel. I'm curious to see how this might work out and am eager to gain some initial experiences with newcomers. I won't be available until next Monday, as I'll start traveling soon and have a family event (which is why I need to leave DebConf today after the formal dinner). Kind regards from DebConf in Busan, Andreas.

1 July 2024

Niels Thykier: Debian packaging with style black

When I started working on the language server for debputy, one of several reasons was automatically applying a formatting style, such that you would not have to remember to manually reformat the file. One of the problems with supporting automatic formatting is that no one agrees on the "one true style". To make this concrete, Johannes Schauer Marin Rodrigues did the numbers on which wrap-and-sort options are most common in https://bugs.debian.org/895570#46. Unsurprisingly, we end up with 14-15 different styles with various degrees of popularity. To make matters worse, wrap-and-sort does not provide a way to declare "this package uses options -sat". So that begged the question, how would debputy know which style it should use when it was going to reformat a file? After a couple of false starts, Christian Hofstaedtler mentioned that we could just have a field in debian/control for supporting a "per-package" setting, in response to my concern about adding a new "per-package" config file. At first, I was not happy with it, because how would you specify all of these options in a field (in a decent manner)? But then I realized that, for one, I do not want all these styles, and that I could start simpler. The Python code formatter black is quite successful despite not having a lot of personalized style options. In fact, black makes a statement out of not allowing a lot of different styles. Combining that, the result was X-Style: black (to be added to the Source stanza of debian/control), the name being a deliberate reference to the black tool and how its styling works. Namely, you outsource the style management to the tool (debputy) and then spend your focus on something other than discussing styles. As with black, this packaging formatting style is going to be opinionated and it will evolve over time. At the starting point, it is similar to wrap-and-sort -sat for the deb822 files (debputy does not reformat other files at the moment). But as mentioned, it will likely evolve and possibly diverge from wrap-and-sort over time. The choice of the starting point was based on the numbers posted by Johannes in #895570. It was not my personal favorite but it seemed to have a majority and is also close to the one suggested by the salsa pipeline maintainers. The delta is -kb, which I had originally but removed in 0.1.34 at the request of Otto Kekäläinen after reviewing the numbers from Johannes one more time. To facilitate this new change, I uploaded debputy/0.1.30 (a while back) to Debian unstable with the following changes:
  • Support for the X-Style: black header.
  • When a style is defined, the debputy lsp server command will now automatically reformat deb822 files on save (if the editor supports it) or on explicit "reformat file" request from the editor (usually indirectly from the user).
  • New subcommand debputy reformat command that will reformat the files, when a style is defined.
  • A new pre-commit hook repo to run debputy lint and debputy reformat. These hooks are available from https://salsa.debian.org/debian/debputy-pre-commit-hooks version v0.1 and can be used with the pre-commit tool (from the package of same name).
The obvious omission is a salsa-pipeline feature for this. Otto has put that onto his personal todo list and I am looking forward to that.
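For reference, opting in is just one extra field in the Source stanza of debian/control; a minimal example (the package name, maintainer, and other field values are made up) could look like this:
$ head -n 6 debian/control
Source: hello-example
Section: utils
Priority: optional
Maintainer: Jane Doe <jane@example.org>
Build-Depends: debhelper-compat (= 13)
X-Style: black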
Beyond black Another thing I dislike about our existing style tooling is that if you run wrap-and-sort without any arguments, you have a higher probability of "trashing" the style of the current package than getting the desired result. Part of this is because wrap-and-sort's defaults are out of sync with the usage (which is basically what https://bugs.debian.org/895570 is about). But I see another problem. The wrap-and-sort tool explicitly defined options to tweak the style but provided maintainers no way to record their preference in any machine-readable way. The net result is that we have tons of diverging styles and that you (as a user of wrap-and-sort) have to manually tell wrap-and-sort which style you want every time you run the tool. In my opinion that is not playing to the strengths of either human or machine. Rather, it is playing to the weaknesses of the human, if anything at all. But the salsa-CI pipeline people also ran into this issue and decided to work around this deficiency. To use wrap-and-sort in the salsa-CI pipeline, you have to set a variable to activate the job and another variable with the actual options you want. The salsa-CI pipeline is quite machine readable and wrap-and-sort is widely used. I had debputy reformat also check for the salsa-CI variables as a fallback. This fallback also works for the editor mode (debputy lsp server), so you might not even have to run debputy reformat. :) This was a deliberate trade-off. While I do not want us all to have all these options, I also want Debian packaging to be less painful and have fewer paper cuts. Having debputy go to extra lengths to meet wrap-and-sort users where they are came out as the better solution for me. A nice side-effect of this trade-off is that debputy reformat is now a good tool for drive-by contributors. You can safely run debputy reformat on any package and either it will apply the styling or it will back out and inform you that no obvious style was detected. In the latter case, you would have to fall back to manually deducing the style and applying it.
Differences to wrap-and-sort The debputy reformat command has some limitations or known differences to wrap-and-sort. Notably, neither debputy reformat nor debputy lsp server will invoke wrap-and-sort. Instead, debputy has its own reformatting engine that provides similar features. One reason for not running wrap-and-sort is that I want debputy reformat to match the style that debputy lsp server will give you. That way, you get a consistent style across all debputy commands. Another reason is that it is important to me that reformatting is safe and does not change semantics. This leads to two regrettable known differences to the wrap-and-sort behavior due to safety, in addition to one scope limitation in debputy:
  1. debputy will ignore requests to sort the stanzas when the "keep first" option is disabled (-b --no-keep-first). This combination is unsafe reformatting. I feel it was a mistake for wrap-and-sort to ever allow this but at least it is no longer the default (-b is now -bk by default). This will be less of a problem in debhelper-compat 15, since the concept of "main package" will disappear and all multi-binary source packages will be required to use debian/package.install rather than debian/install.
  2. debputy will not reorder the contents of debhelper packaging files such as debian/install. This is also a (theoretically) unsafe thing to do. While the average package will not experience issues with this, there are rare corner cases where the re-ordering can affect the end result. I happen to know this, because I ran into issues when trying to optimize dh_install in a way that assumed the order did not matter. Stuff broke and there is now special-case code in dh_install to back out of that optimization when that happens.
  3. debputy has a limited list of wrap-and-sort options it understands. Some options may cause debputy to back out and disable reformatting entirely with a remark that it cannot apply that style. If you run into a case of this, feel free to file a feature request to support it. I will not promise to support everything, but if it is safe and trivially doable with the engine already, then I probably will.
As stated, where debputy cannot implement the wrap-and-sort styles fully, it will currently implement a safe subset if that can be identified, or back out of the formatting entirely when it cannot. In all cases, debputy will not break the formatting if it is correct. It may just fail at correcting one aspect of the wrap-and-sort style if you happen to get it wrong. It is also important to remember that the prerequisite for debputy applying any wrap-and-sort style is that you have set the salsa-CI pipeline variables to trigger wrap-and-sort with the salsa-CI pipeline. So there is still a CI check before the merge that will run wrap-and-sort in its full glory and provide the final safety net for you.
Just give me a style In conclusion, if you, like me, are more interested in getting a consistent style rather than discussing what that style should be, now you can get that with X-Style: black. You can also have your custom wrap-and-sort style be picked up automatically for drive-by contributors.
$ apt satisfy 'dh-debputy (>= 0.1.30), python3-lsprotocol'
# Add  X-Style: black  to  debian/control  for "just give me a style"
#
# OR, if there is a specific  wrap-and-sort  style for you then set
# SALSA_CI_DISABLE_WRAP_AND_SORT=no plus set relevant options in
# SALSA_CI_WRAP_AND_SORT_ARGS in debian/salsa-ci.yml (or .gitlab-ci.yml)
$ debputy reformat
It is sadly not yet in the salsa-ci pipeline. Otto is looking into that and hopefully we will have it soon. :) And if you find yourself often doing archive-wide contributions and are tired of having to reverse engineer package formatting styles, consider using debputy reformat or debputy lsp server. If you use debputy in this way, please consider providing feedback on what would help you.

7 June 2024

Freexian Collaborators: Debian Contributions: DebConf Bursaries, /usr-move, sbuild, and more! (by Stefano Rivera)

Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

DebConf Bursary updates, by Utkarsh Gupta Utkarsh is the bursaries team lead for DebConf 24. Bursary requests are dispatched to a team of volunteers to review. The results are collated, adjusted and merged to produce priority lists of requests to fund. Utkarsh raised the team, coordinated the review, and issued bursaries to attendees.

/usr-move, by Helmut Grohne More and more, the /usr-move transition is being carried out by multiple contributors, and many performed around a hundred of the requested uploads. Of these, Helmut contributed five patches and two uploads. As a result, there are fewer than 350 packages left to be converted, and all of the non-trivial cases have patches. We started with three times that number. Thanks to everyone involved for supporting this effort. For people interested in background information on this transition, Helmut gave a presentation at MiniDebConf Berlin 2024 (slides).

sbuild, by Helmut Grohne While unshare mode of sbuild has existed for quite a while, it is now getting significant use in Debian, and new problems are popping up. Helmut looked into an apparmor-related failure and provided a diagnosis. While relevant code would detect the chroot nature of a schroot backend and skip apparmor tests, the unshare environment would be just good enough to run and fail the test. As sbuild exposes fewer special kernel filesystems, the tests will be skipped again. Another problem popped up when gobject-introspection added a dependency on the host architecture Python interpreter in a cross build environment. sbuild would prefer installing (and failing) a host architecture Python to installing the qemu alternative. Attempts to fix this would result in systemd killing sbuild. ischroot as used by libc6.postinst would not classify the unshare environment as a chroot. Therefore libc6.postinst would run telinit which would kill the build process. This is a complex interaction problem that shall eventually be solved by providing triggers from libc6 to be implemented by affected init systems.

Salsa CI updates, by Santiago Ruano Rincón Several issues arose about Salsa CI last month, and it is probably worth mentioning some of the challenges of defining its framework in YAML. With the upcoming end-of-support of Debian 10 buster as LTS, armel was removed from deb.debian.org, making the jobs that build images for buster/armel fail. While the removal of buster/armel from the repositories is a natural change, it shed some light on the flaws in the Salsa CI design regarding the support of the different Debian releases. Currently, the images are defined like this (from .images-debian.yml):
.all-supported-releases: &all-supported-releases
  - stretch
  - stretch-backports
  - buster
  - bullseye
  - bullseye-backports
  - bookworm
  - bookworm-backports
  - trixie
  - sid
  - experimental
And from them, different images are built according to the different jobs and how they are supported, for example:
images-prod-arm:
  stage: build
  extends: .build_template
  tags:
    - $SALSA_CI_ARM_RUNNER_TAG
  parallel:
    matrix:
      # Base image, all releases, all arches
      - IMAGE_NAME: base
        ARCH:
          - arm32v5
          - arm32v7
          - arm64v8
        RELEASE: *all-supported-releases
The removal of buster/armel could be easily reflected as:
images-prod-arm:
  stage: build
  extends: .build_template
  tags:
    - $SALSA_CI_ARM_RUNNER_TAG
  parallel:
    matrix:
      # Base image, fully supported releases, all arches
      - IMAGE_NAME: base
        ARCH:
          - arm32v5
          - arm32v7
          - arm64v8
        RELEASE:
          - stretch
          - buster
          - bullseye
          - bullseye-backports
          - bookworm
          - bookworm-backports
          - trixie
          - sid
          - experimental
      # buster only supports armhf and arm64
      - IMAGE_NAME: base
        ARCH:
          - arm32v7
          - arm64v8
        RELEASE: buster
Evidently, this increases duplication of the release support data, which is of course not optimal and is error-prone when changing the data about supported releases. A better approach would be to have two different YAML lists, such as:
# releases that have partial support. E.g.: buster is transitioning to
# Debian LTS, and buster armel is no longer found in deb.debian.org
.old-releases: &old-releases
  - stretch
  - buster

.currently-supported-releases: &currently-supported-releases
  - bullseye
  - bullseye-backports
  - bookworm
  - bookworm-backports
  - trixie
  - sid
  - experimental
and then a unified list:
.all-supported-releases: &all-supported-releases
  - *old-releases
  - *currently-supported-releases
that could be used in the matrix of the jobs that build all the images available in the pipeline container registry. However, due to limitations in GitLab, it is not possible to expand the variables or mapping values in a parallel:matrix context, at least not in an elegant fashion. This is the kind of issue that recently arose and that Santiago is currently working to solve, in the simplest possible way. Astute readers would notice that stretch is listed in the fully supported releases. And there is no problem with stretch, because it is built from archive.debian.org. Otto has actually tried to fix the broken image build job by doing the same, but it is still incorrect, because the security repository is not (yet) available in archive.debian.org. Additionally, Santiago has also worked on other merge requests, such as:
  1. support branch/tags as target head in the test projects,
  2. build autopkgtest image on top of stable
  3. Add .yamllint and make it happy in the autopkgtest-lxc project
  4. enable FF_SCRIPT_SECTIONS to log multiline commands, among others.

Archiving DebConf Websites, by Stefano Rivera DebConf, the annual Debian conference, has its own new website every year. These are typically complex dynamic web applications (featuring registration, call for papers, scheduling, etc.). Once the conference is over, there is no need to keep maintaining these applications, so we archive the sites off as static HTML, and serve them from Debian's static CDN. Stefano archived the websites for the last two DebConfs. The schedule system behind DebConf 14 and 15's websites was a derivative of Canonical's summit system. This was only used for a couple of years before migrating to wafer, the current system. Archiving summit content has been on the nice-to-have list for years, but nobody has ever tackled it. The machine that served the sites went away a couple of years ago. After much digging, a backup of the database was found, and Stefano got this code running on an ancient Python 2.7. Recently Stefano put this all together and hooked in an archive export to finally get this content preserved.

Python 3.x and pypy3 security bug triage, by Stefano Rivera Stefano Rivera triaged all the open security bugs against the Python 3.x and PyPy3 packages for Debian's stable and LTS releases. Several had been fixed but this wasn't recorded in the security tracker.

Linux livepatching support for Debian, by Santiago Ruano Rincón In collaboration with Emmanuel Arias, Santiago filed ITP bug #1070494. As stated in the bug, more than an Intent to Package, it is an Intent to Design and Implement live patching support for the Linux kernel in Debian. For now, Emmanuel and Santiago have done exploratory work and they are working to understand the different possibilities for implementing livepatching. One possible direction is to rely on kpatch, and the other is to package the modules using regular packaging tools. It also needs to be evaluated whether it is possible to rely on distributing the modules via packages, or instead as a service, as is done by some commercial distributions.

Miscellaneous contributions
  • Thorsten Alteholz uploaded cups-bjnp to improve packaging.
  • Colin Watson tracked down a baffling CI issue in openssh to unblock several merge requests, removed the user_readenv=1 option from its PAM configuration, and started on the first stage of his plan to split out GSS-API key exchange support to separate packages.
  • Colin did his usual routine work on the Python team, upgrading 26 packages to new upstream versions, and cherry-picking an upstream PR to fix a pytest 8 incompatibility in ipywidgets.
  • Colin NMUed a couple of packages to reduce the need for explicit overrides in Packages-arch-specific, and removed some other obsolete entries from there.
  • Emilio managed various library transitions, and helped finish a few of the remaining t64 transitions.
  • Helmut sent a patch for enabling piuparts to work as a regular user, building on earlier work.
  • Helmut sent patches for 7 cross build failures and 6 other Debian bugs, and fixed an infrastructure problem in crossqa.debian.net.
  • Nicholas worked on a sponsored package upload, and discovered the blhc tool for diagnosing build hardening.
  • Stefano Rivera started and completed the re2 transition. The release team suggested moving to a virtual package scheme that includes the absl ABI (as re2 now depends on it), and this was adopted.
  • Stefano continued to work on DebConf 24 planning.
  • Santiago continued to work on DebConf 24 Content tasks as well as DebConf 25 organisation.

18 May 2024

Russell Coker: Kogan 5120*2160 40" Monitor

I've just got a new Kogan 5120*2160 40" curved monitor. It cost $599 including shipping etc, which is much cheaper than the Dell monitor with similar specs selling for about $2500. For monitors with better than 4K resolution (by which I don't mean 5K*1440) this is the cheapest option. The nearest competitors are the 27" monitors that do 5120*2880 from Apple and some companies copying Apple's specs. While 5120*2880 is a significantly better resolution than what I got, it's probably not going to help me at 27" size. I've had a Dell 32" 4K monitor since the 1st of July 2022 [1]. It is a really good monitor and I had no complaints at all about it. It was clearer than the Samsung 27" 4K monitor I used before it and I'm not sure how much of that is due to better display technology (the Samsung was from 2017) and how much was due to larger size. But larger size was definitely a significant factor. I briefly owned a Phillips 43" 4K monitor [2] and determined that a 43" flat screen was definitely too big. At the time I thought that about 35" would have been ideal, but after a couple of years using a flat 32" screen I think that 32" is about the upper limit for a flat screen. This is the first curved monitor I've used but I'm already thinking that maybe 40" is too big for a 21:9 aspect ratio even with a curved screen. Maybe if it was 4:4 or even 16:9 that would be ok. Otherwise the ideal for a curved screen for me would be something between about 36" and 38". Also 43" is awkward to move around my desk. But this is still quite close to ideal. The first system I tested this on was a work laptop, a Dell Latitude 7400 2in1. On the Dell dock it did 4K resolution and on a HDMI cable it did 1440p, which was a disappointment as that laptop has talked to many 4K monitors at native resolution on the HDMI port with the same cable. This isn't an impossible problem; as I work in the IT department I can just go through all the laptops in the store room until I find one that supports it. But the 2in1 is a very nice laptop, so I might even just keep using it in 4K resolution when WFH. The laptop in question is deemed an executive laptop so I have to wait another 2 years for the executives to get new laptops before I can get a newer 2in1. On my regular desktop I had the problem of the display going off for a few seconds every minute or so and also occasionally giving a white flicker. That was using 5120*2160 with a DisplayPort switch as described in the blog post about the Dell 32" monitor. When I ran it in 4K resolution with the DisplayPort switch from my desktop it was fine. I then used the DisplayPort cable that came with the monitor directly connecting the video card to the display and it was fine at 5120*2160 with 75Hz. The monitor has the joystick thing that seems to have become some sort of standard for controlling modern monitors. It's annoying that pressing it in powers it off. I think there should be a separate button for that. Also the UI in general made me wonder if one of the vendors of expensive monitors had paid whoever designed it to make the UI suck. The monitor had a single dead pixel in the center of the screen, about 1/4 of the way down from the top, when I started writing this post. Now it's gone away, which is a concern as I don't know which pixels might have problems next or if the number of stuck pixels will increase. Also it would be good if there was a dark mode for the WordPress editor. I use dark mode wherever possible so I didn't notice the dead pixel for several hours until I started writing this blog post.
I watched a movie on Netflix and it took the entire screen area. I don't know if they are storing movies in 64:27 ratio or if they clipped the top and bottom; it was probably clipped but still looked OK. The monitor has different screen modes which make it look different, but I can't see much benefit to the different modes. The standard mode is what I usually use and it's brighter, and the movie mode seems OK for the one movie I've watched so far. In other news BenQ has just announced a 3840*2560 28" monitor specifically designed for programming [3]. This is the first time I've heard of a monitor with 3:2 ratio with modern resolution; we still aren't at the 4:3 type ratio that we were used to when 640*480 was high resolution, but it's a definite step in the right direction. It's also the only time I recall ever seeing a monitor advertised as being designed for programming. In the 80s there were home computers advertised as being computers for kids to program, but at that time it was either TV sets for monitors or monitors sold with computers. It was only after the IBM PC compatible market took off that having a choice of different monitors for one computer was a thing. In recent years monitors advertised as being for office use (meaning bright and expensive) have become common, as are monitors designed for gamer use (meaning high refresh rate). But BenQ seems to be the first to advertise a monitor for the purpose of programming. They have a desktop partition feature (which could be software or hardware, the article doesn't make it clear) to give some of the benefits of a tiled window manager to people who use OSs that don't support such things. The BenQ monitor is a bit small for my taste; I don't know if my vision is good enough to take advantage of 3840*2560 in a 28" monitor nowadays. I think at least 32" would be better. Google seems to be really into buying good monitors for their programmers; if every Google programmer got one of those BenQ monitors then that would be enough sales to make it worthwhile for them. I had hoped that we would have 6K monitors become affordable this year and 8K become less expensive than most cars. Maybe that won't happen and we will instead have a wider range of products like the ultra wide monitor I just bought and the BenQ programmer's monitor. If so I don't think that will be a bad result. Now the question is whether I can use this monitor for 2 years before finding something else that makes me want to upgrade. I can afford to spend the equivalent of a bit under $1/day on monitor upgrades.
