Search Results: "noop"

26 January 2026

Otto Kekäläinen: Ubuntu Pro subscription - should you pay to use Linux?

Ubuntu Pro is a subscription offering for Ubuntu users who want to pay for the assurance of getting quick and high-quality security updates for Ubuntu. I tested it out to see how it works in practice, and to evaluate how well it works as a commercial open source service model for Linux. Anyone running Ubuntu can subscribe at ubuntu.com/pro/subscribe by selecting the setup type "Desktops"; an enterprise subscription costs $25 per year (plus applicable taxes), and there is also a free tier for personal use. Once you have an account, you can find your activation token at ubuntu.com/pro/dashboard and use it to activate Ubuntu Pro on your desktop or laptop Ubuntu machine by running sudo pro attach <token>:
$ sudo pro attach aabbcc112233aabbcc112233
Enabling default service esm-apps
Updating package lists
Ubuntu Pro: ESM Apps enabled
Enabling default service esm-infra
Updating package lists
Ubuntu Pro: ESM Infra enabled
Enabling default service livepatch
Installing canonical-livepatch snap
Canonical livepatch enabled.
Unable to determine current instance-id
This machine is now attached to 'Ubuntu Pro Desktop'
You can at any time confirm the Ubuntu Pro status by running:
$ sudo pro status --all
SERVICE ENTITLED STATUS DESCRIPTION
anbox-cloud yes disabled Scalable Android in the cloud
cc-eal yes n/a Common Criteria EAL2 Provisioning Packages
esm-apps yes enabled Expanded Security Maintenance for Applications
esm-infra yes enabled Expanded Security Maintenance for Infrastructure
fips yes n/a NIST-certified FIPS crypto packages
fips-preview yes n/a Preview of FIPS crypto packages undergoing certification with NIST
fips-updates yes disabled FIPS compliant crypto packages with stable security updates
landscape yes enabled Management and administration tool for Ubuntu
livepatch yes disabled Canonical Livepatch service
realtime-kernel yes disabled Ubuntu kernel with PREEMPT_RT patches integrated
  generic yes disabled Generic version of the RT kernel (default)
  intel-iotg yes n/a RT kernel optimized for Intel IOTG platform
  raspi yes n/a 24.04 Real-time kernel optimised for Raspberry Pi
ros yes n/a Security Updates for the Robot Operating System
ros-updates yes n/a All Updates for the Robot Operating System
usg yes disabled Security compliance and audit tools
Enable services with: pro enable <service>
Account: Otto Kekalainen
Subscription: Ubuntu Pro Desktop
Valid until: Thu Mar 3 08:08:38 2026 PDT
Technical support level: essential
For a regular desktop/laptop user the most relevant service is esm-apps, which delivers extended security updates for many applications typically used on desktop systems. Another relevant command to confirm the current subscription status is:
$ sudo pro security-status
2828 packages installed:
2143 packages from Ubuntu Main/Restricted repository
660 packages from Ubuntu Universe/Multiverse repository
13 packages from third parties
12 packages no longer available for download
To get more information about the packages, run
pro security-status --help
for a list of available options.
This machine is receiving security patching for Ubuntu Main/Restricted
repository until 2029.
This machine is attached to an Ubuntu Pro subscription.
Ubuntu Pro with 'esm-infra' enabled provides security updates for
Main/Restricted packages until 2034.
Ubuntu Pro with 'esm-apps' enabled provides security updates for
Universe/Multiverse packages until 2034. You have received 26 security
updates.
This confirms the scope of the security support. You can even run sudo pro security-status --esm-apps to get a detailed breakdown of the installed software packages in scope for Expanded Security Maintenance (ESM).

Experiences from using Ubuntu Pro for over a year

Personally, I have been using it on two laptop systems for well over a year now, and everything seems to have worked well. I see apt downloading software updates from https://esm.ubuntu.com/apps/ubuntu, but other than that there aren't any notable signs of Ubuntu Pro being in use. That is a good thing: after all, one is paying for the assurance that everything works with minimal disruption, so a service that enables smooth sailing should stay in the background and not draw too much attention to itself.

Using Landscape to manage multiple Ubuntu laptops

Landscape.canonical.com is a fleet management system that shows information such as security update status and resource utilization for the computers you administer. [Image: Landscape portal reports showing security update status and resource utilization] Systems attached to Ubuntu Pro under one's account are not automatically visible in Landscape; they have to be enrolled. To enroll an Ubuntu Pro attached desktop or laptop in Landscape, first install the required package with sudo apt install landscape-client and then run sudo landscape-config --account-name <account name> to start the configuration wizard. You can find your account name in the Landscape portal. At the last wizard question, "Request a new registration for this computer now? [y/N]", hit y to accept. If successful, the new computer will be visible on the Landscape portal page "Pending computers", from where you can click to accept it. [Image: Landscape portal page showing pending computer registration] If I had a large fleet of computers, Landscape might come in useful. It is also obvious that Landscape is intended primarily for managing server systems: for example, the default alarm triggers when a system goes offline, which is normal for laptops and desktops but alert-worthy only on servers. It is good to know that Landscape exists, but on desktop systems I would probably skip it and stick to the security updates offered by Ubuntu Pro.

Landscape is evolving

The screenshots above are from the current Landscape portal, which I have been using so far. Canonical has recently launched a new web portal with a fresh look. [Image: New Landscape dashboard with fresh look] This shows Canonical is actively investing in the service, and it is likely to sit at the center of their business model for years to come.

Other offerings by Canonical for individual users

Canonical, the company behind the world's most popular desktop Linux distribution, Ubuntu, has been offering various commercial support services for corporate customers since the company launched back in 2005, but there haven't been any offerings for individual users since Ubuntu One, with file syncing, a music store and more, was wound down back in 2014. Canonical and the other major Linux companies, Red Hat and SUSE, have always been very enterprise-oriented, presumably because achieving economies of scale is much easier when maintaining standardized corporate environments than when dealing with the wide range of custom configurations individual consumers might have. I remember that some years ago Canonical offered desktop support under the Ubuntu Advantage product name, but the minimum subscription was for 5 desktop systems, which typically isn't an option for a regular home consumer. I am glad to see Ubuntu Pro is now available, and I honestly hope people using Ubuntu will opt into it. The more customers it has, the more it incentivizes Canonical to develop and maintain features that are important for desktop and home users.

Pay for Linux because you can, not because you have to

Open source is a great software development model for rapid innovation and adoption, but I don't think the business models in the space are quite mature yet. Users who get long-term value should participate more in funding open source maintenance work. While donation platforms like GitHub Sponsors, OpenCollective and the like have gained popularity in recent years, none of them seem to generate recurring revenue on a scale matching how widely open source software is used in 2026. I welcome more paid schemes such as Ubuntu Pro, as I believe they are beneficial for the whole ecosystem. I also expect more service providers to enter this space and experiment with different open source business models and various forms of decentralized funding. Linux and open source are primarily free as in speech, but as a side effect license fees are hard to enforce and many use Linux without paying for it. The more that people, corporations and even countries rely on it to stay sovereign in the information society, the more users should think about how they want to use Linux and whom they want to pay to maintain it and other critical parts of the open source ecosystem.

19 January 2026

Isoken Ibizugbe: Mid-Point Project Progress

Halfway There

Hurray! I have officially reached the 6-week mark, the halfway point of my Outreachy internship. The time has flown by incredibly fast and feels short, because there is still so much exciting work to do.

I remember starting this journey feeling overwhelmed, trying to gain momentum. Today, I feel much more confident. I began with the apps_startstop task during the contribution period, writing manual test steps and creating preparation Perl scripts for the desktop environments. Since then, I've transitioned into full automation and taken a liking to reading the openQA upstream documentation when I run into issues or need a reference.

In all of this, I've committed over 30 hours a week to the project. This dedicated time has allowed me to take an in-depth look at the Debian ecosystem and automated quality assurance.

The Original Roadmap vs. Reality

Reviewing my 12-week goal, which included extending automated tests for live image testing, installer testing, and documentation, I am happy to report that I am right on track. My work on desktop apps tests has directly improved the quality of both the Live Images and the netinst (network installer) ISOs.

Accomplishments

I have successfully extended the apps_startstop tests for two desktop environments (DEs): Cinnamon and LXQt. These tests ensure that common and DE-specific apps launch and close correctly across different environments.

  • Merged Milestone: My Cinnamon tests have been officially merged into the upstream repository! [MR !84]
  • LXQt & Adaptability: I am in the final stages of the LXQt tests. Interestingly, I had to update these tests mid-way through because of a version update in the DE. This required me to update the needles (image references) to match the new UI, a great lesson in software maintenance.

Solving for Synergy

One of my favorite challenges was suggested by my mentor, Roland: synergizing the tests to reduce redundancy. I observed that some applications (like Firefox and LibreOffice) behave identically across different desktops. Instead of duplicating Perl scripts/code for every single DE, I used symbolic links. This allows the use of the same Perl script and possibly the same needles, making the test suite lighter and much easier to maintain.

The Contributor Guide

During the contribution phase, I noticed how rigid the documentation and coding style requirements are. While this ensures high standards and uniformity, it can be intimidating for newcomers and time-consuming for reviewers.

To help, I created a contributor guide [MR !97]. This guide addresses the project's writing style. My goal is to reduce the back-and-forth during reviews, making the process more efficient for everyone and helping new contributors.

Looking Forward

For the second half of the internship, I plan to:

  1. Assist others: Help new contributors extend apps start-stop tests to even more desktop environments.
  2. Explore new coverage: Move beyond start-stop tests into deeper functional testing.

This journey has been an amazing experience of learning and connecting with the wider open-source community, especially Debian Women and the Linux QA team.

I am deeply grateful to my mentors, Tassia Camoes Araujo, Roland Clobus, and Philip Hands, for their constant guidance and for believing in my ability to take on this project.

Here's to the next 6 weeks

11 January 2026

Otto Kekäläinen: Stop using MySQL in 2026, it is not true open source

If you care about supporting open source software and still use MySQL in 2026, you should switch to MariaDB, as so many others already have. The number of git commits on github.com/mysql/mysql-server declined significantly in 2025. The screenshot below shows the state of git commits as of writing this in January 2026, and the picture should be alarming to anyone who cares about software being open source. [Image: MySQL GitHub commit activity decreasing drastically]

This is not surprising: Oracle should not be trusted as the steward of open source projects

When Oracle acquired Sun Microsystems, and MySQL along with it, back in 2009, the European Commission almost blocked the deal due to concerns that Oracle's goal was simply to stifle competition. The deal went through after Oracle committed to keep MySQL going and not kill it, but (to nobody's surprise) Oracle has not been a good steward of MySQL as an open source project, and the community around it has been withering away for years. All development is done behind closed doors. The publicly visible bug tracker is not the real one Oracle staff actually use for MySQL development, and the few people who try to contribute to MySQL see their pull requests and patch submissions marked as received with mostly no feedback; those changes may or may not appear in the next MySQL release, often rewritten, and with only Oracle staff in the git author/committer fields. The real author gets only a small mention in a blog post. When I was the engineering manager for the core team working on RDS MySQL and RDS MariaDB at Amazon Web Services, I oversaw my engineers' contributions to both MySQL and MariaDB (the latter being a fork of MySQL by the original MySQL author, Michael Widenius). All the software developers in my org disliked submitting code to MySQL because of how poorly Oracle received their contributions. MariaDB is the stark opposite, with all development taking place in real time on github.com/mariadb/server, anyone being able to submit a pull request and get a review, all bugs being openly discussed at jira.mariadb.org, and so forth, just as one would expect from a true open source project. MySQL is open source only by license (GPL v2), not as a project.

MySQL's technical decline in recent years

Despite not being a good open source steward, Oracle should be given credit for keeping the MySQL organization alive, allowing it to exist fairly independently and to continue developing and releasing new MySQL versions well over a decade after the acquisition. I have no insight into how many customers they had, but I assume the MySQL business was fairly profitable and financially useful to Oracle, at least as long as MySQL didn't gain enough features to threaten Oracle's own main database business. I don't know why, perhaps because too many talented people had left the organization, but from a technical point of view MySQL clearly started to deteriorate from 2022 onward. When MySQL 8.0.29 was released with the default ALTER TABLE method switched to run in-place, it had a lot of corner cases that didn't work, causing the database to crash and data to corrupt for many users. The issue wasn't fully fixed until a year later in MySQL 8.0.32. To many users' annoyance, Oracle declared the 8.0 series evergreen and introduced features and changes in the minor releases, instead of just doing bug and security fixes as users had historically learned to expect from x.y.Z maintenance releases. There was no new major MySQL version for six years: after MySQL 8.0 in 2018, it wasn't until 2023 that MySQL 8.1 was released, and it was just a short-term preview release. The first actual new major release, MySQL 8.4 LTS, arrived in 2024. Even though it was a new major release, many users were disappointed, as it had barely any new features. Many also reported degraded performance with newer MySQL versions; for example, the benchmark by the well-known MySQL performance expert Mark Callaghan below shows that on write-heavy workloads MySQL 9.5 throughput is typically 15% lower than in 8.0. [Image: Benchmark showing new MySQL versions being slower than the old] Because newer MySQL versions deprecated many features, a lot of users also complained about significant struggles with both the MySQL 5.7 to 8.0 and the 8.0 to 8.4 upgrades. With few new features and a heavy focus on code base cleanup and feature deprecation, it became obvious to many that Oracle had decided to keep MySQL barely alive and put all relevant new features (e.g. vector search) into HeatWave, Oracle's closed-source and cloud-only service for MySQL customers. As it was evident that Oracle wasn't investing in MySQL, Percona's Peter Zaitsev wrote "Is Oracle Finally Killing MySQL?" in June 2024. By this time MySQL's popularity as ranked by DB-Engines had also started to tank hard, a trend that will likely accelerate in 2026. [Image: MySQL dropping significantly in DB-Engines ranking] In September 2025, news reports said Oracle was reducing its workforce and that the MySQL staff was being heavily cut. Obviously this does not bode well for MySQL's future, and in November Peter Zaitsev posted stats showing that the latest MySQL maintenance release contained fewer bug fixes than before.

Open source is more than ideology: it has very real effects on software security and sovereignty

Some say they don't care whether MySQL is truly open source, or whether it has a future in the coming years, as long as it still works now. I am afraid people who think this way are taking a huge risk. The database is often the most critical part of a software application stack, and any flaw or problem in operations, let alone a security issue, will have immediate consequences; not caring will eventually get people fired or sued. In open source, problems are discussed openly, and the bigger the problem, the more people and companies will contribute to fixing it. Open source as a development methodology is similar to the scientific method, with a free flow of ideas that are constantly contested and where only the ones with the most compelling evidence win. Not being open means more obscurity, more risk and more of a "just trust us, bro" attitude. This open-versus-closed difference is very visible, for example, in how Oracle handles security issues. In 2025 alone MySQL published 123 CVEs about security issues, while MariaDB had 8; 117 of those CVEs affected only MySQL and not MariaDB. I haven't read them all, but typically the CVEs contain hardly any real details. As an example, the most recent one, CVE-2025-53067, states "Easily exploitable vulnerability allows high privileged attacker with network access via multiple protocols to compromise MySQL Server." There is no information a security researcher or auditor could use to verify whether the original issue actually existed, whether it was fixed, or whether the fix was sufficient and fully mitigated the issue. MySQL users just have to take Oracle's word that it is all good now. Handling security issues like this stands in stark contrast to other open source projects, where all security issues and their code fixes are open for full scrutiny after the initial embargo is over and the CVE is made public. There are also various forms of enshittification going on that one would not see in a true open source project: everything about MySQL as software, documentation and website pushes users to stop using the open source version and move to the closed MySQL versions, in particular to HeatWave, which is not only closed-source but also gives Oracle full control over the customer's database contents. Of course, some could say this is how Oracle makes money and is able to provide a better product. But stories on Reddit and elsewhere suggest that what is going on is more like Oracle milking the last remaining MySQL customers, who are forced to pay more and more while getting less and less.

There are options and migrating is easy, so just do it

A large share of MySQL users switched to MariaDB already in the mid-2010s, in particular everyone who cared deeply about their database software staying truly open source. That included large installations such as Wikipedia, and Linux distributions such as Fedora and Debian. Because it's open source and there is no centralized machine collecting statistics, nobody knows the exact market shares. There are, however, some application-specific stats, such as that 57% of WordPress sites around the world run MariaDB, while the share for MySQL is 42%. For anyone running a classic LAMP-stack application such as WordPress, Drupal, MediaWiki, Nextcloud or Magento, switching the old MySQL database to MariaDB is straightforward. As MariaDB is a fork of MySQL and mostly backwards compatible with it, swapping out MySQL for MariaDB can be done without changing any of the existing connectors or database clients, as they will continue to work with MariaDB as if it were MySQL. For those running custom applications who have the freedom to change how and what database is used, there are tens of mature and well-functioning open source databases to choose from, with PostgreSQL being the most popular general-purpose database. If your application was built from the start for MySQL, switching to PostgreSQL may require a lot of work, and the MySQL/MariaDB architecture and the InnoDB storage engine may still offer an edge in, for example, online services where high performance, scalability and solid replication features are the highest priority. For a quick and easy migration MariaDB is probably the best option. Switching from MySQL to Percona Server is also very easy, as it closely tracks all changes in MySQL and deviates from it only by a small number of improvements made by Percona. However, precisely because it is basically just a customized version of the MySQL server, it's not a viable long-term solution for those trying to fully ditch the dependency on Oracle. There are also several open source databases that have no common ancestry with MySQL but strive to be MySQL-compatible, so most apps built for MySQL can switch to them without needing their SQL statements rewritten. One such database is TiDB, which was designed from scratch for highly scalable and large systems, and is good enough that even Amazon's latest database offering, DSQL, borrowed many ideas from it. However, TiDB only really shines in larger distributed setups, so for the vast majority of regular small- and mid-scale applications currently using MySQL, the most practical solution is probably to just switch to MariaDB, which on most Linux distributions can be installed simply by running apt/dnf/brew install mariadb-server. Whatever you end up choosing, as long as it is not Oracle, you will be better off.

4 January 2026

Matthew Garrett: What is a PC compatible?

Wikipedia says "An IBM PC compatible is any personal computer that is hardware- and software-compatible with the IBM Personal Computer (IBM PC) and its subsequent models". But what does this actually mean? The obvious literal interpretation is that, for a device to be PC compatible, all software originally written for the IBM 5150 must run on it. Is this a reasonable definition? Is it one that any modern hardware can meet? Before we dig into that, let's go back to the early days of the x86 industry. IBM had launched the PC built almost entirely around off-the-shelf Intel components, and shipped full schematics in the IBM PC Technical Reference Manual. Anyone could buy the same parts from Intel and build a compatible board. They'd still need an operating system, but Microsoft was happy to sell MS-DOS to anyone who'd turn up with money. The only thing stopping people from cloning the entire board was the BIOS, the component that sat between the raw hardware and much of the software running on it. The concept of a BIOS originated in CP/M, an operating system originally written in the 70s for systems based on the Intel 8080. At that point in time there was no meaningful standardisation - systems might use the same CPU but otherwise have entirely different hardware, and any software that made assumptions about the underlying hardware wouldn't run elsewhere. CP/M's BIOS was effectively an abstraction layer, a set of code that could be modified to suit the specific underlying hardware without needing to modify the rest of the OS. As long as applications only called BIOS functions, they didn't need to care about the underlying hardware and would run on all systems that had a working CP/M port. By 1979, boards based on the 8086, Intel's successor to the 8080, were hitting the market. The 8086 wasn't machine code compatible with the 8080, but 8080 assembly code could be assembled to 8086 instructions to simplify porting old code. Despite this, the 8086 version of CP/M was taking some time to appear, and a company called Seattle Computer Products started producing a new OS closely modelled on CP/M and using the same BIOS abstraction layer concept. When IBM started looking for an OS for their upcoming 8088 (an 8086 with an 8-bit data bus rather than a 16-bit one) based PC, a complicated chain of events resulted in Microsoft paying a one-off fee to Seattle Computer Products, porting their OS to IBM's hardware, and the rest is history. But one key part of this was that despite what was now MS-DOS existing only to support IBM's hardware, the BIOS abstraction remained, and the BIOS was owned by the hardware vendor - in this case, IBM. One key difference, though, was that while CP/M systems typically included the BIOS on boot media, IBM integrated it into ROM. This meant that MS-DOS floppies didn't include all the code needed to run on a PC - you needed IBM's BIOS. To begin with this wasn't obviously a problem in the US market since, in a way that seems extremely odd from where we are now in history, it wasn't clear that machine code was actually copyrightable. In 1982 Williams v. Artic determined that it could be, even if fixed in ROM - this ended up having broader industry impact in Apple v. Franklin, and it became clear that clone machines making use of the original vendor's ROM code weren't going to fly. Anyone wanting to make hardware compatible with the PC was going to have to find another way. And here's where things diverge somewhat.
Compaq famously performed clean-room reverse engineering of the IBM BIOS to produce a functionally equivalent implementation without violating copyright. Other vendors, well, were less fastidious - they came up with BIOS implementations that either implemented a subset of IBM's functionality or didn't implement all the same behavioural quirks, and compatibility was restricted. In this era several vendors shipped customised versions of MS-DOS that supported different hardware (which you'd think wouldn't be necessary given that's what the BIOS was for, but still), and the set of PC software that would run on their hardware varied wildly. This was the era where vendors even shipped systems based on the Intel 80186, an improved 8086 that was both faster than the 8086 at the same clock speed and also available at higher clock speeds. Clone vendors saw an opportunity to ship hardware that outperformed the PC, and some of them went for it. You'd think that IBM would have immediately jumped on this as well, but no - the 80186 integrated many components that were separate chips on 8086 (and 8088) based platforms, but crucially didn't maintain compatibility. As long as everything went via the BIOS this shouldn't have mattered, but there were many cases where going via the BIOS introduced performance overhead or simply didn't offer the functionality that people wanted, and since this was the era of single-user operating systems with no memory protection, there was nothing stopping developers from just hitting the hardware directly to get what they wanted. Changing the underlying hardware would break them. And that's what happened. IBM was the biggest player, so people targeted IBM's platform. When BIOS interfaces weren't sufficient they hit the hardware directly - and even if they weren't doing that, they'd end up depending on behavioural quirks of IBM's BIOS implementation. The market for DOS-compatible but not PC-compatible machines mostly vanished, although there were notable exceptions - in Japan the PC-98 platform achieved significant success, largely as a result of the Japanese market being pretty distinct from the rest of the world at that point in time, but also because it actually handled Japanese at a point where the PC platform was basically restricted to ASCII or minor variants thereof. So, things remained fairly stable for some time. Underlying hardware changed - the 80286 introduced the ability to access more than a megabyte of address space and would promptly have broken a bunch of things, except IBM came up with an utterly terrifying hack that bit me back in 2009, and which ended up sufficiently codified into Intel designs that it was one mechanism for breaking the original Xbox security. The first 286 PC even introduced a new keyboard controller that supported better keyboards but remained backwards compatible with the original PC to avoid breaking software. Even when IBM launched the PS/2, the first significant rearchitecture of the PC platform, with a brand new expansion bus and associated patents to prevent people cloning it without paying off IBM, they made sure that all the hardware was backwards compatible. For decades, PC compatibility meant not only supporting the officially supported interfaces, it meant supporting the underlying hardware. This is what made it possible to ship install media that was expected to work on any PC, even if you'd need some additional media for hardware-specific drivers.
It's something that still distinguishes the PC market from the ARM desktop market. But it's not as true as it used to be, and it's interesting to think about whether it ever was as true as people thought. Let's take an extreme case. If I buy a modern laptop, can I run 1981-era DOS on it? The answer is clearly no. First, modern systems largely don't implement the legacy BIOS. The entire abstraction layer that DOS relies on isn't there, having been replaced with UEFI. When UEFI first appeared it generally shipped with a Compatibility Support Module, a layer that would translate BIOS interrupts into UEFI calls, allowing vendors to ship hardware with more modern firmware and drivers without having to duplicate them to support older operating systems[1]. Is this system PC compatible? By the strictest of definitions, no. Ok. But the hardware is broadly the same, right? There are projects like CSMWrap that allow a CSM to be implemented on top of stock UEFI, so everything that hits the BIOS should work just fine. And, well, yes: assuming they implement the BIOS interfaces fully, anything using the BIOS interfaces will be happy. But what about stuff that doesn't? Old software is going to expect that my Sound Blaster is going to be on a limited set of IRQs, and is going to assume that it's going to be able to install its own interrupt handler and ACK those on the interrupt controller itself, and that's really not going to work when you have a PCI card that's been mapped onto some APIC vector. Also, if your keyboard is attached via USB or SPI then reading it via the CSM will work (because it's calling into UEFI to get the actual data), but trying to read the keyboard controller directly won't[2], so you're still actually relying on the firmware to do the right thing. But it's not, because the average person who wants to run DOS on a modern computer owns three fursuits and some knee-length socks, and while you are important and vital and I love you all, you're not enough to actually convince a transglobal megacorp to flip the bit in the chipset that makes all this old stuff work. But imagine you are, or imagine you're the sort of person who (like me) thinks writing their own firmware for their weird Chinese Thinkpad-knockoff motherboard is a good and sensible use of their time - can you make this work fully? Haha, no, of course not. Yes, you can probably make sure that the PCI Sound Blaster that's plugged into a Thunderbolt dock has interrupt routing to something that is absolutely no longer an 8259 but is pretending to be, so you can just handle IRQ 5 yourself, and you can probably still even write some SMM code that will make your keyboard work, but what about the corner cases? What if you're trying to run something built with IBM Pascal 1.0? There's a risk that it'll assume that trying to access an address just over 1MB will give it the data stored just above 0, and now it'll break. It'd work fine on an actual PC, and it won't work here, so are we PC compatible? That's a very interesting abstract question and I'm going to entirely ignore it. Let's talk about PC graphics[3]. The original PC shipped with two different optional graphics cards - the Monochrome Display Adapter and the Color Graphics Adapter. If you wanted to run games you were doing it on CGA, because MDA had no mechanism to address individual pixels, so you could only render full characters. So, even on the original PC, there was software that would run on some hardware but not on other hardware. Things got worse from there.
CGA was, to put it mildly, shit. Even IBM knew this - in 1984 they launched the PCjr, intended to make the PC platform more attractive to home users. As well as maybe the worst keyboard ever to be associated with the IBM brand, IBM added some new video modes that allowed displaying more than 4 colours on screen at once[4], and software that depended on that wouldn't display correctly on an original PC. Of course, because the PCjr was a complete commercial failure, it wouldn't display correctly on any future PCs either. This is going to become a theme. There's never been a properly specified PC graphics platform. BIOS support for advanced graphics modes[5] ended up specified by VESA rather than IBM, and even then getting good performance involved hitting the hardware directly. It wasn't until Microsoft specced DirectX that anything was broadly usable, even if you limited yourself to Microsoft platforms, and this was an OS-level API rather than a hardware one. If you stick to BIOS interfaces then CGA-era code will work fine on graphics hardware produced up until the 20-teens, but if you were trying to hit CGA hardware registers directly then you're going to have a bad time. This isn't even a new thing - even if we restrict ourselves to the authentic IBM PC range (and ignore the PCjr), by the time we get to the Enhanced Graphics Adapter we're not entirely CGA compatible. Is an IBM PC/AT with EGA PC compatible? You'd likely say "yes", but there's software written for the original PC that won't work there. And, well, let's go even more basic. The original PC had a well-defined CPU frequency and a well-defined CPU that would take a well-defined number of cycles to execute any given instruction. People could write software that depended on that. When CPUs got faster, some software broke. This resulted in systems with a Turbo Button - a button that would drop the clock rate to something approximating the original PC so stuff would stop breaking. It's fine, we'd later end up with Windows crashing on fast machines because hardware details will absolutely bleed through. So, what's a PC compatible? No modern PC will run the DOS that the original PC ran. If you try hard enough you can get it into a state where it'll run most old software, as long as it doesn't have assumptions about memory segmentation or your CPU or want to talk to your GPU directly. And even then it'll potentially be unusable or crash, because time is hard. The truth is that there's no way we can technically describe a "PC compatible" now - or, honestly, ever. If you sent a modern PC back to 1981 the media would be amazed and also point out that it didn't run Flight Simulator. "PC compatible" is a socially defined construct, just like "woman". We can get hung up on the details or we can just chill.

  1. Windows 7 is entirely happy to boot on UEFI systems, except that it relies on being able to use a BIOS call to set the video mode during boot, which has resulted in things like UEFISeven to make that work on modern systems that don't provide BIOS compatibility.
  2. Back in the 90s and early 2000s, operating systems didn't necessarily have native drivers for USB input devices, so there was hardware support for trapping OS accesses to the keyboard controller and redirecting them into System Management Mode, where some software invisible to the OS would speak to the USB controller and then fake a response. Anyway, that's how I made a laptop that could boot unmodified Mac OS X.
  3. (my name will not be Wolfwings Shadowflight)
  4. Yes, yes, ok, 8088 MPH demonstrates that if you really want to you can do better than that on CGA.
  5. And by "advanced" we're still talking about the 90s, don't get excited.

22 December 2025

Isoken Ibizugbe: Everybody Struggles

That's right: everyone struggles. You could be working on a project only to find a mountain of new things to learn, or your code might keep failing until you start to doubt yourself. I feel like that sometimes, wondering if I'm good enough. But in those moments, I whisper to myself: you don't know it yet; once you do, it will get easy. While contributing to the Debian openQA project, there was so much to learn, from understanding what Debian actually is to learning the various installation methods and image types. I then had to tackle the installation and usage of openQA itself. I am incredibly grateful for the installation guide provided by Roland Clobus and the documentation on writing code for openQA.

Overcoming Technical Hurdles

Even with amazing guides, I hit major roadblocks. Initially, I was using Windows with VirtualBox, but openQA couldn't seem to run the tests properly. Despite my mentors (Roland and Phil) suggesting alternatives, the issues persisted. I actually installed openQA twice on VirtualBox and realized that if you miss even one small step of the installation, it becomes very difficult to move forward. Eventually, I took the big step and dual-booted my machine into Linux. Even then, the challenges didn't stop: my openQA virtual machine (VM) ran out of allocated space and froze, halting my testing. I reached out on the IRC channel and received the help I needed to get back on track.

My Research Line-up

When I'm struggling for information, I follow my go-to first step for research, followed by other alternatives:
  1. Google: This is my first stop. It helped me navigate the switch to a Linux OS and troubleshoot KVM connection issues for the VM. Whether it's an Ubuntu forum or a technical blog, I skim through until I find something that can help.
  2. The Upstream Documentation: If Google doesn t have the answer, I go straight to the official openQA documentation. This is a goldmine. It explains functions, how to use them, and lists usable test variables.
  3. The Debian openQA UI: While working on the apps_startstop tests, I looked at previous similar tests on openqa.debian.net/tests and checked the Settings tab to see exactly what variables were used and how the test was configured.
  4. Salsa (Debian's GitLab): I sometimes reference the Salsa Debian openQA README and the developer guides: Getting started, and the developer docs on how to write tests.
I also had to learn the basics of the Perl programming language during the four-week contribution stage. While we don't need to be Perl experts, I found it essential to understand the logic so I can explain my work to others. I've spent a lot of time studying the codebase, which is time-consuming but incredibly valuable. For example, my apps_startstop test command originally used a long list of applications via ENTRYPOINT. I began to wonder if there was a more efficient way. With Roland's guidance, I explored the main.pm file. This helped me understand how the apps_startstop function works and how it calls variables. I also noticed there are utility functions that are called in tests; I check them and try to understand what they do, so I know whether I need them or not. I know I still have a lot to learn, and yes, the doubt still creeps in sometimes. But I am encouraged by the support of my mentors and the fact that they believe in my ability to contribute to this project.
If you're struggling too, just remember: you don't know it yet; once you do, it will get easy.

19 December 2025

Otto Kekäläinen: Backtesting trailing stop-loss strategies with Python and market data

In January 2024 I wrote about the insanity of the Magnificent Seven dominating the MSCI World Index, and I wondered how long the number can continue to go up. It has continued to surge upward at an accelerating pace, which makes me worry that a crash is likely closer. As a software professional, I decided to analyze whether stop-loss orders could reliably automate avoiding deep drawdowns. As everyone with some savings in the stock market (hopefully) knows, the stock market eventually experiences crashes; it is just a matter of when and how deep. Staying on the sidelines for years is not a good investment strategy, as inflation will erode the value of your savings. Assuming the current true inflation rate is around 7%, a restaurant dinner that costs 20 euros today will cost about 24.50 euros in three years. Savings of 1000 euros today would drop in purchasing power from 50 dinners to only 40 dinners in three years. Hence, if you intend to retain the value of your hard-earned savings, they need to be invested in something that grows in value. Most people try to beat inflation by buying shares in stable companies, directly or via broad market ETFs. These historically grow faster than inflation during normal years, but will likely drop in value during recessions.
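The arithmetic above is easy to verify with a couple of lines of Python (a trivial check of the compound-inflation numbers, not code from the original post):
python
inflation = 0.07
dinner = 20 * (1 + inflation) ** 3
print(f"Dinner in three years: {dinner:.2f} euros")    # 24.50
print(f"Dinners for 1000 euros: {int(1000 / dinner)}")  # 40, down from 50 today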

What is a trailing stop-loss order?

What if you could buy stocks to benefit from their value increasing without having to worry about a potential crash? All modern online stock brokers have a feature called a stop-loss order, where you enter a price at which your stocks are automatically sold if they drop to that price. A trailing stop-loss order is similar, but instead of a fixed price you enter a margin (e.g. 10%); if the stock price rises, the stop-loss price trails upwards by that margin. For example, if you buy a share at 100 euros and it has risen to 110 euros, you can set a 10% trailing stop-loss order, which automatically sells if the price drops 10% from the peak of 110 euros, i.e. at 99 euros. Thus, no matter what happens, you lose at most 1 euro. And if the stock price continues to rise to 150 euros, the trailing stop-loss readjusts to 150 euros minus 10%, which is 135 euros (150 - 15 = 135). If the price then dropped to 135 euros, you would lock in a gain of 35 euros, which is not the peak price of 150 euros but still better than whatever the price fell to as a result of a large crash. In this simple case it obviously makes sense in theory, but it might not make sense in practice. Prices constantly oscillate, so you don't want a margin that is too small, otherwise you exit too early. Conversely, a large margin may mean too large a drawdown before exiting. If markets crash rapidly, it might be that nobody buys your stocks at the stop-loss price, and the shares have to be sold at an even lower price. Also, what will you do once the position is sold? The reason you invested in the stock market was to avoid holding cash, so would you buy the same stock back when the crash bottoms out? But how will you know when the bottom has been reached?
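As a minimal sketch of the mechanics just described (my own illustration, not code from the post; the function name trailing_stop_exit is made up), the trailing logic fits in a few lines of Python:
python
def trailing_stop_exit(prices, margin=0.10):
    """Return (index, stop_price) where a trailing stop-loss triggers,
    or None if it never does. Assumes the order fills exactly at the
    stop price, which a fast crash would not guarantee in reality."""
    peak = prices[0]
    for i, price in enumerate(prices):
        peak = max(peak, price)        # the stop level trails the highest price seen
        stop = peak * (1 - margin)
        if price <= stop:              # price fell to the stop level: sell
            return i, stop
    return None

# The example from the text: bought at 100, price peaks at 150, then crashes.
prices = [100, 110, 150, 140, 134, 90]
print(trailing_stop_exit(prices, margin=0.10))  # (4, 135.0): sold at 150 * 0.9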

Backtesting stock market strategies with Python, YFinance, Pandas and Lightweight Charts

I am not a professional investor, and nobody should take investment advice from me. However, I know what backtesting is and how to leverage open source software, so I wrote a Python script to test whether the trading strategy of using trailing stop-loss orders with specific margin values would have worked for a particular stock. First you need data: YFinance is a handy Python library that can download the historic price data for any stock ticker on Yahoo.com. Then you need to manipulate the data: Pandas is the Python data analysis library with advanced data structures for working with relational or labeled data. Finally, to visualize the results, I used Lightweight Charts, a fast, interactive library for rendering financial charts, which allows you to plot the stock price, the trailing stop-loss line, and the points where trades would have occurred. I really like how zooming is implemented in Lightweight Charts, which makes drilling into the data points feel effortless. The full solution is not polished enough to be published for others to use, but you can piece together your own by reusing some of the key snippets. To avoid re-downloading the same data repeatedly, I implemented a small caching wrapper that saves the data locally (as Parquet files):
python
from datetime import datetime
from pathlib import Path

import pandas
import yfinance

# Example configuration (defined elsewhere in the full script)
TICKER = "BNP.PA"
START_DATE = "2014-01-01"
CACHE_DIR = Path("cache")

CACHE_DIR.mkdir(parents=True, exist_ok=True)
end_date = datetime.today().strftime("%Y-%m-%d")
cache_file = CACHE_DIR / f"{TICKER}-{START_DATE}--{end_date}.parquet"

if cache_file.is_file():
    dataframe = pandas.read_parquet(cache_file)
    print(f"Loaded price data from cache: {cache_file}")
else:
    dataframe = yfinance.download(
        TICKER,
        start=START_DATE,
        end=end_date,
        progress=False,
        auto_adjust=False,
    )
    dataframe.to_parquet(cache_file)
    print(f"Fetched new price data from Yahoo Finance and cached to: {cache_file}")
The dataframe is a Pandas object with a powerful API. For example, to print a snippet from the beginning and the end of the dataframe to see what the data looks like, you can use:
python
print("First 5 rows of the raw data:")
print(df.head())
print("Last 5 rows of the raw data:")
print(df.tail())
Example output:
First 5 rows of the raw data
Price Adj Close Close High Low Open Volume
Ticker BNP.PA BNP.PA BNP.PA BNP.PA BNP.PA BNP.PA
Date
2014-01-02 29.956285 55.540001 56.910000 55.349998 56.700001 316552
2014-01-03 30.031801 55.680000 55.990002 55.290001 55.580002 210044
2014-01-06 30.080338 55.770000 56.230000 55.529999 55.560001 185142
2014-01-07 30.943321 57.369999 57.619999 55.790001 55.880001 370397
2014-01-08 31.385597 58.189999 59.209999 57.750000 57.790001 489940
Last 5 rows of the raw data
Price Adj Close Close High Low Open Volume
Ticker BNP.PA BNP.PA BNP.PA BNP.PA BNP.PA BNP.PA
Date
2025-12-11 78.669998 78.669998 78.919998 76.900002 76.919998 357918
2025-12-12 78.089996 78.089996 80.269997 78.089996 79.470001 280477
2025-12-15 79.080002 79.080002 79.449997 78.559998 78.559998 233852
2025-12-16 78.860001 78.860001 79.980003 78.809998 79.430000 283057
2025-12-17 80.080002 80.080002 80.150002 79.080002 79.199997 262818
Adding new columns to the dataframe is easy. For example, I used a custom function to calculate the Relative Strength Index (RSI). To add a new RSI column with a value for every row computed from the price series, only one line of code is needed, no custom loops required:
python
df["RSI"] = compute_rsi(df["price"], period=14)
After manipulating the data, the series can be converted into an array structure and printed as JSON into a placeholder in an HTML template:
python
import json

import jinja2

baseline_series = [
    {"time": ts, "value": val}
    for ts, val in df_plot[["timestamp", BASELINE_LABEL]].itertuples(index=False)
]

baseline_json = json.dumps(baseline_series)

# jinja2.Template() takes the template source text, not a filename,
# so read the template file from disk first
with open("template.html", encoding="utf-8") as f:
    template = jinja2.Template(f.read())

rendered_html = template.render(
    title=title,
    heading=heading,
    description=description_html,
    # ...
    baseline_json=baseline_json,
    # ...
)

with open("report.html", "w", encoding="utf-8") as f:
    f.write(rendered_html)
print("Report generated!")
In the HTML template, the marker variables in Jinja syntax get replaced with the actual JSON:
html
<!DOCTYPE html>
<html lang="en">
<head>
 <meta charset="UTF-8">
 <title>  title  </title>
 ...
</head>
<body>
 <h1>  heading  </h1>
 <div id="chart"></div>
 <script>
 // Ensure the DOM is ready before we initialise the chart
 document.addEventListener('DOMContentLoaded', () =>  
 // Parse the JSON data passed from Python
 const baselineData =   baseline_json   safe  ;
 const strategyData =   strategy_json   safe  ;
 const markersData =   markers_json   safe  ;

 // Create the chart
 const chart = LightweightCharts.createChart(document.getElementById('chart'),  
 width: document.getElementById('chart').clientWidth,
 height: 500,
 layout:  
 background:   color: "#222"  ,
 textColor: "#ccc"
  ,
 grid:  
 vertLines:   color: "#555"  ,
 horzLines:   color: "#555"  
  
  );

 // Add baseline series
 const baselineSeries = chart.addLineSeries( 
 title: '  baseline_label  ',
 lastValueVisible: false,
 priceLineVisible: false,
 priceLineWidth: 1
  );
 baselineSeries.setData(baselineData);

 baselineSeries.priceScale().applyOptions( 
 entireTextOnly: true
  );

 // Add strategy series
 const strategySeries = chart.addLineSeries( 
 title: '  strategy_label  ',
 lastValueVisible: false,
 priceLineVisible: false,
 color: '#FF6D00'
  );
 strategySeries.setData(strategyData);

 // Add buy/sell markers to the strategy series
 strategySeries.setMarkers(markersData);

 // Fit the chart to show the full data range (full zoom)
 chart.timeScale().fitContent();
  )
 </script>
</body>
</html>
There are also Python libraries built specifically for backtesting investment strategies, such as Backtrader and Zipline, but they do not seem to be actively maintained, and they probably have too many features and too much complexity compared to what I needed for this simple test. The screenshot below shows an example of backtesting a strategy on Waste Management Inc stock from January 2015 to December 2025. The baseline buy-and-hold scenario is shown as the blue line, which fully tracks the stock price, while the orange line shows how the strategy would have performed, with markers for the sells and buys along the way. [Image: Backtest run example]

Results

I experimented with multiple strategies and tested them with various parameters, but I don't think I found a strategy that was consistently and clearly better than just buy-and-hold. It basically boils down to the fact that I was not able to find any way to calculate, based on historical data, when a crash has bottomed. You can only know in hindsight that the price has stopped dropping and is on a steady path to recovery, but at that point it is already too late to buy in. In my testing, most strategies underperformed buy-and-hold because they sold when the crash started but bought back after it recovered, at a slightly higher price. In particular, when using narrow margins and selling on a 3-6% drawdown, the strategy performed very badly, as those small dips tend to recover in a few days. Essentially, the strategy kept repeating the pattern of selling 100 shares at a 6% discount, being able to buy back only 94 shares the next day, then again selling 94 shares at a 6% discount and buying back maybe only 90 shares after the recovery, and so forth, never catching up to buy-and-hold. The strategy worked better in large market crashes, as they tended to last longer and there were higher chances of buying back the shares while the price was still low. For example, in the 2020 crash, selling at a 20% drawdown was a good strategy, as the stock I tested dropped nearly 50% and remained low for several weeks; thus the strategy bought back the stocks while the price was still low and had not yet started to climb significantly. But that was just a lucky incident, as the delta between the trailing stop-loss margin of 20% and the total crash of 50% was large enough. If the crash had been only 25%, the strategy would have missed the rebound and ended up buying back the stocks at a slightly higher price. Also note that the simulation assumes the trade itself is too small to affect price formation. In reality, if many people have stop-loss orders in place, a large price drop would trigger all of them, creating a flood of sell orders, which in turn would drive the price lower even faster and deeper. Luckily, it seems that stop-loss orders are generally not a good strategy, so we don't need to fear that too many people will be using them.
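To make the compounding effect concrete, here is a toy calculation (my own illustration, not the author's backtest code) of what repeatedly getting stopped out 6% below the peak and buying back after a quick recovery does to a position:
python
shares = 100.0
price = 100.0  # the peak price, to which each small dip quickly recovers

# Each cycle: the stop sells 6% below the peak, the dip recovers within
# days, and the shares are bought back at the old peak price.
for cycle in range(5):
    proceeds = shares * price * 0.94  # sold 6% below the peak
    shares = proceeds / price         # bought back at the recovered price
    print(f"After cycle {cycle + 1}: {shares:.1f} shares")

# After cycle 1: 94.0 shares ... after cycle 5: ~73.4 shares,
# while buy-and-hold still has all 100 shares at the same price.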

Conclusion

Even though using a trailing stop-loss strategy does not seem to yield consistently higher returns based on my backtesting, I would still say it is useful in protecting against the downside of stock investing. It can act as a kind of insurance policy that considerably decreases the chances of losing big while increasing the chances of losing a little. If you are risk-averse, which I think I probably am, this tradeoff can make sense. I'd rather miss out on an initial 50% loss and an overall 3% gain on recovery than have to sit through weeks or months of a 50% loss before the price recovers to prior levels. Most notably, the trailing stop-loss strategy works best if used only once: if it is repeated multiple times, the small losses in gains compound into big losses overall. Thus, I think I might actually put this automation in place, at least on the stocks in my portfolio that have had the highest gains. If they keep going up, I will ride along, but once the crash happens, I will be out of those particular stocks permanently. Do you have a favorite open source investment tool, or are you aware of a strategy that actually works? Comment below!

8 December 2025

Isoken Ibizugbe: Beginning My Outreachy Journey With Debian

Hello, my name is Isoken. I'm a software engineer and product manager from Nigeria. I am excited and grateful to begin this journey as an Outreachy intern working on the project "Debian Images Testing with openQA". I am particularly drawn to helping solve problems for people; a problem keeps bugging me until I can find a way to help. This interest in problem-solving and improving quality is what drew me to this project. openQA is an automated test tool that simulates a user's interaction by looking at the screen and sending actions like mouse clicks and keyboard inputs. It takes screenshots and compares them to known reference images to verify that the system is behaving correctly. During the four-week contribution phase, I got to know Debian, its different installation methods and the different desktop environments available. At first I felt intimidated, as most of it was new to me, but the setup material and docs for the project made it easy for me. It was a gradual process: getting to know the program, registering the steps in my mind, taking notes of every step and visual, writing it up in a way another developer would understand (this is called detail level 3), and then translating it to test code (level 1). I contributed to improving the installation documents and wrote a detail level 3 doc for a bug report on the system locale being incorrect. This strengthened my documentation skills and my ability to adjust to the writing style of the project. I also started working on app start-stop tests for two desktop environments. I was able to explore, check the applications, and note their differences and similarities. It was really interesting to me; this was where I started writing at coding level 1, and my tests started passing. I will spend the next few weeks continuing this work and hopefully creating a way to synergize the tests and make them easier to maintain later on. I am also grateful for the assistance from other candidates during the contribution stage and for the privilege of having the mentors, Tassia Camoes Araujo, Roland Clobus and Philip Hands, to guide and correct me through the internship period. I will share my progress regularly here on the blog. You can follow the progress of the work on the main repo. Wish me luck on the rest of this journey!

30 November 2025

Otto Kekäläinen: DEP-18: A proposal for Git-based collaboration in Debian

I am a huge fan of Git, as I have witnessed how it has made software development so much more productive compared to the pre-2010s era. I wish all Debian source code were in Git to reap the full benefits. Git is not perfect: it requires significant effort to learn properly, and the ecosystem is complex, with even more things to learn, ranging from cryptographic signatures and commit hooks to Git-assisted code review best practices, forge websites and CI systems. Sure, there is still room to optimize its use, but Git has certainly proven itself and is now the industry standard. Thus, some readers might be surprised to learn that Debian development in 2025 is not actually based on Git. In Debian, version control is done by the Debian archive itself. Each commit is a new upload to the archive, and the commit message is the debian/changelog entry. The commit log is available at snapshots.debian.org. In practice, most Debian Developers (people who have the credentials to upload to the Debian archive) do use Git and host their packaging source code on salsa.debian.org, the GitLab instance of Debian. This is, however, based on each DD's personal preferences. The Debian project does not have any policy requiring that packages be hosted on salsa.debian.org or be in version control at all.

Is collaborative software development possible without Git and version control software?
Debian has some peculiarities that may be surprising to people who have grown accustomed to GitHub, GitLab or various company-internal code review systems. In Debian:
  • The source code of the next upload is not public but resides only on the developer's laptop.
  • Code contributions are plain patch files, based on the latest revision released in the Debian archive (where the unstable area is equivalent to the main development branch).
  • These patches are submitted by email to a bug tracker that does no validation or testing whatsoever.
  • Developers applying these patches typically have elaborate Mutt or Emacs setups to facilitate fetching patches from email.
  • There is no public staging area, and no concept of rebasing patches or withdrawing a patch and replacing it with a better version.
  • The submitter won't see any progress information until a notification email arrives after a new version has been uploaded to the Debian archive.
This system has served Debian for three decades. It is not broken, but using the package archive as version control just feels, well, archaic. There is a more efficient way, and indeed the majority of Debian packages have a metadata field Vcs-Git that advertises which version control repository the maintainer uses. However, newcomers to Debian are surprised to notice that not all packages are hosted on salsa.debian.org but at various random places with their own account and code submission systems, and nothing enforces, or even warns, if the code there is out of sync with what was uploaded to Debian. Any Debian Developer can at any time upload a new package with whatever changes, bypassing the Git repository, even when the package advertises one. All PGP-signed commits, Git tags and other information in the Git repository are currently just extras, as the Debian archive does not enforce or validate anything about them. This also makes contributing to multiple packages in parallel hard. One can't just go on salsa.debian.org, fork a bunch of repositories and submit Merge Requests. Currently, the only reliable way is to download source packages from Debian unstable, develop patches on top of them, and send the final version as a plain patch file by email to the Debian bug tracker, as sketched below. To my knowledge, no system exists to facilitate working with the patches in the bug tracker, such as rebasing patches 6 months later to detect whether they or equivalent changes were applied, or whether refreshed versions need to be sent. To newcomers in Debian, it is even more surprising that there are packages on salsa.debian.org that have the Merge Requests feature disabled. This is often because the maintainer does not want to receive notification emails about new Merge Requests, but rather only emails from bugs.debian.org. This may sound arrogant, but keep in mind that these developers put in the effort to set up their Mutt/Emacs workflow for the existing Debian process, and extending it to work with GitLab notifications is not trivial. There are also purists who want to do everything on the command line (without having to open a browser, run JavaScript and maintain a live Internet connection), and tools like glab are not convenient enough for the full workflow.
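To make the classic workflow concrete, here is a minimal sketch of it (the package name, version and file names are just illustrative examples, not from the original text):
$ apt-get source hello                    # fetch the latest sources from the archive
$ cp -a hello-2.10 hello-2.10.orig
$ # ... edit files under hello-2.10/ ...
$ diff -ru hello-2.10.orig hello-2.10 > my-fix.patch
$ reportbug --attach my-fix.patch hello   # email the patch to the bug tracker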

Inefficient ways of working prevent Debian from flourishing
I would claim, based on my personal experience from the past 10+ years as a Debian Developer, that the lack of high-quality, productive tooling is seriously harming Debian. The current methods of collaboration are cumbersome for aspiring contributors to learn and suboptimal for both new and seasoned contributors. There are no exit interviews for contributors who left Debian, no comprehensive data on reasons to contribute or stop contributing, nor any metrics tracking how many people tried but failed to contribute to Debian. Some data points to support my concerns do exist:
  • The contributor database shows that the number of contributors is growing more slowly than Debian's popularity.
  • Most packages are maintained by one person working alone (just pick any package at random and look at the upload history).

Debian should embrace Git, but decision-making is slow
Debian is all about community and collaboration. One would assume that Debian prioritizes, above all, making collaboration tools and processes simpler, faster and less error-prone, as it would help both current and future package maintainers. Yet it isn't so, due to some reasons unique to Debian. There is no single company or entity running Debian, and it has managed to operate as a pure meritocracy and do-ocracy for over 30 years. This is impressive and admirable. Unfortunately, some of the infrastructure and technical processes are also nearly 30 years old and, for the same reason, very difficult to change: the nature of Debian's distributed decision-making process. As a software developer and manager with 25+ years of experience, I strongly feel that developing software collaboratively using Git is a major step forward that Debian needs to take, in one form or another, and I hope to see other DDs voice their support if they agree.

Debian Enhancement Proposal 18
Following how consensus is achieved in Debian, I started drafting DEP-18 in 2024, and it is currently awaiting enough thumbs up at https://salsa.debian.org/dep-team/deps/-/merge_requests/21 to reach CANDIDATE status. In summary, DEP-18 proposes that everyone keen on collaborating should:
  1. Maintain Debian packaging sources in Git on Salsa.
  2. Use Merge Requests to show your work and to get reviews.
  3. Run Salsa CI before upload.
The principles above are not novel. According to stats at e.g. trends.debian.net and UDD, ~93% of all Debian source packages are already hosted on salsa.debian.org; as of June 1st, 2025, only 1640 source packages remain that are not hosted on Salsa. The purpose of DEP-18 is to state in writing what Debian is already doing for most packages, and thus express what, among others, new contributors should be learning and doing, so that basic collaboration is smooth and free from structural obstacles. Most packages also already allow Merge Requests and use Salsa CI, but there hasn't been any written recommendation anywhere in Debian to do so. The Debian Policy (v4.7.2) does not mention the word Salsa a single time. The current process documentation on how to do non-maintainer uploads or how to salvage packages is all based on uploading packages to the archive, without any consideration of Git-based collaboration such as posting a Merge Request first. Personally, I feel posting a Merge Request would be a better approach, as it would invite collaborators to discuss and provide code reviews. If there are no responses, the submitter can proceed to merge; but compared to direct uploads to the Debian archive, the Merge Request practice at least tries to offer a time and place for discussions and reviews to happen. It could very well be that in the future somebody comes up with a new packaging format that makes upstream source package management easier, or a monorepo with all packages, or some other future structure or process. Having a DEP stating how to do things now does not prevent people from experimenting and innovating if they intentionally want to do that. The DEP is merely an expression of the minimal common denominators in the packaging workflow that maintainers and contributors should follow, unless they know better.
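For reference, enabling Salsa CI on a package is typically just a matter of pointing the project at the shared pipeline definition. A minimal sketch (the file path shown is the commonly used one; check the Salsa CI team's documentation for your setup):
$ cat > debian/salsa-ci.yml <<'EOF'
include:
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/recipes/debian.yml
EOF
After committing this file, set the project's CI/CD configuration file path on Salsa to debian/salsa-ci.yml so the pipeline runs on every push.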

Transparency and collaboration
Among the DEP-18 recommendations is:
The recommended first step in contributing to a package is to use the built-in Fork feature on Salsa. This serves two purposes. Primarily, it allows any contributor to publish their Git branches and submit them as Merge Requests. Additionally, the mere existence of a list of Forks enables contributors to discover each other, and in rare cases when the original package is not accepting improvements, collaboration could arise among the contributors and potentially lead to permanent forks in the general sense. Forking is a fundamental part of the dynamics in open source that helps drive quality and agreement. The ability to fork ultimately serves as the last line of defense of users' rights. Git supports this by making both temporary and permanent forks easy to create and maintain.
Further, it states:
Debian packaging work should be reasonably transparent and public to allow contributors to participate. A maintainer should push their pending changes to Salsa at regular intervals, so that a potential contributor can discover if a particular change has already been made or a bug has been fixed in version control, and thus avoid duplicate work. Debian maintainers should make reasonable efforts to publish planned changes as Merge Requests on Salsa and solicit feedback and reviews. While pushing changes directly on the main Git branch is the fastest workflow, second only to uploading all changes directly to Debian repositories, it is not an inclusive way to develop software. Even packages that are maintained by a single maintainer should at least occasionally publish Merge Requests to allow new contributors to step up and participate.
I think these are key aspects leading to transparency and true open source collaboration. Even though this talks about Salsa, which is based on GitLab, the concepts are universal and will also work on other forges, like Forgejo or GitHub. The point is that sharing work in progress on a real-time platform, with CI and other supporting features, empowers and motivates people to iterate on code collaboratively. As an example of an anti-pattern, Oracle MySQL publishes the source code for all their releases and is license-compliant, but as they don't publish their Git commits in real time, it does not feel like a real open source project. Non-Oracle employees are not motivated to participate as second-class developers who are kept in the dark. Debian should embrace Git and sharing work in real time, embodying a true open source spirit.

Recommend, not force
Note that Debian Enhancement Proposals are not binding; only the Debian Policy and Technical Committee decisions carry that weight. The nature of collaboration is voluntary anyway, so the DEP does not need to force anything on people who don't want to use salsa.debian.org. DEP-18 is also not a guide for package maintainers. I have my own views and have written detailed guides in blog articles if you want to read more on, for example, how to do code reviews efficiently. Within DEP-18 there is plenty of room to work in many different ways; it does not try to force one single workflow. The goal is simply to have agreed-upon minimal common denominators among those who are keen to collaborate using salsa.debian.org, not to dictate a complete code submission workflow. Once we reach this, there will hopefully be less friction in the most basic and recurring collaboration tasks, giving DDs more energy to improve other processes, or simply to invest in having more and newer packages for Debian users to enjoy.

Next steps
In addition to lengthy online discussions on mailing lists and DEP reviews, I also presented on this topic at DebConf 2025 in Brest, France. Unfortunately the recording is not yet up on PeerTube. The feedback has been overwhelmingly positive. However, there are a few loud and very negative voices that cannot be ignored. Maintaining a Linux distribution at the scale and complexity of Debian requires extraordinary talent and dedication, and people doing this kind of work often have strong views, most of the time for good reasons. We do not want to alienate existing key contributors with new processes, so maximum consensus is desirable. We also need more data on what the 1000+ current Debian Developers view as a good process, to avoid being skewed by a loud minority. If you are a current or aspiring Debian Developer, please add a thumbs up (or a thumbs down if you disagree) on the Merge Request that would give DEP-18 candidate status. There is also technical work to do. Increased Git use will obviously lead to growing adoption of the new tag2upload feature, which will need full git-buildpackage support so it can integrate into salsa.debian.org without turning off Debian packaging security features. The git-buildpackage tool itself also needs various improvements, such as making contributing to multiple different packages with various levels of diligence in debian/gbp.conf maintenance less error-prone. Eventually, if it starts looking like all Debian packages might get hosted on salsa.debian.org, I would also start building a review.debian.org website to facilitate code review aspects that are unique to Debian, such as tracking Merge Requests across GitLab projects in ways GitLab can't do, highlighting which submissions most urgently need review, feeding code reviews and approvals into the contributors.debian.org database for better attribution, and so forth. Details on this vision will follow in a later blog post, so subscribe to updates!

19 October 2025

Otto Kekäläinen: Could the XZ backdoor have been detected with better Git and Debian packaging practices?

The discovery of a backdoor in XZ Utils in the spring of 2024 shocked the open source community, raising critical questions about software supply chain security. This post explores whether better Debian packaging practices could have detected the threat, offering a guide to auditing packages and suggesting future improvements. The XZ backdoor in versions 5.6.0/5.6.1 briefly made its way into many major Linux distributions such as Debian and Fedora, but luckily didn't reach that many actual users, as the backdoored releases were quickly removed thanks to the heroic diligence of Andres Freund. We are all extremely lucky that he detected a half-second performance regression in SSH, cared enough to trace it down, discovered malicious code in the XZ library loaded by SSH, and reported it promptly to various security teams for quick coordinated action. This episode leaves every software engineer pondering whether something similar could happen again, and whether it would be detected. As a Debian Developer, I decided to audit the xz package in Debian, share my methodology and findings in this post, and also suggest some improvements to how software supply chain security could be tightened in Debian specifically. Note that the scope here is only to inspect how Debian imports software from its upstreams and how it is distributed to Debian's users. This excludes the whole story of how to assess whether an upstream project follows software development security best practices. This post also doesn't discuss how to operate an individual computer running Debian to ensure it remains untampered, as there are plenty of guides on that already.

Downloading Debian and upstream source packages
Let's start by working backwards from what the Debian package repositories offer for download. As auditing binaries is extremely complicated, we skip that and assume the Debian build hosts are trustworthy and reliably build binaries from the source packages; the focus should be on auditing the source packages. As with everything in Debian, there are multiple tools and ways to do the same thing, but for brevity this post presents only one (and hopefully the best) way to do each step. The first step is to download the latest version and some past versions of the package from the Debian archive, which is most easily done with debsnap. The following command will download all Debian source packages of xz-utils from Debian release 5.2.4-1 onwards:
$ debsnap --verbose --first 5.2.4-1 xz-utils
Getting json https://snapshot.debian.org/mr/package/xz-utils/
...
Getting dsc file xz-utils_5.2.4-1.dsc: https://snapshot.debian.org/file/a98271e4291bed8df795ce04d9dc8e4ce959462d
Getting file xz-utils_5.2.4.orig.tar.xz.asc: https://snapshot.debian.org/file/59ccbfb2405abe510999afef4b374cad30c09275
Getting file xz-utils_5.2.4-1.debian.tar.xz: https://snapshot.debian.org/file/667c14fd9409ca54c397b07d2d70140d6297393f
source-xz-utils/xz-utils_5.2.4-1.dsc:
Good signature found
validating xz-utils_5.2.4.orig.tar.xz
validating xz-utils_5.2.4.orig.tar.xz.asc
validating xz-utils_5.2.4-1.debian.tar.xz
All files validated successfully.
Once debsnap completes there will be a subfolder source-<package name> with the following types of files:
  • *.orig.tar.xz: source code from upstream
  • *.orig.tar.xz.asc: detached signature (if upstream signs their releases)
  • *.debian.tar.xz: Debian packaging source, i.e. the debian/ subdirectory contents
  • *.dsc: Debian source control file, including signature by Debian Developer/Maintainer
Example:
$ ls -1 source-xz-utils/
...
xz-utils_5.6.4.orig.tar.xz
xz-utils_5.6.4.orig.tar.xz.asc
xz-utils_5.6.4-1.debian.tar.xz
xz-utils_5.6.4-1.dsc
xz-utils_5.8.0.orig.tar.xz
xz-utils_5.8.0.orig.tar.xz.asc
xz-utils_5.8.0-1.debian.tar.xz
xz-utils_5.8.0-1.dsc
xz-utils_5.8.1.orig.tar.xz
xz-utils_5.8.1.orig.tar.xz.asc
xz-utils_5.8.1-1.1.debian.tar.xz
xz-utils_5.8.1-1.1.dsc
xz-utils_5.8.1-1.debian.tar.xz
xz-utils_5.8.1-1.dsc
xz-utils_5.8.1-2.debian.tar.xz
xz-utils_5.8.1-2.dsc

Verifying authenticity of upstream and Debian sources using OpenPGP signatures
As seen in its output, debsnap already automatically verifies that the downloaded files match the OpenPGP signatures. To have full clarity on which files were authenticated with which keys, we can verify the Debian packager's signature with:
$ gpg --verify --auto-key-retrieve --keyserver hkps://keyring.debian.org xz-utils_5.8.1-2.dsc
gpg: Signature made Fri Oct 3 22:04:44 2025 UTC
gpg: using RSA key 57892E705233051337F6FDD105641F175712FA5B
gpg: requesting key 05641F175712FA5B from hkps://keyring.debian.org
gpg: key 7B96E8162A8CF5D1: public key "Sebastian Andrzej Siewior" imported
gpg: Total number processed: 1
gpg: imported: 1
gpg: Good signature from "Sebastian Andrzej Siewior" [unknown]
gpg: aka "Sebastian Andrzej Siewior <bigeasy@linutronix.de>" [unknown]
gpg: aka "Sebastian Andrzej Siewior <sebastian@breakpoint.cc>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: 6425 4695 FFF0 AA44 66CC 19E6 7B96 E816 2A8C F5D1
Subkey fingerprint: 5789 2E70 5233 0513 37F6 FDD1 0564 1F17 5712 FA5B
The upstream tarball signature (if available) can be verified with:
$ gpg --verify --auto-key-retrieve xz-utils_5.8.1.orig.tar.xz.asc
gpg: assuming signed data in 'xz-utils_5.8.1.orig.tar.xz'
gpg: Signature made Thu Apr 3 11:38:23 2025 UTC
gpg: using RSA key 3690C240CE51B4670D30AD1C38EE757D69184620
gpg: key 38EE757D69184620: public key "Lasse Collin <lasse.collin@tukaani.org>" imported
gpg: Total number processed: 1
gpg: imported: 1
gpg: Good signature from "Lasse Collin <lasse.collin@tukaani.org>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: 3690 C240 CE51 B467 0D30 AD1C 38EE 757D 6918 4620
Note that this only proves that there is a key that created a valid signature for this content. The authenticity of the keys themselves needs to be validated separately before trusting that they are in fact the keys of these people. That can be done by checking e.g. the upstream website for the key fingerprints they published, or the Debian keyring for Debian Developers and Maintainers, or by relying on the OpenPGP web of trust.
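For example, the upstream key shipped inside the Debian packaging can be inspected and its fingerprint compared against what the upstream website publishes. A minimal sketch (output abbreviated and illustrative):
$ gpg --show-keys --with-fingerprint debian/upstream/signing-key.asc
pub   rsa4096 ...
      3690 C240 CE51 B467 0D30  AD1C 38EE 757D 6918 4620
uid   Lasse Collin <lasse.collin@tukaani.org>
...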

Verifying authenticity of upstream sources by comparing checksums
In case the upstream in question does not publish release signatures, the second-best way to verify the authenticity of the sources used in Debian is to download the sources directly from upstream and check that the sha256 checksums match. This should be done using the debian/watch file inside the Debian packaging, which defines where the upstream source is downloaded from. Continuing the example above, we can unpack the latest Debian sources, enter the directory, and run uscan to download:
$ tar xvf xz-utils_5.8.1-2.debian.tar.xz
...
debian/rules
debian/source/format
debian/source.lintian-overrides
debian/symbols
debian/tests/control
debian/tests/testsuite
debian/upstream/signing-key.asc
debian/watch
...
$ uscan --download-current-version --destdir /tmp
Newest version of xz-utils on remote site is 5.8.1, specified download version is 5.8.1
gpgv: Signature made Thu Apr 3 11:38:23 2025 UTC
gpgv: using RSA key 3690C240CE51B4670D30AD1C38EE757D69184620
gpgv: Good signature from "Lasse Collin <lasse.collin@tukaani.org>"
Successfully symlinked /tmp/xz-5.8.1.tar.xz to /tmp/xz-utils_5.8.1.orig.tar.xz.
The original files downloaded from upstream are now in /tmp, along with symlinks renamed to follow Debian conventions. Using everything downloaded so far, the sha256 checksums can be compared across the files and against what the .dsc file advertises:
$ ls -1 /tmp/
xz-5.8.1.tar.xz
xz-5.8.1.tar.xz.sig
xz-utils_5.8.1.orig.tar.xz
xz-utils_5.8.1.orig.tar.xz.asc
$ sha256sum xz-utils_5.8.1.orig.tar.xz /tmp/xz-5.8.1.tar.xz
0b54f79df85912504de0b14aec7971e3f964491af1812d83447005807513cd9e xz-utils_5.8.1.orig.tar.xz
0b54f79df85912504de0b14aec7971e3f964491af1812d83447005807513cd9e /tmp/xz-5.8.1.tar.xz
$ grep -A 3 Sha256 xz-utils_5.8.1-2.dsc
Checksums-Sha256:
0b54f79df85912504de0b14aec7971e3f964491af1812d83447005807513cd9e 1461872 xz-utils_5.8.1.orig.tar.xz
4138f4ceca1aa7fd2085fb15a23f6d495d27bca6d3c49c429a8520ea622c27ae 833 xz-utils_5.8.1.orig.tar.xz.asc
3ed458da17e4023ec45b2c398480ed4fe6a7bfc1d108675ec837b5ca9a4b5ccb 24648 xz-utils_5.8.1-2.debian.tar.xz
In the example above the checksum 0b54f79df85... is the same across the files, so it is a match.
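As a shortcut, the dscverify tool from the devscripts package automates much of this bookkeeping: it checks the signature on a .dsc file and verifies that the checksums of all referenced files match. A sketch (the exact output may vary by version):
$ dscverify source-xz-utils/xz-utils_5.8.1-2.dsc
source-xz-utils/xz-utils_5.8.1-2.dsc:
Good signature found
validating xz-utils_5.8.1.orig.tar.xz
validating xz-utils_5.8.1.orig.tar.xz.asc
validating xz-utils_5.8.1-2.debian.tar.xz
All files validated successfully.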

Repackaged upstream sources can't be verified as easily
Note that uscan may in rare cases repackage upstream sources, for example to exclude files that don't adhere to Debian's copyright and licensing requirements. Such files and paths would be listed under the Files-Excluded field in the debian/copyright file. There are also other situations where the file that represents the upstream sources in Debian isn't bit-by-bit identical to what upstream published. If checksums don't match, an experienced Debian Developer should review all package settings (e.g. debian/source/options) to see if there was a valid and intentional reason for the divergence.
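Such exclusions are easy to spot in the packaging. A hypothetical example of what the relevant stanza might look like (the paths are illustrative, not from xz-utils):
$ grep -A 2 '^Files-Excluded' debian/copyright
Files-Excluded: src/nonfree-decoder.bin
 docs/proprietary-spec.pdf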

Reviewing changes between two source packages using diffoscope
Diffoscope is an incredibly capable and handy tool for comparing arbitrary files. For example, to view an HTML report of the differences between two XZ releases, run:
diffoscope --html-dir xz-utils-5.6.4_vs_5.8.0 xz-utils_5.6.4.orig.tar.xz xz-utils_5.8.0.orig.tar.xz
browse xz-utils-5.6.4_vs_5.8.0/index.html
Inspecting diffoscope output of differences between two XZ Utils releases
If the changes are extensive and you want to use an LLM to help spot potential security issues, generate reports of both the upstream and the Debian packaging differences in Markdown with:
diffoscope --markdown diffoscope-debian.md xz-utils_5.6.4-1.debian.tar.xz xz-utils_5.8.1-2.debian.tar.xz
diffoscope --markdown diffoscope.md xz-utils_5.6.4.orig.tar.xz xz-utils_5.8.0.orig.tar.xz
The Markdown files created above can then be passed to your favorite LLM, along with a prompt such as:
Based on the attached diffoscope output for a new Debian package version compared with the previous one, list all suspicious changes that might have introduced a backdoor, followed by other potential security issues. If there are none, list a short summary of changes as the conclusion.

Reviewing Debian source packages in version control
As of today, only ~93% of all Debian source packages are tracked in Git on Debian's GitLab instance at salsa.debian.org. Some key packages, such as Coreutils and Bash, are not using version control at all, as their maintainers apparently don't see value in using Git for Debian packaging, and the Debian Policy does not require it. Thus, the only reliable and consistent way to audit changes in Debian packages is to compare the full versions from the archive as shown above. However, for packages that are hosted on Salsa, one can view the Git history to gain additional insight into what exactly changed, when and why. For packages that use version control, the repository location can be found in the Vcs-Git field in the debian/control file. For xz-utils the location is salsa.debian.org/debian/xz-utils. Note that the Debian Policy does not state anything about how Salsa should be used, or what Git repository layout or development practices to follow. In practice most packages follow the DEP-14 proposal and use git-buildpackage as the tool for managing changes and pushing and pulling them between upstream and salsa.debian.org. To get the XZ Utils source, run:
$ gbp clone https://salsa.debian.org/debian/xz-utils.git
gbp:info: Cloning from 'https://salsa.debian.org/debian/xz-utils.git'
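The advertised repository location can also be confirmed from the packaging itself. The output below is what one would expect for xz-utils given the location mentioned above (shown as an illustration):
$ grep '^Vcs-' debian/control
Vcs-Browser: https://salsa.debian.org/debian/xz-utils
Vcs-Git: https://salsa.debian.org/debian/xz-utils.git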
At the time of writing this post the git history shows:
$ git log --graph --oneline
* bb787585 (HEAD -> debian/unstable, origin/debian/unstable, origin/HEAD) Prepare 5.8.1-2
* 4b769547 d: Remove the symlinks from -dev package.
* a39f3428 Correct the nocheck build profile
* 1b806b8d Import Debian changes 5.8.1-1.1
* b1cad34b Prepare 5.8.1-1
* a8646015 Import 5.8.1
* 2808ec2d Update upstream source from tag 'upstream/5.8.1'
 \
  * fa1e8796 (origin/upstream/v5.8, upstream/v5.8) New upstream version 5.8.1
  * a522a226 Bump version and soname for 5.8.1
  * 1c462c2a Add NEWS for 5.8.1
  * 513cabcf Tests: Call lzma_code() in smaller chunks in fuzz_common.h
  * 48440e24 Tests: Add a fuzzing target for the multithreaded .xz decoder
  * 0c80045a liblzma: mt dec: Fix lack of parallelization in single-shot decoding
  * 81880488 liblzma: mt dec: Don't modify thr->in_size in the worker thread
  * d5a2ffe4 liblzma: mt dec: Don't free the input buffer too early (CVE-2025-31115)
  * c0c83596 liblzma: mt dec: Simplify by removing the THR_STOP state
  * 831b55b9 liblzma: mt dec: Fix a comment
  * b9d168ee liblzma: Add assertions to lzma_bufcpy()
  * c8e0a489 DOS: Update Makefile to fix the build
  * 307c02ed sysdefs.h: Avoid <stdalign.h> even with C11 compilers
  * 7ce38b31 Update THANKS
  * 688e51bd Translations: Update the Croatian translation
*   a6b54dde Prepare 5.8.0-1.
*   77d9470f Add 5.8 symbols.
*   9268eb66 Import 5.8.0
*   6f85ef4f Update upstream source from tag 'upstream/5.8.0'
 \ \
  *   afba662b New upstream version 5.8.0
   /
  * 173fb5c6 doc/SHA256SUMS: Add 5.8.0
  * db9258e8 Bump version and soname for 5.8.0
  * bfb752a3 Add NEWS for 5.8.0
  * 6ccbb904 Translations: Run "make -C po update-po"
  * 891a5f05 Translations: Run po4a/update-po
  * 4f52e738 Translations: Partially fix overtranslation in Serbian man pages
  * ff5d9447 liblzma: Count the extra bytes in LZMA/LZMA2 decoder memory usage
  * 943b012d liblzma: Use SSE2 intrinsics instead of memcpy() in dict_repeat()
This shows the changes on the debian/unstable branch, the intermediate upstream import branch, and the actual upstream development branch. See my Debian source packages in git explainer for details of what these branches are used for. To view only the changes on the Debian branch, run git log --graph --oneline --first-parent or git log --graph --oneline -- debian. The Debian branch should only have changes inside the debian/ subdirectory, which is easy to check with:
$ git diff --stat upstream/v5.8
debian/README.source   16 +++
debian/autogen.sh   32 +++++
debian/changelog   949 ++++++++++++++++++++++++++
...
debian/upstream/signing-key.asc   52 +++++++++
debian/watch   4 +
debian/xz-utils.README.Debian   47 ++++++++
debian/xz-utils.docs   6 +
debian/xz-utils.install   28 +++++
debian/xz-utils.postinst   19 +++
debian/xz-utils.prerm   10 ++
debian/xzdec.docs   6 +
debian/xzdec.install   4 +
33 files changed, 2014 insertions(+)
All the files outside the debian/ directory originate from upstream, and for example running git blame on them should show only upstream commits:
$ git blame CMakeLists.txt
22af94128 (Lasse Collin 2024-02-12 17:09:10 +0200 1) # SPDX-License-Identifier: 0BSD
22af94128 (Lasse Collin 2024-02-12 17:09:10 +0200 2)
7e3493d40 (Lasse Collin 2020-02-24 23:38:16 +0200 3) ###############
7e3493d40 (Lasse Collin 2020-02-24 23:38:16 +0200 4) #
426bdc709 (Lasse Collin 2024-02-17 21:45:07 +0200 5) # CMake support for building XZ Utils
If the upstream in question signs commits or tags, they can be verified with e.g.:
$ git verify-tag v5.6.2
gpg: Signature made Wed 29 May 2024 09:39:42 AM PDT
gpg: using RSA key 3690C240CE51B4670D30AD1C38EE757D69184620
gpg: issuer "lasse.collin@tukaani.org"
gpg: Good signature from "Lasse Collin <lasse.collin@tukaani.org>" [expired]
gpg: Note: This key has expired!
The main benefit of reviewing changes in Git is the ability to see detailed information about each individual change, instead of staring at a massive list of changes without any explanations. In this example, to view all the upstream commits since the previous import into Debian, one would view the commit range from afba662b New upstream version 5.8.0 to fa1e8796 New upstream version 5.8.1 with git log --reverse -p afba662b...fa1e8796. A far superior way, however, is to browse this range in a visual Git history viewer, such as gitk. Either way, looking at one code change at a time and reading the commit message makes the review much easier.
Browsing git history in gitk --all

Comparing Debian source packages to git contents
As stated at the beginning of the previous section, and worth repeating, there is no guarantee that the contents of the Debian packaging Git repository match what was actually uploaded to Debian. While the tag2upload project in Debian is getting more and more popular, Debian is still far from having any system that enforces that the Git repository is in sync with the Debian archive contents. To detect such differences we can run diff across the Debian source packages downloaded with debsnap earlier (path source-xz-utils/xz-utils_5.8.1-2.debian) and the git repository cloned in the previous section (path xz-utils):
$ diff -u source-xz-utils/xz-utils_5.8.1-2.debian/ xz-utils/debian/
diff -u source-xz-utils/xz-utils_5.8.1-2.debian/changelog xz-utils/debian/changelog
--- debsnap/source-xz-utils/xz-utils_5.8.1-2.debian/changelog 2025-10-03 09:32:16.000000000 -0700
+++ xz-utils/debian/changelog 2025-10-12 12:18:04.623054758 -0700
@@ -5,7 +5,7 @@
 * Remove the symlinks from -dev, pointing to the lib package.
 (Closes: #1109354)

- -- Sebastian Andrzej Siewior <sebastian@breakpoint.cc> Fri, 03 Oct 2025 18:32:16 +0200
+ -- Sebastian Andrzej Siewior <sebastian@breakpoint.cc> Fri, 03 Oct 2025 18:36:59 +0200
In the case above, diff revealed that the changelog timestamp in the version uploaded to Debian differs from what was committed to Git. This is not malicious, just a mistake by the maintainer, who probably didn't run gbp tag immediately after the upload but instead re-ran some dch command, ending up with a different timestamp in Git than in what was actually uploaded to Debian.
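The way to avoid this class of mismatch is to tag the exact uploaded state right after uploading. With git-buildpackage that is a single command, which creates the debian/<version> tag from the current debian/changelog entry:
$ gbp tag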

Creating synthetic Debian packaging git repositories
If no Debian packaging Git repository exists, or if it lags behind what was uploaded to the Debian archive, one can use git-buildpackage's import-dscs feature to create synthetic Git commits based on the files downloaded by debsnap, ensuring the Git contents fully match what was uploaded to the archive. To import a single version there is gbp import-dsc (no s at the end); an example invocation would be:
$ gbp import-dsc --verbose ../source-xz-utils/xz-utils_5.8.1-2.dsc
Version '5.8.1-2' imported under '/home/otto/debian/xz-utils-2025-09-29'
Example commit history from a repository with commits added with gbp import-dsc:
$ git log --graph --oneline
* 86aed07b (HEAD -> debian/unstable, tag: debian/5.8.1-2, origin/debian/unstable) Import Debian changes 5.8.1-2
* f111d93b (tag: debian/5.8.1-1.1) Import Debian changes 5.8.1-1.1
* 1106e19b (tag: debian/5.8.1-1) Import Debian changes 5.8.1-1
 \
  * 08edbe38 (tag: upstream/5.8.1, origin/upstream/v5.8, upstream/v5.8) Import Upstream version 5.8.1
   \
    * a522a226 (tag: v5.8.1) Bump version and soname for 5.8.1
    * 1c462c2a Add NEWS for 5.8.1
    * 513cabcf Tests: Call lzma_code() in smaller chunks in fuzz_common.h
An online example repository with only a few missing uploads added using gbp import-dsc can be viewed at salsa.debian.org/otto/xz-utils-2025-09-29/-/network/debian%2Funstable. An example repository that was fully crafted using gbp import-dscs can be viewed at salsa.debian.org/otto/xz-utils-gbp-import-dscs-debsnap-generated/-/network/debian%2Flatest. There is also dgit, which similarly creates a synthetic Git history to allow viewing the Debian archive contents via Git tools. However, its focus is on producing new package versions, so fetching a package with dgit that has not had its history recorded in dgit earlier will only show the latest version:
$ dgit clone xz-utils
canonical suite name for unstable is sid
starting new git history
last upload to archive: NO git hash
downloading http://ftp.debian.org/debian//pool/main/x/xz-utils/xz-utils_5.8.1.orig.tar.xz...
downloading http://ftp.debian.org/debian//pool/main/x/xz-utils/xz-utils_5.8.1.orig.tar.xz.asc...
downloading http://ftp.debian.org/debian//pool/main/x/xz-utils/xz-utils_5.8.1-2.debian.tar.xz...
dpkg-source: info: extracting xz-utils in unpacked
dpkg-source: info: unpacking xz-utils_5.8.1.orig.tar.xz
dpkg-source: info: unpacking xz-utils_5.8.1-2.debian.tar.xz
synthesised git commit from .dsc 5.8.1-2
HEAD is now at f9bcaf7 xz-utils (5.8.1-2) unstable; urgency=medium
dgit ok: ready for work in xz-utils
$ cd xz-utils
$ git log --graph --oneline
* f9bcaf7 xz-utils (5.8.1-2) unstable; urgency=medium 9 days ago (HEAD -> dgit/sid, dgit/dgit/sid)
 \
  * 11d3a62 Import xz-utils_5.8.1-2.debian.tar.xz 9 days ago
* 15dcd95 Import xz-utils_5.8.1.orig.tar.xz 6 months ago
Unlike git-buildpackage-managed repositories, dgit-managed repositories cannot incorporate the upstream Git history and are thus less useful for auditing the full software supply chain in Git.

Comparing upstream source packages to git contents
Equally important to the note at the beginning of the previous section, one must also keep in mind that upstream release source packages, often called release tarballs, are not guaranteed to have exactly the same contents as the upstream Git repository. Projects might strip test data or extra development files from their release tarballs to avoid shipping unnecessary files to users, or might add documentation files or versioning information into the tarball that isn't stored in Git. While a small minority, there are also upstreams that don't use Git at all, so the plain files in a release tarball are still the lowest common denominator for all open source software projects, and exporting and importing source code needs to interface with them. In the case of XZ, the release tarball has additional version info and also a sizeable amount of pregenerated compiler configuration files. Detecting and comparing differences between Git contents and tarballs can of course be done manually by running diff across an unpacked tarball and a checked-out Git repository. If using git-buildpackage, the difference between the Git contents and the tarball contents is visible directly in the import commit. In this XZ example, consider this git history:
* b1cad34b Prepare 5.8.1-1
* a8646015 Import 5.8.1
* 2808ec2d Update upstream source from tag 'upstream/5.8.1'
 \
  * fa1e8796 (debian/upstream/v5.8, upstream/v5.8) New upstream version 5.8.1
  * a522a226 (tag: v5.8.1) Bump version and soname for 5.8.1
  * 1c462c2a Add NEWS for 5.8.1
The commit a522a226 was the upstream release commit, which upstream also tagged v5.8.1. The merge commit 2808ec2d applied the new upstream import branch contents onto the Debian branch. Between these is the special commit fa1e8796 New upstream version 5.8.1, tagged upstream/v5.8. This commit and tag exist only in the Debian packaging repository, and they show exactly what contents were imported into Debian. The commit is generated automatically by git-buildpackage when running gbp import-orig --uscan for Debian packages with the correct settings in debian/gbp.conf. By viewing this commit one can see exactly how the upstream release tarball differs from the upstream Git contents (if at all). In the case of XZ the difference is substantial, and it is shown below in full as it is very interesting:
$ git show --stat fa1e8796
commit fa1e8796dabd91a0f667b9e90f9841825225413a
(debian/upstream/v5.8, upstream/v5.8)
Author: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
Date: Thu Apr 3 22:58:39 2025 +0200
New upstream version 5.8.1
.codespellrc   30 -
.gitattributes   8 -
.github/workflows/ci.yml   163 -
.github/workflows/freebsd.yml   32 -
.github/workflows/netbsd.yml   32 -
.github/workflows/openbsd.yml   35 -
.github/workflows/solaris.yml   32 -
.github/workflows/windows-ci.yml   124 -
.gitignore   113 -
ABOUT-NLS   1 +
ChangeLog   17392 +++++++++++++++++++++
Makefile.in   1097 +++++++
aclocal.m4   1353 ++++++++
build-aux/ci_build.bash   286 --
build-aux/compile   351 ++
build-aux/config.guess   1815 ++++++++++
build-aux/config.rpath   751 +++++
build-aux/config.sub   2354 +++++++++++++
build-aux/depcomp   792 +++++
build-aux/install-sh   541 +++
build-aux/ltmain.sh   11524 ++++++++++++++++++++++
build-aux/missing   236 ++
build-aux/test-driver   160 +
config.h.in   634 ++++
configure   26434 ++++++++++++++++++++++
debug/Makefile.in   756 +++++
doc/SHA256SUMS   236 --
doc/man/txt/lzmainfo.txt   36 +
doc/man/txt/xz.txt   1708 ++++++++++
doc/man/txt/xzdec.txt   76 +
doc/man/txt/xzdiff.txt   39 +
doc/man/txt/xzgrep.txt   70 +
doc/man/txt/xzless.txt   36 +
doc/man/txt/xzmore.txt   31 +
lib/Makefile.in   623 ++++
m4/.gitignore   40 -
m4/build-to-host.m4   274 ++
m4/gettext.m4   392 +++
m4/host-cpu-c-abi.m4   529 +++
m4/iconv.m4   324 ++
m4/intlmacosx.m4   71 +
m4/lib-ld.m4   170 +
m4/lib-link.m4   815 +++++
m4/lib-prefix.m4   334 ++
m4/libtool.m4   8488 +++++++++++++++++++++
m4/ltoptions.m4   467 +++
m4/ltsugar.m4   124 +
m4/ltversion.m4   24 +
m4/lt~obsolete.m4   99 +
m4/nls.m4   33 +
m4/po.m4   456 +++
m4/progtest.m4   92 +
po/.gitignore   31 -
po/Makefile.in.in   517 +++
po/Rules-quot   66 +
po/boldquot.sed   21 +
po/ca.gmo   Bin 0 -> 15587 bytes
po/cs.gmo   Bin 0 -> 7983 bytes
po/da.gmo   Bin 0 -> 9040 bytes
po/de.gmo   Bin 0 -> 29882 bytes
po/en@boldquot.header   35 +
po/en@quot.header   32 +
po/eo.gmo   Bin 0 -> 15060 bytes
po/es.gmo   Bin 0 -> 29228 bytes
po/fi.gmo   Bin 0 -> 28225 bytes
po/fr.gmo   Bin 0 -> 10232 bytes
To inspect exactly what changed in the release tarball compared to the Git release tag contents, the best tool for the job is Meld, invoked via git difftool --dir-diff fa1e8796^..fa1e8796.
Meld invoked by git difftool to show differences between git release tag and release tarball contents
To compare changes across the new and old upstream tarballs, one would compare the commits afba662b New upstream version 5.8.0 and fa1e8796 New upstream version 5.8.1 by running git difftool --dir-diff afba662b..fa1e8796.
Meld invoked by git difftool --dir-diff afba662b..fa1e8796 to show differences between two upstream release tarball contents
With all the above tips you can now go and audit your own favorite package in Debian to see whether it is identical with upstream, and if not, how it differs.

Should the XZ backdoor have been detected using these tools?
The famous XZ Utils backdoor (CVE-2024-3094) consisted of two parts: the actual backdoor inside two binary blobs masqueraded as test files (tests/files/bad-3-corrupt_lzma2.xz, tests/files/good-large_compressed.lzma), and a small modification in the build scripts (m4/build-to-host.m4) to extract the backdoor and plant it into the built binary. The build script was not tracked in version control, but generated with GNU Autotools at release time and shipped only as an additional file in the release tarball. The entire reason for writing this post was to ponder whether a diligent engineer using git-buildpackage best practices could reasonably have spotted this while importing the new upstream release into Debian. The short answer is no. The malicious actor clearly anticipated all the typical ways anyone might inspect both Git commits and release tarball contents, and masqueraded the changes very well over a long timespan. First of all, XZ has, for legitimate reasons, several carefully crafted .xz files as test data to help catch regressions in the decompression code path. The test files are shipped in the release so users can run the test suite and validate that the binary is built correctly and xz works properly. Debian famously runs massive amounts of testing in its CI and autopkgtest systems across tens of thousands of packages to uphold high quality despite frequent upgrades of the build toolchain, while supporting more CPU architectures than any other distro. Test data is useful and should stay. When git-buildpackage is used correctly, the upstream commits are visible in the Debian packaging for easy review, but the commit cf44e4b that introduced the test files does not deviate enough from regular sloppy coding practices to really stand out. It is unfortunately very common for a git commit to lack a message body explaining why the change was done, to not be properly atomic with test code and test data together in the same commit, and to be pushed directly to mainline without code review (the commit was not part of any PR in this case). Only another upstream developer could have spotted that this change was not on par with what the project expects, and that the test code was never added, only test data, and thus that this commit was not just sloppy but potentially malicious. Secondly, the fact that a new Autotools file (m4/build-to-host.m4) appeared in XZ Utils 5.6.0 is not suspicious. This is perfectly normal for Autotools; in fact, starting from version 5.8.1, XZ Utils ships a m4/build-to-host.m4 file that it actually uses. Spotting anything fishy is practically impossible by simply reading the code, as Autotools files are full of custom m4 syntax interwoven with shell script, with plenty of backticks that spawn subshells and evals that execute variable contents, which is just normal for Autotools. Russ Cox's XZ post explains exactly how the Autotools code fetched the actual backdoor from the test files and injected it into the build.
Inspecting the m4/build-to-host.m4 changes in Meld launched via git difftool
There is only one tiny thing that a very experienced Autotools user could potentially have noticed: the serial 30 in the version header is way too high. In theory one could also have noticed that this Autotools file deviates from what other packages in Debian ship under the same filename, such as the serial 3, 5a or 5b versions. That would, however, require an insane amount of extra checking work, and is not something we should plan to start doing. A much simpler solution would be to strongly recommend that all open source projects stop using Autotools, to eventually get rid of it entirely.
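For the curious, the macro serial is trivial to inspect, and comparing it against other copies of the same file shipped on the system gives a rough sanity check (the paths and output below are illustrative, not a verified recipe):
$ grep -m 1 serial m4/build-to-host.m4
# build-to-host.m4 serial 30
$ grep -rh 'build-to-host.m4 serial' /usr/share/gettext/ /usr/share/aclocal/ 2>/dev/null
# build-to-host.m4 serial 5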

Not detectable with reasonable effort
While planting backdoors is evil, it is hard not to feel some respect for the level of skill and dedication of the people behind this. I've been involved in a bunch of security breach investigations during my IT career, and never have I seen anything this well executed. If it hadn't slowed down SSH by ~500 milliseconds and been discovered because of that, it would most likely have stayed undetected for months or years. Hiding backdoors in closed source software is relatively trivial, but hiding backdoors in plain sight in a popular open source project requires an unusual amount of expertise and creativity, as shown above.

Is the software supply chain in Debian easy to audit?
While maintaining a Debian package's sources with git-buildpackage can make the package history a lot easier to inspect, most packages have incomplete configurations in their debian/gbp.conf, and thus their development histories are not always correctly constructed, uniform, or easy to compare. The Debian Policy does not mandate Git usage at all, and there are many important packages that are not using Git at all. Additionally, the Debian Policy allows non-maintainers to upload new versions to Debian without committing anything to Git, even for packages where the original maintainer wanted to use Git. Uploads that bypass Git unfortunately happen surprisingly often. Because of this situation, I am afraid that we could have multiple similar backdoors lurking that simply haven't been detected yet. More audits, which hopefully also get published openly, would be welcome! More people auditing the contents of the Debian archive would probably also help surface what tools and policies Debian might be missing to make the work easier, and thus help improve the security of Debian's users and trust in Debian.

Is Debian currently missing some software that could help detect similar things?
To my knowledge there is currently no system in place as part of Debian's QA or security infrastructure to verify that the upstream source packages in Debian actually come from upstream. I've come across a lot of packages where the debian/watch or other configs are incorrect, and even cases where maintainers manually created upstream tarballs because it was easier than getting the automation to work. It is obvious that for those packages the source tarball now in Debian is not at all the same as upstream's. I am not aware of any malicious cases, though (if I were, I would of course report them). I am also aware of packages in the Debian repository that are misconfigured as type 1.0 (native) packages, mixing the upstream files and debian/ contents and having patches applied, while they actually should be configured as 3.0 (quilt) and not hide what the true upstream sources are. Debian should extend its QA tools to scan for such things; if I find a sponsor, I might build this myself as my next major contribution to Debian. In addition to better tooling for finding mismatches in the source code, Debian could also have better tooling to track which source files went into which built binaries, but solutions like Fraunhofer-AISEC's supply-graph or Sony's ESSTRA are not practical yet. Julien Malka's post about NixOS discusses the role of reproducible builds, which may help in some cases across all distros.
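As a starting point for the native-vs-quilt check mentioned above, the declared source format of an unpacked package is a one-liner to inspect, and a QA scan could flag packages whose declared format does not match how their contents are actually structured:
$ cat debian/source/format
3.0 (quilt)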

Or, is Debian missing some policies or practices to mitigate this?
Perhaps more important than more security scanning, the Debian Developer community should shift its general mindset from anyone is free to do anything to valuing shared workflows. The ability to audit anything is severely hampered by the fact that there are so many ways to do the same thing, and distinguishing a normal deviation from a malicious one is too hard when normal can be almost anything. Also, as there is no documented and recommended default workflow, people both old and new to Debian packaging might never learn an optimal workflow, and end up doing many steps in the packaging process in a way that kind of works but is actually wrong or unnecessary, causing process deviations that look malicious but turn out to be the result of not fully understanding the right way to do something. In the long run, once individual developers' workflows are more aligned, doing code reviews will become a lot easier and smoother as the noise of workflow differences diminishes, and reviews will feel much more productive to all participants. Debian fostering a culture of code reviews would allow us to slowly move from the current practice of mainly solo packaging work towards true collaboration forming around those code reviews. I have been promoting increased use of Merge Requests in Debian for some time already, for example by proposing DEP-18: Encourage Continuous Integration and Merge Request based Collaboration for Debian packages. If you are involved in Debian development, please give a thumbs up in dep-team/deps!21 if you want me to continue promoting it.

Can we trust open source software?
Yes, and I would argue that we can only trust open source software. There is no way to audit closed source software, and anyone using e.g. Windows or macOS simply has to take the vendor's word that there are no intentional or accidental backdoors in their software. Or, when news gets out that the systems of a closed-source vendor were compromised, as happened at CrowdStrike some weeks ago, we can't audit anything, and time after time we simply have to take their word that they have properly cleaned up their code base. In theory, a vendor could give some kind of contractual or financial guarantee to its customers that there are no preventable security issues, but in practice that never happens. I am not aware of a single case where e.g. Microsoft or Oracle paid damages to their customers after a security flaw was found in their software. In theory you could also pay a vendor more to have them put more effort into security, but since there is no way to verify what they did, or to get compensation when they didn't, any increased fees are likely just pocketed as increased profit. Open source is clearly better overall. You can, if you are an individual with the time and skills, audit every step in the supply chain, or you can as an organization invest in open source security improvements and actually verify what changes were made and how security improved. If your organisation is using Debian (or derivatives, such as Ubuntu) and you are interested in sponsoring my work to improve Debian, please reach out.

14 September 2025

Otto Kekäläinen: Zero-configuration TLS and password management best practices in MariaDB 11.8

Locking down database access is probably the single most important task for a system administrator or software developer who wants to prevent their application from leaking its data. As MariaDB 11.8 is the first long-term-supported version with a few key new security features, let's recap the most important things every DBA should know about MariaDB in 2025. Back in the old days, MySQL administrators had a habit of running the clumsy mysql_secure_installation script, but it has long been obsolete. A modern MariaDB database server is already secure by default and locked down out of the box, and no such extra scripts are needed. On the contrary, the database administrator is expected to open up access to MariaDB according to the specific needs of each server. Therefore, it is important that the DBA understands and correctly configures three things:
  1. Creating separate application-specific users with granular permissions that allow only necessary access and no more.
  2. Distributing and storing passwords and credentials securely.
  3. Ensuring all remote connections are properly encrypted.
For holistic security, one should also consider proper auditing, logging, backups, regular security updates and more, but in this post we will focus only on the above aspects related to securing database access.

How encrypting database connections with TLS differs from web server HTTP(S)
Even though MariaDB (and other databases) use the same SSL/TLS protocol for encrypting remote connections as web servers use for HTTPS, the implementation is significantly different, and the differing security assumptions are important for a database administrator to grasp. Firstly, most HTTP requests to a web server are unauthenticated, meaning the web server serves public web pages and does not require users to log in. Traditionally, when a user logged in over an HTTP connection, the username and password were transmitted in plaintext as an HTTP POST request. Modern TLS, which was previously called SSL, does not change how HTTP works but simply encapsulates it. When using HTTPS, a web browser and a web server start an encrypted TLS connection as the very first thing, and only once it is established do they send HTTP requests and responses inside it. No passwords or other shared secrets are needed to form the TLS connection; instead, the web server relies on a trusted third party, a Certificate Authority (CA), to vouch that the TLS certificate offered by the web server can be trusted by the web browser. For a database server like MariaDB, the situation is quite different. All users must first authenticate and log in to the server before being allowed to run any SQL and get any data out of the server. The database server and client programs have built-in authentication methods, and passwords are not, and have never been, sent in plaintext. Over the years, MySQL and its successor MariaDB have had multiple password authentication methods: the original SHA-1-based hashing, the later double-SHA-1-based mysql_native_password, followed by sha256_password and caching_sha2_password in MySQL and ed25519 in MariaDB. The MariaDB.org blog post by Sergei Golubchik recaps this history well. Even though most modern MariaDB installations should be using TLS to encrypt all remote connections in 2025, having the authentication method be as secure as possible still matters, because authentication happens before the TLS connection is fully established. To further harden authentication against man-in-the-middle attacks, a new password authentication method, PARSEC, was introduced in MariaDB 11.8. It builds upon the previous ed25519 public-key-based verification (similar to how modern SSH works) and combines key derivation using PBKDF2 with hash functions (SHA-512, SHA-256) and a high iteration count. At first it may seem like a disadvantage not to wrap all connections in a TLS tunnel like HTTPS does, but having the authentication be MitM-resistant regardless of the connection encryption status enables a clever extra capability that is now available in MariaDB: as the database server and client already have a shared secret that the server uses to authenticate the user, the client can also use it to validate the server's TLS certificate, and no third parties like CAs or root certificates are needed. MariaDB 11.8 was the first LTS version to ship with this capability for zero-configuration TLS. Note that zero-configuration TLS also works with the older password authentication methods and does not require users to have PARSEC enabled. As PARSEC is not yet the default authentication method in MariaDB, it is recommended to enable it in installations that use zero-configuration TLS encryption, to maximize the security of the TLS certificate validation.
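Once connected, it is easy to verify that a session is actually encrypted; a non-empty cipher means TLS is in use (the hostname and cipher shown here are illustrative):
$ mariadb --host db.example.com --user app_user -p \
    -e "SHOW SESSION STATUS LIKE 'Ssl_cipher';"
+---------------+------------------------+
| Variable_name | Value                  |
+---------------+------------------------+
| Ssl_cipher    | TLS_AES_256_GCM_SHA384 |
+---------------+------------------------+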

Why the root user in MariaDB has no password, and how that makes the database more secure
Relying on passwords for security is problematic, as there is always a risk that they leak, and a malicious user could then access the system using the leaked password. It is unfortunately far too common for database passwords to be stored in plaintext in configuration files that are accidentally committed into version control and published on GitHub and similar platforms. Every application or administrative password that exists should be tracked to ensure only the people who need it know it, and rotated at regular intervals so that e.g. former employees can't use old passwords. This password management is complex and error-prone, so replacing passwords with other authentication methods is advisable whenever possible. On a database server, whoever installed the database by running e.g. apt install mariadb-server, and configured it with e.g. nano /etc/mysql/mariadb.cnf, already has full root access to the operating system, and asking them for a password to access the MariaDB database shell is moot, since they could circumvent any checks by directly accessing the files on the system anyway. Therefore, since version 10.4, MariaDB stopped requiring the root user to enter a password when connecting locally, and instead checks via socket authentication whether the connecting user is the operating-system root user or equivalent (e.g. running sudo). This is an elegant way to get rid of a password that was unnecessary to begin with. As there is no root password anymore, the risk of an external user accessing the database as root with a leaked password is fully eliminated. Note that socket authentication only works for local connections on the same server. If you want to access a MariaDB server remotely as the root user, you would need to configure a password for it first. This is generally not recommended, as explained in the next section.
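The effect is easy to see in practice: as root (or via sudo) a local connection succeeds without any password prompt:
$ sudo mariadb -e "SELECT CURRENT_USER();"
+----------------+
| CURRENT_USER() |
+----------------+
| root@localhost |
+----------------+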

Create separate database users for normal use and keep root for administrative use only Out of the box, a MariaDB installation is secure by default, and only the local root user can connect to it. This account is intended for administrative use only; for regular daily use you should create separate database users with access limited to the databases they need and only the required permissions. The most typical commands to create a new database for an app, and a user the app can use to connect to it, would be the following:
sql
CREATE DATABASE app_db;
CREATE USER 'app_user'@'%' IDENTIFIED BY 'your_secure_password';
GRANT ALL PRIVILEGES ON app_db.* TO 'app_user'@'%';
FLUSH PRIVILEGES;
Alternatively, if you want to use the parsec authentication method, run this to create the user:
sql
CREATE OR REPLACE USER 'app_user'@'%'
 IDENTIFIED VIA parsec
 USING PASSWORD('your_secure_password');
Note that the plugin auth_parsec is not enabled by default. If you see the error message ERROR 1524 (HY000): Plugin 'parsec' is not loaded, fix it by running INSTALL SONAME 'auth_parsec';. In the CREATE USER statements, the @'%' means that the user is allowed to connect from any host. This needs to be defined, as MariaDB always checks permissions based on both the username and the remote IP address or hostname of the user, combined with the authentication method. Note that it is possible to have multiple user@host combinations, and they can have different authentication methods. A user could, for example, be allowed to log in locally using socket authentication and over the network using a password, as sketched below. If you are running a custom application and you know exactly what permissions are sufficient for the database users, replace ALL PRIVILEGES with a subset of the privileges listed in the MariaDB documentation. Privileges granted with CREATE USER and GRANT take effect immediately; FLUSH PRIVILEGES is only needed if you modify the grant tables directly.
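A hedged sketch of both ideas, using a hypothetical app subnet: the same user name with socket authentication locally and a password over the network, plus a narrower grant instead of ALL PRIVILEGES:
sql
-- Local logins use socket authentication, no password stored
CREATE USER 'app_user'@'localhost' IDENTIFIED VIA unix_socket;
-- Network logins from the (hypothetical) app subnet use a password
CREATE USER 'app_user'@'10.0.0.%' IDENTIFIED BY 'your_secure_password';
-- Grant only what the app actually needs
GRANT SELECT, INSERT, UPDATE, DELETE ON app_db.* TO 'app_user'@'10.0.0.%';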

Allow MariaDB to accept remote connections and enforce TLS Using the above 'app_user'@'%' is not enough on its own to allow remote connections to MariaDB. The MariaDB server also needs to be configured to listen on a network interface to accept remote connections. As MariaDB is secure by default, it only accepts connections from localhost until the administrator updates its configuration. On a typical Debian/Ubuntu system, the recommended way is to drop a new custom config in e.g. /etc/mysql/mariadb.conf.d/99-server-customizations.cnf, with the contents:
[mariadbd]
# Listen for connections from anywhere
bind-address = 0.0.0.0
# Only allow TLS encrypted connections
require-secure-transport = on
For the settings to take effect, restart the server with systemctl restart mariadb. After this, the server will accept connections on any network interface. If the system is using a firewall, port 3306 additionally needs to be allow-listed. To confirm that the settings took effect, run e.g. mariadb -e "SHOW VARIABLES LIKE 'bind_address';", which should now show 0.0.0.0. When allowing remote connections, it is important to always also set require-secure-transport = on to enforce that only TLS-encrypted connections are allowed. If the server is running MariaDB 11.8 and the clients are also MariaDB 11.8 or newer, no additional configuration is needed, thanks to MariaDB automatically providing TLS certificates and appropriate certificate validation in recent versions. On older long-term-supported versions of the MariaDB server, one had to manually create the certificates, configure the ssl_key, ssl_cert and ssl_ca values on the server, and distribute the certificate to the clients as well, which was cumbersome, so it is good that this is no longer required. In MariaDB 11.8 the only additional related config that might still be worth setting is tls_version = TLSv1.3 to ensure only the latest TLS protocol version is used. Finally, test connections to ensure they work and to confirm that TLS is used by running e.g.:
shell
mariadb --user=app_user --password=your_secure_password \
 --host=192.168.1.66 -e '\s'
The result should show something along the lines of:
--------------
mariadb from 11.8.3-MariaDB, client 15.2 for debian-linux-gnu (x86_64)
...
Current user: app_user@192.168.1.66
SSL: Cipher in use is TLS_AES_256_GCM_SHA384, cert is OK
...
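To also verify the enforcement in the other direction, you can attempt a deliberately unencrypted connection, which the server should now refuse (a sketch; the --skip-ssl flag and the exact error text vary by client version):
shell
mariadb --user=app_user --password=your_secure_password \
 --host=192.168.1.66 --skip-ssl -e 'SELECT 1'
# Expected: the connection is rejected because secure transport is required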
If running a Debian/Ubuntu system, see the bundled README with zcat /usr/share/doc/mariadb-server/README.Debian.gz to read more configuration tips.

Should TLS encryption be used also on internal networks? If a database server and app are running on the same private network, the chances that the connection gets eavesdropped on or man-in-the-middle attacked by a malicious user are low. They are, however, not zero, and if an attack happens, it can be difficult to detect, or to prove that it didn't happen. The benefit of using end-to-end encryption is that both the database server and the client can validate the certificates and keys used, log it, and the logs can later be audited to prove that connections were indeed encrypted and to show how they were encrypted. If all the computers on an internal network already have centralized user account management and centralized log collection that includes all database sessions, reusing existing SSH connections, SOCKS proxies, dedicated HTTPS tunnels, point-to-point VPNs, or similar solutions might also be a practical option (a minimal sketch follows below). Note that the zero-configuration TLS only works with password validation methods. This means that systems configured to use PAM or Kerberos/GSSAPI can't use it, but again, those systems are typically part of a centrally configured network anyway and are likely to have certificate authorities and key distribution or network encryption facilities already set up. In a typical software app stack, however, the simplest solution is often the best, and I recommend DBAs use the end-to-end TLS encryption in MariaDB 11.8 in most cases. Hopefully with these tips you can enjoy having your MariaDB deployments both simpler and more secure than before!
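For completeness, here is the sketch of the SSH tunneling alternative mentioned above (host names, user names and the local port are hypothetical):
shell
# Forward local port 3307 over SSH to the database host's MariaDB port
ssh -N -L 3307:127.0.0.1:3306 admin@db.internal.example &
# Connect through the tunnel; traffic on the wire is protected by SSH
mariadb --host=127.0.0.1 --port=3307 --user=app_user -p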

31 August 2025

Otto Kekäläinen: Managing procrastination and distractions

I've noticed that procrastination and the inability to be consistently productive at work have become quite common in recent years. This is clearly visible in younger people who have grown up with an endless stream of entertainment literally at their fingertips, on their mobile phone. It is, however, a trap one can escape from with a little bit of help. Procrastination is natural; they say humans are lazy by nature, after all. Probably all of us have had moments when we chose to postpone a task we knew we should be working on, and instead spent our time doing secondary tasks (valorisation). A classic example is cleaning your apartment when you should be preparing for an exam. Some may procrastinate by not doing any work at all, and just watching YouTube videos or the like. For some people, typically those who are in their 20s and early in their career, procrastination can be a big challenge, and finding the discipline to stick to planned work may need intentional extra effort, and perhaps even external help. During my 20+ year career in software development I've been blessed to work with engineers of various backgrounds, each with their unique set of strengths. I have also helped many grow in various areas and overcome challenges, such as lack of intrinsic motivation and difficulty managing procrastination, and some may be able to get it in check with some simple advice.

Distance yourself from the digital distractions The key to avoiding distractions and procrastination is to make procrastinating inconvenient enough that you rarely do it. If continuing to work is easier than switching to a distraction, work is more likely to continue. Tips to minimize digital distractions, listed in order of importance:
  1. Put your phone away. Just like when you go to a movie and turn off your phone for two hours, you can put the phone away completely when starting to work. Put the phone in a different room to ensure there is enough physical distance between you and the distraction, so it is impossible for you to just take a quick peek.
  2. Turn off notifications from apps. Don't let the apps call to you like sirens luring Odysseus. You don't need all the notifications. You will see what the apps have for you when you eventually open them, at a time you choose.
  3. Remove or disable social media apps, games and the like from your phone and your computer. You can install them back when you are on vacation; you can probably live without them for some time. If you can't remove them, explore your phone's screen time restriction features to limit your own access to the apps that most often waste your time. These features are sometimes listed in the phone settings under digital health.
  4. Have a separate work computer and work phone. Having dedicated devices just for work that are void of all unnecessary temptations helps keep distance from the devices that could derail your focus.
  5. Listen to music. If you feel your brain needs a dose of dopamine to get you going, listening to music helps satisfy your brain's cravings while still letting you keep working.
Doing a full digital detox is probably not practical, or at least not sustainable for an extended time. One needs apps to stay in touch with friends and family, and staying current in software development probably requires spending some time reading news online and such. However, the tips above can help contain the distractions and minimize the spontaneous attention they get. Some of the distractions may, ironically, come from work itself, for example Slack or new-email notifications. I recommend turning them off for a couple of hours every day to have some distraction-free time. It should be enough to check work mail a couple of times a day. Checking it every hour probably does not add much overall value for the company, unless your work is in sales or support where the main task itself is responding to emails.

Distraction-free work environment Following the same principle of distancing yourself from distractions, try to use a dedicated physical space for working. If you don't have a spare room to dedicate to work, use a neighborhood café, sign up for a local co-working space, or start commuting to the company office to find a space where you can focus on work.

Break down tasks into smaller steps Sometimes people postpone tasks because they feel intimidated by the size or complexity of a task. In software engineering in particular, problems may be vague and appear large until one reaches the breakthrough that brings the vision of how to tackle them. Breaking down problems into smaller, more manageable pieces has many advantages in software engineering. Not only can it help with task avoidance, but it can also make the problem easier to analyze, suggest solutions to test, and build a solid foundation to expand upon to ultimately reach a full solution to the entire larger problem. Working on big problems as a chain of smaller tasks may also offer more opportunities to celebrate success on completing each subtask, and help in getting into a suitable cadence of solving a single thing, taking a break, and then tackling the next issue. Breaking down a task into concrete steps may also help with making more realistic time estimates. Sometimes procrastination isn't even real: someone could just be overly ambitious and feel bad about themselves for not doing an unrealistic amount of work.

Intrinsic motivation Of course, you should follow your passion when possible. Strive to pick a career that you enjoy, and thus maximize the intrinsic motivation you experience. However, even a dream job is still a job. Nobody is ever paid to do whatever they want. Any work will include at least some tasks that feel like a chore, or otherwise like something you would not do unless paid to. Some would say that the definition of work itself is having to do things one would otherwise not do. You can only fully do whatever you want while on vacation, or when you choose not to have a job at all. But if you have a job, you simply need to find the motivation to do it. Simply put, some tasks are just unpleasant or boring, and our natural inclination is to avoid them in favor of more enjoyable activities. In these situations we just have to find the discipline to force ourselves to do the tasks and, figuratively speaking, whip ourselves into being motivated to complete them.

Extrinsic motivation As the name implies, this is something people external to you need to provide, such as your employer or manager. If you have challenges in managing yourself and delivering results on a regular basis, somebody else needs to set goals and deadlines and keep you accountable for them. At the end of the day this means that eventually you will stop receiving a salary or other payments unless you do your job. Forcing people to do something isn't nice, but eventually it needs to be done. It would not be fair for an employer to pay those who did their work the same salary as those who procrastinated and fell short on their tasks. If you work solo, you can also simulate extrinsic motivation by publicly announcing milestones and deadlines to build up pressure for yourself to meet them and avoid public humiliation. It is a well-studied and scientifically documented phenomenon that most university students procrastinate at the start of assignments and truly start working on them only once the deadline is imminent.

External help for addictions If procrastination is mainly due to a single distraction that is always on your mind, it may be a sign of an addiction. For example, constantly thinking about a computer game or staying up late playing a computer game, to the extent that it seriously affects your ability to work, may be a symptom of an addiction, and getting out of it may be easier with external help.

Discipline and structure Most of the time procrastination is not due to an addiction, but simply due to a lack of self-discipline and structure. The good thing is that these can be learned. It is mostly a matter of forming new habits, which most young software engineers pick up more or less automatically while working alongside more senior ones. Hopefully these tips can help you stay on track and ensure you do everything you are expected to do with clear focus, and on time!

28 August 2025

Samuel Henrique: Debian 13: My list of exciting new features

A bunch of screenshots overlaid on top of each other showing different tools: lazygit, gnome settings, gnome system monitor, powerline-go, and the wcurl logo, the text at the top says 'Debian 13: My list of exciting new features', and there's a Debian logo in the middle of image

Beyond Debian: Useful for other distros too Every two years Debian releases a new major version of its Stable series, meaning the differences between consecutive Debian Stable releases represent two years of new developments, both in Debian as an organization and its native packages, but also in all the other packages shipped by other distributions as well (which are getting into this new Stable release). If you're not paying close attention to everything that's going on all the time in the Linux world, you miss a lot of the nice new features and tools. It's common for people to realize there's a cool new trick available only years after it was first introduced. Given these considerations, the tips that I'm describing will eventually be available in whatever other distribution you use, be it because it's a Debian derivative or because it just got the same feature from the upstream project. I'm not going to list "passive" features (as good as they can be); the focus here is on new features that might change how you configure and use your machine, with a mix between productivity and performance.

Debian 13 - Trixie I have been a Debian Testing user for longer than 10 years now (and I recommend it for non-server users), so I'm not usually keeping track of all the cool features arriving in the new Stable releases because I'm continuously receiving them through the Debian Testing rolling release. Nonetheless, as a Debian Developer I'm in a good position to point out the ones I can remember. I would also like other Debian Developers to do the same, as I'm sure I would learn something new. The Debian 13 release notes contain a "What's new" section, which lists the first two items here and a few other things; in other words, take my list as an addition to the release notes. Debian 13 was released on 2025-08-09, and these are nice things you shouldn't miss in the new release, with a bonus one not tied to the Debian 13 release.

1) wcurl wcurl logo Have you ever had to download a file from your terminal using curl and didn't remember the parameters needed? I did. Nowadays you can use wcurl; "a command line tool which lets you download URLs without having to remember any parameters." Simply call wcurl with one or more URLs as parameters and it will download all of them in parallel, performing retries, choosing the correct output file name, following redirects, and more. Try it out:
wcurl example.com
wcurl comes installed as part of the curl package on Debian 13 and in any other distribution you can imagine, starting with curl 8.14.0. I've written more about wcurl in its release announcement and I've done a lightning talk presentation in DebConf24, which is linked in the release announcement.

2) HTTP/3 support in curl Debian has become the first stable Linux distribution to ship curl with support for HTTP/3. I've written about this in July 2024, when we first enabled it. Note that we first switched the curl CLI to GnuTLS, but then ended up releasing the curl CLI linked with OpenSSL (as support arrived later). Debian was the first stable Linux distro to enable it; within rolling-release-based distros, Gentoo enabled it first in their non-default flavor of the package, and Arch Linux did it three months before we pushed it to Debian Unstable/Testing/Stable-backports, kudos to them! HTTP/3 is not used by default by the curl CLI; you have to enable it with --http3 or --http3-only. Try it out:
curl --http3 https://www.example.org
curl --http3-only https://www.example.org

3) systemd soft-reboot Starting with systemd v254, there's a new soft-reboot option: a userspace-only reboot, much faster than a full reboot if you don't need to reboot the kernel. You can read the announcement in the systemd v254 GitHub release notes. Try it out:
# This will reboot your machine!
systemctl soft-reboot

4) apt --update Are you tired of being required to run sudo apt update just before sudo apt upgrade or sudo apt install $PACKAGE? So am I! The new --update option lets you do both things in a single command:
sudo apt --update upgrade
sudo apt --update install $PACKAGE
I love this, but it's still not where it should be; fingers crossed for a simple apt upgrade to behave like other package managers by updating its cache as part of the task, maybe in Debian 14? Try it out:
sudo apt upgrade --update
# The order doesn't matter
sudo apt --update upgrade
This is especially handy for container usage, where you have to update the apt cache before installing anything, for example:
podman run debian:stable /bin/bash -c 'apt install --update -y curl'

5) powerline-go powerline-go is a powerline-style prompt written in Golang, so it's much more performant than its Python alternative powerline. powerline-style prompts are quite useful to show things like the current status of the git repo in your working directory, exit code of the previous command, presence of jobs in the background, whether or not you're in an ssh session, and more. A screenshot of a terminal with powerline-go enabled, showing how the PS1 changes inside a git repository and when the last command fails Try it out:
sudo apt install powerline-go
Then add this to your .bashrc:
function _update_ps1() {
    PS1="$(/usr/bin/powerline-go -error $? -jobs $(jobs -p | wc -l))"

    # Uncomment the following line to automatically clear errors after showing
    # them once. This not only clears the error for powerline-go, but also for
    # everything else you run in that shell. Don't enable this if you're not
    # sure this is what you want.

    #set "?"
}

if [ "$TERM" != "linux" ] && [ -f "/usr/bin/powerline-go" ]; then
    PROMPT_COMMAND="_update_ps1; $PROMPT_COMMAND"
fi
Or this to .zshrc:
function powerline_precmd() {
    PS1="$(/usr/bin/powerline-go -error $? -jobs ${${(%):%j}:-0})"

    # Uncomment the following line to automatically clear errors after showing
    # them once. This not only clears the error for powerline-go, but also for
    # everything else you run in that shell. Don't enable this if you're not
    # sure this is what you want.

    #set "?"
}
If you'd like to have your prompt start on a new line, like I have in the screenshot above, you just need to add -newline to the powerline-go invocation in your .bashrc/.zshrc.

6) Gnome System Monitor Extension Tips number 6 and 7 are for Gnome users. Gnome now ships a system monitor extension which lets you see the current load of your machine at a glance from the top bar. Screenshot of the top bar of Gnome with the system monitor extension enabled, showing the load of: CPU, memory, network and disk I've found this quite useful for machines where I'm required to install third-party monitoring software that tends to randomly consume more resources than it should. If I feel like my machine is struggling, I can quickly glance at its load to verify whether it's getting overloaded by some process. The extension is not as complete as system-monitor-next, not showing temperatures or histograms, but at least it's officially part of Gnome, easy to install and supported by them. Try it out:
sudo apt install gnome-system-monitor gnome-shell-extension-manager
And then enable the extension from the "Extension Manager" application.

7) Gnome setting for battery charging profile After having to learn more about batteries in order to get into FPV drones, I've come to have a bigger appreciation for solutions that minimize the inevitable loss of capacity that accrues over time. There's now a "Battery Charging" setting (under the "Power" section) which lets you choose between two different profiles: "Maximize Charge" and "Preserve Battery Health". A screenshot of the Gnome settings for Power showing the options for Battery Charging On supported laptops, this setting is an easy way to set thresholds for when charging should start and stop, just like you could do with the tlp package, but now from the Gnome settings. To increase the longevity of my laptop battery, I always keep it at "Preserve Battery Health" unless I'm traveling. What I would like to see next is support for choosing different "Power Modes" based on whether the laptop is plugged in, and based on the battery charge percentage. There's a GNOME issue tracking this feature, but there's some pushback on whether this is the right thing to expose to users. In the meantime, there are some workarounds mentioned in that issue which people who really want this feature can follow. If you would like to learn more about batteries, Battery University is a great starting point, besides getting into FPV drones and being forced to handle batteries without a Battery Management System (BMS). And if by any chance this sparks your interest in FPV drones, Joshua Bardwell's YouTube channel is a great resource: @JoshuaBardwell.

8) Lazygit Emacs users are already familiar with the legendary magit, a terminal-based UI for git. Lazygit is an alternative for non-emacs users; you can integrate it with neovim or just use it directly. I'm still playing with lazygit and haven't integrated it into my workflows, but so far it has been a pleasant experience. Screenshot of lazygit from the Debian curl repository, showing a selected commit and its diff, besides the other things from the lazygit UI You should check out the demos from the lazygit GitHub page. Try it out:
sudo apt install lazygit
And then call lazygit from within a git repository.

9) neovim neovim has been shipped in Debian since 2016, but upstream has been doing a lot of work to improve the experience out-of-the-box in the last couple of years. If you're a neovim poweruser, you're likely not installing it from the official repositories, but for those that are, Debian 13 comes with version 0.10.4, which brings the following improvements compared to the version in Debian 12:
  • Treesitter support for C, Lua, Markdown, with the possibility of adding any other languages as needed;
  • Better spellchecking due to treesitter integration (spellsitter);
  • Mouse support enabled by default;
  • Commenting support out-of-the-box; Check :h commenting for details, but the tl;dr is that you can use gcc to comment the current line and gc to comment the current selection.
  • OSC52 support. Especially handy for those using neovim over an ssh connection, this protocol lets you copy something from within the neovim process into the clipboard of the machine you're using to connect through ssh. In other words, you can copy from neovim running in a host over ssh and paste it in the "outside" machine.

10) [Bonus] Running old Debian releases The bonus tip is not specific to the Debian 13 release, but something I've recently learned in the #debian-devel IRC channel. Did you know there are usable container images for all past Debian releases? I'm not talking "past" as in "some of the older releases", I'm talking past as in "literally every Debian release, including the very first one". Tianon Gravi "tianon" is the Debian Developer responsible for making this happen, kudos to him! There's a small gotcha: the releases Buzz (1.1) and Rex (1.2) require a 32-bit host, otherwise you will get the error Out of virtual memory!, but starting with Bo (1.3) everything should work on amd64/arm64. Try it out:
sudo apt install podman

podman run -it docker.io/debian/eol:bo
Don't be surprised when you notice that apt/apt-get is not available inside the container; that's because apt first appeared in Debian Slink (2.1).

Changes since publication

2025-08-30
  • Mention that Arch also enabled HTTP/3.


18 August 2025

Otto Kekäläinen: Best Practices for Submitting and Reviewing Merge Requests in Debian

Historically, the primary way to contribute to Debian has been to email the Debian bug tracker with a code patch. Now that 92% of all Debian source packages are hosted at salsa.debian.org, the GitLab instance of Debian, more and more developers are using Merge Requests, but not necessarily in the optimal way. In this post I share what I've found the best practices to be, presented in the natural workflow from forking to merging.

Why use Merge Requests? Compared to sending patches back and forth in email, using a git forge to review code contributions brings several benefits:
  • Contributors can see the latest version of the code immediately when the maintainer pushes it to git, without having to wait for an upload to Debian archives.
  • Contributors can fork the development version and easily base their patches on the correct version and help test that the software continues to function correctly at that specific version.
  • Both maintainer and other contributors can easily see what was already submitted and avoid doing duplicate work.
  • It is easy for anyone to comment on a Merge Request and participate in the review.
  • Integrating CI testing is easy in Merge Requests by activating Salsa CI.
  • Tracking the state of a Merge Request is much easier than browsing Debian bug reports tagged 'patch', and the cycle of submit, review, re-submit and re-review is much easier to manage in the dedicated Merge Request view compared to participants setting up their own email plugins for code reviews.
  • Merge Requests can have extra metadata, such as 'Approved', and the metadata often updates automatically, for example a Merge Request being closed automatically when the Git commit ID from it is pushed to the target branch.
Keeping these benefits in mind will help ensure that the best practices make sense and are aligned with maximizing these benefits.

Finding the Debian packaging source repository and preparing to make a contribution Before sinking any effort into a package, start by checking its overall status at the excellent Debian Package Tracker. This provides a clear overview of the package's general health in Debian, when it was last uploaded and by whom, and whether there is anything special affecting the package right now. This page also has quick links to the Debian bug tracker of the package, the build status overview and more. Most importantly, in the General section, the VCS row links to the version control repository the package advertises. Before opening that page, note the version most recently uploaded to Debian. This is relevant because nothing in Debian currently enforces that the package in version control is actually the same as the latest upload to Debian. Packaging source code repository links at tracker.debian.org Following the Browse link opens the Debian package source repository, which is usually a project page on Salsa. To contribute, start by clicking the Fork button, select your own personal namespace and, under Branches to include, pick Only the default branch to avoid including unnecessary temporary development branches. View after pressing Fork Once forking is complete, clone your fork with git-buildpackage. For this example repository, the exact command would be gbp clone --verbose git@salsa.debian.org:otto/glow.git. Next, add the original repository as a new remote and pull from it to make sure you have all relevant branches. Using the same fork as an example, the commands would be:
git remote add go-team https://salsa.debian.org/go-team/packages/glow.git
gbp pull --verbose --track-missing go-team
The gbp pull command can be repeated whenever you want to make sure the main branches are in sync with the original repository. Finally, run gitk --all & to visually browse the Git history, and note the various branches and their states in the two remotes. Note the style of comments and the repository structure the project has, and make sure your contributions follow the same conventions to maximize the chances of the maintainer accepting your contribution. It may also be good to build the source package to establish a baseline of the current state and what kind of binaries and .deb packages it produces. If using Debcraft, one can simply run debcraft build in the Git repository.

Submitting a Merge Request for a Debian packaging improvement Always start by making a development branch by running git checkout -b <branch name> to clearly separate your work from the main branch. When making changes, remember to follow the conventions you already see in the package. It is also important to be aware of general guidelines on how to make good Git commits. If you are not able to finish coding in one sitting, it may be useful to publish the Merge Request as a draft so that the maintainer and others can see that you started working on something and what general direction your change is heading in. If you don't finish the Merge Request in one sitting and return to it another day, you should remember to pull the Debian branch from the original Debian repository in case it has received new commits. This can be done easily with these commands (assuming the same remote and branch names as in the example above):
git fetch go-team
git rebase -i go-team/debian/latest
Frequent rebasing is a great habit that helps keep the Git history linear, and restructuring and rewording your commits will make the Git history easier to follow and make it clearer why the changes were made. When pushing improved versions of your branch, use git push --force. While GitLab does allow squashing, I recommend against it. It is better that the submitter makes sure the final version is a neat and clean set of commits that the receiver can easily merge without having to do any rebasing or squashing themselves. When ready, remove the draft status of the Merge Request and wait patiently for review. If the maintainer does not respond within several days, try sending an email to <source package name>@packages.debian.org, which is the official way to contact maintainers. You could also post a comment on the MR and tag the last few committers in the same repository so that a notification email is triggered. As a last resort, submit a bug report to the Debian bug tracker to announce that a Merge Request is pending review. This leaves a permanent record of your contribution for posterity (or the Debian QA team). However, most of the time simply posting the Merge Request in Salsa is enough; excessive communication might be perceived as spammy, and someone needs to remember to check that the bug report is closed.

Respect the review feedback, respond quickly and avoid Merge Requests getting stale Once you get feedback, try to respond as quickly as possible. When people participating have everything fresh in their minds, it is much easier for the submitter to rework it and for the reviewer to re-review. If the Merge Request becomes stale, it can be challenging to revive it. Also, if it looks like the MR is only waiting for re-review but nothing happens, re-read the previous feedback and make sure you actually address everything. After that, post a friendly comment where you explicitly say you have addressed all feedback and are only waiting for re-review.

Reviewing Merge Requests This section about reviewing is not exclusive to Debian package maintainers; anyone can contribute to Debian by reviewing open Merge Requests. Typically, the larger an open source project gets, the more help is needed in reviewing and testing changes to avoid regressions, and all diligently done work is welcome. As the famous Linus quote goes, 'given enough eyeballs, all bugs are shallow'. On salsa.debian.org, you can browse open Merge Requests per project or for a whole group, just like on any GitLab instance. Reviewing Merge Requests is, however, most fun when they are fresh and the submitter is active. Thus, the best strategy is to ensure you have subscribed to email notifications in the repositories you care about so you get an email for any new Merge Request (or Issue) immediately when posted. Change notification settings from Global to Watch to get an email on new Merge Requests When you see a new Merge Request, try to review it within a couple of days. If you cannot review in a reasonable time, posting a small note that you intend to review it later will feel better to the submitter than getting no response at all. Personally, I have a habit of assigning myself as a reviewer so that I can keep track of my whole review queue at https://salsa.debian.org/dashboard/merge_requests?reviewer_username=otto, and I recommend the same to others. Seeing the review assignment happen is also a good way to signal to the submitter that their submission was noted.

Reviewing commit-by-commit in the web interface Reviewing using the web interface works well in general, but I find that the way GitLab designed it is not ideal. In my ideal review workflow, I first read the Git commit message to understand what the submitter tried to do and why; only then do I look at the code changes in the commit. In GitLab, to do this one must first open the Commits tab and then click on the last commit in the list, as it is sorted in reverse chronological order with the first commit at the bottom. Only after that do I see the commit message and contents. Getting to the next commit is easy by simply clicking Next. Example review to demonstrate location of buttons and functionality When adding the first comment, I choose Start review and for the following remarks Add to review. Finally, I click Finish review and Submit review, which will trigger one single email to the submitter with all my feedback. I try to avoid using the Add comment now option, as each such comment triggers a separate notification email to the submitter.

Reviewing and testing on your own computer locally For the most thorough review, I pull the code to my laptop for local review with git pull <remote url> <branch name>. There is no need to run git remote add, as pulling directly from a URL works too and saves you from needing to clean up old remotes later. Pulling the Merge Request contents locally allows me to build, run and inspect the code deeply and review the commits with full metadata in gitk or equivalent.
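For example, to locally review a hypothetical branch named fix-typo from the fork used earlier in this post:
git pull https://salsa.debian.org/otto/glow.git fix-typo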

Investing enough time in writing feedback, but not too much See my other post for more in-depth advice on how to structure your code review feedback. In Debian, I would emphasize patience, to allow the submitter time to rework their submission. Debian packaging is notoriously complex, and even experienced developers often need more feedback and time to get everything right. Avoid the temptation to rush the fix in yourself. In open source, Git credits are often the only salary the submitter gets. If you take the idea from the submission and implement it yourself, you rob the submitter of the opportunity to get feedback, try to improve and finally feel accomplished. Sure, it takes extra effort to give feedback, but the contributor is likely to feel ownership of their work and later return to further improve it. If a submission looks hopelessly low-quality and you feel that giving feedback is a waste of time, you can simply respond with something along the lines of: "Thanks for your contribution and interest in helping Debian. Unfortunately, looking at the commits, I see several shortcomings, and it is unlikely a normal review process is enough to help you finalize this. Please reach out to Debian Mentors to get a mentor who can give you more personalized feedback." There might also be contributors who just "dump the code", ignore your feedback and never return to finalize their submission. If a contributor does not return to finalize their submission in 3-6 months, I will in my own projects simply finalize it myself and thank the contributor in the commit message (but not mark them as the author). Despite best practices, you will occasionally still end up doing some things in vain, but that is how volunteer collaboration works. We all just need to accept that some communication will inevitably feel like wasted effort; it should be viewed as a necessary investment in order to get the benefits from the times when communication leads to real and valuable collaboration. Please just do not treat all contributors as if they are unlikely to ever contribute again; otherwise, your behavior will cause them not to contribute again. If you want to grow a tree, you need to plant several seeds.

Approving and merging Assuming review goes well and you are ready to approve, and if you are the only maintainer, you can proceed to merge right away. If there are multiple maintainers, or if you otherwise think that someone else might want to chime in before it is merged, use the Approve button to show that you approve the change but leave it unmerged. The person who approved does not necessarily have to be the person who merges. The point of the Merge Request review is not separation of duties in committing and merging; the main purpose of a code review is to have a different set of eyeballs looking at the change before it is committed into the main development branch for all eternity. In some packages, the submitter might actually merge themselves once they see another developer has approved. In some rare Debian projects, there might even be separate people taking the roles of submitting, approving and merging, but most of the time these three roles are filled by two people, either as submitter and approver+merger, or as submitter+merger and approver. If you are not a maintainer at all and do not have permissions to click Approve, simply post a comment summarizing your review, saying that you approve it and support merging it. This can help the maintainers review and merge faster.

Making a Merge Request for a new upstream version import Unlike many other Linux distributions, in Debian each source package has its own version control repository. The Debian sources consist of the upstream sources with an additional debian/ subdirectory that contains the actual Debian packaging. For the same reason, a typical Debian packaging Git repository has a debian/latest branch that has changes only in the debian/ subdirectory, while the surrounding upstream files are the actual upstream files and have the actual upstream Git history. For details, see my post explaining Debian source packages in Git. Because of this Git branch structure, importing a new upstream version will typically modify three branches: debian/latest, upstream/latest and pristine-tar. When doing a Merge Request for a new upstream import, only submit one Merge Request for one branch, which means merging your new changes to the debian/latest branch. There is no need to submit the upstream/latest branch or the pristine-tar branch. Their contents are fixed and mechanically imported into Debian. There are no changes that the reviewer in Debian can request the submitter to do on these branches, so asking for feedback and comments on them is useless. All review, comments and re-reviews concern the content of the debian/latest branch only. It is not even necessary to use the debian/latest branch for a new upstream version. Personally, I always execute the new version import (with gbp import-orig --verbose --uscan) and prepare and test everything on debian/latest, but when it is time to submit it for review, I run git checkout -b import/$(dpkg-parsechangelog -SVersion) to get a branch named e.g. import/1.0.1 and then push that for review, as sketched below.
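Putting those steps together, a hedged sketch of the import flow (the final push command and the remote name origin are my assumptions, not from the post):
gbp import-orig --verbose --uscan
git checkout -b import/$(dpkg-parsechangelog -SVersion)
git push -u origin import/$(dpkg-parsechangelog -SVersion)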

Reviewing a Merge Request for a new upstream version import Reviewing and testing a new upstream version import is currently a bit tricky, but possible. The key is to use gbp pull to automate fetching all branches from the submitter's fork. Assume you are reviewing a submission targeting the Glow package repository and there is a Merge Request from user otto's fork. As the maintainer, you would run the commands:
git remote add otto https://salsa.debian.org/otto/glow.git
gbp pull --verbose otto
If there was feedback in the first round and you later need to pull a new version for re-review, running gbp pull --force will not suffice, and this trick of manually fetching each branch and resetting it to the submitter's version is needed:
for BRANCH in pristine-tar upstream/latest debian/latest
do
git checkout $BRANCH
git reset --hard origin/$BRANCH
git pull --force https://salsa.debian.org/otto/glow.git $BRANCH
done
Once review is done, either click Approve and let the submitter push everything, or alternatively, push all the branches you pulled locally yourself. In GitLab and other forges, the Merge Request will automatically be marked as Merged once the commit ID that was the head of the Merge Request is pushed to the target branch.
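If you take the second path and push the branches yourself, a one-line sketch using the branch names from this post:
git push origin pristine-tar upstream/latest debian/latest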

Please allow enough time for everyone to participate When working on Debian, keep in mind that it is a community of volunteers. It is common for people to do Debian stuff only on weekends, so you should patiently wait for at least a week, so that enough workdays and weekend days have passed for the people you interact with to have had time to respond on their own Debian time. Having to wait may feel annoying and disruptive, but try to look at the upside: you do not need to do extra work while simply waiting for others. In some cases, that waiting can be useful thanks to the 'sleep on it' phenomenon: when you look at your own submission some days later with fresh eyes, you might notice something you overlooked earlier and improve your code change even without other people's feedback!

Contribute reviews! The last but not least suggestion is to make a habit of contributing reviews to packages you do not maintain. As we already see in large open source projects, such as the Linux kernel, they have far more code submissions than they can handle. The bottleneck for progress and maintaining quality becomes the reviews themselves. For Debian, as an organization and as a community, to be able to renew itself and grow new contributors, we need more of the senior contributors to shift focus from merely maintaining their packages and writing code to also intentionally interacting with new contributors and guiding them through the process of creating great open source software. Reviewing code is an effective way both to get tangible progress on individual development items and to transfer culture to a new generation of developers.

Why aren't 100% of all Debian source packages hosted on Salsa? As seen at trends.debian.net, more and more packages are using Salsa. Debian does not, however, have any policy about it. In fact, the Debian Policy Manual does not even mention the word Salsa anywhere. Adoption of Salsa has so far been purely organic, as in Debian each package maintainer has full freedom to choose whatever preferences they have regarding version control. I hope the trend to use Salsa will continue and that more shared workflows emerge so that collaboration gets easier. To drive the culture of using Merge Requests and more, I drafted the Debian proposal DEP-18: Encourage Continuous Integration and Merge Request based Collaboration for Debian packages. If you are active in Debian and you think DEP-18 is beneficial for Debian, please give a thumbs-up at dep-team/deps!21.

17 July 2025

Otto Kekäläinen: Debcraft - Easiest way to modify and build Debian packages

Featured image of post Debcraft - Easiest way to modify and build Debian packagesDebian packaging is notoriously hard. Far too many new contributors give up while trying, and many long-time contributors leave due to burnout from having to do too many thankless maintenance tasks. Some just skip testing their changes properly because it feels like too much toil. Debcraft is my attempt to solve this by automating all the boring stuff, making it easier to learn the correct practices, and helping new and old packagers better track changes in both source code and build artifacts.

The challenge of declarative packaging code Unlike rpm or apk packages, deb package sources by design avoid having one massive procedural packaging recipe. Instead, the packaging is defined in multiple declarative files in the debian/ subdirectory. For example, instead of a script running install -m 755 bin/btop /usr/bin/btop there is a file debian/btop.install containing the line usr/bin/btop. This makes the overall system more robust and reliable, and allows, for example, extensive static analysis to find problems without having to build the package. The notable exception is the debian/rules file, which contains procedural code that can modify any aspect of the package build. Almost all other files are declarative. The benefits include, among others, that the effect of a Debian-wide policy change can be relatively easily predicted by scanning what attributes and configurations all packages have declared. The drawback is that to understand the syntax and meaning of each file, one must know which build tools read which files, and traverse potentially multiple layers of abstraction. In my view, this is the root cause of most of the perceived complexity.
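To make the contrast concrete: where an rpm spec would run an imperative command like install -m 755 bin/btop /usr/bin/btop in its %install scriptlet, the Debian side is a declarative one-line debian/btop.install file (repeating the example from the paragraph above) that dh_install reads during the build:
debian
usr/bin/btop
Because the file is pure data, tooling can validate it, for instance checking that the listed path actually exists in the build tree, without ever executing a build.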

Common complaints about .deb packaging Related to the above, people learning Debian packaging frequently voice the following complaints:
  • Debian has too many tools to learn, often with overlapping or duplicate functionality.
  • Too much outdated and inconsistent documentation that makes learning the numerous tools needlessly hard.
  • Lack of documentation of the generally agreed best practices, mainly due to Debian's reluctance as a project to pick one tool and deprecate the alternatives.
  • Multiple layers of abstraction and lack of clarity on what any single change in the debian/ subdirectory leads to in the final package.
  • The requirement that Debian packages be developed on a Debian system.

How Debcraft solves (some of) this Debcraft is intentionally opinionated for the sake of simplicity, and makes heavy use of git, git-buildpackage and, most importantly, Linux containers, supporting both Docker and Podman. By using containers, Debcraft frees the user from the requirement of having to run Debian. This makes .deb packaging more accessible to developers running some other Linux distro, or even Mac or Windows (with WSL). Of course we want developers to run Debian (or a derivative like Ubuntu), but we want them even more to build, test and ship their software as .deb. Even for Debian/Ubuntu users, having everything done inside clean, hermetic containers of the latest target distribution version will yield more robust, secure and reproducible builds and tests. All containers are built automatically on-the-fly using best practices for layer caching, making everything easy and fast. Debcraft has simple commands to make it easy to build, rebuild, test and update packages. The most fundamental command is debcraft build, which will not only build the package but also fetch the sources if not already present, and with flags such as --distribution or --source-only it can build for any requested Debian or Ubuntu release, or generate only source packages for Debian or PPA upload purposes. For ease of use, the output is colored, includes helpful explanations of what is being done, and suggests relevant Debian documentation for more information. Most importantly, the build artifacts, along with various logs, are stored in separate directories, making it easy to compare before and after to see what changed as a result of code or dependency updates (utilizing diffoscope, among others). While the above helps debug successful builds, there is also the debcraft shell command, which makes debugging failed builds significantly easier by dropping into a shell where one can run the various dh commands one by one. Once the build works, running autopkgtests is as easy as debcraft test. As with all other commands, Debcraft is smart enough to read information such as the target distribution from the debian/changelog entry. When the package is ready to be released, the debcraft release command will create the Debian source package in the correct format and facilitate uploading it either to your Personal Package Archive (PPA) or, if you are a Debian Developer, to the official Debian archive.
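As a quick recap, the commands mentioned above combine roughly like this (flag spellings as given in this post, and sid is just an illustrative target; run debcraft --help for the authoritative list):
shell
debcraft build                      # fetch sources if needed and build the .deb in a container
debcraft build --distribution sid   # build for a specific Debian or Ubuntu release
debcraft build --source-only        # generate only the source package for uploads
debcraft shell                      # drop into a shell to debug a failing build step by step
debcraft test                       # run the package's autopkgtests
debcraft release                    # create the source package and facilitate the upload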

Automatically improve and update packages Additionally, the command debcraft improve will try to fix all issues that can be addressed automatically. It utilizes, among others, lintian-brush, codespell and debputy. This makes repetitive Debian maintenance tasks easier, such as updating the package to follow the latest Debian policies. To update the package to the latest upstream version there is also debcraft update. It reads package configuration files such as debian/gbp.conf and debian/watch, attempts to import the latest upstream version, refreshes patches, builds the package and runs the autopkgtests. If everything passes, the new version is committed. This helps automate the process of updating to new upstream versions.
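Both maintenance commands are similarly hands-off, so a typical housekeeping session could be as short as:
shell
debcraft improve   # auto-fix issues using lintian-brush, codespell, debputy and others
debcraft update    # import the latest upstream release, refresh patches, build and test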

Try out Debcraft now! On recent versions of Debian and Ubuntu, Debcraft can be installed simply by running apt install debcraft. To use Debcraft on some other distribution, or to get the latest features available in the development version, install it using:
git clone https://salsa.debian.org/debian/debcraft.git
cd debcraft
make install-local
To see exact usage instructions run debcraft --help.

Contributions welcome The current Debcraft version 0.5 still has some rough edges and missing features, but I have personally been using it for over a year to maintain all my packages in Debian. If you come across an issue, feel free to file a report at https://salsa.debian.org/debian/debcraft/-/issues or submit an improvement at https://salsa.debian.org/debian/debcraft/-/merge_requests. The code is intentionally written entirely in shell script to keep the barrier to code contribution as low as possible.
By the way, if you aspire to become a Debian Developer, and want to follow my examples in using state-of-the-art tooling and collaborate using salsa.debian.org, feel free to reach out for mentorship. I am glad to see more people contribute to Debian!

30 June 2025

Otto Kekäläinen: Corporate best practices for upstream open source contributions

Featured image of post Corporate best practices for upstream open source contributions
This post is based on a presentation given at the Validos annual members' meeting on June 25th, 2025.
When I started getting into Linux and open source over 25 years ago, the majority of the software development in this area was done by academics and hobbyists. The number of companies participating in open source has since exploded in parallel with the growth of mobile and cloud software, the majority of which is built on top of open source. For example, Android powers most mobile phones today and is based on Linux. Almost all software used to operate large cloud provider data centers, such as AWS or Google, is either open source or made in-house by the cloud provider. Pretty much all companies, regardless of industry, have been using open source software at least to some extent for years. However, the degree to which they collaborate with the upstream origins of the software varies. I encourage all companies in a technical industry to start contributing upstream. There are many benefits to having a good relationship with your upstream open source software vendors, both in the short term and especially in the long term. Moreover, with the rollout of the Cyber Resilience Act (CRA) in the EU in 2025-2027, the law will require software companies to contribute security fixes upstream to the open source projects their products use. To ensure the process is well managed, business-aligned and legally compliant, there are a few dos and don'ts to be aware of.

Maintain your SBOMs For every piece of software, regardless of whether the code was done in-house, from an open source project, or a combination of these, every company needs to produce a Software Bill of Materials (SBOM). The SBOMs provide a standardized and interoperable way to track what software and which versions are used where, what software licenses apply, who holds the copyright of which component, which security fixes have been applied and so forth. A catalog of SBOMs, or equivalent, forms the backbone of software supply-chain management in corporations.
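As a concrete illustration (the tool choice here is mine, not the author's), a standards-format SBOM for a source tree can be generated with an off-the-shelf scanner such as syft:
shell
# Scan the current directory and emit an SPDX JSON document; syft is just
# one of several SBOM generators, and other output formats are available
syft dir:. -o spdx-json > sbom.spdx.json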

Identify your strategic upstream vendors The SBOMs are likely to reveal that for any piece of non-trivial software, there are hundreds or thousands of upstream open source projects in use. Few organizations have resources to contribute to all of their upstreams. If your organization is just starting to organize upstream contribution activities, identify the key projects that have the largest impact on your business and prioritize forming a relationship with them first. Organizations with a mature contribution process will be collaborating with tens or hundreds of upstreams.

Appoint an internal coordinator and champions Having a written policy on how to contribute upstream will help ensure a consistent process and avoid common pitfalls. However, a written policy alone does not automatically translate into a well-running process. It is highly recommended to appoint at least one internal coordinator who is knowledgeable about how open source communities work, how software licensing and patents work, and is senior enough to have a good sense of what business priorities to optimize for. In small organizations it can be a single person, while larger organizations typically have a full Open Source Programs Office. This coordinator should oversee the contribution process, track all contributions made across the organization, and further optimize the process by working with stakeholders across the business, including legal experts, business owners and CTOs. The marketing and recruiting folks should also be involved, as upstream contributions will have a reputation-building aspect as well, which can be enhanced with systematic tracking and publishing of activities. Additionally, at least in the beginning, the organization should also appoint key staff members as open source champions. Implementing a new process always includes some obstacles and occasional setbacks, which may discourage employees from putting in the extra effort to reap the full long-term benefits for the company. Having named champions will empower them to make the first few contributions themselves, setting a good example and encouraging and mentoring others to contribute upstream as well.

Avoid excessive approvals To maintain a high quality bar, it is always good to have all outgoing submissions reviewed by at least one or two people. Two or three pairs of eyeballs are significantly more likely to catch issues that might slip by someone working alone. The review also slows down the process by a day or two, which gives the author time to "sleep on it" and usually helps ensure the final submission is well thought out. Do not require more than one or two reviewers, though. The marginal utility quickly goes to zero beyond a few reviewers, and at around four or five people the effect becomes negative, as the weight of each approval decreases and the reviewers begin to take less personal responsibility. Having too many people in the loop also makes each feedback round slow and expensive, to the extent that the author will hesitate to make updates and ask for re-reviews due to the costs involved. If the organization experiences setbacks due to mistakes slipping through the review process, do not respond by adding more reviewers, as that will just grind the contribution process to a halt. If there are quality concerns, invest in training for engineers, CI systems and perhaps an internal certification program for those making public upstream code submissions. A typical software engineer is more likely to invest in becoming proficient, pass a one-off certification exam, and go on to make multiple high-quality contributions than to improve, or even want to keep contributing upstream, while burdened by a heavy review process on every single submission.

Don't expect upstream to accept all code contributions Sure, identifying the root cause of a tricky bug and fixing it, or writing a new feature, requires significant effort. While an open source project will certainly appreciate the effort invested, that doesn't mean it will always welcome all contributions with open arms. Occasionally, the project won't agree that the code is correct or the feature is useful, and some contributions are bound to be rejected. You can minimize the chance of rejection by having a solid internal review process that includes assessing how the upstream community is likely to receive the proposal. Sometimes how things are communicated is more important than how they are coded. Polishing inline comments and git commit messages helps ensure high-quality communication, along with a commitment to respond quickly to review feedback and to conduct regular follow-ups until a contribution is finalized and accepted.

Start small to grow expertise and reputation In addition to keeping the open source contribution policy lean and nimble, it is also good to start practical contributions with small issues. Don't aim to contribute massive features until you have a track record of landing multiple small contributions. Keep in mind that not all open source projects are equal. Each has its own culture, written and unwritten rules, development process, documented requirements (which may be outdated) and more. Starting with a tiny contribution, even just a typo fix, is a good way to validate how code submissions, reviews and approvals work in a particular project. Once you have staff who have successfully landed smaller contributions, you can start planning larger proposals. The exact same proposal might be unsuccessful when made by a newcomer, and successful when made by a person who already has a reputation for prior high-quality work.

Embrace any and all publicity you get Some companies have concerns about their employees working in the open. Indeed, every email and code patch an employee submits, and all related discussions, become public. This may initially sound scary, but it is actually a potential source of good publicity. Employees need to be trained on how to conduct themselves publicly, and discussions about code should contain only information strictly related to the code, without any references to actual production environments or other sensitive information. In the long run, most contributing employees have a positive impact, and the company should reap the benefits of positive publicity. If there are quality issues or employee judgment issues, hiding the activity or forcing employees to contribute under pseudonyms is not a proper solution. Instead, the problems should be addressed at the root, and bad behavior corrected rather than tolerated. When people work publicly, there also tends to be some additional pride involved, which motivates them to do their best. Contributions need to be public anyway for the sponsoring corporation to later be able to claim copyright or licenses. Considering that thousands of companies participate in open source every day, the prevalence of bad publicity is quite low, and the benefits far exceed the risks.

Scratch your own itch When choosing what to contribute, select things that benefit your own company. This is not purely about being selfish: often the people working on resolving a problem they themselves suffer from are also the people with the best understanding of what the problem is and what kind of solution is optimal. Also, the issues that are most pressing for your company are more likely to be universally useful to solve than any random bug or feature request in the upstream project's issue tracker.

Remember there are many ways to help upstream While submitting code is often considered the primary way to contribute, please keep in mind there are also other highly impactful ways to contribute. Submitting high-quality bug reports will help developers quickly identify and prioritize issues to fix. Providing good research, benchmarks, statistics or feedback helps guide development and enables the project to make better design decisions. Documentation, translations, organizing events and providing marketing support can help increase adoption and strengthen the long-term viability of the project. In some of the largest open source projects there are already far more pending contributions than the core maintainers can process. Therefore, developers who contribute code should also get into the habit of contributing reviews. As Linus's law states, given enough eyeballs, all bugs are shallow. Reviewing other contributors' submissions will help improve quality, and also alleviate the pressure on core maintainers who would otherwise be the only ones providing feedback. Reviewing code submitted by others is also a great learning opportunity for the reviewer. The reviewer does not need to be better than the submitter: any feedback is useful, and merely posting review feedback is not the same thing as making an approval decision. Many projects are also happy to accept monetary support and sponsorships. Some offer specific perks in return. By human nature, the largest sponsors always get their voice heard in important decisions, as no open source project wants to take actions that scare away major financial contributors.

Starting is the hardest part Long-term success in open source comes from a positive feedback loop of an ever-increasing number of users and collaborators. As seen in the examples of countless corporations contributing to open source, the benefits are concrete, and the process usually runs well after the initial ramp-up and organizational learning phase has passed. In open source ecosystems, contributing upstream should be as natural as paying vendors in any business. If you are using open source and not contributing at all, you likely have latent business risks without realizing it. You don't want to wake up one morning to learn that your top talent left because they were forbidden from participating in open source for the company's benefit, or that you were fined due to CRA violations and mismanagement in sharing security fixes with the correct parties. The sooner you start with the process, the less likely those risks will materialize.

26 May 2025

Otto Kekäläinen: Creating Debian packages from upstream Git

Featured image of post Creating Debian packages from upstream GitIn this post, I demonstrate the optimal workflow for creating new Debian packages in 2025, preserving the upstream git history. The motivation for this is to lower the barrier for sharing improvements to and from upstream, and to improve software provenance and supply-chain security by making it easy to inspect every change at any level using standard git tooling. The key elements of this workflow are outlined in the steps below. To make the instructions concrete enough that anyone can repeat all the steps themselves on a real package, I demonstrate them by packaging the command-line tool Entr. It is written in C, has very few dependencies, and its final Debian source package structure is simple, yet it exemplifies all the important parts that go into a complete Debian package:
  1. Creating a new packaging repository and publishing it under your personal namespace on salsa.debian.org.
  2. Using dh_make to create the initial Debian packaging.
  3. Posting the first draft of the Debian packaging as a Merge Request (MR) and using Salsa CI to verify Debian packaging quality.
  4. Running local builds efficiently and iterating on the packaging process.

Create new Debian packaging repository from the existing upstream project git repository First, create a new empty directory, then clone the upstream Git repository inside it:
shell
mkdir debian-entr
cd debian-entr
git clone --origin upstreamvcs --branch master \
 --single-branch https://github.com/eradman/entr.git
Using a clean directory makes it easier to inspect the build artifacts of a Debian package, which will be output in the parent directory of the Debian source directory. The extra parameters given to git clone lay the foundation for the Debian packaging git repository structure where the upstream git remote name is upstreamvcs. Only the upstream main branch is tracked to avoid cluttering git history with upstream development branches that are irrelevant for packaging in Debian. Next, enter the git repository directory and list the git tags. Pick the latest upstream release tag as the commit to start the branch upstream/latest. This latest refers to the upstream release, not the upstream development branch. Immediately after, branch off the debian/latest branch, which will have the actual Debian packaging files in the debian/ subdirectory.
shell
cd entr
git tag # shows the latest upstream release tag was '5.6'
git checkout -b upstream/latest 5.6
git checkout -b debian/latest
%%{init: { 'gitGraph': { 'mainBranchName': 'master' } } }%%
gitGraph
   checkout master
   commit id: "Upstream 5.6 release" tag: "5.6"
   branch upstream/latest
   checkout upstream/latest
   commit id: "New upstream version 5.6" tag: "upstream/5.6"
   branch debian/latest
   checkout debian/latest
   commit id: "Initial Debian packaging"
   commit id: "Additional change 1"
   commit id: "Additional change 2"
   commit id: "Additional change 3"
At this point, the repository is structured according to DEP-14 conventions, ensuring a clear separation between upstream and Debian packaging changes, but there are no Debian changes yet. Next, add the Salsa repository as a new remote called origin, matching the default remote name in git.
shell
git remote add origin git@salsa.debian.org:otto/entr-demo.git
git push --set-upstream origin debian/latest
This is an important preparation step to later be able to create a Merge Request on Salsa that targets the debian/latest branch, which does not yet have any debian/ directory.

Launch a Debian Sid (unstable) container to run builds in To ensure that all packaging tools are of the latest versions, run everything inside a fresh Sid container. This has two benefits: you are guaranteed to have the most up-to-date toolchain, and your host system stays clean without getting polluted by various extra packages. Additionally, this approach works even if your host system is not Debian/Ubuntu.
shell
cd ..
podman run --interactive --tty --rm --shm-size=1G --cap-add SYS_PTRACE \
 --env='DEB*' --volume=$PWD:/tmp/test --workdir=/tmp/test debian:sid bash
Note that the container should be started from the parent directory of the git repository, not inside it. The --volume parameter bind-mounts the current directory inside the container, so all files created and modified are on the host system and will persist after the container shuts down. Once inside the container, install the basic dependencies:
shell
apt update -q && apt install -q --yes git-buildpackage dpkg-dev dh-make

Automate creating the debian/ files with dh-make To create the files needed for the actual Debian packaging, use dh_make:
shell
# dh_make --packagename entr_5.6 --single --createorig
Maintainer Name : Otto Kekäläinen
Email-Address : otto@debian.org
Date : Sat, 15 Feb 2025 01:17:51 +0000
Package Name : entr
Version : 5.6
License : blank
Package Type : single
Are the details correct? [Y/n/q]

Done. Please edit the files in the debian/ subdirectory now.
Due to how dh_make works, the package name and version need to be written as a single underscore-separated string. In this case, you should choose --single to specify that the package type is a single binary package. Other options would be --library for library packages (see the libgda5 sources as an example) or --indep (see the dns-root-data sources as an example). The --createorig option will create a mock upstream release tarball (entr_5.6.orig.tar.xz) from the current release directory. This is necessary for historical reasons: dh_make predates the time when git repositories became common, and Debian source packages were traditionally based on upstream release tarballs (e.g. *.tar.gz). At this stage, a debian/ directory has been created with template files, and you can start modifying the files and iterating towards actual working packaging.
shell
git add debian/
git commit -a -m "Initial Debian packaging"

Review the files The full list of files after the above steps with dh_make would be:
 -- entr
   -- LICENSE
   -- Makefile.bsd
   -- Makefile.linux
   -- Makefile.linux-compat
   -- Makefile.macos
   -- NEWS
   -- README.md
   -- configure
   -- data.h
   -- debian
     -- README.Debian
     -- README.source
     -- changelog
     -- control
     -- copyright
     -- gbp.conf
     -- entr-docs.docs
     -- entr.cron.d.ex
     -- entr.doc-base.ex
     -- manpage.1.ex
     -- manpage.md.ex
     -- manpage.sgml.ex
     -- manpage.xml.ex
     -- postinst.ex
     -- postrm.ex
     -- preinst.ex
     -- prerm.ex
     -- rules
     -- salsa-ci.yml.ex
     -- source
       -- format
     -- upstream
       -- metadata.ex
     -- watch.ex
   -- entr.1
   -- entr.c
   -- missing
     -- compat.h
     -- kqueue_inotify.c
     -- strlcpy.c
     -- sys
     -- event.h
   -- status.c
   -- status.h
   -- system_test.sh
 -- entr_5.6.orig.tar.xz
You can browse these files in the demo repository. The mandatory files in the debian/ directory are:
  • changelog,
  • control,
  • copyright,
  • and rules.
All the other files have been created for convenience so the packager has template files to work from. The files with the suffix .ex are example files that won't have any effect until their content is adjusted and the suffix removed. For detailed explanations of the purpose of each file in the debian/ subdirectory, see the following resources:
  • The Debian Policy Manual: Describes the structure of the operating system, the package archive and requirements for packages to be included in the Debian archive.
  • The Developer's Reference: A collection of best practices and process descriptions Debian packagers are expected to follow while interacting with one another.
  • Debhelper man pages: Detailed information of how the Debian package build system works, and how the contents of the various files in debian/ affect the end result.
As Entr, the package used in this example, is a real package that already exists in the Debian archive, you may want to browse the actual Debian packaging source at https://salsa.debian.org/debian/entr/-/tree/debian/latest/debian for reference. Most of these files have standardized formatting conventions to make collaboration easier. To automatically format the files following the most popular conventions, simply run wrap-and-sort -vast or debputy reformat --style=black.
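Both formatters operate in place on the files under debian/, so it is convenient to run them right before committing:
shell
wrap-and-sort -vast              # wrap and sort control fields and file lists
debputy reformat --style=black   # alternative reformatter shipped with debputy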

Identify build dependencies The most common reason for builds to fail is missing dependencies. The easiest way to identify which Debian package ships a required dependency is apt-file. If, for example, a build fails complaining that pcre2posix.h cannot be found or that libpcre2-posix.so is missing, you can use these commands:
shell
$ apt install -q --yes apt-file && apt-file update
$ apt-file search pcre2posix.h
libpcre2-dev: /usr/include/pcre2posix.h
$ apt-file search libpcre2-posix.so
libpcre2-dev: /usr/lib/x86_64-linux-gnu/libpcre2-posix.so
libpcre2-posix3: /usr/lib/x86_64-linux-gnu/libpcre2-posix.so.3
libpcre2-posix3: /usr/lib/x86_64-linux-gnu/libpcre2-posix.so.3.0.6
The output above implies that debian/control should be extended to declare a Build-Depends: libpcre2-dev relationship. There is also dpkg-depcheck, which uses strace to trace the files the build process tries to access and lists which Debian packages those files belong to. Example usage:
shell
dpkg-depcheck -b debian/rules build

Build the Debian sources to generate the .deb package After the first pass of refining the contents of the files in debian/, test the build by running dpkg-buildpackage inside the container:
shell
dpkg-buildpackage -uc -us -b
The options -uc -us will skip signing the resulting Debian source package and other build artifacts. The -b option will skip creating a source package and only build the (binary) *.deb packages. The output is very verbose and gives a large amount of context about what is happening during the build to make debugging build failures easier. In the build log of entr you will see for example the line dh binary --buildsystem=makefile. This and other dh commands can also be run manually if there is a need to quickly repeat only a part of the build while debugging build failures. To see what files were generated or modified by the build simply run git status --ignored:
shell
$ git status --ignored
On branch debian/latest

Untracked files:
 (use "git add <file>..." to include in what will be committed)
 debian/debhelper-build-stamp
 debian/entr.debhelper.log
 debian/entr.substvars
 debian/files

Ignored files:
 (use "git add -f <file>..." to include in what will be committed)
 Makefile
 compat.c
 compat.o
 debian/.debhelper/
 debian/entr/
 entr
 entr.o
 status.o
Re-running dpkg-buildpackage will include running the command dh clean, which, assuming it is configured correctly in the debian/rules file, will reset the source directory to its original pristine state. The same can of course also be done with regular git commands: git reset --hard; git clean -fdx. To avoid accidentally committing unnecessary build artifacts in git, a debian/.gitignore can be useful; it would typically include all four files listed as untracked above. After a successful build you would have the following files:
shell
 -- entr
   -- LICENSE
   -- Makefile -> Makefile.linux
   -- Makefile.bsd
   -- Makefile.linux
   -- Makefile.linux-compat
   -- Makefile.macos
   -- NEWS
   -- README.md
   -- compat.c
   -- compat.o
   -- configure
   -- data.h
   -- debian
     -- README.source.md
     -- changelog
     -- control
     -- copyright
     -- debhelper-build-stamp
     -- docs
     -- entr
       -- DEBIAN
         -- control
         -- md5sums
       -- usr
         -- bin
           -- entr
         -- share
           -- doc
             -- entr
               -- NEWS.gz
               -- README.md
               -- changelog.Debian.gz
               -- copyright
           -- man
             -- man1
               -- entr.1.gz
     -- entr.debhelper.log
     -- entr.substvars
     -- files
     -- gbp.conf
     -- patches
       -- PR149-expand-aliases-in-system-test-script.patch
       -- series
       -- system-test-skip-no-tty.patch
       -- system-test-with-system-binary.patch
     -- rules
     -- salsa-ci.yml
     -- source
       -- format
     -- tests
       -- control
     -- upstream
       -- metadata
       -- signing-key.asc
     -- watch
   -- entr
   -- entr.1
   -- entr.c
   -- entr.o
   -- missing
     -- compat.h
     -- kqueue_inotify.c
     -- strlcpy.c
     -- sys
     -- event.h
   -- status.c
   -- status.h
   -- status.o
   -- system_test.sh
 -- entr-dbgsym_5.6-1_amd64.deb
 -- entr_5.6-1.debian.tar.xz
 -- entr_5.6-1.dsc
 -- entr_5.6-1_amd64.buildinfo
 -- entr_5.6-1_amd64.changes
 -- entr_5.6-1_amd64.deb
 -- entr_5.6.orig.tar.xz
The contents of debian/entr are essentially what goes into the resulting entr_5.6-1_amd64.deb package. Familiarizing yourself with the majority of the files in the original upstream source as well as all the resulting build artifacts is time consuming, but it is a necessary investment to get high-quality Debian packages. There are also tools such as Debcraft that automate generating the build artifacts in separate output directories for each build, thus making it easy to compare the changes to correlate what change in the Debian packaging led to what change in the resulting build artifacts.
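Returning to the debian/.gitignore suggested above, a minimal version covering just the four untracked files from the git status output would be:
debian
# debian/.gitignore: debhelper build artifacts that should never be committed
debhelper-build-stamp
entr.debhelper.log
entr.substvars
files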

Re-run the initial import with git-buildpackage When upstreams publish releases as tarballs, they should also be imported, for optimal software supply-chain security, in particular if upstream also publishes cryptographic signatures that can be used to verify the authenticity of the tarballs. To achieve this, the files debian/watch, debian/upstream/signing-key.asc and debian/gbp.conf need to be present with the correct options. In the gbp.conf file, ensure you have the correct options based on the following questions (a combined sketch follows the list):
  1. Does upstream release tarballs? If so, enforce pristine-tar = True.
  2. Does upstream sign the tarballs? If so, configure explicit signature checking with upstream-signatures = on.
  3. Does upstream have a git repository, and does it have release git tags? If so, configure the release git tag format, e.g. upstream-vcs-tag = %(version%~%.)s.
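Putting the three options together, a minimal debian/gbp.conf sketch could look as follows; the branch names follow the DEP-14 layout used in this post, and option spellings should be verified against the gbp.conf man page:
debian
[DEFAULT]
debian-branch = debian/latest
upstream-branch = upstream/latest
pristine-tar = True
upstream-signatures = on
upstream-vcs-tag = %(version%~%.)s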
To validate that the above files are working correctly, run gbp import-orig with the current version explicitly defined:
shell
$ gbp import-orig --uscan --upstream-version 5.6
gbp:info: Launching uscan...
gpgv: Signature made 7. Aug 2024 07.43.27 PDT
gpgv: using RSA key 519151D83E83D40A232B4D615C418B8631BC7C26
gpgv: Good signature from "Eric Radman <ericshane@eradman.com>"
gbp:info: Using uscan downloaded tarball ../entr_5.6.orig.tar.gz
gbp:info: Importing '../entr_5.6.orig.tar.gz' to branch 'upstream/latest'...
gbp:info: Source package is entr
gbp:info: Upstream version is 5.6
gbp:info: Replacing upstream source on 'debian/latest'
gbp:info: Running Postimport hook
gbp:info: Successfully imported version 5.6 of ../entr_5.6.orig.tar.gz
As the original packaging was done based on the upstream release git tag, the above command will fetch the tarball release, create the pristine-tar branch, and store the tarball delta on it. This command will also attempt to create the tag upstream/5.6 on the upstream/latest branch.
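If you want to double-check the stored delta, pristine-tar can regenerate the tarball from the git contents; comparing it against the downloaded original is a quick sanity check (the /tmp path is arbitrary):
shell
pristine-tar checkout /tmp/entr_5.6.orig.tar.gz
cmp ../entr_5.6.orig.tar.gz /tmp/entr_5.6.orig.tar.gz && echo identical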

Import new upstream versions in the future Forking the upstream git repository, creating the initial packaging and creating the DEP-14 branch structure are all one-off work needed only when creating the initial packaging. Going forward, to import new upstream releases, one would simply run git fetch upstreamvcs; gbp import-orig --uscan, which fetches the upstream git tags, checks for new upstream tarballs, and automatically downloads, verifies and imports the new version. See the galera-4-demo example in the Debian source packages in git explained post for a demo you can run yourself and examine in detail. You can also try running gbp import-orig --uscan without specifying a version: it will notice that a newer Entr version (5.7) is available, and download, verify and import it.
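In other words, the recurring update ritual reduces to two commands:
shell
git fetch upstreamvcs     # refresh the upstream git tags
gbp import-orig --uscan   # check debian/watch, then download, verify and import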

Build using git-buildpackage From this stage onwards you should build the package using gbp buildpackage, which will do a more comprehensive build.
shell
gbp buildpackage -uc -us
The git-buildpackage build also includes running Lintian to find potential Debian policy violations in the sources or in the resulting .deb binary packages. Many Debian Developers run lintian -EviIL +pedantic after every build to check that there are no new nags, and to validate that changes intended to fix previous Lintian nags actually did so.

Open a Merge Request on Salsa for Debian packaging review Getting everything perfectly right takes a lot of effort, and may require reaching out to an experienced Debian Developer for review and guidance. Thus, you should aim to publish your initial packaging work on Salsa, Debian's GitLab instance, for review and feedback as early as possible. For somebody to be able to easily see what you have done, you should rename your debian/latest branch to another name, for example next/debian/latest, and open a Merge Request that targets the debian/latest branch on your Salsa fork, which still has only the unmodified upstream files. If you have followed the workflow in this post so far, you can simply run:
  1. git checkout -b next/debian/latest
  2. git push --set-upstream origin next/debian/latest
  3. Open in a browser the URL visible in the git remote response
  4. Write the Merge Request description in case the default text from your commit is not enough
  5. Mark the MR as Draft using the checkbox
  6. Publish the MR and request feedback
Once a Merge Request exists, discussion regarding what additional changes are needed can be conducted as MR comments. With an MR, you can easily iterate on the contents of next/debian/latest, rebase, force push, and request re-review as many times as you want. While at it, make sure the Settings > CI/CD page has under CI/CD configuration file the value debian/salsa-ci.yml so that the CI can run and give you immediate automated feedback. For an example of an initial packaging Merge Request, see https://salsa.debian.org/otto/entr-demo/-/merge_requests/1.

Open a Merge Request / Pull Request to fix upstream code Due to the high quality requirements in Debian, it is fairly common that while doing the initial Debian packaging of an open source project, issues are found that stem from the upstream source code. While it is possible to carry extra patches in Debian, it is not good practice to deviate too much from upstream code with custom Debian patches. Instead, the Debian packager should try to get the fixes applied directly upstream. Using git-buildpackage patch queues is the most convenient way to make modifications to the upstream source code so that they automatically convert into Debian patches (stored at debian/patches), and can also easily be submitted upstream as any regular git commit (and rebased and resubmitted many times over). First, decide if you want to work out of the upstream development branch and later cherry-pick to the Debian packaging branch, or work out of the Debian packaging branch and cherry-pick to an upstream branch. The example below starts from the upstream development branch and then cherry-picks the commit into the git-buildpackage patch queue:
shell
git checkout -b bugfix-branch master
nano entr.c
make
./entr # verify change works as expected
git commit -a -m "Commit title" -m "Commit body"
git push # submit upstream
gbp pq import --force --time-machine=10
git cherry-pick <commit id>
git commit --amend # extend commit message with DEP-3 metadata
gbp buildpackage -uc -us -b
./entr # verify change works as expected
gbp pq export --drop --commit
git commit --amend # Write commit message along lines "Add patch to .."
The example below starts by making the fix on a git-buildpackage patch queue branch, and then cherry-picking it onto the upstream development branch:
shell
gbp pq import --force --time-machine=10
nano entr.c
git commit -a -m "Commit title" -m "Commit body"
gbp buildpackage -uc -us -b
./entr # verify change works as expected
gbp pq export --drop --commit
git commit --amend # Write commit message along lines "Add patch to .."
git checkout -b bugfix-branch master
git cherry-pick <commit id>
git commit --amend # prepare commit message for upstream submission
git push # submit upstream
The key git-buildpackage commands to enter and exit the patch-queue mode are:
shell
gbp pq import --force --time-machine=10
gbp pq export --drop --commit
%%{init: { 'gitGraph': { 'mainBranchName': 'debian/latest' } } }%%
gitGraph
   checkout debian/latest
   commit id: "Initial packaging"
   branch patch-queue/debian/latest
   checkout patch-queue/debian/latest
   commit id: "Delete debian/patches/..."
   commit id: "Patch 1 title"
   commit id: "Patch 2 title"
   commit id: "Patch 3 title"
These can be run at any time, regardless of whether any debian/patches existed before, whether existing patches applied cleanly, or whether old patch queue branches were lying around. Note that the extra -b in gbp buildpackage -uc -us -b instructs it to build only binary packages, avoiding any nags from dpkg-source about modifications in the upstream sources while building in patches-applied mode.

Programming-language specific dh-make alternatives As each programming language has its own way of building source code, and many other conventions regarding file layout and more, Debian has multiple custom tools to create new Debian source packages for specific programming languages. Notably, Python does not have its own tool, but there is a dh_make --python option for Python support directly in dh_make itself. The list is not complete and many more tools exist. For some languages there are even competing options: for Go, in addition to dh-make-golang, there is also Gophian. When learning Debian packaging, there is no need to learn these tools upfront. Being aware that they exist is enough, and one can learn them if and when one starts to package a project in a new programming language.

The difference between the source git repository vs source packages vs binary packages As seen in the earlier example, running gbp buildpackage on the Entr packaging repository will result in several files:
entr_5.6-1_amd64.changes
entr_5.6-1_amd64.deb
entr_5.6-1.debian.tar.xz
entr_5.6-1.dsc
entr_5.6.orig.tar.gz
entr_5.6.orig.tar.gz.asc
The entr_5.6-1_amd64.deb is the binary package, which can be installed on a Debian/Ubuntu system. The rest of the files constitute the source package. To do a source-only build, run gbp buildpackage -S and note the files produced:
entr_5.6-1_source.changes
entr_5.6-1.debian.tar.xz
entr_5.6-1.dsc
entr_5.6.orig.tar.gz
entr_5.6.orig.tar.gz.asc
The source package files can be used to build the binary .deb for amd64, or for any architecture the package supports. It is important to grasp that the Debian source package is the preferred form for building the binary packages on the various Debian build systems, and that the Debian source package is not the same thing as the Debian packaging git repository contents.
flowchart LR
   git[Git repository<br>branch debian/latest] -- "gbp buildpackage -S" --> src[Source Package<br>.dsc + .tar.xz]
   src -- "dpkg-buildpackage" --> bin[Binary Packages<br>.deb]
If the package is large and complex, the build could result in multiple binary packages. One set of package definition files in debian/ will however only ever result in a single source package.
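For illustration, here is a hypothetical debian/control skeleton in which one source stanza produces two binary packages; all names in it are made up:
debian
Source: example
Section: utils
Priority: optional
Maintainer: Jane Doe <jane@example.org>
Build-Depends: debhelper-compat (= 13)

Package: example
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: command-line tool built from the example source
 Longer description placeholder for the tool.

Package: libexample1
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: shared library built from the same example source
 Longer description placeholder for the library.
A single build of such a source package emits both example_*.deb and libexample1_*.deb, yet still exactly one .dsc.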

Option to repackage source packages with Files-Excluded lists in the debian/copyright file Some upstream projects may include binary files in their release, or other undesirable content that needs to be omitted from the source package in Debian. The easiest way to filter them out is by adding a Files-Excluded field to the debian/copyright file, listing the undesired files. The debian/copyright file is read by uscan, which will repackage the upstream sources on-the-fly when importing new upstream releases. For a real-life example, see the debian/copyright file in the Godot package, which lists:
debian
Files-Excluded: platform/android/java/gradle/wrapper/gradle-wrapper.jar
The resulting repackaged upstream source tarball, as well as the upstream version component, will have an extra +ds to signify that it is not the true original upstream source but has been modified by Debian:
godot_4.3+ds.orig.tar.xz
godot_4.3+ds-1_amd64.deb

Creating one Debian source package from multiple upstream source packages is also possible In some rare cases the upstream project may be split across multiple git repositories, or the upstream release may consist of multiple components, each in its own separate tarball. Usually these are very large projects that get some benefit from releasing components separately. If in Debian these are deemed to go into a single source package, that is technically possible using the component system in git-buildpackage and uscan. For an example, see the gbp.conf and watch files in the node-cacache package. Using this type of structure should be a last resort, as it creates complexity and inter-dependencies that are bound to cause issues later on. It is usually better to work with upstream and champion universal best practices with clear releases and version schemes.

When not to start the Debian packaging repository as a fork of the upstream one Not all upstreams use Git for version control. It is by far the most popular, but there are still some that use e.g. Subversion or Mercurial. Who knows, maybe in the future some new version control system will start to compete with Git. There are also projects that use Git in massive monorepos, or with complex submodule setups, that invalidate the basic assumptions required to map an upstream Git repository into a Debian packaging repository. In those cases one can't use a debian/latest branch on a clone of the upstream git repository as the starting point for the Debian packaging, but must revert to the traditional way of starting from an upstream release tarball with gbp import-orig package-1.0.tar.gz.

Conclusion Created in August 1993, Debian is one of the oldest Linux distributions. In the 32 years since its inception, the .deb packaging format and the tooling to work with it have evolved through several generations. In the past 10 years, more and more Debian Developers have converged on certain core practices, as evidenced by https://trends.debian.net/, but there is still a lot of variance in workflows even for identical tasks. Hopefully you find this post useful in giving practical guidance on how exactly to do the most common things when packaging software for Debian. Happy packaging!

25 May 2025

Otto Kek l inen: New Debian package creation from upstream git repository

Featured image of post New Debian package creation from upstream git repositoryIn this post, I demonstrate the optimal workflow for creating new Debian packages in 2025, preserving the upstream git history. The motivation for this is to lower the barrier for sharing improvements to and from upstream, and to improve software provenance and supply-chain security by making it easy to inspect every change at any level using standard git tooling. Key elements of this workflow include: To make the instructions so concrete that anyone can repeat all the steps themselves on a real package, I demonstrate the steps by packaging the command-line tool Entr. It is written in C, has very few dependencies, and its final Debian source package structure is simple, yet exemplifies all the important parts that go into a complete Debian package:
  1. Creating a new packaging repository and publishing it under your personal namespace on salsa.debian.org.
  2. Using dh_make to create the initial Debian packaging.
  3. Posting the first draft of the Debian packaging as a Merge Request (MR) and using Salsa CI to verify Debian packaging quality.
  4. Running local builds efficiently and iterating on the packaging process.

Create new Debian packaging repository from the existing upstream project git repository First, create a new empty directory, then clone the upstream Git repository inside it:
shell
mkdir debian-entr
cd debian-entr
git clone --origin upstreamvcs --branch master \
 --single-branch https://github.com/eradman/entr.git
Using a clean directory makes it easier to inspect the build artifacts of a Debian package, which will be output in the parent directory of the Debian source directory. The extra parameters given to git clone lay the foundation for the Debian packaging git repository structure where the upstream git remote name is upstreamvcs. Only the upstream main branch is tracked to avoid cluttering git history with upstream development branches that are irrelevant for packaging in Debian. Next, enter the git repository directory and list the git tags. Pick the latest upstream release tag as the commit to start the branch upstream/latest. This latest refers to the upstream release, not the upstream development branch. Immediately after, branch off the debian/latest branch, which will have the actual Debian packaging files in the debian/ subdirectory.
shell
cd entr
git tag # shows the latest upstream release tag was '5.6'
git checkout -b upstream/latest 5.6
git checkout -b debian/latest
%% init:   'gitGraph':   'mainBranchName': 'master'      %%
gitGraph:
checkout master
commit id: "Upstream 5.6 release" tag: "5.6"
branch upstream/latest
checkout upstream/latest
commit id: "New upstream version 5.6" tag: "upstream/5.6"
branch debian/latest
checkout debian/latest
commit id: "Initial Debian packaging"
commit id: "Additional change 1"
commit id: "Additional change 2"
commit id: "Additional change 3"
At this point, the repository is structured according to DEP-14 conventions, ensuring a clear separation between upstream and Debian packaging changes, but there are no Debian changes yet. Next, add the Salsa repository as a new remote which called origin, the same as the default remote name in git.
shell
git remote add origin git@salsa.debian.org:otto/entr-demo.git
git push --set-upstream origin debian/latest
This is an important preparation step to later be able to create a Merge Request on Salsa that targets the debian/latest branch, which does not yet have any debian/ directory.

Launch a Debian Sid (unstable) container to run builds in To ensure that all packaging tools are of the latest versions, run everything inside a fresh Sid container. This has two benefits: you are guaranteed to have the most up-to-date toolchain, and your host system stays clean without getting polluted by various extra packages. Additionally, this approach works even if your host system is not Debian/Ubuntu.
shell
cd ..
podman run --interactive --tty --rm --shm-size=1G --cap-add SYS_PTRACE \
 --env='DEB*' --volume=$PWD:/tmp/test --workdir=/tmp/test debian:sid bash
Note that the container should be started from the parent directory of the git repository, not inside it. The --volume parameter will loop-mount the current directory inside the container. Thus all files created and modified are on the host system, and will persist after the container shuts down. Once inside the container, install the basic dependencies:
shell
apt update -q && apt install -q --yes git-buildpackage dpkg-dev dh-make

Automate creating the debian/ files with dh-make To create the files needed for the actual Debian packaging, use dh_make:
shell
# dh_make --packagename entr_5.6 --single --createorig
Maintainer Name : Otto Kek l inen
Email-Address : otto@debian.org
Date : Sat, 15 Feb 2025 01:17:51 +0000
Package Name : entr
Version : 5.6
License : blank
Package Type : single
Are the details correct? [Y/n/q]

Done. Please edit the files in the debian/ subdirectory now.
Due to how dh_make works, the package name and version need to be written as a single underscore separated string. In this case, you should choose --single to specify that the package type is a single binary package. Other options would be --library for library packages (see libgda5 sources as an example) or --indep (see dns-root-data sources as an example). The --createorig will create a mock upstream release tarball (entr_5.6.orig.tar.xz) from the current release directory, which is necessary due to historical reasons and how dh_make worked before git repositories became common and Debian source packages were based off upstream release tarballs (e.g. *.tar.gz). At this stage, a debian/ directory has been created with template files, and you can start modifying the files and iterating towards actual working packaging.
shell
git add debian/
git commit -a -m "Initial Debian packaging"

Review the files The full list of files after the above steps with dh_make would be:
 -- entr
   -- LICENSE
   -- Makefile.bsd
   -- Makefile.linux
   -- Makefile.linux-compat
   -- Makefile.macos
   -- NEWS
   -- README.md
   -- configure
   -- data.h
   -- debian
     -- README.Debian
     -- README.source
     -- changelog
     -- control
     -- copyright
     -- gbp.conf
     -- entr-docs.docs
     -- entr.cron.d.ex
     -- entr.doc-base.ex
     -- manpage.1.ex
     -- manpage.md.ex
     -- manpage.sgml.ex
     -- manpage.xml.ex
     -- postinst.ex
     -- postrm.ex
     -- preinst.ex
     -- prerm.ex
     -- rules
     -- salsa-ci.yml.ex
     -- source
       -- format
     -- upstream
       -- metadata.ex
     -- watch.ex
   -- entr.1
   -- entr.c
   -- missing
     -- compat.h
     -- kqueue_inotify.c
     -- strlcpy.c
     -- sys
     -- event.h
   -- status.c
   -- status.h
   -- system_test.sh
 -- entr_5.6.orig.tar.xz
You can browse these files in the demo repository. The mandatory files in the debian/ directory are:
  • changelog,
  • control,
  • copyright,
  • and rules.
All the other files have been created for convenience so the packager has template files to work from. The files with the suffix .ex are example files that won t have any effect until their content is adjusted and the suffix removed. For detailed explanations of the purpose of each file in the debian/ subdirectory, see the following resources:
  • The Debian Policy Manual: Describes the structure of the operating system, the package archive and requirements for packages to be included in the Debian archive.
  • The Developer s Reference: A collection of best practices and process descriptions Debian packagers are expected to follow while interacting with one another.
  • Debhelper man pages: Detailed information of how the Debian package build system works, and how the contents of the various files in debian/ affect the end result.
As Entr, the package used in this example, is a real package that already exists in the Debian archive, you may want to browse the actual Debian packaging source at https://salsa.debian.org/debian/entr/-/tree/debian/latest/debian for reference. Most of these files have standardized formatting conventions to make collaboration easier. To automatically format the files following the most popular conventions, simply run wrap-and-sort -vast or debputy reformat --style=black.

Identify build dependencies The most common reason for builds to fail is missing dependencies. The easiest way to identify which Debian package ships the required dependency is using apt-file. If, for example, a build fails complaining that pcre2posix.h cannot be found or that libcre2-posix.so is missing, you can use these commands:
shell
$ apt install -q --yes apt-file && apt-file update
$ apt-file search pcre2posix.h
libpcre2-dev: /usr/include/pcre2posix.h
$ apt-file search libpcre2-posix.so
libpcre2-dev: /usr/lib/x86_64-linux-gnu/libpcre2-posix.so
libpcre2-posix3: /usr/lib/x86_64-linux-gnu/libpcre2-posix.so.3
libpcre2-posix3: /usr/lib/x86_64-linux-gnu/libpcre2-posix.so.3.0.6
The output above implies that the debian/control should be extended to define a Build-Depends: libpcre2-dev relationship. There is also dpkg-depcheck that uses strace to trace the files the build process tries to access, and lists what Debian packages those files belong to. Example usage:
shell
dpkg-depcheck -b debian/rules build

Build the Debian sources to generate the .deb package After the first pass of refining the contents of the files in debian/, test the build by running dpkg-buildpackage inside the container:
shell
dpkg-buildpackage -uc -us -b
The options -uc -us will skip signing the resulting Debian source package and other build artifacts. The -b option will skip creating a source package and only build the (binary) *.deb packages. The output is very verbose and gives a large amount of context about what is happening during the build to make debugging build failures easier. In the build log of entr you will see for example the line dh binary --buildsystem=makefile. This and other dh commands can also be run manually if there is a need to quickly repeat only a part of the build while debugging build failures. To see what files were generated or modified by the build simply run git status --ignored:
shell
$ git status --ignored
On branch debian/latest

Untracked files:
 (use "git add <file>..." to include in what will be committed)
 debian/debhelper-build-stamp
 debian/entr.debhelper.log
 debian/entr.substvars
 debian/files

Ignored files:
 (use "git add -f <file>..." to include in what will be committed)
 Makefile
 compat.c
 compat.o
 debian/.debhelper/
 debian/entr/
 entr
 entr.o
 status.o
Re-running dpkg-buildpackage will include running the command dh clean, which assuming it is configured correctly in the debian/rules file will reset the source directory to the original pristine state. The same can of course also be done with regular git commands git reset --hard; git clean -fdx. To avoid accidentally committing unnecessary build artifacts in git, a debian/.gitignore can be useful and it would typically include all four files listed as untracked above. After a successful build you would have the following files:
shell
.
├── entr
│   ├── LICENSE
│   ├── Makefile -> Makefile.linux
│   ├── Makefile.bsd
│   ├── Makefile.linux
│   ├── Makefile.linux-compat
│   ├── Makefile.macos
│   ├── NEWS
│   ├── README.md
│   ├── compat.c
│   ├── compat.o
│   ├── configure
│   ├── data.h
│   ├── debian
│   │   ├── README.source.md
│   │   ├── changelog
│   │   ├── control
│   │   ├── copyright
│   │   ├── debhelper-build-stamp
│   │   ├── docs
│   │   ├── entr
│   │   │   ├── DEBIAN
│   │   │   │   ├── control
│   │   │   │   └── md5sums
│   │   │   └── usr
│   │   │       ├── bin
│   │   │       │   └── entr
│   │   │       └── share
│   │   │           ├── doc
│   │   │           │   └── entr
│   │   │           │       ├── NEWS.gz
│   │   │           │       ├── README.md
│   │   │           │       ├── changelog.Debian.gz
│   │   │           │       └── copyright
│   │   │           └── man
│   │   │               └── man1
│   │   │                   └── entr.1.gz
│   │   ├── entr.debhelper.log
│   │   ├── entr.substvars
│   │   ├── files
│   │   ├── gbp.conf
│   │   ├── patches
│   │   │   ├── PR149-expand-aliases-in-system-test-script.patch
│   │   │   ├── series
│   │   │   ├── system-test-skip-no-tty.patch
│   │   │   └── system-test-with-system-binary.patch
│   │   ├── rules
│   │   ├── salsa-ci.yml
│   │   ├── source
│   │   │   └── format
│   │   ├── tests
│   │   │   └── control
│   │   ├── upstream
│   │   │   ├── metadata
│   │   │   └── signing-key.asc
│   │   └── watch
│   ├── entr
│   ├── entr.1
│   ├── entr.c
│   ├── entr.o
│   ├── missing
│   │   ├── compat.h
│   │   ├── kqueue_inotify.c
│   │   ├── strlcpy.c
│   │   └── sys
│   │       └── event.h
│   ├── status.c
│   ├── status.h
│   ├── status.o
│   └── system_test.sh
├── entr-dbgsym_5.6-1_amd64.deb
├── entr_5.6-1.debian.tar.xz
├── entr_5.6-1.dsc
├── entr_5.6-1_amd64.buildinfo
├── entr_5.6-1_amd64.changes
├── entr_5.6-1_amd64.deb
└── entr_5.6.orig.tar.xz
The contents of debian/entr are essentially what goes into the resulting entr_5.6-1_amd64.deb package. Familiarizing yourself with the majority of the files in the original upstream source, as well as with all the resulting build artifacts, is time-consuming, but it is a necessary investment to get high-quality Debian packages. There are also tools such as Debcraft that automate generating the build artifacts in separate output directories for each build, making it easy to compare the outputs and correlate which change in the Debian packaging led to which change in the resulting build artifacts.
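A minimal sketch of what that looks like in practice, assuming Debcraft is installed; the per-build output directory behavior is indicative, not an exact description:
shell
# build the package in a container; artifacts land in a separate
# output directory per build, next to the source tree
$ debcraft build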

Re-run the initial import with git-buildpackage When upstreams publish releases as tarballs, they should also be imported for optimal software supply-chain security, in particular if upstream also publishes cryptographic signatures that can be used to verify the authenticity of the tarballs. To achieve this, the files debian/watch, debian/upstream/signing-key.asc, and debian/gbp.conf need to be present with the correct options. In the gbp.conf file, ensure you have the correct options based on:
  1. Does upstream release tarballs? If so, enforce pristine-tar = True.
  2. Does upstream sign the tarballs? If so, configure explicit signature checking with upstream-signatures = on.
  3. Does upstream have a git repository, and does it have release git tags? If so, configure the release git tag format, e.g. upstream-vcs-tag = %(version%~%.)s. (A gbp.conf sketch combining these options is shown below.)
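A minimal sketch of a debian/gbp.conf combining the options above; the branch names follow the DEP-14 layout used in this post, and the v%(version)s tag format is illustrative rather than entr's actual scheme:
debian
[DEFAULT]
debian-branch = debian/latest
upstream-branch = upstream/latest
pristine-tar = True
upstream-vcs-tag = v%(version)s

[import-orig]
upstream-signatures = on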
To validate that the above files are working correctly, run gbp import-orig with the current version explicitly defined:
shell
$ gbp import-orig --uscan --upstream-version 5.6
gbp:info: Launching uscan...
gpgv: Signature made 7. Aug 2024 07.43.27 PDT
gpgv: using RSA key 519151D83E83D40A232B4D615C418B8631BC7C26
gpgv: Good signature from "Eric Radman <ericshane@eradman.com>"
gbp:info: Using uscan downloaded tarball ../entr_5.6.orig.tar.gz
gbp:info: Importing '../entr_5.6.orig.tar.gz' to branch 'upstream/latest'...
gbp:info: Source package is entr
gbp:info: Upstream version is 5.6
gbp:info: Replacing upstream source on 'debian/latest'
gbp:info: Running Postimport hook
gbp:info: Successfully imported version 5.6 of ../entr_5.6.orig.tar.gz
As the original packaging was done based on the upstream release git tag, the above command will fetch the tarball release, create the pristine-tar branch, and store the tarball delta on it. This command will also attempt to create the tag upstream/5.6 on the upstream/latest branch.
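To confirm the import created the expected refs, plain git commands suffice; a quick check, with output omitted:
shell
# list the packaging, upstream, and pristine-tar branches
$ git branch --list 'debian/*' 'upstream/*' pristine-tar
# list the upstream release tags
$ git tag --list 'upstream/*'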

Import new upstream versions in the future Forking the upstream git repository, creating the initial packaging, and creating the DEP-14 branch structure are all one-off work needed only when creating the initial packaging. Going forward, to import new upstream releases, one simply runs git fetch upstreamvcs; gbp import-orig --uscan, which fetches the upstream git tags, checks for new upstream tarballs, and automatically downloads, verifies, and imports the new version, as shown below. See the galera-4-demo example in the Debian source packages in git explained post for a demo you can run yourself and examine in detail. You can also try running gbp import-orig --uscan without specifying a version: it will notice that Entr version 5.7 is now available, then download and import it.
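The routine import of a new upstream release thus reduces to two commands; a sketch, assuming the upstreamvcs remote was set up as earlier in this post:
shell
# fetch new upstream git tags, then download, verify, and import the latest tarball
$ git fetch upstreamvcs
$ gbp import-orig --uscan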

Build using git-buildpackage From this stage onwards you should build the package using gbp buildpackage, which will do a more comprehensive build.
shell
gbp buildpackage -uc -us
The git-buildpackage build also includes running Lintian to find potential Debian policy violations in the sources or in the resulting .deb binary packages. Many Debian Developers run lintian -EviIL +pedantic after every build to check that there are no new nags, and to validate that changes intended to fix previous Lintian nags were correct, as shown below.
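A sketch of that invocation run against the freshly built changes file; the file name matches the build output shown earlier in this post:
shell
# show errors, warnings, info, experimental and pedantic tags with explanations
$ lintian -EviIL +pedantic ../entr_5.6-1_amd64.changes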

Open a Merge Request on Salsa for Debian packaging review Getting everything perfectly right takes a lot of effort, and may require reaching out to experienced Debian Developers for review and guidance. Thus, you should aim to publish your initial packaging work on Salsa, Debian's GitLab instance, for review and feedback as early as possible. For somebody to be able to easily see what you have done, you should rename your debian/latest branch to another name, for example next/debian/latest, and open a Merge Request that targets the debian/latest branch on your Salsa fork, which still has only the unmodified upstream files. If you have followed the workflow in this post so far, you can simply run:
  1. git checkout -b next/debian/latest
  2. git push --set-upstream origin next/debian/latest
  3. Open in a browser the URL printed in the remote response to the git push command
  4. Write the Merge Request description in case the default text from your commit is not enough
  5. Mark the MR as Draft using the checkbox
  6. Publish the MR and request feedback
Once a Merge Request exists, discussion about what additional changes are needed can be conducted as MR comments. With an MR, you can easily iterate on the contents of next/debian/latest, rebase, force push, and request re-review as many times as you want. While at it, make sure that on the Settings > CI/CD page the CI/CD configuration file field is set to debian/salsa-ci.yml, so that the CI can run and give you immediate automated feedback. For an example of an initial packaging Merge Request, see https://salsa.debian.org/otto/entr-demo/-/merge_requests/1.

Open a Merge Request / Pull Request to fix upstream code Due to the high quality requirements in Debian, it is fairly common that while doing the initial Debian packaging of an open source project, issues are found that stem from the upstream source code. While it is possible to carry extra patches in Debian, it is not good practice to deviate too much from upstream code with custom Debian patches. Instead, the Debian packager should try to get the fixes applied directly upstream. Using git-buildpackage patch queues is the most convenient way to make modifications to the upstream source code so that they automatically convert into Debian patches (stored at debian/patches), and can also easily be submitted upstream as any regular git commit (and rebased and resubmitted many times over). First, decide if you want to work out of the upstream development branch and later cherry-pick to the Debian packaging branch, or work out of the Debian packaging branch and cherry-pick to an upstream branch. The example below starts from the upstream development branch and then cherry-picks the commit into the git-buildpackage patch queue:
shell
git checkout -b bugfix-branch master
nano entr.c
make
./entr # verify change works as expected
git commit -a -m "Commit title" -m "Commit body"
git push # submit upstream
gbp pq import --force --time-machine=10
git cherry-pick <commit id>
git commit --amend # extend commit message with DEP-3 metadata
gbp buildpackage -uc -us -b
./entr # verify change works as expected
gbp pq export --drop --commit
git commit --amend # Write commit message along lines "Add patch to .."
The example below starts by making the fix on a git-buildpackage patch queue branch, and then cherry-picking it onto the upstream development branch:
shell
gbp pq import --force --time-machine=10
nano entr.c
git commit -a -m "Commit title" -m "Commit body"
gbp buildpackage -uc -us -b
./entr # verify change works as expected
gbp pq export --drop --commit
git commit --amend # Write commit message along lines "Add patch to .."
git checkout -b bugfix-branch master
git cherry-pick <commit id>
git commit --amend # prepare commit message for upstream submission
git push # submit upstream
The key git-buildpackage commands to enter and exit the patch-queue mode are:
shell
gbp pq import --force --time-machine=10
gbp pq export --drop --commit
%%{init: { 'gitGraph': { 'mainBranchName': 'debian/latest' } } }%%
gitGraph
  checkout debian/latest
  commit id: "Initial packaging"
  branch patch-queue/debian/latest
  checkout patch-queue/debian/latest
  commit id: "Delete debian/patches/..."
  commit id: "Patch 1 title"
  commit id: "Patch 2 title"
  commit id: "Patch 3 title"
These can be run at any time, regardless of whether any debian/patches existed prior, whether the existing patches applied cleanly, or whether there were old patch-queue branches around. Note that the extra -b in gbp buildpackage -uc -us -b instructs the build to produce only binary packages, avoiding any nags from dpkg-source about modifications in the upstream sources while building in patches-applied mode.

Programming-language specific dh-make alternatives As each programming language has its own way of building the source code, and many other conventions regarding file layout and more, Debian has multiple custom tools to create new Debian source packages for specific programming languages. Notably, Python does not have its own separate tool, but there is a dh_make --python option for Python support directly in dh_make itself. The list is not complete, and many more tools exist. For some languages there are even competing options; for Go, for example, there is Gophian in addition to dh-make-golang. When learning Debian packaging there is no need to learn these tools upfront. Being aware that they exist is enough, and one can learn them if and when one starts to package a project in a new programming language.

The difference between source git repository vs source packages vs binary packages As seen in the earlier example, running gbp buildpackage on the Entr packaging repository above will result in several files:
entr_5.6-1_amd64.changes
entr_5.6-1_amd64.deb
entr_5.6-1.debian.tar.xz
entr_5.6-1.dsc
entr_5.6.orig.tar.gz
entr_5.6.orig.tar.gz.asc
The entr_5.6-1_amd64.deb is the binary package, which can be installed on a Debian/Ubuntu system. The rest of the files constitute the source package. To do a source-only build, run gbp buildpackage -S and note the files produced:
entr_5.6-1_source.changes
entr_5.6-1.debian.tar.xz
entr_5.6-1.dsc
entr_5.6.orig.tar.gz
entr_5.6.orig.tar.gz.asc
The source package files can be used to build the binary .deb for amd64, or for any other architecture the package supports. It is important to grasp that the Debian source package is the preferred form for building the binary packages on the various Debian build systems, and that the Debian source package is not the same thing as the contents of the Debian packaging git repository.
flowchart LR
  git[Git repository<br>branch debian/latest] -->|gbp buildpackage -S| src[Source Package<br>.dsc + .tar.xz]
  src -->|dpkg-buildpackage| bin[Binary Packages<br>.deb]
If the package is large and complex, the build could result in multiple binary packages; a sketch of how that is declared follows below. One set of package definition files in debian/ will, however, only ever result in a single source package.
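For illustration, a hypothetical debian/control that produces one source package and two binary packages; all package names and field values here are made up, not taken from a real package:
debian
Source: example
Section: utils
Priority: optional
Maintainer: Firstname Lastname <email@example.com>
Build-Depends: debhelper-compat (= 13)
Standards-Version: 4.7.0

Package: example
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: hypothetical command-line tool
 Example long description for the command-line tool binary package.

Package: libexample1
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: hypothetical shared library
 Example long description for the shared library binary package.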

Option to repackage source packages with Files-Excluded lists in the debian/copyright file Some upstream projects may include binary files in their releases, or other undesirable content that needs to be omitted from the source package in Debian. The easiest way to filter them out is to add to the debian/copyright file a Files-Excluded field listing the undesired files. The debian/copyright file is read by uscan, which will repackage the upstream sources on the fly when importing new upstream releases. For a real-life example, see the debian/copyright file in the Godot package, which lists:
debian
Files-Excluded: platform/android/java/gradle/wrapper/gradle-wrapper.jar
The resulting repackaged upstream source tarball, as well as the upstream version component, will have an extra +ds to signify that it is not the true original upstream source but has been modified by Debian:
godot_4.3+ds.orig.tar.xz
godot_4.3+ds-1_amd64.deb

Creating one Debian source package from multiple upstream source packages is also possible In some rare cases, the upstream project may be split across multiple git repositories, or the upstream release may consist of multiple components, each in its own separate tarball. Usually these are very large projects that get some benefit from releasing components separately. If in Debian these are deemed to belong in a single source package, that is technically possible using the component system in git-buildpackage and uscan. For an example, see the gbp.conf and watch files in the node-cacache package. Using this type of structure should be a last resort, as it creates complexity and inter-dependencies that are bound to cause issues later on. It is usually better to work with upstream and champion universal best practices with clear releases and version schemes.

When not to start the Debian packaging repository as a fork of the upstream one Not all upstreams use Git for version control. It is by far the most popular, but there are still some that use e.g. Subversion or Mercurial. Who knows, maybe in the future some new version control system will start to compete with Git. There are also projects that use Git in massive monorepos, or with complex submodule setups, that invalidate the basic assumptions required to map an upstream Git repository into a Debian packaging repository. In those cases one can't use a debian/latest branch on a clone of the upstream git repository as the starting point for the Debian packaging, but must instead revert to the traditional way of starting from an upstream release tarball with gbp import-orig package-1.0.tar.gz.

Conclusion Created in August 1993, Debian is one of the oldest Linux distributions. In the 32 years since its inception, the .deb packaging format and the tooling to work with it have evolved through several generations. In the past 10 years, more and more Debian Developers have converged on certain core practices, as evidenced by https://trends.debian.net/, but there is still a lot of variance in workflows even for identical tasks. Hopefully you find this post useful in giving practical guidance on how exactly to do the most common things when packaging software for Debian. Happy packaging!

22 May 2025

Scarlett Gately Moore: KDE Application Snaps 25.04.1 with Major Bug Fix!, Life (Good news finally!)

Snaps!

I actually released last week. I haven't had time to blog, but today is my birthday and I'm taking some time to myself! This release came with a major bugfix. As it turns out, our applications were very crashy on non-KDE platforms, including Ubuntu proper, and had been for years without my knowing. Developers were closing the bug reports as invalid because users couldn't provide a stacktrace. I have now convinced most developers to assign snap bugs to the Snap platform, so I at least get a chance to try and fix them. So with that said, if you tried our snaps in the past and gave up in frustration, please do try them again! I also spent some time cleaning up our snaps to only have current releases in the store, as rumor has it snapcrafters will be responsible for any security issues. With the 200+ snaps I maintain, that is a lot of responsibility. We'll see if I can pull it off.

Life!

My last surgery was a success! I am finally healing and out of a sling for the first time in almost a year. I have also lined up a good amount of web work for next month and hopefully beyond. I have decided to drop the piece work for donations and will only accept per project proposals for open source work. I will continue to maintain KDE snaps for as long as time allows. A big thank you to everyone that has donated over the last year to fund my survival during this broken arm fiasco. I truly appreciate it!
With that said, if you want to drop me a donation for my work, birthday, or well-being until I get paid for the aforementioned web work, please do so here:

13 May 2025

Sergio Talens-Oliag: Running dind with sysbox

When I configured forgejo-actions I used a docker-compose.yaml file to execute the runner and a dind container configured to run in privileged mode so it could build images; as mentioned in my post about my setup, the use of privileged mode is not a big issue for my use case, but it reduces the overall security of the installation. On a work chat the other day someone mentioned that the GitLab documentation about using kaniko says it is no longer maintained (see the kaniko issue #3348), so we should look into alternatives for kubernetes clusters. I never liked kaniko too much, but it works without privileged mode and does not need a daemon, which is a good reason to use it; if it is deprecated, however, it makes sense to look into alternatives, and today I looked into some of them to use with my forgejo-actions setup. I was going to try buildah and podman, but it seems that they need adjustments on the systems running them:
  • When I tried to use buildah inside a docker container in Ubuntu, I found the problems described in buildah issue #1901, so I moved on.
  • Reading the podman documentation I saw that I need to export the fuse device to run it inside a container and, as I found another option, I also skipped it.
As my runner was already configured to use dind, I decided to look into sysbox as a way of removing the privileged flag, making things more secure while keeping the same functionality.

Installing the sysbox package As I use Debian and Ubuntu systems, I used the .deb packages distributed from the sysbox release page to install it (in my case the one from the 0.6.7 version). On the machine running forgejo (a Debian 12 server) I downloaded the package, stopped the running containers (this is needed to install the package, and the only ones running were the ones started by the docker-compose.yaml file) and installed the sysbox-ce_0.6.7.linux_amd64.deb package using dpkg, as sketched below.
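A minimal sketch of those steps, assuming the package file has already been downloaded from the release page into the current directory:
shell
# stop the containers started by the compose file (required for the install)
$ docker compose down
# install the sysbox runtime package
$ sudo dpkg -i sysbox-ce_0.6.7.linux_amd64.deb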

Updating the docker-compose.yaml file To run the dind container without setting privileged mode, we set sysbox-runc as the runtime on the dind container definition and set the privileged flag to false (which is the same as removing the key, as it defaults to false):
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -2,7 +2,9 @@ services:
   dind:
     image: docker:dind
     container_name: 'dind'
-    privileged: 'true'
+    # use sysbox-runc instead of using privileged mode
+    runtime: 'sysbox-runc'
+    privileged: 'false'
     command: ['dockerd', '-H', 'unix:///dind/docker.sock', '-G', '$RUNNER_GID']
     restart: 'unless-stopped'
     volumes:

Testing the changes After applying the changes to the docker-compose.yaml file we start the containers, and to test things we re-run previously executed jobs to check that they work as before. In my case I re-executed the build-image-from-tag workflow #18 from the oci project and everything worked as expected.
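One extra check worth doing is confirming which runtime the container actually uses; a sketch, where the container name dind comes from the compose file above:
shell
# should print sysbox-runc instead of the default runc
$ docker inspect --format '{{ .HostConfig.Runtime }}' dind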

Conclusion For my current use case (docker + dind) it seems that sysbox is a good solution, but I'm not sure if I'll be installing it on kubernetes anytime soon unless I find a valid reason to do it (last time we talked about it my co-workers said that they are evaluating buildah and podman for kubernetes, and probably we will use them to replace kaniko in our gitlab-ci pipelines; for those tools the use of sysbox seems overkill).
