I am a huge fan of Git, as I have witnessed how it has made software development so much more productive compared to the pre-2010s era. I wish all Debian source code were in Git to reap the full benefits.
Git is not perfect: it requires significant effort to learn properly, and the surrounding ecosystem is complex, with even more things to learn ranging from cryptographic signatures and commit hooks to Git-assisted code review best practices, forge websites and CI systems.
Sure, there is still room to optimize its use, but Git has certainly proven itself and is now the industry standard. Thus, some readers might be surprised to learn that Debian development in 2025 is not actually based on Git. In Debian, version control is done by the Debian archive itself. Each commit is a new upload to the archive, and the commit message is the debian/changelog entry. The commit log is available at snapshot.debian.org.
In practice, most Debian Developers (people who have the credentials to upload to the Debian archive) do use Git and host their packaging source code on salsa.debian.org, Debian's GitLab instance. This is, however, based on each DD's personal preferences. The Debian project does not have any policy requiring that packages be hosted on salsa.debian.org or be in version control at all.
Is collaborative software development possible without git and version control software?
Debian, however, has some peculiarities that may be surprising to people who have grown accustomed to GitHub, GitLab or various company-internal code review systems.
In Debian:
The source code of the next upload is not public but resides only on the developer's laptop.
Code contributions are plain patch files, based on the latest revision released in the Debian archive (where the unstable area is equivalent to the main development branch).
These patches are submitted by email to a bug tracker that does no validation or testing whatsoever.
Developers applying these patches typically have elaborate Mutt or Emacs setups to facilitate fetching patches from email.
There is no public staging area, no concept of rebasing patches or withdrawing a patch and replacing it with a better version.
The submitter won't see any progress information until a notification email arrives after a new version has been uploaded to the Debian archive.
This system has served Debian for three decades. It is not broken, but using the package archive as the version control system just feels, well, archaic.
There is a more efficient way, and indeed the majority of Debian packages have a metadata field Vcs-Git that advertises which version control repository the maintainer uses. However, newcomers to Debian are surprised to notice that not all packages are hosted on salsa.debian.org; some live at various other places with their own accounts and code submission systems, and nothing enforces, or even warns, if the code there is out of sync with what was uploaded to Debian. Any Debian Developer can at any time upload a new package with whatever changes, bypassing the Git repository, even when the package advertises a Git repository. All PGP-signed commits, Git tags and other information in the Git repository are currently just extras, as the Debian archive does not enforce or validate anything about them.
This also makes contributing to multiple packages in parallel hard. One can't just go on salsa.debian.org, fork a bunch of repositories and submit Merge Requests. Currently, the only reliable way is to download source packages from Debian unstable, develop patches on top of them, and send the final version as a plain patch file by email to the Debian bug tracker. To my knowledge, no system exists to facilitate working with the patches in the bug tracker, such as rebasing patches 6 months later to detect if they or equivalent changes were applied or if sending refreshed versions is needed.
To newcomers in Debian, it is even more surprising that there are packages that are on salsa.debian.org but have the Merge Requests feature disabled. This is often because the maintainer does not want to receive notification emails about new Merge Requests, but rather just emails from bugs.debian.org. This may sound arrogant, but keep in mind that these developers put in the effort to set up their Mutt/Emacs workflow for the existing Debian process, and extending it to work with GitLab notifications is not trivial. There are also purists who want to do everything via the command-line (without having to open a browser, run JavaScript and maintain a live Internet connection), and tools like glab are not convenient enough for the full workflow.
Inefficient ways of working prevent Debian from flourishing
I would claim, based on my personal experiences from the past 10+ years as a Debian Developer, that the lack of high-quality and productive tooling is seriously harming Debian. The current methods of collaboration are cumbersome for aspiring contributors to learn, and suboptimal to use both for new and seasoned contributors.
There are no exit interviews for contributors who left Debian, no comprehensive data on reasons to contribute or stop contributing, nor are there any metrics tracking how many people tried but failed to contribute to Debian. Some data points to support my concerns do exist:
Most packages are maintained by one person working alone (just pick any package at random and look at the upload history).
Debian should embrace git, but decision-making is slow
Debian is all about community and collaboration. One would assume that Debian would prioritize, above all, making collaboration tools and processes simpler, faster and less error-prone, as it would help both current and future package maintainers. Yet it isn't so, for some reasons unique to Debian.
There is no single company or entity running Debian, and it has managed to operate as a pure meritocracy and do-cracy for over 30 years. This is impressive and admirable. Unfortunately, some of the infrastructure and technical processes are also nearly 30 years old and very difficult to change for the same reason: the nature of Debian's distributed decision-making process.
As a software developer and manager with 25+ years of experience, I strongly feel that developing software collaboratively using Git is a major step forward that Debian needs to take, in one form or another, and I hope to see other DDs voice their support if they agree.
Debian Enhancement Proposal 18
Following how consensus is achieved in Debian, I started drafting DEP-18 in 2024, and it is currently awaiting enough thumbs up at https://salsa.debian.org/dep-team/deps/-/merge_requests/21 to get into CANDIDATE status next.
In summary, DEP-18 proposes that everyone keen on collaborating should:
Maintain Debian packaging sources in Git on Salsa.
Use Merge Requests to show your work and to get reviews.
Run Salsa CI before upload.
The principles above are not novel. According to stats at, e.g., trends.debian.net and UDD, ~93% of all Debian source packages are already hosted on salsa.debian.org. As of June 1st, 2025, only 1640 source packages remain that are not hosted on Salsa. The purpose of DEP-18 is to state in writing what Debian is already doing for most packages, and thus express what new contributors, among others, should be learning and doing, so that basic collaboration is smooth and free from structural obstacles.
Most packages also already allow Merge Requests and use Salsa CI, but there hasn't been any written recommendation anywhere in Debian to do so. The Debian Policy (v4.7.2) does not even mention the word Salsa a single time. The current process documentation on how to do non-maintainer uploads or salvage packages is all based on uploading packages to the archive, without any consideration of git-based collaboration such as posting a Merge Request first. Personally, I feel posting a Merge Request would be a better approach, as it would invite collaborators to discuss and provide code reviews. If there are no responses, the submitter can proceed to merge, but compared to direct uploads to the Debian archive, the Merge Request practice at least tries to offer a time and place for discussions and reviews to happen.
It could very well be that in the future somebody comes up with a new packaging format that makes upstream source package management easier, or a monorepo with all packages, or some other future structures or processes. Having a DEP to state how to do things now does not prevent people from experimenting and innovating if they intentionally want to do that. The DEP is merely an expression of the minimal common denominators in the packaging workflow that maintainers and contributors should follow, unless they know better.
Transparency and collaboration
Among the DEP-18 recommendations is:
The recommended first step in contributing to a package is to use the built-in Fork feature on Salsa. This serves two purposes. Primarily, it allows any contributor to publish their Git branches and submit them as Merge Requests. Additionally, the mere existence of a list of Forks enables contributors to discover each other, and in rare cases when the original package is not accepting improvements, collaboration could arise among the contributors and potentially lead to permanent forks in the general sense. Forking is a fundamental part of the dynamics in open source that helps drive quality and agreement. The ability to fork ultimately serves as the last line of defense of users' rights. Git supports this by making both temporary and permanent forks easy to create and maintain.
Further, it states:
Debian packaging work should be reasonably transparent and public to allow contributors to participate. A maintainer should push their pending changes to Salsa at regular intervals, so that a potential contributor can discover if a particular change has already been made or a bug has been fixed in version control, and thus avoid duplicate work.
Debian maintainers should make reasonable efforts to publish planned changes as Merge Requests on Salsa, and solicit feedback and reviews. While pushing changes directly on the main Git branch is the fastest workflow, second only to uploading all changes directly to Debian repositories, it is not an inclusive way to develop software. Even packages that are maintained by a single maintainer should at least occasionally publish Merge Requests to allow new contributors to step up and participate.
I think these are key aspects leading to transparency and true open source collaboration. Even though this talks about Salsa, which is based on GitLab, the concepts are universal and also work on other forges, like Forgejo or GitHub. The point is that sharing work-in-progress on a real-time platform, with CI and other supporting features, empowers and motivates people to iterate on code collaboratively. As an example of an anti-pattern, Oracle MySQL publishes the source code for all their releases and is license-compliant, but as they don't publish their Git commits in real time, it does not feel like a real open source project. Non-Oracle employees are not motivated to participate as second-class developers who are kept in the dark. Debian should embrace git and sharing work in real time, embodying a true open source spirit.
Recommend, not force
Note that the Debian Enhancement Proposals are not binding. Only the Debian Policy and Technical Committee decisions carry that weight. The nature of collaboration is voluntary anyway, so the DEP does not need to force anything on people who don't want to use salsa.debian.org.
The DEP-18 is also not a guide for package maintainers. I have my own views and have written detailed guides in blog articles if you want to read more on, for example, how to do code reviews efficiently.
Within DEP-18, there is plenty of room to work in many different ways, and it does not try to force one single workflow. The goal here is to simply have agreed-upon minimal common denominators among those who are keen to collaborate using salsa.debian.org, not to dictate a complete code submission workflow.
Once we reach this, there will hopefully be less friction in the most basic and recurring collaboration tasks, giving DDs more energy to improve other processes or just invest in having more and newer packages for Debian users to enjoy.
Next steps
In addition to lengthy online discussions on mailing lists and DEP reviews, I also presented on this topic at DebConf 2025 in Brest, France. Unfortunately, the recording is not yet up on PeerTube.
The feedback has been overwhelmingly positive. However, there are a few loud and very negative voices that cannot be ignored. Maintaining a Linux distribution at the scale and complexity of Debian requires extraordinary talent and dedication, and people doing this kind of work often have strong views, and most of the time for good reasons. We do not want to alienate existing key contributors with new processes, so maximum consensus is desirable.
We also need more data on what the 1000+ current Debian Developers view as a good process to avoid being skewed by a loud minority. If you are a current or aspiring Debian Developer, please add a thumbs up if you think I should continue with this effort (or a thumbs down if not) on the Merge Request that would make DEP-18 have candidate status.
There is also technical work to do. Increased Git use will obviously lead to growing adoption of the new tag2upload feature, which will need to get full git-buildpackage support so it can integrate into salsa.debian.org without turning off Debian packaging security features. The git-buildpackage tool itself also needs various improvements, such as making it less error-prone to contribute to multiple different packages that have varying levels of diligence in their debian/gbp.conf maintenance.
Eventually, if it starts looking like all Debian packages might get hosted on salsa.debian.org, I would also start building a review.debian.org website to facilitate code review aspects that are unique to Debian, such as tracking Merge Requests across GitLab projects in ways GitLab can t do, highlighting which submissions need review most urgently, feeding code reviews and approvals into the contributors.debian.org database for better attribution and so forth.
Details on this vision will be in a later blog post, so subscribe to updates!
The discovery of a backdoor in XZ Utils in the spring of 2024 shocked the open source community, raising critical questions about software supply chain security. This post explores whether better Debian packaging practices could have detected this threat, offering a guide to auditing packages and suggesting future improvements.
The XZ backdoor in versions 5.6.0/5.6.1 made its way briefly into many major Linux distributions such as Debian and Fedora, but luckily didn't reach many actual users, as the backdoored releases were quickly removed thanks to the heroic diligence of Andres Freund. We are all extremely lucky that he detected a half-second performance regression in SSH, cared enough to trace it down, discovered malicious code in the XZ library loaded by SSH, and promptly reported it to various security teams for quick coordinated action.
This episode makes software engineers ponder the following questions:
Why didn't any Linux distro packagers notice anything odd when importing the new XZ versions 5.6.0/5.6.1 from upstream?
Is the current software supply chain in the most popular Linux distros easy to audit?
Could we have similar backdoors lurking that haven't been detected yet?
As a Debian Developer, I decided to audit the xz package in Debian, share my methodology and findings in this post, and also suggest some improvements on how the software supply-chain security could be tightened in Debian specifically.
Note that the scope here is only to inspect how Debian imports software from its upstreams, and how it is distributed to Debian's users. This excludes the whole story of how to assess whether an upstream project follows software development security best practices. This post also doesn't discuss how to operate an individual computer running Debian to ensure it remains untampered with, as there are plenty of guides on that already.
Downloading Debian and upstream source packages
Let's start by working backwards from what the Debian package repositories offer for download. As auditing binaries is extremely complicated, we skip that and assume that the Debian build hosts are trustworthy and reliably build binaries from the source packages; the focus is therefore on auditing the source packages.
As with everything in Debian, there are multiple tools and ways to do the same thing, but in this post only one (and hopefully the best) way to do something is presented for brevity.
The first step is to download the latest version and some past versions of the package from the Debian archive, which is most easily done with debsnap. The following command will download all Debian source packages of xz-utils from Debian release 5.2.4-1 onwards:
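A plausible invocation is shown below; debsnap ships in the devscripts package, and by default it downloads into a `source-<package>` directory, matching the paths used later in this post. The `--first` flag limiting the version range is an assumption based on debsnap(1).

```shell
# Download every xz-utils source package version since 5.2.4-1
# from snapshot.debian.org into ./source-xz-utils (debsnap's default destdir).
debsnap --first 5.2.4-1 xz-utils
```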
Verifying authenticity of upstream and Debian sources using OpenPGP signatures
As seen in the output of debsnap, it already automatically verifies that the downloaded files match the OpenPGP signatures. To have full clarity on what files were authenticated with what keys, we should verify the Debian packager's signature with:
$ gpg --verify --auto-key-retrieve --keyserver hkps://keyring.debian.org xz-utils_5.8.1-2.dsc
gpg: Signature made Fri Oct 3 22:04:44 2025 UTC
gpg: using RSA key 57892E705233051337F6FDD105641F175712FA5B
gpg: requesting key 05641F175712FA5B from hkps://keyring.debian.org
gpg: key 7B96E8162A8CF5D1: public key "Sebastian Andrzej Siewior" imported
gpg: Total number processed: 1
gpg: imported: 1
gpg: Good signature from "Sebastian Andrzej Siewior" [unknown]
gpg: aka "Sebastian Andrzej Siewior <bigeasy@linutronix.de>" [unknown]
gpg: aka "Sebastian Andrzej Siewior <sebastian@breakpoint.cc>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: 6425 4695 FFF0 AA44 66CC 19E6 7B96 E816 2A8C F5D1
Subkey fingerprint: 5789 2E70 5233 0513 37F6 FDD1 0564 1F17 5712 FA5B
The upstream tarball signature (if available) can be verified with:
$ gpg --verify --auto-key-retrieve xz-utils_5.8.1.orig.tar.xz.asc
gpg: assuming signed data in 'xz-utils_5.8.1.orig.tar.xz'
gpg: Signature made Thu Apr 3 11:38:23 2025 UTC
gpg: using RSA key 3690C240CE51B4670D30AD1C38EE757D69184620
gpg: key 38EE757D69184620: public key "Lasse Collin <lasse.collin@tukaani.org>" imported
gpg: Total number processed: 1
gpg: imported: 1
gpg: Good signature from "Lasse Collin <lasse.collin@tukaani.org>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: 3690 C240 CE51 B467 0D30 AD1C 38EE 757D 6918 4620
Note that this only proves that a key created a valid signature for this content. The authenticity of the keys themselves needs to be validated separately before trusting that they are in fact the keys of these people. That can be done by checking, e.g., the upstream website for the key fingerprints they published, or the Debian keyring for Debian Developers and Maintainers, or by relying on the OpenPGP web of trust.
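For example, assuming the upstream key was imported into the local keyring as in the output above, its full fingerprint can be printed for manual comparison against the fingerprint published on the upstream website:

```shell
# Show the full fingerprint of the imported upstream key (key ID from the
# gpg output above) to compare against what tukaani.org publishes.
gpg --fingerprint 38EE757D69184620
```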
Verifying authenticity of upstream sources by comparing checksums
In case the upstream in question does not publish release signatures, the second-best way to verify the authenticity of the sources used in Debian is to download the sources directly from upstream and check that the sha256 checksums match.
This should be done using the debian/watch file inside the Debian packaging, which defines where the upstream source is downloaded from. Continuing the example above, we can unpack the latest Debian sources, enter the directory, and then run uscan to download the upstream sources:
$ tar xvf xz-utils_5.8.1-2.debian.tar.xz
...
debian/rules
debian/source/format
debian/source.lintian-overrides
debian/symbols
debian/tests/control
debian/tests/testsuite
debian/upstream/signing-key.asc
debian/watch
...
$ uscan --download-current-version --destdir /tmp
Newest version of xz-utils on remote site is 5.8.1, specified download version is 5.8.1
gpgv: Signature made Thu Apr 3 11:38:23 2025 UTC
gpgv: using RSA key 3690C240CE51B4670D30AD1C38EE757D69184620
gpgv: Good signature from "Lasse Collin <lasse.collin@tukaani.org>"
Successfully symlinked /tmp/xz-5.8.1.tar.xz to /tmp/xz-utils_5.8.1.orig.tar.xz.
The original files downloaded from upstream are now in /tmp along with the files renamed to follow Debian conventions. Using everything downloaded so far, the sha256 checksums can be compared across the files and also against what the .dsc file advertised:
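One way to do the comparison is sketched below as a small helper function; the commented invocation uses the file names produced by the debsnap and uscan steps above, and the checksum printed can be compared by eye against the Checksums-Sha256 field in the .dsc file.

```shell
# same_sha256 FILE1 FILE2 -- succeed iff both files hash to the same SHA-256
same_sha256() {
  [ "$(sha256sum "$1" | awk '{print $1}')" = "$(sha256sum "$2" | awk '{print $1}')" ]
}

# Compare the upstream download with the Debian orig tarball, then print the
# checksum for manual comparison against the .dsc's Checksums-Sha256 field:
# same_sha256 /tmp/xz-5.8.1.tar.xz xz-utils_5.8.1.orig.tar.xz \
#   && sha256sum xz-utils_5.8.1.orig.tar.xz
```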
In the example above, the checksum 0b54f79df85... is the same across the files, so it is a match.
Repackaged upstream sources can't be verified as easily
Note that uscan may in rare cases repackage some upstream sources, for example to exclude files that don't adhere to Debian's copyright and licensing requirements. Those files and paths would be listed under the Files-Excluded section in the debian/copyright file. There are also other situations where the file that represents the upstream sources in Debian isn't bit-for-bit identical to what upstream published. If the checksums don't match, an experienced Debian Developer should review all package settings (e.g. debian/source/options) to see if there was a valid and intentional reason for the divergence.
Reviewing changes between two source packages using diffoscope
Diffoscope is an incredibly capable and handy tool for comparing arbitrary files. For example, to view an HTML report of the differences between two XZ releases, run:
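A plausible command, assuming both orig tarballs were downloaded with debsnap as above (the output file name is arbitrary):

```shell
# Write an HTML report of everything that changed between two
# upstream release tarballs:
diffoscope --html xz-5.8.0-vs-5.8.1.html \
    xz-utils_5.8.0.orig.tar.xz xz-utils_5.8.1.orig.tar.xz
```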
If the changes are extensive and you want to use an LLM to help spot potential security issues, generate reports of both the upstream and Debian packaging differences in Markdown with:
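For example, using diffoscope's Markdown output option (the debian tarball file names for the 5.8.0-1 upload are assumptions based on Debian naming conventions):

```shell
# Upstream source differences between the two releases:
diffoscope --markdown xz-upstream-diff.md \
    xz-utils_5.8.0.orig.tar.xz xz-utils_5.8.1.orig.tar.xz

# Debian packaging differences between the two uploads:
diffoscope --markdown xz-debian-diff.md \
    xz-utils_5.8.0-1.debian.tar.xz xz-utils_5.8.1-2.debian.tar.xz
```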
The Markdown files created above can then be passed to your favorite LLM, along with a prompt such as:
Based on the attached diffoscope output for a new Debian package version compared with the previous one, list all suspicious changes that might have introduced a backdoor, followed by other potential security issues. If there are none, list a short summary of changes as the conclusion.
Reviewing Debian source packages in version control
As of today, only 93% of all Debian source packages are tracked in git on Debian's GitLab instance at salsa.debian.org. Some key packages, such as Coreutils and Bash, are not using version control at all, as their maintainers apparently don't see value in using git for Debian packaging, and the Debian Policy does not require it. Thus, the only reliable and consistent way to audit changes in Debian packages is to compare the full versions from the archive as shown above.
However, for packages that are hosted on Salsa, one can view the git history to gain additional insight into what exactly changed, when and why. For packages that use version control, the repository location can be found in the Vcs-Git field in the debian/control file. For xz-utils the location is salsa.debian.org/debian/xz-utils.
Note that the Debian policy does not state anything about how Salsa should be used, or what git repository layout or development practices to follow. In practice most packages follow the DEP-14 proposal, and use git-buildpackage as the tool for managing changes and pushing and pulling them between upstream and salsa.debian.org.
To get the XZ Utils source, run:
$ gbp clone https://salsa.debian.org/debian/xz-utils.git
gbp:info: Cloning from 'https://salsa.debian.org/debian/xz-utils.git'
At the time of writing this post the git history shows:
$ git log --graph --oneline
* bb787585 (HEAD -> debian/unstable, origin/debian/unstable, origin/HEAD) Prepare 5.8.1-2
* 4b769547 d: Remove the symlinks from -dev package.
* a39f3428 Correct the nocheck build profile
* 1b806b8d Import Debian changes 5.8.1-1.1
* b1cad34b Prepare 5.8.1-1
* a8646015 Import 5.8.1
* 2808ec2d Update upstream source from tag 'upstream/5.8.1'
|\
| * fa1e8796 (origin/upstream/v5.8, upstream/v5.8) New upstream version 5.8.1
| * a522a226 Bump version and soname for 5.8.1
| * 1c462c2a Add NEWS for 5.8.1
| * 513cabcf Tests: Call lzma_code() in smaller chunks in fuzz_common.h
| * 48440e24 Tests: Add a fuzzing target for the multithreaded .xz decoder
| * 0c80045a liblzma: mt dec: Fix lack of parallelization in single-shot decoding
| * 81880488 liblzma: mt dec: Don't modify thr->in_size in the worker thread
| * d5a2ffe4 liblzma: mt dec: Don't free the input buffer too early (CVE-2025-31115)
| * c0c83596 liblzma: mt dec: Simplify by removing the THR_STOP state
| * 831b55b9 liblzma: mt dec: Fix a comment
| * b9d168ee liblzma: Add assertions to lzma_bufcpy()
| * c8e0a489 DOS: Update Makefile to fix the build
| * 307c02ed sysdefs.h: Avoid <stdalign.h> even with C11 compilers
| * 7ce38b31 Update THANKS
| * 688e51bd Translations: Update the Croatian translation
* | a6b54dde Prepare 5.8.0-1.
* | 77d9470f Add 5.8 symbols.
* | 9268eb66 Import 5.8.0
* | 6f85ef4f Update upstream source from tag 'upstream/5.8.0'
|\ \
| * | afba662b New upstream version 5.8.0
| |/
| * 173fb5c6 doc/SHA256SUMS: Add 5.8.0
| * db9258e8 Bump version and soname for 5.8.0
| * bfb752a3 Add NEWS for 5.8.0
| * 6ccbb904 Translations: Run "make -C po update-po"
| * 891a5f05 Translations: Run po4a/update-po
| * 4f52e738 Translations: Partially fix overtranslation in Serbian man pages
| * ff5d9447 liblzma: Count the extra bytes in LZMA/LZMA2 decoder memory usage
| * 943b012d liblzma: Use SSE2 intrinsics instead of memcpy() in dict_repeat()
This shows the changes on the debian/unstable branch as well as the intermediate upstream import branch and the actual upstream development branch. See my Debian source packages in git explainer for details of what these branches are used for.
To only view changes on the Debian branch, run git log --graph --oneline --first-parent or git log --graph --oneline -- debian.
The Debian branch should only have changes inside the debian/ subdirectory, which is easy to check with:
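One possible check, using the branch names seen in the git log above; the `':!debian'` pathspec excludes the debian/ directory, so any output at all indicates changes outside it:

```shell
# Diff the Debian branch against the upstream branch while ignoring debian/;
# an empty result means all Debian changes are confined to debian/.
git diff --stat upstream/v5.8..debian/unstable -- . ':!debian'
```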
If the upstream in question signs commits or tags, they can be verified with e.g.:
$ git verify-tag v5.6.2
gpg: Signature made Wed 29 May 2024 09:39:42 AM PDT
gpg: using RSA key 3690C240CE51B4670D30AD1C38EE757D69184620
gpg: issuer "lasse.collin@tukaani.org"
gpg: Good signature from "Lasse Collin <lasse.collin@tukaani.org>" [expired]
gpg: Note: This key has expired!
The main benefit of reviewing changes in git is the ability to see detailed information about each individual change, instead of just staring at a massive list of changes without any explanations. In this example, to view all the upstream commits since the previous import to Debian, one would view the commit range from afba662b ("New upstream version 5.8.0") to fa1e8796 ("New upstream version 5.8.1") with git log --reverse -p afba662b...fa1e8796. However, a far superior way to review changes would be to browse this range using a visual git history viewer, such as gitk. Either way, looking at one code change at a time and reading the git commit message makes the review much easier.
Comparing Debian source packages to git contents
As stated at the beginning of the previous section, and worth repeating: there is no guarantee that the contents of the Debian packaging git repository match what was actually uploaded to Debian. While the tag2upload project in Debian is getting more and more popular, Debian is still far from having any system to enforce that the git repository be in sync with the Debian archive contents.
To detect such differences we can run diff across the Debian source packages downloaded with debsnap earlier (path source-xz-utils/xz-utils_5.8.1-2.debian) and the git repository cloned in the previous section (path xz-utils):
$ diff -u source-xz-utils/xz-utils_5.8.1-2.debian/ xz-utils/debian/
diff -u source-xz-utils/xz-utils_5.8.1-2.debian/changelog xz-utils/debian/changelog
--- debsnap/source-xz-utils/xz-utils_5.8.1-2.debian/changelog 2025-10-03 09:32:16.000000000 -0700
+++ xz-utils/debian/changelog 2025-10-12 12:18:04.623054758 -0700
@@ -5,7 +5,7 @@
* Remove the symlinks from -dev, pointing to the lib package.
(Closes: #1109354)
- -- Sebastian Andrzej Siewior <sebastian@breakpoint.cc> Fri, 03 Oct 2025 18:32:16 +0200
+ -- Sebastian Andrzej Siewior <sebastian@breakpoint.cc> Fri, 03 Oct 2025 18:36:59 +0200
In the case above, diff revealed that the changelog timestamp in the version uploaded to Debian differs from what was committed to git. This is not malicious, just a mistake by the maintainer, who probably didn't run gbp tag immediately after the upload but instead some dch command, and ended up with a different timestamp in git compared to what was actually uploaded to Debian.
Creating synthetic Debian packaging git repositories
If no Debian packaging git repository exists, or if it is lagging behind what was uploaded to Debian's archive, one can use git-buildpackage's import-dscs feature to create synthetic git commits based on the files downloaded by debsnap, ensuring the git contents fully match what was uploaded to the archive. To import a single version there is gbp import-dsc (no 's' at the end); an example invocation would be:
$ gbp import-dsc --verbose ../source-xz-utils/xz-utils_5.8.1-2.dsc
Version '5.8.1-2' imported under '/home/otto/debian/xz-utils-2025-09-29'
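To import many versions in one go, the plural gbp import-dscs can, per the git-buildpackage manual, fetch all versions of a package itself when given the --debsnap option (the exact invocation shown here is a sketch of typical usage; check gbp import-dscs --help on your version):

```
$ gbp import-dscs --debsnap xz-utils
```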
Example commit history from a repository with commits added with gbp import-dsc:
An online example repository with only a few missing uploads added using gbp import-dsc can be viewed at salsa.debian.org/otto/xz-utils-2025-09-29/-/network/debian%2Funstable
An example repository that was fully crafted using gbp import-dscs can be viewed at salsa.debian.org/otto/xz-utils-gbp-import-dscs-debsnap-generated/-/network/debian%2Flatest.
There is also dgit, which in a similar way creates a synthetic git history to allow viewing the Debian archive contents via git tools. However, its focus is on producing new package versions, so fetching a package with dgit that has not had its history recorded in dgit earlier will only show the latest version:
$ dgit clone xz-utils
canonical suite name for unstable is sid
starting new git history
last upload to archive: NO git hash
downloading http://ftp.debian.org/debian//pool/main/x/xz-utils/xz-utils_5.8.1.orig.tar.xz...
downloading http://ftp.debian.org/debian//pool/main/x/xz-utils/xz-utils_5.8.1.orig.tar.xz.asc...
downloading http://ftp.debian.org/debian//pool/main/x/xz-utils/xz-utils_5.8.1-2.debian.tar.xz...
dpkg-source: info: extracting xz-utils in unpacked
dpkg-source: info: unpacking xz-utils_5.8.1.orig.tar.xz
dpkg-source: info: unpacking xz-utils_5.8.1-2.debian.tar.xz
synthesised git commit from .dsc 5.8.1-2
HEAD is now at f9bcaf7 xz-utils (5.8.1-2) unstable; urgency=medium
dgit ok: ready for work in xz-utils
$ dgit/sid git log --graph --oneline
* f9bcaf7 xz-utils (5.8.1-2) unstable; urgency=medium 9 days ago (HEAD -> dgit/sid, dgit/dgit/sid)
|\
| * 11d3a62 Import xz-utils_5.8.1-2.debian.tar.xz 9 days ago
| * 15dcd95 Import xz-utils_5.8.1.orig.tar.xz 6 months ago
Unlike git-buildpackage managed git repositories, the dgit managed repositories cannot incorporate the upstream git history and are thus less useful for auditing the full software supply-chain in git.
Comparing upstream source packages to git contents
Equally important to the note at the beginning of the previous section, one must also keep in mind that upstream release source packages, often called release tarballs, are not guaranteed to have the exact same contents as the upstream git repository. Projects might strip out test data or extra development files from their release tarballs to avoid shipping unnecessary files to users, or they might add documentation files or versioning information into the tarball that isn't stored in git. While a small minority, there are also upstreams that don't use git at all, so the plain files in a release tarball are still the lowest common denominator for all open source software projects, and exporting and importing source code needs to interface with them.
In the case of XZ, the release tarball has additional version info and also a sizeable amount of pregenerated compiler configuration files. Detecting and comparing differences between git contents and tarballs can of course be done manually by running diff across an unpacked tarball and a checked out git repository. If using git-buildpackage, the difference between the git contents and tarball contents can be made visible directly in the import commit.
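The manual diff approach can be sketched end to end with toy data (all paths and file names below are invented purely for illustration; real tarballs and repositories go in their place):

```shell
# Simulate "git contents" vs "release tarball contents" and diff them.
set -e
work=$(mktemp -d)
cd "$work"
# What is tracked in git: just the source file
mkdir repo
echo 'source code' > repo/main.c
# The release tarball: same source plus a pregenerated file not kept in git
mkdir staging
cp repo/main.c staging/
echo 'pregenerated configure script' > staging/configure
tar -cf release.tar -C staging .
# Unpack the tarball and compare recursively against the git checkout
mkdir unpacked
tar -xf release.tar -C unpacked
diff -r repo unpacked > tarball-delta.txt || true
cat tarball-delta.txt   # shows: Only in unpacked: configure
```

Any "Only in unpacked" lines are files that exist in the release tarball but not in version control, which is exactly the class of content the XZ backdoor hid in.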
In this XZ example, consider this git history:
* b1cad34b Prepare 5.8.1-1
* a8646015 Import 5.8.1
* 2808ec2d Update upstream source from tag 'upstream/5.8.1'
|\
| * fa1e8796 (debian/upstream/v5.8, upstream/v5.8) New upstream version 5.8.1
| * a522a226 (tag: v5.8.1) Bump version and soname for 5.8.1
| * 1c462c2a Add NEWS for 5.8.1
The commit a522a226 was the upstream release commit, which upstream also tagged v5.8.1. The merge commit 2808ec2d applied the new upstream import branch contents on the Debian branch. Between these is the special commit fa1e8796 New upstream version 5.8.1, tagged upstream/v5.8. This commit and tag exist only in the Debian packaging repository, and they show exactly what contents were imported into Debian. They are generated automatically by git-buildpackage when running gbp import-orig --uscan for Debian packages with the correct settings in debian/gbp.conf. By viewing this commit one can see exactly how the upstream release tarball differs from the upstream git contents (if at all).
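A minimal debian/gbp.conf enabling this behavior could look like the following sketch (the branch names and values are illustrative; upstream-vcs-tag tells gbp import-orig which upstream git tag the imported tarball corresponds to, so the import commit is recorded as a merge against the upstream history):

```
[DEFAULT]
debian-branch = debian/latest
upstream-branch = upstream/latest
pristine-tar = True

[import-orig]
# %(version)s is substituted with the upstream version, e.g. v5.8.1
upstream-vcs-tag = v%(version)s
```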
In the case of XZ, the difference is substantial and very interesting to inspect.
To be able to easily inspect exactly what changed in the release tarball compared to git release tag contents, the best tool for the job is Meld, invoked via git difftool --dir-diff fa1e8796^..fa1e8796.
To compare changes across the new and old upstream tarball, one would need to compare commits afba662b New upstream version 5.8.0 and fa1e8796 New upstream version 5.8.1 by running git difftool --dir-diff afba662b..fa1e8796.
With all the above tips you can now go and try to audit your own favorite package in Debian and see if it is identical with upstream, and if not, how it differs.
Should the XZ backdoor have been detected using these tools?
The famous XZ Utils backdoor (CVE-2024-3094) consisted of two parts: the actual backdoor inside two binary blobs masqueraded as test files (tests/files/bad-3-corrupt_lzma2.xz, tests/files/good-large_compressed.lzma), and a small modification in the build scripts (m4/build-to-host.m4) to extract the backdoor and plant it into the built binary. The build script was not tracked in version control, but generated with GNU Autotools at release time and shipped only as an additional file in the release tarball.
The entire reason for me to write this post was to ponder whether a diligent engineer using git-buildpackage best practices could have reasonably spotted this while importing the new upstream release into Debian. The short answer is no. The malicious actor here clearly anticipated all the typical ways anyone might inspect both the git commits and the release tarball contents, and masqueraded the changes very well and over a long timespan.
First of all, XZ has, for legitimate reasons, several carefully crafted .xz files as test data to help catch regressions in the decompression code path. The test files are shipped in the release so users can run the test suite and validate that the binary is built correctly and xz works properly. Debian famously runs massive amounts of testing in its CI and autopkgtest systems across tens of thousands of packages to uphold high quality despite frequent upgrades of the build toolchain, all while supporting more CPU architectures than any other distro. Test data is useful and should stay.
When git-buildpackage is used correctly, the upstream commits are visible in the Debian packaging for easy review, but the commit cf44e4b that introduced the test files does not deviate enough from regular sloppy coding practices to really stand out. It is unfortunately very common for git commits to lack a message body explaining why the change was done, to not be properly atomic with test code and test data together in the same commit, and to be pushed directly to mainline without code review (the commit was not part of any PR in this case). Only another upstream developer could have spotted that this change was not on par with what the project expects, that the test code was never added, only test data, and thus that this commit was not just sloppy but potentially malicious.
Secondly, the fact that a new Autotools file (m4/build-to-host.m4) appeared in XZ Utils 5.6.0 is not suspicious. This is perfectly normal for Autotools. In fact, starting from version 5.8.1, XZ Utils ships an m4/build-to-host.m4 file that it actually uses.
Spotting that there is anything fishy is practically impossible by simply reading the code, as Autotools files are full of custom m4 syntax interwoven with shell script, and there are plenty of backticks (`) that spawn subshells and evals that execute variable contents further, which is just normal for Autotools. Russ Cox's XZ post explains how exactly the Autotools code fetched the actual backdoor from the test files and injected it into the build.
There is only one tiny thing that a very experienced Autotools user could potentially have noticed: the serial 30 in the version header is way too high. In theory, one could also have noticed that this Autotools file deviates from what other packages in Debian ship under the same filename, such as the serial 3, serial 5a or 5b versions. That would however require an insane amount of extra checking work, and is not something we should plan to start doing. A much simpler solution would be to strongly recommend that all open source projects stop using Autotools, to eventually get rid of it entirely.
Not detectable with reasonable effort
While planting backdoors is evil, it is hard not to feel some respect for the level of skill and dedication of the people behind this. I've been involved in a number of security breach investigations during my IT career, and never have I seen anything this well executed.
If it hadn't slowed down SSH by ~500 milliseconds and been discovered because of that, it would most likely have stayed undetected for months or years. Hiding backdoors in closed source software is relatively trivial, but hiding backdoors in plain sight in a popular open source project requires an unusual amount of expertise and creativity, as shown above.
Is the software supply-chain in Debian easy to audit?
While maintaining a Debian package source using git-buildpackage can make the package history a lot easier to inspect, most packages have incomplete configurations in their debian/gbp.conf, and thus their package development histories are not always correctly constructed, uniform or easy to compare. The Debian Policy does not mandate git usage, and there are many important packages that do not use git at all. Additionally, the Debian Policy allows non-maintainers to upload new versions to Debian without committing anything to git, even for packages whose original maintainer wanted to use git. Uploads that bypass git unfortunately happen surprisingly often.
Because of this situation, I am afraid that we could have multiple similar backdoors lurking that simply haven't been detected yet. More audits, which hopefully also get published openly, would be welcome! More people auditing the contents of the Debian archives would probably also help surface what tools and policies Debian might be missing to make the work easier, and thus help improve the security of Debian's users and trust in Debian.
Is Debian currently missing some software that could help detect similar things?
To my knowledge there is currently no system in place as part of Debian's QA or security infrastructure to verify that the upstream source packages in Debian are actually from upstream. I've come across a lot of packages where the debian/watch or other configs are incorrect, and even cases where maintainers manually created upstream tarballs because it was easier than getting the automation to work. It is obvious that for those packages the source tarball now in Debian is not at all the same as upstream's. I am not aware of any malicious cases, though (if I were, I would of course report them).
I am also aware of packages in the Debian repository that are misconfigured as type 1.0 (native) packages, mixing the upstream files and debian/ contents and having patches applied, while they actually should be configured as 3.0 (quilt) and not hide what the true upstream sources are. Debian should extend its QA tools to scan for such things. If I find a sponsor, I might build it myself as my next major contribution to Debian.
In addition to better tooling for finding mismatches in the source code, Debian could also have better tooling for tracking which source files went into built binaries, but solutions like Fraunhofer-AISEC's supply-graph or Sony's ESSTRA are not practical yet. Julien Malka's post about NixOS discusses the role of reproducible builds, which may help in some cases across all distros.
Or, is Debian missing some policies or practices to mitigate this?
Perhaps more importantly than more security scanning, the Debian Developer community should shift its general mindset from 'anyone is free to do anything' to valuing shared workflows. The ability to audit anything is severely hampered by the fact that there are so many ways to do the same thing; distinguishing a normal deviation from a malicious one is too hard when 'normal' can be almost anything.
Also, as there is no documented and recommended default workflow, both newcomers and veterans of Debian packaging might never learn any one optimal workflow, and may end up doing many steps in the packaging process in a way that kind of works but is actually wrong or unnecessary, causing process deviations that look malicious but turn out to just be the result of not fully understanding what the right way would have been.
In the long run, once individual developers' workflows are more aligned, doing code reviews will become a lot easier and smoother as the excess noise of workflow differences diminishes, and reviews will feel much more productive to all participants. If Debian fostered a culture of code reviews, we could slowly move from the current practice of mainly solo packaging work towards true collaboration forming around those code reviews.
I have been promoting increased use of Merge Requests in Debian already for some time, for example by proposing DEP-18: Encourage Continuous Integration and Merge Request based Collaboration for Debian packages. If you are involved in Debian development, please give a thumbs up in dep-team/deps!21 if you want me to continue promoting it.
Can we trust open source software?
Yes, and I would argue that we can only trust open source software. There is no way to audit closed source software, and anyone using e.g. Windows or macOS just has to trust the vendor's word when they say there are no intentional or accidental backdoors in their software. Or, when news gets out that the systems of a closed source vendor were compromised, as with Crowdstrike some weeks ago, we can't audit anything, and time after time we simply need to take their word when they say they have properly cleaned up their code base.
In theory, a vendor could give some kind of contractual or financial guarantee to its customers that there are no preventable security issues, but in practice that never happens. I am not aware of a single case where e.g. Microsoft or Oracle paid damages to their customers after a security flaw was found in their software. In theory you could also pay a vendor more to have them focus more effort on security, but since there is no way to verify what they did, or to get compensation when they didn't, any increased fees are likely just pocketed as increased profit.
Open source is clearly better overall. You can, if you are an individual with the time and skills, audit every step in the supply-chain, or you could as an organization make investments in open source security improvements and actually verify what changes were made and how security improved.
If your organisation is using Debian (or derivatives, such as Ubuntu) and you are interested in sponsoring my work to improve Debian, please reach out.
Locking down database access is probably the single most important thing a system administrator or software developer can do to prevent their application from leaking its data. As MariaDB 11.8 is the first long-term supported version with a few key new security features, let's recap the most important things every DBA should know about MariaDB in 2025.
Back in the old days, MySQL administrators had a habit of running the clumsy mysql_secure_installation script, but it has long been obsolete. A modern MariaDB database server is already secure by default and locked down out of the box, and no such extra scripts are needed. On the contrary, the database administrator is expected to open up access to MariaDB according to the specific needs of each server. Therefore, it is important that the DBA understands and correctly configures three things:
Separate application-specific users with granular permissions allowing only necessary access and no more.
Distributing and storing passwords and credentials securely
Ensuring all remote connections are properly encrypted
For holistic security, one should also consider proper auditing, logging, backups, regular security updates and more, but in this post we will focus only on the above aspects related to securing database access.
How encrypting database connections with TLS differs from web server HTTP(S)
Even though MariaDB (and other databases) use the same SSL/TLS protocol for encrypting remote connections as web servers and HTTPS, the way it is implemented is significantly different, and the different security assumptions are important for a database administrator to grasp.
Firstly, most HTTP requests to a web server are unauthenticated, meaning the web server serves public web pages and does not require users to log in. Traditionally, when a user logs in over a plain HTTP connection, the username and password are transmitted in plaintext in an HTTP POST request. Modern TLS, which was previously called SSL, does not change how HTTP works but simply encapsulates it. When using HTTPS, a web browser and a web server start an encrypted TLS connection as the very first thing, and only once it is established do they send HTTP requests and responses inside it. No passwords or other shared secrets are needed to form the TLS connection. Instead, the web server relies on a trusted third party, a Certificate Authority (CA), to vet that the TLS certificate offered by the web server can be trusted by the web browser.
For a database server like MariaDB, the situation is quite different. All users need to authenticate and log in to the server before being allowed to run any SQL or get any data out of the server. The database server and client programs have built-in authentication methods, and passwords are not, and have never been, sent in plaintext. Over the years, MySQL and its successor MariaDB have had multiple password authentication methods: the original SHA-1-based hashing, the later double-SHA-1-based mysql_native_password, followed by sha256_password and caching_sha2_password in MySQL and ed25519 in MariaDB. The MariaDB.org blog post by Sergei Golubchik recaps this history well.
Even though most modern MariaDB installations should be using TLS to encrypt all remote connections in 2025, having the authentication method be as secure as possible still matters, because authentication is done before the TLS connection is fully established.
To further harden authentication against man-in-the-middle attacks, a new password authentication method, PARSEC, was introduced in MariaDB 11.8. It builds upon the previous ed25519 public-key-based verification (similar to how modern SSH works) and combines key derivation using PBKDF2 with hash functions (SHA-512, SHA-256) and a high iteration count.
At first it may seem like a disadvantage to not wrap all connections in a TLS tunnel like HTTPS does, but having the authentication done in a MitM-resistant way regardless of the connection encryption status allows a clever extra capability that is now available in MariaDB: as the database server and client already have a shared secret that the server uses to authenticate the user, it can also be used by the client to validate the server's TLS certificate, and no third parties like CAs or root certificates are needed. MariaDB 11.8 was the first LTS version to ship with this capability for zero-configuration TLS.
Note that zero-configuration TLS also works with older password authentication methods and does not require users to have PARSEC enabled. As PARSEC is not yet the default authentication method in MariaDB, it is recommended to enable it in installations that use zero-configuration TLS encryption to maximize the security of the TLS certificate validation.
Why the root user in MariaDB has no password and how it makes the database more secure
Relying on passwords for security is problematic, as there is always a risk that they leak, and a malicious user could access the system using a leaked password. It is unfortunately far too common for database passwords to be stored in plaintext in configuration files that get accidentally committed into version control and published on GitHub and similar platforms. Every application or administrative password that exists should be tracked to ensure only the people who need it know it, and rotated at regular intervals to ensure former employees and the like won't be able to use old passwords. This password management is complex and error-prone.
Replacing passwords with other authentication methods is always advisable when possible. On a database server, whoever installed the database by running e.g. apt install mariadb-server, and configured it with e.g. nano /etc/mysql/mariadb.cnf, already has full root access to the operating system, and asking them for a password to access the MariaDB database shell is moot, since they could circumvent any checks by directly accessing the files on the system anyway. Therefore, since version 10.4, MariaDB has stopped requiring the root user to enter a password when connecting locally, and instead checks via socket authentication whether the user is the operating-system root user or equivalent (e.g. running sudo). This is an elegant way to get rid of a password that was actually unnecessary to begin with. As there is no root password anymore, the risk of an external user accessing the database as root with a leaked password is fully eliminated.
Note that socket authentication only works for local connections on the same server. If you want to access a MariaDB server remotely as the root user, you would need to configure a password for it first. This is not generally recommended, as explained in the next section.
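As a hypothetical example (the user name and grants here are invented for illustration), an operating-system account used for backups could be given passwordless, socket-authenticated access in the same way:

```sql
-- OS user 'backup' may connect locally as DB user 'backup',
-- authenticated via the Unix socket instead of a password.
CREATE USER 'backup'@'localhost' IDENTIFIED VIA unix_socket;
GRANT SELECT, LOCK TABLES ON *.* TO 'backup'@'localhost';
```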
Create separate database users for normal use and keep root for administrative use only
Out of the box a MariaDB installation is already secure by default, and only the local root user can connect to it. This account is intended for administrative use only, and for regular daily use you should create separate database users with access limited to the databases they need and the permissions required.
The most typical commands needed to create a new database for an app and a user the app can use to connect to the database would be the following:
CREATE DATABASE app_db;
CREATE USER 'app_user'@'%' IDENTIFIED BY 'your_secure_password';
GRANT ALL PRIVILEGES ON app_db.* TO 'app_user'@'%';
FLUSH PRIVILEGES;
Alternatively, if you want to use the parsec authentication method, run this to create the user:
CREATE OR REPLACE USER 'app_user'@'%'
IDENTIFIED VIA parsec
USING PASSWORD('your_secure_password');
Note that the plugin auth_parsec is not enabled by default. If you see the error message ERROR 1524 (HY000): Plugin 'parsec' is not loaded, fix this by running INSTALL SONAME 'auth_parsec';.
In the CREATE USER statements, the @'%' means that the user is allowed to connect from any host. This needs to be defined, as MariaDB always checks permissions based on both the username and the remote IP address or hostname of the user, combined with the authentication method. Note that it is possible to have multiple user@remote combinations, and they can have different authentication methods. A user could, for example, be allowed to log in locally using the socket authentication and over the network using a password.
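For instance, the following sketch (user name and network range are invented for illustration) defines two account entries for the same user name: socket authentication for local logins and a password for logins over the network:

```sql
-- Local logins: no password, verified via the Unix socket
CREATE USER 'app_user'@'localhost' IDENTIFIED VIA unix_socket;
-- Remote logins from the internal network: password required
CREATE USER 'app_user'@'192.168.1.%' IDENTIFIED BY 'your_secure_password';
```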
If you are running a custom application and you know exactly what permissions are sufficient for the database users, replace the ALL PRIVILEGES with a subset of privileges listed in the MariaDB documentation.
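For a typical CRUD-style web application, a plausible minimal subset (illustrative only; adjust to what the app actually does) would be:

```sql
GRANT SELECT, INSERT, UPDATE, DELETE ON app_db.* TO 'app_user'@'%';
```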
For new permissions to take effect, restart the database or run FLUSH PRIVILEGES.
Allow MariaDB to accept remote connections and enforce TLS
Using the above 'app_user'@'%' is not enough on its own to allow remote connections to MariaDB. The MariaDB server also needs to be configured to listen on a network interface to accept remote connections. As MariaDB is secure by default, it only accepts connections from localhost until the administrator updates its configuration. On a typical Debian/Ubuntu system, the recommended way is to drop a new custom config in e.g. /etc/mysql/mariadb.conf.d/99-server-customizations.cnf, with the contents:
[mariadbd]
# Listen for connections from anywhere
bind-address = 0.0.0.0
# Only allow TLS encrypted connections
require-secure-transport = on
For settings to take effect, restart the server with systemctl restart mariadb. After this, the server will accept connections on any network interface. If the system is using a firewall, the port 3306 would additionally need to be allow-listed.
To confirm that the settings took effect, run e.g. mariadb -e "SHOW VARIABLES LIKE 'bind_address';" , which should now show 0.0.0.0.
When allowing remote connections, it is important to also always define require-secure-transport = on to enforce that only TLS-encrypted connections are allowed. If the server is running MariaDB 11.8 and the clients are also MariaDB 11.8 or newer, no additional configuration is needed thanks to MariaDB automatically providing TLS certificates and appropriate certificate validation in recent versions.
On older long-term-supported versions of MariaDB one had to manually create the certificates, configure the ssl_key, ssl_cert and ssl_ca values on the server, and distribute the certificate to the clients as well, which was cumbersome, so it is good that this is no longer required. In MariaDB 11.8 the only additional related config that might still be worth setting is tls_version = TLSv1.3 to ensure only the latest TLS protocol version is used.
Finally, test connections to ensure they work and to confirm that TLS is used by running e.g.:
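One possible invocation (the hostname and credentials are placeholders; --ssl-verify-server-cert asks the client to validate the server certificate) is to request the client status output:

```
$ mariadb --host db.example.com --user app_user --password \
    --ssl-verify-server-cert --execute '\s'
```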
--------------
mariadb from 11.8.3-MariaDB, client 15.2 for debian-linux-gnu (x86_64)
...
Current user: app_user@192.168.1.66
SSL: Cipher in use is TLS_AES_256_GCM_SHA384, cert is OK
...
--------------
If running a Debian/Ubuntu system, see the bundled README with zcat /usr/share/doc/mariadb-server/README.Debian.gz to read more configuration tips.
Should TLS encryption be used also on internal networks?
If a database server and app are running on the same private network, the chances that the connection gets eavesdropped on or man-in-the-middle attacked by a malicious user are low. However, they are not zero, and if it happens, it can be difficult to detect, or to prove that it didn't happen. The benefit of using end-to-end encryption is that both the database server and the client can validate the certificates and keys used, log it, and later have the logs audited to prove that connections were indeed encrypted and to show how they were encrypted.
If all the computers on an internal network already have centralized user account management and centralized log collection that includes all database sessions, reusing existing SSH connections, SOCKS proxies, dedicated HTTPS tunnels, point-to-point VPNs, or similar solutions might also be a practical option. Note that the zero-configuration TLS only works with password validation methods. This means that systems configured to use PAM or Kerberos/GSSAPI can t use it, but again those systems are typically part of a centrally configured network anyway and are likely to have certificate authorities and key distribution or network encryption facilities already set up.
In a typical software app stack, however, the simplest solution is often the best, and I recommend DBAs use the end-to-end TLS encryption in MariaDB 11.8 in most cases.
Hopefully with these tips you can enjoy having your MariaDB deployments both simpler and more secure than before!
I've noticed that procrastination and the inability to be consistently productive at work have become quite common in recent years. This is clearly visible in younger people who have grown up with an endless stream of entertainment literally at their fingertips, on their mobile phone. It is, however, a trap one can escape from with a little bit of help.
Procrastination is natural; humans are lazy by nature, after all. Probably all of us have had moments when we chose to postpone a task we knew we should be working on, and instead spent our time on secondary tasks. A classic example is cleaning your apartment when you should be preparing for an exam. Some may procrastinate by not doing any work at all and just watching YouTube videos or the like. For some people, typically those who are in their 20s and early in their career, procrastination can be a big challenge, and finding the discipline to stick to planned work may need intentional extra effort, and perhaps even external help.
During my 20+ year career in software development I've been blessed to work with engineers of various backgrounds, each with their unique set of strengths. I have also helped many grow in various areas and overcome challenges, such as a lack of intrinsic motivation or a tendency to procrastinate, and some are able to get it in check with a few simple pieces of advice.
Distance yourself from the digital distractions
The key to avoiding distractions and procrastination is to make procrastinating inconvenient enough that you rarely do it. If continuing to work is easier than switching to a distraction, work is more likely to continue.
Tips to minimize digital distractions, listed in order of importance:
Put your phone away. Just like when you go to a movie and turn off your phone for two hours, you can put the phone away completely when starting to work. Put the phone in a different room to ensure there is enough physical distance between you and the distraction, so it is impossible for you to just take a "quick peek".
Turn off notifications from apps. Don't let the apps call to you like sirens luring Odysseus. You don't need all the notifications; you will see what the apps have for you when you eventually open them at a time of your own choosing.
Remove or disable social media apps, games and the like from your phone and your computer. You can reinstall them when you are on vacation; you can probably live without them for some time. If you can't remove them, explore your phone's screen time restriction features to limit your own access to the apps that most often waste your time. These features are sometimes listed in the phone settings under "digital health".
Have a separate work computer and work phone. Having dedicated ones just for work that are void of all unnecessary temptations helps keep distance from the devices that could derail your focus.
Listen to music. If you feel your brain needs a dose of dopamine to get you going, listening to music helps satisfy your brain's cravings while still letting you keep working.
Doing a full digital detox is probably not practical, or at least not sustainable for an extended time. One needs apps to stay in touch with friends and family, and staying current in software development probably requires spending some time reading news online. However, the tips above can help contain the distractions and minimize the spontaneous attention they get.
Some of the distractions may, ironically, come from the work itself, for example Slack or new-email notifications. I recommend turning them off for a couple of hours every day to have some distraction-free time. It should be enough to check work email a couple of times a day. Checking it every hour probably does not add much overall value for the company, unless you work in sales or support where the main task itself is responding to emails.
Distraction free work environment
Following the same principle of distancing yourself from distractions, try to use a dedicated physical space for working. If you don't have a spare room to dedicate to work, use a neighborhood café, sign up for a local co-working space, or start commuting to the company office to find a space where you can focus on work.
Break down tasks into smaller steps
Sometimes people postpone tasks because they feel intimidated by the size or complexity of the task. In software engineering in particular, problems may be vague and appear large until one reaches the breakthrough that brings the vision of how to tackle them. Breaking problems down into smaller, more manageable pieces has many advantages in software engineering. Not only can it help with task avoidance, it can also make the problem easier to analyze, make solutions easier to propose and test, and build a solid foundation to expand upon to ultimately reach a full solution to the larger problem.
Working on big problems as a chain of smaller tasks may also offer more opportunities to celebrate success on completing each subtask and help getting in a suitable cadence of solving a single thing, taking a break and then tackling the next issue.
Breaking down a task into concrete steps may also lead to more realistic time estimates. Sometimes the procrastination isn't real: someone could just be overly ambitious and feel bad about themselves for not doing an unrealistic amount of work.
Intrinsic motivation
Of course, you should follow your passion when possible. Strive to pick a career that you enjoy, and thus maximize the intrinsic motivation you experience. However, even a dream job is still a job. Nobody is ever paid to do whatever they want. Any work will include at least some tasks that feel like a chore or otherwise like something you would not do unless paid to.
Some would say that the definition of work itself is having to do things one would otherwise not do. You can only fully do whatever you want while on vacation or when you choose to not have a job at all. But if you have a job, you simply need to find the intrinsic motivation to do it.
Simply put, some tasks are just unpleasant or boring. Our natural inclination is to avoid them in favor of more enjoyable activities. For these situations we just have to find the discipline to force ourselves to do the tasks and figuratively speaking whip ourselves into being motivated to complete the tasks.
Extrinsic motivation
As the name implies, this is something people external to you need to provide, such as your employer or manager. If you have challenges in managing yourself and delivering results on a regular basis, somebody else needs to set goals and deadlines and keep you accountable for them. At the end of the day this means that eventually you will stop receiving salary or other payments unless you did your job.
Forcing people to do something isn t nice, but eventually it needs to be done. It would not be fair for an employer to pay those who did their work the same salary as those who procrastinated and fell short on their tasks.
If you work solo, you can also simulate extrinsic motivation by publicly announcing milestones and deadlines to build up pressure on yourself to meet them and avoid public humiliation. It is a well-studied phenomenon that most university students procrastinate at the start of assignments and truly start working on them only once the deadline is imminent.
External help for addictions
If procrastination is mainly due to a single distraction that is always on your mind, it may be a sign of an addiction. For example, constantly thinking about a computer game or staying up late playing a computer game, to the extent that it seriously affects your ability to work, may be a symptom of an addiction, and getting out of it may be easier with external help.
Discipline and structure
Most of the time procrastination is not due to an addiction, but simply to a lack of self-discipline and structure. The good news is that these things can be learned. It is mostly a matter of forming new habits, which most young software engineers pick up more or less automatically while working alongside more senior ones.
Hopefully these tips can help you stay on track and ensure you do everything you are expected to do with clear focus, and on time!
Beyond Debian: Useful for other distros too
Every two years Debian releases a new major version of its Stable series. The differences between consecutive Debian Stable releases therefore represent two years of new developments, both in Debian as an organization and in its native packages, but also in all the other packages that land in the new Stable release and are shipped by other distributions as well.
If you're not paying close attention to everything that's going on all the time
in the Linux world, you miss a lot of the nice new features and tools. It's
common for people to only realize there's a cool new trick available only years
after it was first introduced.
Given these considerations, the tips that I'm describing will eventually be
available in whatever other distribution you use, be it because it's a Debian
derivative or because it just got the same feature from the upstream project.
I'm not going to list "passive" features (as good as they can be), the focus
here is on new features that might change how you configure and use your
machine, with a mix between productivity and performance.
Debian 13 - Trixie
I have been a Debian Testing user for longer than 10 years now (and I recommend
it for non-server users), so I'm not usually keeping track of all the cool
features arriving in the new Stable releases because I'm continuously receiving
them through the Debian Testing rolling release.
Nonetheless, as a Debian Developer I'm in a good position to point out the ones
I can remember. I would also like other Debian Developers to do the same as I'm
sure I would learn something new.
The Debian 13 release notes contain a "What's new" section, which lists the first two items here and a few other things; in other words, take my list as an addition to the release notes.
Debian 13 was released on 2025-08-09, and these are nice things you shouldn't
miss in the new release, with a bonus one not tied to the Debian 13 release.
1) wcurl
Have you ever had to download a file from your terminal using curl and didn't
remember the parameters needed? I did.
Nowadays you can use wcurl; "a command line tool which lets you download URLs
without having to remember any parameters."
Simply call wcurl with one or more URLs as parameters and it will download
all of them in parallel, performing retries, choosing the correct output file
name, following redirects, and more.
Try it out:
wcurl example.com
wcurl comes installed as part of the curl package on Debian 13 and in any other
distribution you can imagine, starting with curl 8.14.0.
I've written more about wcurl in its release
announcement
and I've done a lightning talk presentation in DebConf24, which is linked in
the release announcement.
2) HTTP/3 support in curl
Debian has become the first stable Linux distribution to ship curl with support
for HTTP/3. I've written about this in July
2024, when we
first enabled it. Note that we first switched the curl CLI to GnuTLS, but then
ended up releasing the curl CLI linked with OpenSSL (as support arrived later).
Debian was the first stable Linux distro to enable it. Among rolling-release-based distros, Gentoo enabled it first in their non-default flavor of the package, and Arch Linux did it three months before we pushed it to Debian Unstable/Testing/Stable-backports, kudos to them!
HTTP/3 is not used by default by the curl CLI; you have to enable it with --http3 or --http3-only.
Try it out:
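For example, to request a page over HTTP/3 and print the HTTP version that was actually negotiated (the URL is just a placeholder; any HTTP/3-capable site works):

```shell
# Requires a curl build with HTTP/3 support, such as the one in Debian 13.
# Prints the negotiated HTTP version, e.g. "3" when HTTP/3 was used.
curl --http3 -sI https://example.com -o /dev/null -w '%{http_version}\n'
```
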
3) systemd soft-reboot
Starting with systemd v254, there's a new soft-reboot option: a userspace-only reboot, much faster than a full reboot if you don't need to reboot the kernel.
You can read the announcement in the systemd v254 GitHub release notes.
Try it out:
# This will reboot your machine!
systemctl soft-reboot
4) apt --update
Are you tired of being required to run sudo apt update just before sudo apt upgrade or sudo apt install $PACKAGE? So am I!
The new --update option lets you do both things in a single command:
I love this, but it's still not where it should be; fingers crossed for a simple apt upgrade to behave like other package managers by updating its cache as part of the task, maybe in Debian 14?
Try it out:
sudo apt upgrade --update
# The order doesn't matter
sudo apt --update upgrade
This is especially handy for container usage, where you have to update the apt
cache before installing anything, for example:
podman run debian:stable /bin/bash -c 'apt install --update -y curl'
5) powerline-go
powerline-go is a powerline-style prompt written in Golang, so it's much more
performant than its Python alternative powerline.
powerline-style prompts are quite useful to show things like the current status
of the git repo in your working directory, exit code of the previous command,
presence of jobs in the background, whether or not you're in an ssh session,
and more.
Try it out:
sudo apt install powerline-go
Then add this to your .bashrc:
function _update_ps1() {
    PS1="$(/usr/bin/powerline-go -error $? -jobs $(jobs -p | wc -l))"

    # Uncomment the following line to automatically clear errors after showing
    # them once. This not only clears the error for powerline-go, but also for
    # everything else you run in that shell. Don't enable this if you're not
    # sure this is what you want.
    #set "?"
}

if [ "$TERM" != "linux" ] && [ -f "/usr/bin/powerline-go" ]; then
    PROMPT_COMMAND="_update_ps1; $PROMPT_COMMAND"
fi
Or this to .zshrc:
function powerline_precmd() {
    PS1="$(/usr/bin/powerline-go -error $? -jobs ${${(%):%j}:-0})"

    # Uncomment the following line to automatically clear errors after showing
    # them once. This not only clears the error for powerline-go, but also for
    # everything else you run in that shell. Don't enable this if you're not
    # sure this is what you want.
    #set "?"
}
If you'd like to have your prompt start on a new line, like I have in the screenshot above, you just need to add -newline to the powerline-go invocation in your .bashrc/.zshrc.
6) Gnome System Monitor Extension
Tips number 6 and 7 are for Gnome users.
Gnome now ships a system monitor extension which lets you see the current load of your machine at a glance from the top bar.
I've found this quite useful for machines where I'm required to install
third-party monitoring software that tends to randomly consume more resources
than it should. If I feel like my machine is struggling, I can quickly glance
at its load to verify if it's getting overloaded by some process.
The extension is not as complete as system-monitor-next, as it doesn't show temperatures or histograms, but at least it's officially part of Gnome, easy to install, and supported by them.
Try it out:
And then enable the extension from the "Extension Manager" application.
7) Gnome setting for battery charging profile
After having to learn more about batteries in order to get into FPV drones,
I've come to have a bigger appreciation for solutions that minimize the
inevitable loss of capacity that accrues over time.
There's now a "Battery Charging" setting (under the "Power" section) which lets you choose between two different profiles: "Maximize Charge" and "Preserve Battery Health".
On supported laptops, this setting is an easy way to set thresholds for when charging should start and stop, just like you could with the tlp package, but now from the Gnome settings.
To increase the longevity of my laptop battery, I always keep it at "Preserve
Battery Health" unless I'm traveling.
What I would like to see next is support for choosing different "Power Modes" based on whether the laptop is plugged in, and on the battery charge percentage.
There's a GNOME
issue
tracking this feature, but there's some pushback on whether this is the right
thing to expose to users.
In the meantime, there are some workarounds mentioned in that issue which
people who really want this feature can follow.
If you would like to learn more about batteries, Battery University is a great starting point, short of getting into FPV drones and being forced to handle batteries without a Battery Management System (BMS).
And if by any chance this sparks your interest in FPV drones, Joshua Bardwell's
YouTube channel is a great resource:
@JoshuaBardwell.
8) Lazygit
Emacs users are already familiar with the legendary magit, a terminal-based UI for Git.
Lazygit is an alternative for non-Emacs users; you can integrate it with neovim or just use it directly.
I'm still playing with lazygit and haven't integrated it into my workflows,
but so far it has been a pleasant experience.
You should check out the demos from the lazygit GitHub
page.
Try it out:
sudo apt install lazygit
And then call lazygit from within a git repository.
9) neovim
neovim has been shipped in Debian since 2016, but upstream has been doing a lot of
work to improve the experience out-of-the-box in the last couple of years.
If you're a neovim power user, you're likely not installing it from the official repositories, but for those who are, Debian 13 comes with version 0.10.4, which brings the following improvements compared to the version in Debian 12:
Treesitter support for C, Lua, Markdown, with the possibility of adding any
other languages as needed;
Better spellchecking due to treesitter integration (spellsitter);
Mouse support enabled by default;
Commenting support out-of-the-box;
Check :h commenting for details, but the
tl;dr is that you can use gcc to comment the current line and gc to comment
the current selection.
OSC52 support.
Especially handy for those using neovim over an ssh
connection, this protocol lets you copy something from within the neovim
process into the clipboard of the machine you're using to connect through ssh.
In other words, you can copy from neovim running in a host over ssh and paste
it in the "outside" machine.
10) [Bonus] Running old Debian releases
The bonus tip is not specific to the Debian 13 release, but something I've
recently learned in the #debian-devel IRC channel.
Did you know there are usable container images for all past Debian releases?
I'm not talking "past" as in "some of the older releases", I'm talking past as
in "literally every Debian release, including the very first one".
Tianon Gravi "tianon" is the Debian Developer responsible for making this
happen, kudos to him!
There's a small gotcha: the releases Buzz (1.1) and Rex (1.2) require a 32-bit host, otherwise you will get the error "Out of virtual memory!", but starting with Bo (1.3) everything should work on amd64/arm64.
Try it out:
sudo apt install podman
podman run -it docker.io/debian/eol:bo
Don't be surprised to find that apt/apt-get is not available inside the container; that's because apt first appeared in Debian Slink (2.1).
Historically, the primary way to contribute to Debian has been to email the Debian bug tracker with a code patch. Now that 92% of all Debian source packages are hosted at salsa.debian.org, the GitLab instance of Debian, more and more developers are using Merge Requests, but not necessarily in the optimal way. In this post I share what I've found the best practices to be, presented in the natural workflow from forking to merging.
Why use Merge Requests?
Compared to sending patches back and forth in email, using a git forge to review code contributions brings several benefits:
Contributors can see the latest version of the code immediately when the maintainer pushes it to git, without having to wait for an upload to Debian archives.
Contributors can fork the development version and easily base their patches on it, and help test that the software continues to function correctly at that exact revision.
Both maintainer and other contributors can easily see what was already submitted and avoid doing duplicate work.
It is easy for anyone to comment on a Merge Request and participate in the review.
Integrating CI testing is easy in Merge Requests by activating Salsa CI.
Tracking the state of a Merge Request is much easier than browsing Debian bug reports tagged "patch", and the cycle of submit, review, re-submit and re-review is much easier to manage in the dedicated Merge Request view than with participants setting up their own email plugins for code reviews.
Merge Requests can have extra metadata, such as "Approved", and the metadata often updates automatically, for example a Merge Request being closed automatically when the Git commit ID from it is pushed to the target branch.
Keeping these benefits in mind will help ensure that the best practices make sense and are aligned with maximizing these benefits.
Finding the Debian packaging source repository and preparing to make a contribution
Before sinking any effort into a package, start by checking its overall status at the excellent Debian Package Tracker. This provides a clear overview of the package's general health in Debian, when it was last uploaded and by whom, and whether there is anything special affecting the package right now. This page also has quick links to the package's Debian bug tracker, the build status overview and more. Most importantly, in the General section, the VCS row links to the version control repository the package advertises. Before opening that page, note the version most recently uploaded to Debian. This is relevant because nothing in Debian currently enforces that the package in version control is actually the same as the latest upload to Debian.
Following the Browse link opens the Debian package source repository, which is usually a project page on Salsa. To contribute, start by clicking the "Fork" button, select your own personal namespace and, under "Branches to include", pick "Only the default branch" to avoid including unnecessary temporary development branches.
Once forking is complete, clone it with git-buildpackage. For this example repository, the exact command would be gbp clone --verbose git@salsa.debian.org:otto/glow.git.
Next, add the original repository as a new remote and pull from it to make sure you have all relevant branches. Using the same fork as an example, the commands would be:
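Mirroring the pattern used later in this post for pulling from a fork, the commands would look roughly like this (the go-team remote name matches the rebase example further below; the exact repository URL is an assumption, so check the actual project page on Salsa):

```shell
# Add the original packaging repository as a second remote, then pull all
# branches gbp tracks (debian/latest, upstream/latest, pristine-tar).
# The URL is an assumption; verify it on the project page.
git remote add go-team https://salsa.debian.org/go-team/packages/glow.git
gbp pull --verbose go-team
```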
The gbp pull command can be repeated whenever you want to make sure the main branches are in sync with the original repository. Finally, run gitk --all & to visually browse the Git history and note the various branches and their states in the two remotes. Also note the style of commit messages and the repository structure the project uses, and make sure your contributions follow the same conventions to maximize the chances of the maintainer accepting your contribution.
It may also be good to build the source package to establish a baseline of the current state and what kind of binaries and .deb packages it produces. If using Debcraft, one can simply run debcraft build in the Git repository.
Submitting a Merge Request for a Debian packaging improvement
Always start by making a development branch by running git checkout -b <branch name> to clearly separate your work from the main branch.
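As a concrete sketch (the branch names are illustrative, and the scratch repository only exists to make the example runnable anywhere; in your gbp clone you would run only the final checkout):

```shell
set -eu
# Scratch repository so the sketch is self-contained
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.email=you@example.com -c user.name=You commit -q --allow-empty -m "init"
git branch debian/latest
# Create a clearly named development branch off the packaging branch
git checkout -q debian/latest
git checkout -q -b fix/typo-in-description
git rev-parse --abbrev-ref HEAD   # prints: fix/typo-in-description
```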
When making changes, remember to follow the conventions you already see in the package. It is also important to be aware of general guidelines on how to make good Git commits.
If you are not able to immediately finish coding, it may be useful to publish the Merge Request as a draft so that the maintainer and others can see that you started working on something and what general direction your change is heading in.
If you don t finish the Merge Request in one sitting and return to it another day, you should remember to pull the Debian branch from the original Debian repository in case it has received new commits. This can be done easily with these commands (assuming the same remote and branch names as in the example above):
git fetch go-team
git rebase -i go-team/debian/latest
Frequent rebasing is a great habit that helps keep the Git history linear, and restructuring and rewording your commits makes the history easier to follow and the reasons for the changes easier to understand.
When pushing improved versions of your branch, use git push --force. While GitLab does allow squashing, I recommend against it. It is better that the submitter makes sure the final version is a neat and clean set of commits that the receiver can easily merge without having to do any rebasing or squashing themselves.
When ready, remove the draft status of the Merge Request and wait patiently for review. If the maintainer does not respond in several days, try sending an email to <source package name>@packages.debian.org, which is the official way to contact maintainers. You could also post a comment on the MR and tag the last few committers in the same repository so that a notification email is triggered. As a last resort, submit a bug report to the Debian bug tracker to announce that a Merge Request is pending review. This leaves a permanent record for posterity (or the Debian QA team) of your contribution. However, most of the time simply posting the Merge Request in Salsa is enough; excessive communication might be perceived as spammy, and someone needs to remember to check that the bug report is closed.
Respect the review feedback, respond quickly and avoid Merge Requests getting stale
Once you get feedback, try to respond as quickly as possible. When people participating have everything fresh in their minds, it is much easier for the submitter to rework it and for the reviewer to re-review. If the Merge Request becomes stale, it can be challenging to revive it. Also, if it looks like the MR is only waiting for re-review but nothing happens, re-read the previous feedback and make sure you actually address everything. After that, post a friendly comment where you explicitly say you have addressed all feedback and are only waiting for re-review.
Reviewing Merge Requests
This section about reviewing is not exclusive to Debian package maintainers: anyone can contribute to Debian by reviewing open Merge Requests. Typically, the larger an open source project gets, the more help is needed in reviewing and testing changes to avoid regressions, and all diligently done work is welcome. As the famous Linus quote goes, "given enough eyeballs, all bugs are shallow".
On salsa.debian.org, you can browse open Merge Requests per project or for a whole group, just like on any GitLab instance.
Reviewing Merge Requests is, however, most fun when they are fresh and the submitter is active. Thus, the best strategy is to ensure you have subscribed to email notifications in the repositories you care about so you get an email for any new Merge Request (or Issue) immediately when posted.
When you see a new Merge Request, try to review it within a couple of days. If you cannot review in a reasonable time, posting a small note that you intend to review it later will feel better to the submitter compared to not getting any response.
Personally, I have a habit of assigning myself as a reviewer so that I can keep track of my whole review queue at https://salsa.debian.org/dashboard/merge_requests?reviewer_username=otto, and I recommend the same to others. Seeing the review assignment happen is also a good way to signal to the submitter that their submission was noted.
Reviewing commit-by-commit in the web interface
Reviewing using the web interface works well in general, but I find that the way GitLab designed it is not ideal. In my ideal review workflow, I first read the Git commit message to understand what the submitter tried to do and why; only then do I look at the code changes in the commit. In GitLab, to do this one must first open the "Commits" tab and then click on the last commit in the list, as it is sorted in reverse chronological order with the first commit at the bottom. Only after that do I see the commit message and contents. Getting to the next commit is easy by simply clicking "Next".
When adding the first comment, I choose "Start review" and for the following remarks "Add to review". Finally, I click "Finish review" and "Submit review", which will trigger one single email to the submitter with all my feedback. I try to avoid using the "Add comment now" option, as each such comment triggers a separate notification email to the submitter.
Reviewing and testing on your own computer locally
For the most thorough review, I pull the code to my laptop for local review with git pull <remote url> <branch name>. There is no need to run git remote add as pulling using a URL directly works too and saves from needing to clean up old remotes later.
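For example, using the fork from this post's running example (the branch name is hypothetical):

```shell
# Pull a Merge Request branch directly by URL; no remote clean-up needed later
git pull https://salsa.debian.org/otto/glow.git fix/typo-in-description
```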
Pulling the Merge Request contents locally allows me to build, run and inspect the code deeply and review the commits with full metadata in gitk or equivalent.
Investing enough time in writing feedback, but not too much
See my other post for more in-depth advice on how to structure your code review feedback.
In Debian, I would emphasize patience, to allow the submitter time to rework their submission. Debian packaging is notoriously complex, and even experienced developers often need more feedback and time to get everything right. Avoid the temptation to rush the fix in yourself. In open source, Git credits are often the only salary the submitter gets. If you take the idea from the submission and implement it yourself, you rob the submitter of the opportunity to get feedback, try to improve and finally feel accomplished. Sure, it takes extra effort to give feedback, but the contributor is likely to feel ownership of their work and later return to further improve it.
If a submission looks hopelessly low quality and you feel that giving feedback is a waste of time, you can simply respond with something along the lines of: "Thanks for your contribution and interest in helping Debian. Unfortunately, looking at the commits, I see several shortcomings, and it is unlikely a normal review process is enough to help you finalize this. Please reach out to Debian Mentors to get a mentor who can give you more personalized feedback."
There might also be contributors who just "dump the code", ignore your feedback and never return to finalize their submission. If a contributor does not return to finalize their submission within 3-6 months, in my own projects I simply finalize it myself and thank the contributor in the commit message (but do not mark them as the author).
Despite best practices, you will occasionally still end up doing some things in vain, but that is how volunteer collaboration works. We all just need to accept that some communication will inevitably feel like wasted effort, but it should be viewed as a necessary investment in order to get the benefits from the times when the communication led to real and valuable collaboration. Please just do not treat all contributors as if they are unlikely to ever contribute again; otherwise, your behavior will cause them not to contribute again. If you want to grow a tree, you need to plant several seeds.
Approving and merging
If the review goes well, you are ready to approve, and you are the only maintainer, you can proceed to merge right away. If there are multiple maintainers, or if you otherwise think that someone else might want to chime in before it is merged, use the "Approve" button to show that you approve the change but leave it unmerged.
The person who approved does not necessarily have to be the person who merges. The point of the Merge Request review is not separation of duties in committing and merging; the main purpose of a code review is to have a different set of eyeballs looking at the change before it is committed into the main development branch for all eternity. In some packages, the submitter might actually merge themselves once they see another developer has approved. In some rare Debian projects, there might even be separate people taking the roles of submitting, approving and merging, but most of the time these three roles are filled by two people, either as submitter and approver+merger or submitter+merger and approver.
If you are not a maintainer at all and do not have permission to click "Approve", simply post a comment summarizing your review and stating that you approve the change and support merging it. This can help the maintainers review and merge faster.
Making a Merge Request for a new upstream version import
Unlike many other Linux distributions, in Debian each source package has its own version control repository. The Debian sources consist of the upstream sources with an additional debian/ subdirectory that contains the actual Debian packaging. For the same reason, a typical Debian packaging Git repository has a debian/latest branch that has changes only in the debian/ subdirectory while the surrounding upstream files are the actual upstream files and have the actual upstream Git history. For details, see my post explaining Debian source packages in Git.
Because of this Git branch structure, importing a new upstream version will typically modify three branches: debian/latest, upstream/latest and pristine-tar. When doing a Merge Request for a new upstream import, submit only one Merge Request targeting one branch: the one merging your new changes into the debian/latest branch.
There is no need to submit the upstream/latest branch or the pristine-tar branch. Their contents are fixed and mechanically imported into Debian. There are no changes that the reviewer in Debian can request the submitter to do on these branches, so asking for feedback and comments on them is useless. All review, comments and re-reviews concern the content of the debian/latest branch only.
The Merge Request source branch does not even need to be debian/latest. Personally, I always execute the new version import (with gbp import-orig --verbose --uscan) and prepare and test everything on debian/latest, but when it is time to submit it for review, I run git checkout -b import/$(dpkg-parsechangelog -SVersion) to get a branch named e.g. import/1.0.1 and then push that for review.
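Putting those steps together, the flow might look like this sketch (the push step and its flags are my assumptions; adapt them to your remote setup):

```shell
# Import the new upstream release announced by debian/watch
gbp import-orig --verbose --uscan
# ...build and test on debian/latest...
# Then submit the result from a dedicated branch, e.g. import/1.0.1
git checkout -b "import/$(dpkg-parsechangelog -SVersion)"
git push --set-upstream origin "import/$(dpkg-parsechangelog -SVersion)"
```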
Reviewing a Merge Request for a new upstream version import
Reviewing and testing a new upstream version import is currently a bit tricky, but possible. The key is to use gbp pull to automate fetching all branches from the submitter's fork. Assume you are reviewing a submission targeting the Glow package repository and there is a Merge Request from user otto's fork. As the maintainer, you would run the commands:
git remote add otto https://salsa.debian.org/otto/glow.git
gbp pull --verbose otto
If there was feedback in the first round and you later need to pull a new version for re-review, running gbp pull --force will not suffice, and this trick of manually fetching each branch and resetting them to the submitter's version is needed:
for BRANCH in pristine-tar upstream debian/latest
do
    git checkout $BRANCH
    git reset --hard origin/$BRANCH
    git pull --force https://salsa.debian.org/otto/glow.git $BRANCH
done
Once review is done, either click "Approve" and let the submitter push everything, or alternatively, push all the branches you pulled locally yourself. In GitLab and other forges, the Merge Request will automatically be marked as "Merged" once the commit ID that was the head of the Merge Request is pushed to the target branch.
Please allow enough time for everyone to participate
When working on Debian, keep in mind that it is a community of volunteers. It is common for people to do Debian stuff only on weekends, so you should patiently wait for at least a week so that enough workdays and weekend days have passed for the people you interact with to have had time to respond on their own Debian time.
Having to wait may feel annoying and disruptive, but try to look at the upside: you do not need to do extra work while waiting for others. In some cases, that waiting can be useful thanks to the "sleep on it" phenomenon: when you look at your own submission some days later with fresh eyes, you might notice something you overlooked earlier and improve your code change even without other people's feedback!
Contribute reviews!
The last but not least suggestion is to make a habit of contributing reviews to packages you do not maintain. As large open source projects such as the Linux kernel already show, projects can receive far more code submissions than they can handle; the bottleneck for progress and maintaining quality becomes the reviews themselves.
For Debian, as an organization and as a community, to be able to renew and grow new contributors, we need more of the senior contributors to shift focus from merely maintaining their packages and writing code to also intentionally interact with new contributors and guide them through the process of creating great open source software. Reviewing code is an effective way to both get tangible progress on individual development items and to transfer culture to a new generation of developers.
Why aren't 100% of all Debian source packages hosted on Salsa?
As seen at trends.debian.net, more and more packages are using Salsa. Debian does not, however, have any policy about it. In fact, the Debian Policy Manual does not even mention the word Salsa anywhere. Adoption of Salsa has so far been purely organic, as in Debian each package maintainer has full freedom to follow their own preferences regarding version control.
I hope the trend of using Salsa will continue and that more shared workflows emerge so that collaboration gets easier. To drive the culture of Merge Request based collaboration and more, I drafted the Debian proposal DEP-18: Encourage Continuous Integration and Merge Request based Collaboration for Debian packages. If you are active in Debian and you think DEP-18 is beneficial for Debian, please give a thumbs up at dep-team/deps!21.
Debian packaging is notoriously hard. Far too many new contributors give up while trying, and many long-time contributors leave due to burnout from having to do too many thankless maintenance tasks. Some just skip testing their changes properly because it feels like too much toil.
Debcraft is my attempt to solve this by automating all the boring stuff, making it easier to learn the correct practices, and helping both new and experienced packagers better track changes in source code and build artifacts.
The challenge of declarative packaging code
Unlike rpm or apk packages, deb package sources by design avoid having one massive procedural packaging recipe. Instead, the packaging is defined in multiple declarative files in the debian/ subdirectory. For example, instead of a script running install -m 755 bin/btop /usr/bin/btop, there is a file debian/btop.install containing the line usr/bin/btop.
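A runnable sketch of the declarative side of that example (the scratch directory only exists to make the snippet self-contained):

```shell
set -eu
# Scratch directory standing in for an unpacked source package
tmp=$(mktemp -d)
cd "$tmp"
mkdir -p debian
# Declarative: debian/btop.install lists the paths to ship, one per line,
# instead of a procedural install command in a script
printf 'usr/bin/btop\n' > debian/btop.install
cat debian/btop.install   # prints: usr/bin/btop
```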
This makes the overall system more robust and reliable, and allows, for example, extensive static analysis to find problems without having to build the package. The notable exception is the debian/rules file, which contains procedural code that can modify any aspect of the package build. Almost all other files are declarative.
Benefits include, among others, that the effect of a Debian-wide policy change can be relatively easily predicted by scanning what attributes and configurations all packages have declared.
The drawback is that to understand the syntax and meaning of each file, one must understand which build tools read which files and traverse potentially multiple layers of abstraction. In my view, this is the root cause for most of the perceived complexity.
Common complaints about .deb packaging
Related to the above, people learning Debian packaging frequently voice the following complaints:
Debian has too many tools to learn, often with overlapping or duplicate functionality.
Too much outdated and inconsistent documentation that makes learning the numerous tools needlessly hard.
Lack of documentation of the generally agreed best practices, mainly due to Debian's reluctance as a project to pick one tool and deprecate the alternatives.
Multiple layers of abstraction and lack of clarity on what any single change in the debian/ subdirectory leads to in the final package.
The requirement to develop Debian packages on a Debian system.
How Debcraft solves (some of) this
Debcraft is intentionally opinionated for the sake of simplicity, and makes heavy use of git, git-buildpackage, and most importantly Linux containers, supporting both Docker and Podman.
By using containers, Debcraft frees the user from the requirement of having to run Debian. This makes .deb packaging more accessible to developers running some other Linux distribution, or even Mac or Windows (with WSL). Of course, we want developers to run Debian (or a derivative like Ubuntu), but we want them even more to build, test and ship their software as .deb packages. Even for Debian/Ubuntu users, having everything done inside clean, hermetic containers of the latest target distribution version will yield more robust, secure and reproducible builds and tests. All containers are built automatically on the fly using best practices for layer caching, making everything easy and fast.
Debcraft has simple commands to make it easy to build, rebuild, test and update packages. The most fundamental command is debcraft build, which will not only build the package but also fetch the sources if not already present and, with flags such as --distribution or --source-only, build for any requested Debian or Ubuntu release or generate only the source package, for Debian or PPA upload purposes.
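For illustration (the flags come from the text above; the distribution value is a hypothetical example):

```shell
debcraft build                      # build for the release named in debian/changelog
debcraft build --distribution sid   # hypothetical value: target a specific release
debcraft build --source-only        # produce only the source package, e.g. for a PPA upload
```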
For ease of use, the output is colored and includes helpful explanations on what is being done, and suggests relevant Debian documentation for more information.
Most importantly, the build artifacts, along with various logs, are stored in separate directories, making it easy to compare before and after to see what changed as a result of the code or dependency updates (utilizing diffoscope among others).
While the above helps to debug successful builds, there is also the debcraft shell command to make debugging failed builds significantly easier by dropping into a shell where one can run various dh commands one-by-one.
Once the build works, running the package's autopkgtests is as easy as debcraft test. As with all other commands, Debcraft is smart enough to read information such as the target distribution from the debian/changelog entry.
When the package is ready to be released, the debcraft release command will create the Debian source package in the correct format and facilitate uploading it either to your Personal Package Archive (PPA) or, if you are a Debian Developer, to the official Debian archive.
Automatically improve and update packages
Additionally, the command debcraft improve will try to fix all issues that are possible to address automatically. It utilizes, among others, lintian-brush, codespell and debputy. This makes repetitive Debian maintenance tasks easier, such as updating the package to follow the latest Debian policies.
To update the package to the latest upstream version, there is also debcraft update. It reads the package configuration files, such as debian/gbp.conf and debian/watch, and attempts to import the latest upstream version, refresh patches, build and run autopkgtests. If everything passes, the new version is committed. This helps automate the process of updating to new upstream versions.
Try out Debcraft now!
On recent versions of Debian and Ubuntu, Debcraft can be installed simply by running apt install debcraft. To use Debcraft on some other distribution, or to get the latest features available in the development version, install it using:
git clone https://salsa.debian.org/debian/debcraft.git
cd debcraft
make install-local
To see exact usage instructions run debcraft --help.
Contributions welcome
The current Debcraft version, 0.5, still has some rough edges and missing features, but I have personally been using it for over a year to maintain all my packages in Debian. If you come across an issue, feel free to file a report at https://salsa.debian.org/debian/debcraft/-/issues or submit an improvement at https://salsa.debian.org/debian/debcraft/-/merge_requests. The code is intentionally written entirely in shell script to keep the barrier to code contribution as low as possible.
By the way, if you aspire to become a Debian Developer, and want to follow my examples in using state-of-the-art tooling and collaborate using salsa.debian.org, feel free to reach out for mentorship. I am glad to see more people contribute to Debian!
This post is based on a presentation given at the Validos annual members meeting on June 25th, 2025.
When I started getting into Linux and open source over 25 years ago, the majority of the software development in this area was done by academics and hobbyists. The number of companies participating in open source has since exploded in parallel with the growth of mobile and cloud software, the majority of which is built on top of open source. For example, Android powers most mobile phones today and is based on Linux. Almost all software used to operate large cloud provider data centers, such as AWS or Google, is either open source or made in-house by the cloud provider.
Pretty much all companies, regardless of the industry, have been using open source software at least to some extent for years. However, the degree to which they collaborate with the upstream origins of the software varies. I encourage all companies in a technical industry to start contributing upstream. There are many benefits to having a good relationship with your upstream open source software vendors, both for the short term and especially for the long term. Moreover, with the rollout of the Cyber Resilience Act (CRA) in the EU in 2025-2027, the law will require software companies to contribute security fixes upstream to the open source projects their products use.
To ensure the process is well managed, business-aligned and legally compliant, there are a few dos and don'ts that are important to be aware of.
Maintain your SBOMs
For every piece of software, regardless of whether the code was written in-house, taken from an open source project, or a combination of these, every company needs to produce a Software Bill of Materials (SBOM). SBOMs provide a standardized and interoperable way to track what software and which versions are used where, what software licenses apply, who holds the copyright of which component, which security fixes have been applied, and so forth.
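As a minimal sketch, one component entry in the SPDX tag-value format might look like this (all values are illustrative, not taken from a real SBOM):

```text
PackageName: btop
PackageVersion: 1.3.0
PackageLicenseConcluded: Apache-2.0
```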
A catalog of SBOMs, or equivalent, forms the backbone of software supply-chain management in corporations.
Identify your strategic upstream vendors
The SBOMs are likely to reveal that for any piece of non-trivial software, there are hundreds or thousands of upstream open source projects in use. Few organizations have resources to contribute to all of their upstreams.
If your organization is just starting to organize upstream contribution activities, identify the key projects that have the largest impact on your business and prioritize forming a relationship with them first. Organizations with a mature contribution process will be collaborating with tens or hundreds of upstreams.
Create a written policy with input from business owners, legal and marketing
An upstream contribution policy typically covers things such as: who decides, from a business point of view, what can be contributed upstream; which licenses are allowed and which to avoid; how to document copyright; how to deal with projects that require signing copyright assignments (e.g. contributor license agreements); and other potential legal guidelines to follow. Additionally, the technical steps on how to prepare a contribution should be outlined, including how to internally review and re-review them, who the technical approvers are to ensure high quality and a good reputation, and so on.
The policy does not have to be static or difficult to produce. Start with a small policy and a few trusted senior developers following it, and update its contents as you run into new situations that need internal company alignment. For example, don t require staff to create new GitHub accounts merely for the purpose of doing one open source contribution. Initially, do things with minimal overhead and add requirements to the policy only if they have clear and strong benefits. The purpose of a policy should be to make it obvious and easy for employees to do the right thing, not to add obstacles and stop progress or encourage people to break the policy.
Appoint an internal coordinator and champions
Having a written policy on how to contribute upstream will help ensure a consistent process and avoid common pitfalls. However, a written policy alone does not automatically translate into a well-running process. It is highly recommended to appoint at least one internal coordinator who is knowledgeable about how open source communities work, how software licensing and patents work, and is senior enough to have a good sense of what business priorities to optimize for. In small organizations it can be a single person, while larger organizations typically have a full Open Source Programs Office.
This coordinator should oversee the contribution process, track all contributions made across the organization, and further optimize the process by working with stakeholders across the business, including legal experts, business owners and CTOs. The marketing and recruiting folks should also be involved, as upstream contributions will have a reputation-building aspect as well, which can be enhanced with systematic tracking and publishing of activities.
Additionally, at least in the beginning, the organization should also appoint key staff members as open source champions. Implementing a new process always includes some obstacles and occasional setbacks, which may discourage employees from putting in the extra effort to reap the full long-term benefits for the company. Having named champions will empower them to make the first few contributions themselves, setting a good example and encouraging and mentoring others to contribute upstream as well.
Avoid excessive approvals
To maintain a high quality bar, it is always good to have all outgoing submissions reviewed by at least one or two people. Two or three pairs of eyeballs are significantly more likely to catch issues that might slip by someone working alone. The review also slows down the process by a day or two, which gives the author time to "sleep on it", usually helping ensure the final submission is well-thought-out.
Do not require more than one or two reviewers. The marginal utility goes quickly to zero beyond a few reviewers, and at around four or five people the effect becomes negative, as the weight of each approval decreases and the reviewers begin to take less personal responsibility. Having too many people in the loop also makes each feedback round slow and expensive, to the extent that the author will hesitate to make updates and ask for re-reviews due to the costs involved.
If the organization experiences setbacks due to mistakes slipping through the review process, do not respond by adding more reviewers, as that will just grind the contribution process to a halt. If there are quality concerns, invest in training for engineers, CI systems and perhaps an internal certification program for those making public upstream code submissions. A typical software engineer is more likely to seriously work on becoming proficient, put effort into a one-off certification exam and then make multiple high-quality contributions, than to improve and keep contributing upstream while burdened by a heavy review process every time they try to submit.
Don't expect upstream to accept all code contributions
Sure, identifying the root cause of and fixing a tricky bug or writing a new feature requires significant effort. While an open source project will certainly appreciate the effort invested, it doesn t mean it will always welcome all contributions with open arms. Occasionally, the project won t agree that the code is correct or the feature is useful, and some contributions are bound to be rejected.
You can minimize the chance of rejection by having a solid internal review process that includes assessing how the upstream community is likely to receive the proposal. Sometimes how things are communicated is more important than how they are coded. Polishing inline comments and git commit messages helps ensure high-quality communication, along with a commitment to respond quickly to review feedback and to follow up regularly until a contribution is finalized and accepted.
Start small to grow expertise and reputation
In addition to keeping the open source contribution policy lean and nimble, it is also good to start practical contributions with small issues. Don t aim to contribute massive features until you have a track record of being able to make multiple small contributions.
Keep in mind that not all open source projects are equal. Each has its own culture, written and unwritten rules, development process, documented requirements (which may be outdated) and more. Starting with a tiny contribution, even just a typo fix, is a good way to validate how code submissions, reviews and approvals work in a particular project. Once you have staff who have successfully landed smaller contributions, you can start planning larger proposals. The exact same proposal might be unsuccessful when proposed by a new person, and successful when proposed by a person who already has a reputation for prior high-quality work.
Embrace all and any publicity you get
Some companies have concerns about their employees working in the open. Indeed, every email and code patch an employee submits, and all related discussions, become public. This may initially sound scary, but it is actually a potential source of good publicity. Employees need to be trained on how to conduct themselves publicly, and the discussions about code should contain only information strictly related to the code, without any references to actual production environments or other sensitive information. In the long run, most contributing employees have a positive impact, and the company should reap the benefits of positive publicity. If there are quality issues or lapses in employee judgment, hiding the activity or forcing employees to contribute under pseudonyms is not a proper solution. Instead, the problems should be addressed at the root, and bad behavior corrected rather than tolerated.
When people are working publicly, there tends to also be some degree of additional pride involved, which motivates people to try their best. Contributions need to be public for the sponsoring corporation to later be able to claim copyright or licenses. Considering that thousands of companies participate in open source every day, the prevalence of bad publicity is quite low, and the benefits far exceed the risks.
Scratch your own itch
When choosing what to contribute, select things that benefit your own company. This is not purely about being selfish: often the people working on resolving a problem they themselves suffer from are also the people with the best understanding of what the problem is and what kind of solution is optimal. Also, the issues that are most pressing to your company are more likely to be universally useful to solve than any random bug or feature request in the upstream project's issue tracker.
Remember there are many ways to help upstream
While submitting code is often considered the primary way to contribute, please keep in mind there are also other highly impactful ways to contribute. Submitting high-quality bug reports will help developers quickly identify and prioritize issues to fix. Providing good research, benchmarks, statistics or feedback helps guide development and helps the project make better design decisions. Documentation, translations, organizing events and providing marketing support can help increase adoption and strengthen long-term viability for the project.
In some of the largest open source projects there are already far more pending contributions than the core maintainers can process. Therefore, developers who contribute code should also get into the habit of contributing reviews. As Linus's law states, given enough eyeballs, all bugs are shallow. Reviewing other contributors' submissions will help improve quality, and also alleviate the pressure on core maintainers, who are otherwise the only ones providing feedback. Reviewing code submitted by others is also a great learning opportunity for the reviewer. The reviewer does not need to be more skilled than the submitter: any feedback is useful, and merely posting review feedback is not the same thing as making an approval decision.
Many projects are also happy to accept monetary support and sponsorships. Some offer specific perks in return. By human nature, the largest sponsors always get their voice heard in important decisions, as no open source project wants to take actions that scare away major financial contributors.
Starting is the hardest part
Long-term success in open source comes from a positive feedback loop of an ever-increasing number of users and collaborators. As seen in the examples of countless corporations contributing open source, the benefits are concrete, and the process usually runs well after the initial ramp-up and organizational learning phase has passed.
In open source ecosystems, contributing upstream should be as natural as paying vendors in any business. If you are using open source and not contributing at all, you likely have latent business risks without realizing it. You don't want to wake up one morning to learn that your top talent left because they were forbidden from participating in open source for the company's benefit, or that you were fined for CRA violations and mismanagement in sharing security fixes with the correct parties. The sooner you start the process, the less likely those risks will materialize.
In this post, I demonstrate the optimal workflow for creating new Debian packages in 2025, preserving the upstream git history. The motivation for this is to lower the barrier for sharing improvements to and from upstream, and to improve software provenance and supply-chain security by making it easy to inspect every change at any level using standard git tooling.
Key elements of this workflow include:
Using a Git fork/clone of the upstream repository as the starting point for creating Debian packaging repositories.
Consistent use of the same git-buildpackage commands, with all package-specific options in gbp.conf.
Pristine-tar and upstream signatures for supply-chain security.
Use of Files-Excluded in the debian/copyright file to filter out unwanted files in Debian.
Patch queues to easily rebase and cherry-pick changes across Debian and upstream branches.
Efficient use of Salsa, Debian's GitLab instance, for both automated feedback from CI systems and human feedback from peer reviews.
To make the instructions so concrete that anyone can repeat all the steps themselves on a real package, I demonstrate the steps by packaging the command-line tool Entr. It is written in C, has very few dependencies, and its final Debian source package structure is simple, yet exemplifies all the important parts that go into a complete Debian package:
Creating a new packaging repository and publishing it under your personal namespace on salsa.debian.org.
Using dh_make to create the initial Debian packaging.
Posting the first draft of the Debian packaging as a Merge Request (MR) and using Salsa CI to verify Debian packaging quality.
Running local builds efficiently and iterating on the packaging process.
Create new Debian packaging repository from the existing upstream project git repository
First, create a new empty directory, then clone the upstream Git repository inside it:
Using a clean directory makes it easier to inspect the build artifacts of a Debian package, which will be output in the parent directory of the Debian source directory.
The extra parameters given to git clone lay the foundation for the Debian packaging git repository structure where the upstream git remote name is upstreamvcs. Only the upstream main branch is tracked to avoid cluttering git history with upstream development branches that are irrelevant for packaging in Debian.
Next, enter the git repository directory and list the git tags. Pick the latest upstream release tag as the commit to start the branch upstream/latest. This latest refers to the upstream release, not the upstream development branch. Immediately after, branch off the debian/latest branch, which will have the actual Debian packaging files in the debian/ subdirectory.
```shell
cd entr
git tag # shows the latest upstream release tag was '5.6'
git checkout -b upstream/latest 5.6
git checkout -b debian/latest
```
At this point, the repository is structured according to DEP-14 conventions, ensuring a clear separation between upstream and Debian packaging changes, but there are no Debian changes yet. Next, add the Salsa repository as a new remote called origin, the same as the default remote name in git.
This is an important preparation step to later be able to create a Merge Request on Salsa that targets the debian/latest branch, which does not yet have any debian/ directory.
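Assuming you have already created a project under your own namespace on salsa.debian.org, adding the remote and publishing the branches could look roughly like this (the otto namespace is just an example):

```shell
# Hypothetical Salsa namespace and repository name; adjust to your own.
git remote add origin git@salsa.debian.org:otto/entr.git
git push --set-upstream origin upstream/latest debian/latest
```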
Launch a Debian Sid (unstable) container to run builds in
To ensure that all packaging tools are of the latest versions, run everything inside a fresh Sid container. This has two benefits: you are guaranteed to have the most up-to-date toolchain, and your host system stays clean without getting polluted by various extra packages. Additionally, this approach works even if your host system is not Debian/Ubuntu.
```shell
cd ..
podman run --interactive --tty --rm --shm-size=1G --cap-add SYS_PTRACE \
    --env='DEB*' --volume=$PWD:/tmp/test --workdir=/tmp/test debian:sid bash
```
Note that the container should be started from the parent directory of the git repository, not inside it. The --volume parameter bind-mounts the current directory inside the container. Thus all files created and modified are on the host system, and will persist after the container shuts down.
Once inside the container, install the basic dependencies:
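A minimal set covering the tools used in this post would be along these lines (package names as found in Debian Sid; extend as needed):

```shell
# Run as root inside the Sid container.
apt update -q && apt install -q --yes git-buildpackage dpkg-dev dh-make
```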
Automate creating the debian/ files with dh-make
To create the files needed for the actual Debian packaging, use dh_make:
```shell
# dh_make --packagename entr_5.6 --single --createorig
Maintainer Name  : Otto Kekäläinen
Email-Address    : otto@debian.org
Date             : Sat, 15 Feb 2025 01:17:51 +0000
Package Name     : entr
Version          : 5.6
License          : blank
Package Type     : single
Are the details correct? [Y/n/q]
Done. Please edit the files in the debian/ subdirectory now.
```
Due to how dh_make works, the package name and version need to be written as a single underscore-separated string. In this case, you should choose --single to specify that the package type is a single binary package. Other options would be --library for library packages (see libgda5 sources as an example) or --indep (see dns-root-data sources as an example). The --createorig option will create a mock upstream release tarball (entr_5.6.orig.tar.xz) from the current release directory. This is necessary for historical reasons: dh_make predates the era when git repositories became common, and Debian source packages were traditionally based on upstream release tarballs (e.g. *.tar.gz).
At this stage, a debian/ directory has been created with template files, and you can start modifying the files and iterating towards actual working packaging.
```shell
git add debian/
git commit -a -m "Initial Debian packaging"
```
Review the files
After the above steps, dh_make has created a long list of files in the debian/ directory.
You can browse these files in the demo repository.
The mandatory files in the debian/ directory are:
changelog,
control,
copyright,
and rules.
All the other files have been created for convenience so that the packager has template files to work from. The files with the suffix .ex are example files that won't have any effect until their content is adjusted and the suffix removed.
For detailed explanations of the purpose of each file in the debian/ subdirectory, see the following resources:
The Debian Policy Manual: Describes the structure of the operating system, the package archive and requirements for packages to be included in the Debian archive.
The Developer's Reference: A collection of best practices and process descriptions Debian packagers are expected to follow while interacting with one another.
Debhelper man pages: Detailed information on how the Debian package build system works, and how the contents of the various files in debian/ affect the end result.
As Entr, the package used in this example, is a real package that already exists in the Debian archive, you may want to browse the actual Debian packaging source at https://salsa.debian.org/debian/entr/-/tree/debian/latest/debian for reference.
Most of these files have standardized formatting conventions to make collaboration easier. To automatically format the files following the most popular conventions, simply run wrap-and-sort -vast or debputy reformat --style=black.
Identify build dependencies
The most common reason for builds to fail is missing dependencies. The easiest way to identify which Debian package ships a required dependency is apt-file. If, for example, a build fails complaining that pcre2posix.h cannot be found or that libpcre2-posix.so is missing, you can use these commands:
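For example (apt-file itself needs to be installed and its cache updated before the first search):

```shell
apt install -q --yes apt-file && apt-file update
apt-file search pcre2posix.h
apt-file search libpcre2-posix.so
```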
The output above implies that the debian/control file should be extended to declare a Build-Depends: libpcre2-dev relationship.
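In debian/control, build dependencies are declared in the source stanza. A hypothetical excerpt (other fields elided, and the dependency shown only as an illustration of the syntax):

```
Source: entr
Build-Depends: debhelper-compat (= 13),
               libpcre2-dev
```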
There is also dpkg-depcheck, which uses strace to trace the files the build process tries to access and lists which Debian packages those files belong to. Example usage:

```shell
dpkg-depcheck -b debian/rules build
```
Build the Debian sources to generate the .deb package
After the first pass of refining the contents of the files in debian/, test the build by running dpkg-buildpackage inside the container:
```shell
dpkg-buildpackage -uc -us -b
```
The options -uc -us will skip signing the resulting Debian source package and other build artifacts. The -b option will skip creating a source package and only build the (binary) *.deb packages.
The output is very verbose and gives a large amount of context about what is happening during the build to make debugging build failures easier. In the build log of entr you will see for example the line dh binary --buildsystem=makefile. This and other dh commands can also be run manually if there is a need to quickly repeat only a part of the build while debugging build failures.
To see what files were generated or modified by the build simply run git status --ignored:
```shell
$ git status --ignored
On branch debian/latest

Untracked files:
  (use "git add <file>..." to include in what will be committed)
        debian/debhelper-build-stamp
        debian/entr.debhelper.log
        debian/entr.substvars
        debian/files

Ignored files:
  (use "git add -f <file>..." to include in what will be committed)
        Makefile
        compat.c
        compat.o
        debian/.debhelper/
        debian/entr/
        entr
        entr.o
        status.o
```
Re-running dpkg-buildpackage will include running the command dh clean, which, assuming it is configured correctly in the debian/rules file, will reset the source directory to its original pristine state. The same can of course also be done with regular git commands: git reset --hard; git clean -fdx. To avoid accidentally committing unnecessary build artifacts in git, a debian/.gitignore file can be useful, and it would typically include all four files listed as untracked above.
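A minimal debian/.gitignore covering the four untracked files above could look like this (patterns are relative to the debian/ directory; this is a sketch, not a canonical list):

```
debhelper-build-stamp
*.debhelper.log
*.substvars
files
```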
A successful build produces a number of new files and directories, among them debian/entr and the entr_5.6-1_amd64.deb package in the parent directory.
The contents of debian/entr are essentially what goes into the resulting entr_5.6-1_amd64.deb package. Familiarizing yourself with the majority of the files in the original upstream source as well as all the resulting build artifacts is time consuming, but it is a necessary investment to get high-quality Debian packages.
There are also tools such as Debcraft that automate generating the build artifacts in separate output directories for each build, thus making it easy to compare the changes to correlate what change in the Debian packaging led to what change in the resulting build artifacts.
Re-run the initial import with git-buildpackage
When upstreams publish releases as tarballs, they should also be imported for optimal software supply-chain security, in particular if upstream also publishes cryptographic signatures that can be used to verify the authenticity of the tarballs.
To achieve this, the files debian/watch, debian/upstream/signing-key.asc, and debian/gbp.conf need to be present with the correct options. In the gbp.conf file, ensure you have the correct options based on:
Does upstream release tarballs? If so, enforce pristine-tar = True.
Does upstream sign the tarballs? If so, configure explicit signature checking with upstream-signatures = on.
Does upstream have a git repository, and does it have release git tags? If so, configure the release git tag format, e.g. upstream-vcs-tag = %(version%~%.)s.
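Putting the options above together, a debian/gbp.conf for this package could look roughly like this (a sketch assuming Entr's plain 5.6-style release tags; verify each option against the gbp documentation):

```
[DEFAULT]
debian-branch = debian/latest
upstream-branch = upstream/latest
pristine-tar = True

[import-orig]
upstream-signatures = on
upstream-vcs-tag = %(version%~%.)s
```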
To validate that the above files are working correctly, run gbp import-orig with the current version explicitly defined:
```shell
$ gbp import-orig --uscan --upstream-version 5.6
gbp:info: Launching uscan...
gpgv: Signature made 7. Aug 2024 07.43.27 PDT
gpgv:                using RSA key 519151D83E83D40A232B4D615C418B8631BC7C26
gpgv: Good signature from "Eric Radman <ericshane@eradman.com>"
gbp:info: Using uscan downloaded tarball ../entr_5.6.orig.tar.gz
gbp:info: Importing '../entr_5.6.orig.tar.gz' to branch 'upstream/latest'...
gbp:info: Source package is entr
gbp:info: Upstream version is 5.6
gbp:info: Replacing upstream source on 'debian/latest'
gbp:info: Running Postimport hook
gbp:info: Successfully imported version 5.6 of ../entr_5.6.orig.tar.gz
```
As the original packaging was done based on the upstream release git tag, the above command will fetch the tarball release, create the pristine-tar branch, and store the tarball delta on it. This command will also attempt to create the tag upstream/5.6 on the upstream/latest branch.
Import new upstream versions in the future
Forking the upstream git repository, creating the initial packaging, and creating the DEP-14 branch structure are all one-off work needed only when creating the initial packaging.
Going forward, to import new upstream releases, one would simply run git fetch upstreamvcs; gbp import-orig --uscan, which fetches the upstream git tags, checks for new upstream tarballs, and automatically downloads, verifies, and imports the new version. See the galera-4-demo example in the Debian source packages in git explained post as a demo you can try running yourself and examine in detail.
You can also try running gbp import-orig --uscan without specifying a version. It will notice that Entr version 5.7 is now available, and fetch and import it.
Build using git-buildpackage
From this stage onwards you should build the package using gbp buildpackage, which will do a more comprehensive build.
```shell
gbp buildpackage -uc -us
```
The git-buildpackage build also includes running Lintian to find potential Debian policy violations in the sources or in the resulting .deb binary packages. Many Debian Developers run lintian -EviIL +pedantic after every build to check that there are no new nags, and to validate that changes intended to fix previous Lintian nags were correct.
Open a Merge Request on Salsa for Debian packaging review
Getting everything perfectly right takes a lot of effort, and may require reaching out to an experienced Debian Developer for review and guidance. Thus, you should aim to publish your initial packaging work on Salsa, Debian's GitLab instance, for review and feedback as early as possible.
For somebody to be able to easily see what you have done, you should rename your debian/latest branch to another name, for example next/debian/latest, and open a Merge Request that targets the debian/latest branch on your Salsa fork, which still has only the unmodified upstream files.
If you have followed the workflow in this post so far, you can simply run:
```shell
git checkout -b next/debian/latest
git push --set-upstream origin next/debian/latest
```
Open the URL shown in the git push response in a browser
Write the Merge Request description in case the default text from your commit is not enough
Mark the MR as Draft using the checkbox
Publish the MR and request feedback
Once a Merge Request exists, discussion regarding what additional changes are needed can be conducted as MR comments. With an MR, you can easily iterate on the contents of next/debian/latest, rebase, force push, and request re-review as many times as you want.
While at it, make sure that on the Settings > CI/CD page the CI/CD configuration file field has the value debian/salsa-ci.yml, so that the CI can run and give you immediate automated feedback.
For an example of an initial packaging Merge Request, see https://salsa.debian.org/otto/entr-demo/-/merge_requests/1.
Open a Merge Request / Pull Request to fix upstream code
Due to the high quality requirements in Debian, it is fairly common that while doing the initial Debian packaging of an open source project, issues are found that stem from the upstream source code. While it is possible to carry extra patches in Debian, it is not good practice to deviate too much from upstream code with custom Debian patches. Instead, the Debian packager should try to get the fixes applied directly upstream.
Using git-buildpackage patch queues is the most convenient way to make modifications to the upstream source code so that they automatically convert into Debian patches (stored at debian/patches), and can also easily be submitted upstream as any regular git commit (and rebased and resubmitted many times over).
First, decide if you want to work out of the upstream development branch and later cherry-pick to the Debian packaging branch, or work out of the Debian packaging branch and cherry-pick to an upstream branch.
The example below starts from the upstream development branch and then cherry-picks the commit into the git-buildpackage patch queue:
```shell
git checkout -b bugfix-branch master
nano entr.c
make
./entr # verify change works as expected
git commit -a -m "Commit title" -m "Commit body"
git push # submit upstream
gbp pq import --force --time-machine=10
git cherry-pick <commit id>
git commit --amend # extend commit message with DEP-3 metadata
gbp buildpackage -uc -us -b
./entr # verify change works as expected
gbp pq export --drop --commit
git commit --amend # Write commit message along lines "Add patch to .."
```
The example below starts by making the fix on a git-buildpackage patch queue branch, and then cherry-picking it onto the upstream development branch:
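A sketch of that reverse flow, mirroring the commands above (assuming the same repository setup; adjust file names and commit messages to your case):

```shell
gbp pq import --force --time-machine=10
nano entr.c
gbp buildpackage -uc -us -b
./entr # verify change works as expected
git commit -a -m "Commit title" -m "Commit body"
gbp pq export --drop --commit
git commit --amend # Write commit message along lines "Add patch to .."
git checkout -b bugfix-branch master
git cherry-pick <commit id>
git commit --amend # prepare commit message for upstream submission
git push # submit upstream
```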
These commands can be run at any time, regardless of whether any debian/patches existed prior, whether existing patches applied cleanly, or whether old patch queue branches were around. Note that the extra -b in gbp buildpackage -uc -us -b instructs it to build only binary packages, avoiding any nags from dpkg-source about modifications in the upstream sources while building in patches-applied mode.
Programming-language specific dh-make alternatives
As each programming language has its specific way of building source code, along with many other conventions regarding file layout and more, Debian has multiple custom tools to create new Debian source packages for specific programming languages. Notably, Python does not have its own tool, but there is a dh_make --python option for Python support directly in dh_make itself. The list is not complete, and many more tools exist. For some languages there are even competing options: for Go, in addition to dh-make-golang, there is also Gophian.
When learning Debian packaging, there is no need to learn these tools upfront. Being aware that they exist is enough, and one can learn them only if and when one starts to package a project in a new programming language.
The difference between source git repository vs source packages vs binary packages
As seen in the earlier example, running gbp buildpackage on the Entr packaging repository will result in several files.
The entr_5.6-1_amd64.deb is the binary package, which can be installed on a Debian/Ubuntu system. The rest of the files constitute the source package. To do a source-only build, run gbp buildpackage -S and note the files produced.
The source package files can be used to build the binary .deb for amd64, or for any architecture that the package supports. It is important to grasp that the Debian source package is the preferred form for building the binary packages on the various Debian build systems, and that the Debian source package is not the same thing as the contents of the Debian packaging git repository.
If the package is large and complex, the build could result in multiple binary packages. One set of package definition files in debian/ will however only ever result in a single source package.
Option to repackage source packages with Files-Excluded lists in the debian/copyright file
Some upstream projects may include binary files in their release, or other undesirable content that needs to be omitted from the source package in Debian. The easiest way to filter them out is by adding to the debian/copyright file a Files-Excluded field listing the undesired files. The debian/copyright file is read by uscan, which will repackage the upstream sources on-the-fly when importing new upstream releases.
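A hypothetical header stanza of a debian/copyright file using Files-Excluded could look like this (the excluded paths are purely illustrative):

```
Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: entr
Source: https://github.com/eradman/entr
Files-Excluded:
    thirdparty/*.min.js
    docs/*.pdf
```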
For a real-life example, see the debian/copyright file in the Godot package, which lists numerous files to exclude.
The resulting repackaged upstream source tarball, as well as the upstream version component, will have an extra +ds to signify that it is not the true original upstream source but has been modified by Debian:
godot_4.3+ds.orig.tar.xz
godot_4.3+ds-1_amd64.deb
Creating one Debian source package from multiple upstream source packages is also possible
In some rare cases the upstream project may be split across multiple git repositories or the upstream release may consist of multiple components each in their own separate tarball. Usually these are very large projects that get some benefits from releasing components separately. If in Debian these are deemed to go into a single source package, it is technically possible using the component system in git-buildpackage and uscan. For an example see the gbp.conf and watch files in the node-cacache package.
Using this type of structure should be a last resort, as it creates complexity and inter-dependencies that are bound to cause issues later on. It is usually better to work with upstream and champion universal best practices with clear releases and version schemes.
When not to start the Debian packaging repository as a fork of the upstream one
Not all upstreams use Git for version control. It is by far the most popular, but there are still some that use e.g. Subversion or Mercurial. Who knows, maybe in the future some new version control system will start to compete with Git. There are also projects that use Git in massive monorepos or with complex submodule setups that invalidate the basic assumptions required to map an upstream Git repository onto a Debian packaging repository.
In those cases one can't use a debian/latest branch on a clone of the upstream git repository as the starting point for the Debian packaging. Instead, one must revert to the traditional way of starting from an upstream release tarball with gbp import-orig package-1.0.tar.gz.
Conclusion
Created in August 1993, Debian is one of the oldest Linux distributions. In the 32 years since its inception, the .deb packaging format and the tooling to work with it have evolved through several generations. In the past 10 years, more and more Debian Developers have converged on certain core practices, as evidenced by https://trends.debian.net/, but there is still a lot of variance in workflows even for identical tasks. Hopefully you find this post useful as practical guidance on exactly how to do the most common things when packaging software for Debian.
Happy packaging!
In this post, I demonstrate the optimal workflow for creating new Debian packages in 2025, preserving the upstream git history. The motivation for this is to lower the barrier for sharing improvements to and from upstream, and to improve software provenance and supply-chain security by making it easy to inspect every change at any level using standard git tooling.
Key elements of this workflow include:
Using a Git fork/clone of the upstream repository as the starting point for creating Debian packaging repositories.
Consistent use of the same git-buildpackage commands, with all package-specific options in gbp.conf.
Pristine-tar and upstream signatures for supply-chain security.
Use of Files-Excluded in the debian/copyright file to filter out unwanted files in Debian.
Patch queues to easily rebase and cherry-pick changes across Debian and upstream branches.
Efficient use of Salsa, Debian s GitLab instance, for both automated feedback from CI systems and human feedback from peer reviews.
To make the instructions so concrete that anyone can repeat all the steps themselves on a real package, I demonstrate the steps by packaging the command-line tool Entr. It is written in C, has very few dependencies, and its final Debian source package structure is simple, yet exemplifies all the important parts that go into a complete Debian package:
Creating a new packaging repository and publishing it under your personal namespace on salsa.debian.org.
Using dh_make to create the initial Debian packaging.
Posting the first draft of the Debian packaging as a Merge Request (MR) and using Salsa CI to verify Debian packaging quality.
Running local builds efficiently and iterating on the packaging process.
Create new Debian packaging repository from the existing upstream project git repository
First, create a new empty directory, then clone the upstream Git repository inside it:
Using a clean directory makes it easier to inspect the build artifacts of a Debian package, which will be output in the parent directory of the Debian source directory.
The extra parameters given to git clone lay the foundation for the Debian packaging git repository structure where the upstream git remote name is upstreamvcs. Only the upstream main branch is tracked to avoid cluttering git history with upstream development branches that are irrelevant for packaging in Debian.
Next, enter the git repository directory and list the git tags. Pick the latest upstream release tag as the commit to start the branch upstream/latest. This latest refers to the upstream release, not the upstream development branch. Immediately after, branch off the debian/latest branch, which will have the actual Debian packaging files in the debian/ subdirectory.
shellcd entr
git tag # shows the latest upstream release tag was '5.6'
git checkout -b upstream/latest 5.6
git checkout -b debian/latest
cd entr
git tag # shows the latest upstream release tag was '5.6'git checkout -b upstream/latest 5.6
git checkout -b debian/latest
At this point, the repository is structured according to DEP-14 conventions, ensuring a clear separation between upstream and Debian packaging changes, but there are no Debian changes yet. Next, add the Salsa repository as a new remote which called origin, the same as the default remote name in git.
This is an important preparation step to later be able to create a Merge Request on Salsa that targets the debian/latest branch, which does not yet have any debian/ directory.
Launch a Debian Sid (unstable) container to run builds in
To ensure that all packaging tools are of the latest versions, run everything inside a fresh Sid container. This has two benefits: you are guaranteed to have the most up-to-date toolchain, and your host system stays clean without getting polluted by various extra packages. Additionally, this approach works even if your host system is not Debian/Ubuntu.
cd ..
podman run --interactive --tty --rm --shm-size=1G --cap-add SYS_PTRACE \
--env='DEB*' --volume=$PWD:/tmp/test --workdir=/tmp/test debian:sid bash
Note that the container should be started from the parent directory of the git repository, not inside it. The --volume parameter will loop-mount the current directory inside the container. Thus all files created and modified are on the host system, and will persist after the container shuts down.
Once inside the container, install the basic dependencies:
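A minimal sketch, run as root inside the container; the exact package set is an assumption covering the tools used in the rest of this post:

```shell
# Install the basic Debian packaging toolchain inside the Sid container.
apt update -q && apt install -q --yes git-buildpackage dpkg-dev dh-make
```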
Automate creating the debian/ files with dh-make
To create the files needed for the actual Debian packaging, use dh_make:
# dh_make --packagename entr_5.6 --single --createorig
Maintainer Name  : Otto Kekäläinen
Email-Address    : otto@debian.org
Date             : Sat, 15 Feb 2025 01:17:51 +0000
Package Name     : entr
Version          : 5.6
License          : blank
Package Type     : single
Are the details correct? [Y/n/q]
Done. Please edit the files in the debian/ subdirectory now.
Due to how dh_make works, the package name and version need to be written as a single underscore-separated string. In this case, you should choose --single to specify that the package type is a single binary package. Other options would be --library for library packages (see the libgda5 sources as an example) or --indep (see the dns-root-data sources as an example). The --createorig option will create a mock upstream release tarball (entr_5.6.orig.tar.xz) from the current release directory. This is necessary for historical reasons: dh_make predates the time when git repositories became common, and Debian source packages were traditionally based on upstream release tarballs (e.g. *.tar.gz).
At this stage, a debian/ directory has been created with template files, and you can start modifying the files and iterating towards actual working packaging.
git add debian/
git commit -a -m "Initial Debian packaging"
Review the files
The full list of files after the above steps with dh_make would be:
You can browse these files in the demo repository.
The mandatory files in the debian/ directory are:
changelog,
control,
copyright,
and rules.
All the other files have been created for convenience so the packager has template files to work from. The files with the suffix .ex are example files that won't have any effect until their content is adjusted and the suffix removed.
For detailed explanations of the purpose of each file in the debian/ subdirectory, see the following resources:
The Debian Policy Manual: Describes the structure of the operating system, the package archive and requirements for packages to be included in the Debian archive.
The Developer's Reference: A collection of best practices and process descriptions Debian packagers are expected to follow while interacting with one another.
Debhelper man pages: Detailed information of how the Debian package build system works, and how the contents of the various files in debian/ affect the end result.
As Entr, the package used in this example, is a real package that already exists in the Debian archive, you may want to browse the actual Debian packaging source at https://salsa.debian.org/debian/entr/-/tree/debian/latest/debian for reference.
Most of these files have standardized formatting conventions to make collaboration easier. To automatically format the files following the most popular conventions, simply run wrap-and-sort -vast or debputy reformat --style=black.
Identify build dependencies
The most common reason for builds to fail is missing dependencies. The easiest way to identify which Debian package ships the required dependency is using apt-file. If, for example, a build fails complaining that pcre2posix.h cannot be found or that libpcre2-posix.so is missing, you can use these commands:
The output above implies that the debian/control should be extended to define a Build-Depends: libpcre2-dev relationship.
There is also dpkg-depcheck that uses strace to trace the files the build process tries to access, and lists what Debian packages those files belong to. Example usage:
dpkg-depcheck -b debian/rules build
Build the Debian sources to generate the .deb package
After the first pass of refining the contents of the files in debian/, test the build by running dpkg-buildpackage inside the container:
dpkg-buildpackage -uc -us -b
The options -uc -us will skip signing the resulting Debian source package and other build artifacts. The -b option will skip creating a source package and only build the (binary) *.deb packages.
The output is very verbose and gives a large amount of context about what is happening during the build to make debugging build failures easier. In the build log of entr you will see for example the line dh binary --buildsystem=makefile. This and other dh commands can also be run manually if there is a need to quickly repeat only a part of the build while debugging build failures.
To see what files were generated or modified by the build simply run git status --ignored:
$ git status --ignored
On branch debian/latest
Untracked files:
  (use "git add <file>..." to include in what will be committed)
        debian/debhelper-build-stamp
        debian/entr.debhelper.log
        debian/entr.substvars
        debian/files
Ignored files:
  (use "git add -f <file>..." to include in what will be committed)
        Makefile
        compat.c
        compat.o
        debian/.debhelper/
        debian/entr/
        entr
        entr.o
        status.o
Re-running dpkg-buildpackage will include running the command dh clean, which, assuming it is configured correctly in the debian/rules file, will reset the source directory to its original pristine state. The same can of course also be done with regular git commands: git reset --hard; git clean -fdx. To avoid accidentally committing unnecessary build artifacts in git, a debian/.gitignore file can be useful; it would typically include all four files listed as untracked above.
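As a sketch, such a debian/.gitignore (paths relative to the debian/ directory) could look like this:

```
# debian/.gitignore -- ignore debhelper build artifacts
.debhelper/
debhelper-build-stamp
entr.debhelper.log
entr.substvars
files
entr/
```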
After a successful build you would have the following files:
The contents of debian/entr are essentially what goes into the resulting entr_5.6-1_amd64.deb package. Familiarizing yourself with the majority of the files in the original upstream source as well as all the resulting build artifacts is time consuming, but it is a necessary investment to get high-quality Debian packages.
There are also tools such as Debcraft that automate generating the build artifacts in separate output directories for each build, thus making it easy to compare the changes to correlate what change in the Debian packaging led to what change in the resulting build artifacts.
Re-run the initial import with git-buildpackage
When upstreams publish releases as tarballs, they should also be imported for optimal software supply-chain security, in particular if upstream also publishes cryptographic signatures that can be used to verify the authenticity of the tarballs.
To achieve this, the files debian/watch, debian/upstream/signing-key.asc, and debian/gbp.conf need to be present with the correct options. In the gbp.conf file, ensure you have the correct options based on:
Does upstream release tarballs? If so, enforce pristine-tar = True.
Does upstream sign the tarballs? If so, configure explicit signature checking with upstream-signatures = on.
Does upstream have a git repository, and does it have release git tags? If so, configure the release git tag format, e.g. upstream-vcs-tag = %(version%~%.)s.
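Putting the options above together, a debian/gbp.conf along these lines would cover the Entr case; this is a sketch, and the exact values depend on the upstream's release practices:

```ini
# debian/gbp.conf -- sketch based on the options discussed above
[DEFAULT]
debian-branch = debian/latest
upstream-branch = upstream/latest
pristine-tar = True
upstream-signatures = on
upstream-vcs-tag = %(version%~%.)s
```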
To validate that the above files are working correctly, run gbp import-orig with the current version explicitly defined:
$ gbp import-orig --uscan --upstream-version 5.6
gbp:info: Launching uscan...
gpgv: Signature made 7. Aug 2024 07.43.27 PDT
gpgv:                using RSA key 519151D83E83D40A232B4D615C418B8631BC7C26
gpgv: Good signature from "Eric Radman <ericshane@eradman.com>"
gbp:info: Using uscan downloaded tarball ../entr_5.6.orig.tar.gz
gbp:info: Importing '../entr_5.6.orig.tar.gz' to branch 'upstream/latest'...
gbp:info: Source package is entr
gbp:info: Upstream version is 5.6
gbp:info: Replacing upstream source on 'debian/latest'
gbp:info: Running Postimport hook
gbp:info: Successfully imported version 5.6 of ../entr_5.6.orig.tar.gz
As the original packaging was done based on the upstream release git tag, the above command will fetch the tarball release, create the pristine-tar branch, and store the tarball delta on it. This command will also attempt to create the tag upstream/5.6 on the upstream/latest branch.
Import new upstream versions in the future
Forking the upstream git repository, creating the initial packaging, and creating the DEP-14 branch structure are all one-off work needed only when creating the initial packaging.
Going forward, to import new upstream releases, one would simply run git fetch upstreamvcs; gbp import-orig --uscan, which fetches the upstream git tags, checks for new upstream tarballs, and automatically downloads, verifies, and imports the new version. See the galera-4-demo example in the Debian source packages in git explained post as a demo you can try running yourself and examine in detail.
You can also try running gbp import-orig --uscan without specifying a version. It will notice that Entr version 5.7 is now available, download it, and import it.
Build using git-buildpackage
From this stage onwards you should build the package using gbp buildpackage, which will do a more comprehensive build.
gbp buildpackage -uc -us
The git-buildpackage build also includes running Lintian to find potential Debian policy violations in the sources or in the resulting .deb binary packages. Many Debian Developers run lintian -EviIL +pedantic after every build to check that there are no new nags, and to validate that changes intended to address previous Lintian nags were correct.
Open a Merge Request on Salsa for Debian packaging review
Getting everything perfectly right takes a lot of effort, and may require reaching out to an experienced Debian Developer for review and guidance. Thus, you should aim to publish your initial packaging work on Salsa, Debian's GitLab instance, for review and feedback as early as possible.
For somebody to be able to easily see what you have done, you should rename your debian/latest branch to another name, for example next/debian/latest, and open a Merge Request that targets the debian/latest branch on your Salsa fork, which still has only the unmodified upstream files.
If you have followed the workflow in this post so far, you can simply run:
git checkout -b next/debian/latest
git push --set-upstream origin next/debian/latest
Open the URL shown in the git push response in a browser
Write the Merge Request description in case the default text from your commit is not enough
Mark the MR as Draft using the checkbox
Publish the MR and request feedback
Once a Merge Request exists, discussion regarding what additional changes are needed can be conducted as MR comments. With an MR, you can easily iterate on the contents of next/debian/latest, rebase, force push, and request re-review as many times as you want.
While at it, make sure the Settings > CI/CD page has under CI/CD configuration file the value debian/salsa-ci.yml so that the CI can run and give you immediate automated feedback.
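The debian/salsa-ci.yml file itself is typically just an include of the shared Salsa CI pipeline:

```yaml
# debian/salsa-ci.yml
include:
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/recipes/debian.yml
```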
For an example of an initial packaging Merge Request, see https://salsa.debian.org/otto/entr-demo/-/merge_requests/1.
Open a Merge Request / Pull Request to fix upstream code
Due to the high quality requirements in Debian, it is fairly common that while doing the initial Debian packaging of an open source project, issues are found that stem from the upstream source code. While it is possible to carry extra patches in Debian, it is not good practice to deviate too much from upstream code with custom Debian patches. Instead, the Debian packager should try to get the fixes applied directly upstream.
Using git-buildpackage patch queues is the most convenient way to make modifications to the upstream source code so that they automatically convert into Debian patches (stored at debian/patches), and can also easily be submitted upstream as any regular git commit (and rebased and resubmitted many times over).
First, decide if you want to work out of the upstream development branch and later cherry-pick to the Debian packaging branch, or work out of the Debian packaging branch and cherry-pick to an upstream branch.
The example below starts from the upstream development branch and then cherry-picks the commit into the git-buildpackage patch queue:
git checkout -b bugfix-branch master
nano entr.c
make
./entr # verify change works as expected
git commit -a -m "Commit title" -m "Commit body"
git push # submit upstream
gbp pq import --force --time-machine=10
git cherry-pick <commit id>
git commit --amend # extend commit message with DEP-3 metadata
gbp buildpackage -uc -us -b
./entr # verify change works as expected
gbp pq export --drop --commit
git commit --amend # Write commit message along lines "Add patch to .."
The example below starts by making the fix on a git-buildpackage patch queue branch, and then cherry-picking it onto the upstream development branch:
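A sketch of that sequence, mirroring the commands from the first example; the commit id placeholder must be replaced with the id noted before exporting the patch queue:

```shell
gbp pq import --force --time-machine=10
nano entr.c
gbp buildpackage -uc -us -b
./entr # verify change works as expected
git commit -a -m "Commit title" -m "Commit body"
gbp pq export --drop --commit
git commit --amend # Write commit message along lines "Add patch to .."
git checkout -b bugfix-branch master
git cherry-pick <commit id> # the fix made on the patch queue branch
git push # submit upstream
```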
These can be run at any time, regardless of whether any debian/patches existed prior, whether existing patches applied cleanly, or whether old patch queue branches were around. Note that the extra -b in gbp buildpackage -uc -us -b instructs it to build only binary packages, avoiding any nags from dpkg-source about modifications in the upstream sources while building in patches-applied mode.
Programming-language specific dh-make alternatives
As each programming language has its specific way of building the source code, and many other conventions regarding the file layout and more, Debian has multiple custom tools to create new Debian source packages for specific programming languages.
Notably, Python does not have its own tool; instead, there is a dh_make --python option for Python support directly in dh_make itself. The list is not complete, and many more tools exist. For some languages there are even competing options: for Go, for example, there is Gophian in addition to dh-make-golang.
When learning Debian packaging, there is no need to learn these tools upfront. Being aware that they exist is enough, and one can learn them only if and when one starts to package a project in a new programming language.
The difference between source git repository vs source packages vs binary packages
As seen in the earlier example, running gbp buildpackage on the Entr packaging repository will result in several files:
The entr_5.6-1_amd64.deb is the binary package, which can be installed on a Debian/Ubuntu system. The rest of the files constitute the source package. To do a source-only build, run gbp buildpackage -S and note the files produced:
The source package files can be used to build the binary .deb for amd64, or any architecture that the package supports. It is important to grasp that the Debian source package is the preferred form to be able to build the binary packages on various Debian build systems, and the Debian source package is not the same thing as the Debian packaging git repository contents.
If the package is large and complex, the build could result in multiple binary packages. One set of package definition files in debian/ will however only ever result in a single source package.
Option to repackage source packages with Files-Excluded lists in the debian/copyright file
Some upstream projects may include binary files in their release, or other undesirable content that needs to be omitted from the source package in Debian. The easiest way to filter them out is by adding to the debian/copyright file a Files-Excluded field listing the undesired files. The debian/copyright file is read by uscan, which will repackage the upstream sources on-the-fly when importing new upstream releases.
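As a hypothetical sketch, a Files-Excluded field in the header stanza of debian/copyright could look like this (the file patterns are invented for illustration):

```
Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: example
Source: https://example.com/example
Files-Excluded:
 vendor/*.jar
 docs/*.min.js
```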
For a real-life example, see the debian/copyright files in the Godot package that lists:
The resulting repackaged upstream source tarball, as well as the upstream version component, will have an extra +ds to signify that it is not the true original upstream source but has been modified by Debian:
godot_4.3+ds.orig.tar.xz
godot_4.3+ds-1_amd64.deb
Creating one Debian source package from multiple upstream source packages is also possible
In some rare cases the upstream project may be split across multiple git repositories or the upstream release may consist of multiple components each in their own separate tarball. Usually these are very large projects that get some benefits from releasing components separately. If in Debian these are deemed to go into a single source package, it is technically possible using the component system in git-buildpackage and uscan. For an example see the gbp.conf and watch files in the node-cacache package.
Using this type of structure should be a last resort, as it creates complexity and inter-dependencies that are bound to cause issues later on. It is usually better to work with upstream and champion universal best practices with clear releases and version schemes.
When not to start the Debian packaging repository as a fork of the upstream one
Not all upstreams use Git for version control. It is by far the most popular, but there are still some that use e.g. Subversion or Mercurial. Who knows, maybe in the future some new version control systems will start to compete with Git. There are also projects that use Git in massive monorepos with complex submodule setups that invalidate the basic assumptions required to map an upstream Git repository into a Debian packaging repository.
In those cases one can't use a debian/latest branch on a clone of the upstream git repository as the starting point for the Debian packaging; instead, one must revert to the traditional way of starting from an upstream release tarball with gbp import-orig package-1.0.tar.gz.
Conclusion
Created in August 1993, Debian is one of the oldest Linux distributions. In the 32 years since its inception, the .deb packaging format and the tooling to work with it have evolved through several generations. In the past 10 years, more and more Debian Developers have converged on certain core practices, as evidenced by https://trends.debian.net/, but there is still a lot of variance in workflows even for identical tasks. Hopefully you find this post useful in giving practical guidance on how exactly to do the most common things when packaging software for Debian.
Happy packaging!
I actually released last week. I haven't had time to blog, but today is my birthday and I'm taking some time to myself! This release came with a major bugfix. As it turns out, our applications were very crashy on non-KDE platforms, including Ubuntu proper. Unfortunately this had gone on for years, and I didn't know. Developers were closing the bug reports as invalid because users couldn't provide a stacktrace. I have now convinced most developers to assign snap bugs to the Snap platform so I at least get a chance to try and fix them. So with that said, if you tried our snaps in the past and gave up in frustration, please do try them again! I also spent some time cleaning up our snaps to only have current releases in the store, as rumor has it snapcrafters will be responsible for any security issues. With the 200+ snaps I maintain, that is a lot of responsibility. We'll see if I can pull it off.
Life!
My last surgery was a success! I am finally healing and out of a sling for the first time in almost a year. I have also lined up a good amount of web work for next month and hopefully beyond. I have decided to drop the piece work for donations and will only accept per project proposals for open source work. I will continue to maintain KDE snaps for as long as time allows. A big thank you to everyone that has donated over the last year to fund my survival during this broken arm fiasco. I truly appreciate it!
With that said, if you want to drop me a donation for my work, birthday or well-being until I get paid for the aforementioned web work please do so here:
When I configured forgejo-actions I used a docker-compose.yaml file to execute the runner and a dind container
configured to run using privileged mode to be able to build images with it; as mentioned on my
post about my
setup, the use of the privileged mode is not a big issue for my use case, but reduces the overall security of the
installation.
On a work chat the other day someone mentioned that the GitLab documentation about
using kaniko says it is no longer maintained (see the kaniko issue
#3348) so we should look into alternatives for kubernetes
clusters.
I never liked kaniko much, but it works without privileged mode and does not need a daemon, which are good reasons to use it. However, if it is deprecated it makes sense to look into alternatives, so today I looked into some of them to use with my forgejo-actions setup.
I was going to try buildah and podman but it seems that they need to adjust
things on the systems running them:
When I tried to use buildah inside a docker container in Ubuntu I found the problems described on the buildah
issue #1901 so I moved on.
Reading the podman documentation I saw that I need to export the fuse device to run it inside a container and, as I found another option, I also skipped it.
As my runner was already configured to use dind I decided to look into sysbox
as a way of removing the privileged flag to make things more secure but have the same functionality.
Installing the sysbox package
As I use Debian and Ubuntu systems, I used the .deb packages distributed from the sysbox release page to install it (in my case I used the one from the 0.6.7 version).
On the machine running forgejo (a Debian 12 server) I downloaded the package, stopped the running containers (this is needed to install the package, and the only ones running were the ones started by the docker-compose.yaml file) and installed the sysbox-ce_0.6.7.linux_amd64.deb package using dpkg.
Updating the docker-compose.yaml file
To run the dind container without setting the privileged mode, we set sysbox-runc as the runtime on the dind container definition and set the privileged flag to false (which is the same as removing the key, as it defaults to false):
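A minimal sketch of the resulting service definition; the service and image names are assumptions:

```yaml
services:
  dind:
    image: docker:dind
    runtime: sysbox-runc
    privileged: false
```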
Testing the changes
After applying the changes to the docker-compose.yaml file, we start the containers and, to test things, re-run previously executed jobs to see if they work as before.
In my case I re-executed the build-image-from-tag workflow
#18 from the oci project and everything worked as expected.
Conclusion
For my current use case (docker + dind) it seems that sysbox is a good solution, but I'm not sure if I'll be installing it on kubernetes anytime soon unless I find a valid reason to do it (last time we talked about it my co-workers said that they are evaluating buildah and podman for kubernetes; probably we will use them to replace kaniko in our gitlab-ci pipelines, and for those tools the use of sysbox seems overkill).
After my previous posts related to Argo CD (one about argocd-autopilot and another with some usage examples) I started to look into Kluctl (I also plan to review Flux, but I'm more interested in the kluctl approach right now).
While reading an entry on the project blog about Cluster API, somehow I ended up on the vCluster site and decided to give it a try, as it can be a valid way of providing developers with on-demand clusters for debugging or running CI/CD tests before deploying things on common clusters, or even of having multiple debugging virtual clusters on a local machine with only one of them running at any given time.
In this post I will deploy a vcluster using the k3d_argocd kubernetes cluster (the one we created in the posts about argocd) as the host and will show how to:
argocd) as the host and will show how to:
use its ingress (in our case traefik) to access the API of the virtual one (removing the need to use the vcluster connect command to access it with kubectl),
publish the ingress objects deployed on the virtual cluster on the host ingress, and
use the sealed-secrets of the host cluster to manage the virtual cluster secrets.
Creating the virtual cluster
Installing the vcluster application
To create the virtual clusters we need the vcluster command; we can install it with arkade:
arkade get vcluster
The vcluster.yaml file
To create the cluster we are going to use the following vcluster.yaml file (you can find the documentation about all its options here):
controlPlane:
  proxy:
    # Extra hostnames to sign the vCluster proxy certificate for
    extraSANs:
    - my-vcluster-api.lo.mixinet.net
exportKubeConfig:
  context: my-vcluster_k3d-argocd
  server: https://my-vcluster-api.lo.mixinet.net:8443
  secret:
    name: my-vcluster-kubeconfig
sync:
  toHost:
    ingresses:
      enabled: true
    serviceAccounts:
      enabled: true
  fromHost:
    ingressClasses:
      enabled: true
    nodes:
      enabled: true
      clearImageStatus: true
    secrets:
      enabled: true
      mappings:
        byName:
          # sync all Secrets from the 'my-vcluster-default' namespace to the
          # virtual "default" namespace.
          "my-vcluster-default/*": "default/*"
          # We could add other namespace mappings if needed, i.e.:
          # "my-vcluster-kube-system/*": "kube-system/*"
In the controlPlane section we've added the proxy.extraSANs entry to add an extra host name and make sure it is included in the cluster certificates if we use it from an ingress.
The exportKubeConfig section creates a kubeconfig secret on the virtual cluster namespace using the provided
host name; the secret can be used by GitOps tools or we can dump it to a file to connect from our machine.
On the sync section we enable the synchronization of Ingress objects and ServiceAccounts from the virtual to the
host cluster:
We copy the ingress definitions to use the ingress server that runs on the host to make them work from the outside
world.
The service account synchronization is not really needed, but we enable it because it would be useful if we later test this configuration on EKS with IAM roles for service accounts.
On the opposite direction (from the host to the virtual cluster) we synchronize:
The IngressClass objects, to be able to use the host ingress server(s).
The Nodes (we are not using the info right now, but it could be interesting if we want to have the real information
of the nodes running pods of the virtual cluster).
The Secrets from the my-vcluster-default host namespace to the default of the virtual cluster; that
synchronization allows us to deploy SealedSecrets on the host that generate secrets that are copied
automatically to the virtual one. Initially we only copy secrets for one namespace but if the virtual cluster needs
others we can add namespaces on the host and their mappings to the virtual one on the vcluster.yaml file.
Creating the virtual cluster
To create the virtual cluster we run the following command:
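A sketch of the command, assuming the vcluster CLI flag names for the namespace, the values file and the connection behaviour:

```shell
vcluster create my-vcluster --namespace my-vcluster \
  --values vcluster.yaml --connect=false
```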
It creates the virtual cluster in the my-vcluster namespace using the vcluster.yaml file shown before, without connecting to the cluster from our local machine (if we don't pass that option the command adds an entry to our kubeconfig and launches a proxy to connect to the virtual cluster, which we don't plan to use).
Adding an ingress TCP route to connect to the vcluster api
As explained before, we need to create a Traefik IngressRouteTCP object to be able to connect to the vcluster API; we use the following definition:
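A sketch of such a Traefik IngressRouteTCP definition with TLS passthrough; the entry point name is an assumption that depends on the traefik configuration:

```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRouteTCP
metadata:
  name: my-vcluster-api
  namespace: my-vcluster
spec:
  entryPoints:
    - websecure
  routes:
    - match: HostSNI(`my-vcluster-api.lo.mixinet.net`)
      services:
        - name: my-vcluster
          port: 443
  tls:
    passthrough: true
```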
Once we apply those changes the cluster API will be available at the https://my-vcluster-api.lo.mixinet.net:8443 URL using its own self-signed certificate (we have enabled TLS passthrough) that includes the hostname we use (we adjusted it in the vcluster.yaml file, as explained before).
Getting the kubeconfig for the vcluster
Once the vcluster is running we will have its kubeconfig available in the my-vcluster-kubeconfig secret in its namespace on the host cluster.
To dump it to the ~/.kube/my-vcluster-config we can do the following:
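A sketch of dumping the secret, assuming the kubeconfig is stored under the config key of the secret:

```shell
kubectl get secret -n my-vcluster my-vcluster-kubeconfig \
  --template="{{.data.config}}" | base64 -d > ~/.kube/my-vcluster-config
```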
Once available we can define the vkubectl alias to adjust the KUBECONFIG variable to access it:
alias vkubectl="KUBECONFIG=~/.kube/my-vcluster-config kubectl"
Or we can merge the configuration with the one in the KUBECONFIG variable and use kubectx or a similar tool to change the context (for our vcluster the context will be my-vcluster_k3d-argocd). If the KUBECONFIG variable is defined and only contains the path to a single file, the merge can be done by running the following:
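A sketch of the merge using the standard kubectl flatten recipe:

```shell
# Merge the current kubeconfig with the vcluster one and overwrite the
# file pointed to by KUBECONFIG with the flattened result.
KUBECONFIG="$KUBECONFIG:$HOME/.kube/my-vcluster-config" kubectl config view \
  --flatten > "$HOME/.kube/merged-config"
mv "$HOME/.kube/merged-config" "$KUBECONFIG"
```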
For the rest of this post we will use the vkubectl alias when connecting to the virtual cluster; i.e., to check that it works we can run the cluster-info subcommand:
vkubectl cluster-info
Kubernetes control plane is running at https://my-vcluster-api.lo.mixinet.net:8443
CoreDNS is running at https://my-vcluster-api.lo.mixinet.net:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Installing the dummyhttpd application
To test the virtual cluster we are going to install the dummyhttpd application using the following kustomization.yaml file:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- https://forgejo.mixinet.net/blogops/argocd-applications.git//dummyhttp/?ref=dummyhttp-v1.0.0
# Add the config map
configMapGenerator:
- name: dummyhttp-configmap
  literals:
  - CM_VAR="Vcluster Test Value"
  behavior: create
  options:
    disableNameSuffixHash: true
patches:
# Change the ingress host name
- target:
    kind: Ingress
    name: dummyhttp
  patch: |-
    - op: replace
      path: /spec/rules/0/host
      value: vcluster-dummyhttp.lo.mixinet.net
# Add reloader annotations -- it will only work if we install reloader on the
# virtual cluster, as the one on the host cluster doesn't see the vcluster
# deployment objects
- target:
    kind: Deployment
    name: dummyhttp
  patch: |-
    - op: add
      path: /metadata/annotations
      value:
        reloader.stakater.com/auto: "true"
        reloader.stakater.com/rollout-strategy: "restart"
It is quite similar to the one we used on the Argo CD examples but uses a different DNS entry; to deploy it we run
kustomize and vkubectl:
kustomize build . | vkubectl apply -f -
configmap/dummyhttp-configmap created
service/dummyhttp created
deployment.apps/dummyhttp created
ingress.networking.k8s.io/dummyhttp created
We can check that everything worked using curl:
curl -s https://vcluster-dummyhttp.lo.mixinet.net:8443/ | jq -cM .
{"c": "Vcluster Test Value", "s": ""}
The objects available on the vcluster now are:
vkubectl get all,configmap,ingress
NAME READY STATUS RESTARTS AGE
pod/dummyhttp-55569589bc-9zl7t 1/1 Running 0 24s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dummyhttp    ClusterIP   10.43.51.39    <none>   80/TCP    24s
service/kubernetes   ClusterIP   10.43.153.12   <none>   443/TCP   14m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/dummyhttp 1/1 1 1 24s
NAME DESIRED CURRENT READY AGE
replicaset.apps/dummyhttp-55569589bc 1 1 1 24s
NAME DATA AGE
configmap/dummyhttp-configmap 1 24s
configmap/kube-root-ca.crt 1 14m
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress.networking.k8s.io/dummyhttp traefik vcluster-dummyhttp.lo.mixinet.net 172.20.0.2,172.20.0.3,172.20.0.4 80 24s
While we have the following ones on the my-vcluster namespace of the host cluster:
kubectl get all,configmap,ingress -n my-vcluster
NAME READY STATUS RESTARTS AGE
pod/coredns-bbb5b66cc-snwpn-x-kube-system-x-my-vcluster 1/1 Running 0 18m
pod/dummyhttp-55569589bc-9zl7t-x-default-x-my-vcluster 1/1 Running 0 45s
pod/my-vcluster-0 1/1 Running 0 19m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dummyhttp-x-default-x-my-vcluster      ClusterIP   10.43.51.39     <none>   80/TCP                   45s
service/kube-dns-x-kube-system-x-my-vcluster   ClusterIP   10.43.91.198    <none>   53/UDP,53/TCP,9153/TCP   18m
service/my-vcluster                            ClusterIP   10.43.153.12    <none>   443/TCP,10250/TCP        19m
service/my-vcluster-headless                   ClusterIP   None            <none>   443/TCP                  19m
service/my-vcluster-node-k3d-argocd-agent-1    ClusterIP   10.43.189.188   <none>   10250/TCP                18m
NAME READY AGE
statefulset.apps/my-vcluster 1/1 19m
NAME DATA AGE
configmap/coredns-x-kube-system-x-my-vcluster 2 18m
configmap/dummyhttp-configmap-x-default-x-my-vcluster 1 45s
configmap/kube-root-ca.crt 1 19m
configmap/kube-root-ca.crt-x-default-x-my-vcluster 1 11m
configmap/kube-root-ca.crt-x-kube-system-x-my-vcluster 1 18m
configmap/vc-coredns-my-vcluster 1 19m
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress.networking.k8s.io/dummyhttp-x-default-x-my-vcluster traefik vcluster-dummyhttp.lo.mixinet.net 172.20.0.2,172.20.0.3,172.20.0.4 80 45s
As shown, we have copies of the Service, Pod, Configmap and Ingress objects, but there is no copy of the
Deployment or ReplicaSet.
Creating a sealed secret for dummyhttp
To use the host's sealed secrets controller with the virtual cluster we will create the my-vcluster-default namespace
and add there the sealed secrets we want to have available as secrets on the default namespace of the virtual cluster:
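The commands are not reproduced in this extract; a sketch of what they could look like follows (the secret name and SECRET_VAR value match the ones used later in the post, but the exact invocation is an assumption):

```shell
# Create the namespace mapped to the default namespace of the virtual cluster
kubectl create namespace my-vcluster-default
# Build the secret locally, seal it for that namespace and apply the sealed copy
kubectl create secret generic dummyhttp-secret \
  --namespace my-vcluster-default \
  --from-literal=SECRET_VAR="Vcluster Boo" \
  --dry-run=client -o yaml \
  | kubeseal -o yaml > /tmp/dummyhttp-sealed-secret.yaml
kubectl apply -f /tmp/dummyhttp-sealed-secret.yaml
```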
After running the previous commands we have the following objects available on the host cluster:
kubectl get sealedsecrets.bitnami.com,secrets -n my-vcluster-default
NAME STATUS SYNCED AGE
sealedsecret.bitnami.com/dummyhttp-secret True 34s
NAME TYPE DATA AGE
secret/dummyhttp-secret Opaque 1 34s
And we can see that the secret is also available on the virtual cluster with the content we expected:
vkubectl get secrets
NAME TYPE DATA AGE
dummyhttp-secret Opaque 1 34s
vkubectl get secret/dummyhttp-secret --template="{{ .data.SECRET_VAR }}" \
  | base64 -d
Vcluster Boo
But the output of the curl command has not changed because, although we have the reloader controller deployed on the
host cluster, it does not see the Deployment object of the virtual one and the pods are not touched:
curl -s https://vcluster-dummyhttp.lo.mixinet.net:8443/ | jq -cM .
{"c": "Vcluster Test Value", "s": ""}
Installing the reloader application
To make reloader work on the virtual cluster we just need to install it as we did on the host using the following
kustomization.yaml file:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: kube-system

resources:
- github.com/stakater/Reloader/deployments/kubernetes/?ref=v1.4.2

patches:
# Add flags to reload workloads when ConfigMaps or Secrets are created or deleted
- target:
    kind: Deployment
    name: reloader-reloader
  patch: |-
    - op: add
      path: /spec/template/spec/containers/0/args
      value:
      - '--reload-on-create=true'
      - '--reload-on-delete=true'
      - '--reload-strategy=annotations'
We deploy it with kustomize and vkubectl:
kustomize build . | vkubectl apply -f -
serviceaccount/reloader-reloader created
clusterrole.rbac.authorization.k8s.io/reloader-reloader-role created
clusterrolebinding.rbac.authorization.k8s.io/reloader-reloader-role-binding created
deployment.apps/reloader-reloader created
As the controller was not available when the secret was created, the pods linked to the Deployment are not updated, but
we can force things by removing the secret on the host system; after we do that the secret is re-created from the sealed
version and copied to the virtual cluster, where the reloader controller updates the pod and the curl command shows the
new output:
kubectl delete -n my-vcluster-default secrets dummyhttp-secret
secret "dummyhttp-secret" deleted
sleep 2
vkubectl get pods
NAME READY STATUS RESTARTS AGE
dummyhttp-78bf5fb885-fmsvs 1/1 Terminating 0 6m33s
dummyhttp-c68684bbf-nx8f9 1/1 Running 0 6s
curl -s https://vcluster-dummyhttp.lo.mixinet.net:8443/ | jq -cM .
{"c":"Vcluster Test Value","s":"Vcluster Boo"}
If we change the secret on the host system things get updated pretty quickly now:
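The example itself is missing here; under the assumptions of the previous steps it could look like this (sealing a new value, applying it on the host and checking the service output):

```shell
# Seal an updated value and apply it on the host cluster; the sealed-secrets
# controller re-creates the secret and reloader restarts the vcluster pod
kubectl create secret generic dummyhttp-secret \
  --namespace my-vcluster-default \
  --from-literal=SECRET_VAR="Vcluster Boo v2" \
  --dry-run=client -o yaml \
  | kubeseal -o yaml \
  | kubectl apply -f -
curl -s https://vcluster-dummyhttp.lo.mixinet.net:8443/ | jq -cM .
```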
Pause and restore the vcluster
The status of pods and statefulsets while the virtual cluster is active can be seen using kubectl:
kubectl get pods,statefulsets -n my-vcluster
NAME READY STATUS RESTARTS AGE
pod/coredns-bbb5b66cc-snwpn-x-kube-system-x-my-vcluster 1/1 Running 0 127m
pod/dummyhttp-587c7855d7-pt9b8-x-default-x-my-vcluster 1/1 Running 0 4m39s
pod/my-vcluster-0 1/1 Running 0 128m
pod/reloader-reloader-7f56c54d75-544gd-x-kube-system-x-my-vcluster 1/1 Running 0 60m
NAME READY AGE
statefulset.apps/my-vcluster 1/1 128m
Pausing the vcluster
If we don't need to use the virtual cluster we can pause it, and after a short time all Pods are gone because the
statefulSet is scaled down to 0 (note that other resources like volumes are not removed, but all the objects that
have to be scheduled and consume CPU cycles stop running, which can translate into significant savings when running on
clusters from cloud platforms or, in a local cluster like the one we are using, frees resources like CPU and memory that
can now be used for other things):
vcluster pause my-vcluster
11:20:47 info Scale down statefulSet my-vcluster/my-vcluster...
11:20:48 done Successfully paused vcluster my-vcluster/my-vcluster
kubectl get pods,statefulsets -n my-vcluster
NAME READY AGE
statefulset.apps/my-vcluster 0/0 130m
Now the curl command fails:
curl -s https://vcluster-dummyhttp.lo.mixinet.net:8443
404 page not found
Although the ingress is still available (it returns a 404 because there is no pod behind the service):
kubectl get ingress -n my-vcluster
NAME CLASS HOSTS ADDRESS PORTS AGE
dummyhttp-x-default-x-my-vcluster traefik vcluster-dummyhttp.lo.mixinet.net 172.20.0.2,172.20.0.3,172.20.0.4 80 120m
In fact, the same problem happens when we try to connect to the vcluster API; the error shown by kubectl is related
to the TLS certificate because the 404 page is served using the wildcard certificate instead of the self-signed one:
vkubectl get pods
Unable to connect to the server: tls: failed to verify certificate: x509: certificate signed by unknown authority
curl -s https://my-vcluster-api.lo.mixinet.net:8443/api/v1/
404 page not found
curl -v -s https://my-vcluster-api.lo.mixinet.net:8443/api/v1/ 2>&1 | grep subject
* subject: CN=lo.mixinet.net
* subjectAltName: host "my-vcluster-api.lo.mixinet.net" matched cert's "*.lo.mixinet.net"
Resuming the vcluster
When we want to use the virtual cluster again we just need to use the resume command:
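The invocation is not shown in this extract; it should be along these lines (output omitted, as I cannot reproduce it here):

```shell
# Scale the vcluster statefulSet back up
vcluster resume my-vcluster
```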
Once all the pods are running the virtual cluster goes back to its previous state, although all of the pods were
restarted in the process, of course.
Cleaning up
The virtual cluster can be removed using the delete command:
vcluster delete my-vcluster
12:09:18 info Delete vcluster my-vcluster...
12:09:18 done Successfully deleted virtual cluster my-vcluster in namespace my-vcluster
12:09:18 done Successfully deleted virtual cluster namespace my-vcluster
12:09:18 info Waiting for virtual cluster to be deleted...
12:09:50 done Virtual Cluster is deleted
That removes everything we used in this post except the sealed secrets and secrets that we put in the
my-vcluster-default namespace, because we created that namespace ourselves.
If we delete the namespace all the secrets and sealed secrets on it are also removed:
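The command is missing from this extract; deleting the namespace is a single kubectl call:

```shell
# Removing the namespace also removes its secrets and sealed secrets
kubectl delete namespace my-vcluster-default
```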
Conclusions
I believe that the use of virtual clusters can be a good option for two of the use cases that I've encountered
in real projects in the past:
the need for short-lived clusters for developers or teams,
execution of integration tests from CI pipelines that require a complete cluster (the tests can be run on virtual
clusters that are created on demand or paused and resumed when needed).
For both cases things can be set up using the Apache-licensed product, although evaluating the vCluster Platform
offering could also be interesting.
In any case, when not everything is done inside kubernetes we will also have to check how to manage the external
services (i.e. if we use databases or message buses as SaaS instead of deploying them inside our clusters we need a way
of creating, deleting or pausing and resuming those services).
As a follow-up to my post about the use of argocd-autopilot
I'm going to deploy various applications to the cluster using Argo CD from the same
repository we used in the previous post.
For our examples we are going to test a solution to the problem we had when we updated a ConfigMap used by the
argocd-server (the resource was updated but the application Pod was not, because there was no change to the
argocd-server deployment); our original fix was to kill the pod manually, but that is a manual operation we
want to avoid.
The solution proposed in the
helm documentation for this kind of issue is to add
annotations to the Deployments with values that are a hash of the ConfigMaps or Secrets used by them; this way, if
a file is updated the annotation is also updated, and when the Deployment changes are applied a rollout of the pods is
triggered.
In this post we will install a couple of controllers and an application to show how we can handle Secrets with
argocd and solve the issue with updates to ConfigMaps and Secrets. To do it we will execute the following tasks:
Deploy the Reloader controller to our cluster. It is a tool that watches
changes in ConfigMaps and Secrets and does rolling upgrades on the Pods that use them from Deployment,
StatefulSet, DaemonSet or DeploymentConfig objects when they are updated (by default we have to add some
annotations to the objects to make things work).
Deploy a simple application that can use ConfigMaps and Secrets and test that the Reloader controller does its
job when we add or update a ConfigMap.
Install the Sealed Secrets controller to manage secrets inside our
cluster, use it to add a secret to our sample application and see that the application is reloaded automatically.
Creating the test project for argocd-autopilot
As we did our installation using argocd-autopilot we will use its structure to manage the applications.
The first thing to do is to create a project (we will name it test) as follows:
argocd-autopilot project create test
INFO cloning git repository: https://forgejo.mixinet.net/blogops/argocd.git
Enumerating objects: 18, done.
Counting objects: 100% (18/18), done.
Compressing objects: 100% (16/16), done.
Total 18 (delta 1), reused 0 (delta 0), pack-reused 0
INFO using revision: "", installation path: "/"
INFO pushing new project manifest to repo
INFO project created: 'test'
Now that the test project is available we will use it on our argocd-autopilot invocations when creating
applications.
Installing the reloader controller
To add the reloader application to the test project as a kustomize application and deploy it on the tools
namespace with argocd-autopilot we do the following:
argocd-autopilot app create reloader \
--app 'github.com/stakater/Reloader/deployments/kubernetes/?ref=v1.4.2' \
--project test --type kustomize --dest-namespace tools
INFO cloning git repository: https://forgejo.mixinet.net/blogops/argocd.git
Enumerating objects: 19, done.
Counting objects: 100% (19/19), done.
Compressing objects: 100% (18/18), done.
Total 19 (delta 2), reused 0 (delta 0), pack-reused 0
INFO using revision: "", installation path: "/"
INFO created 'application namespace' file at '/bootstrap/cluster-resources/in-cluster/tools-ns.yaml'
INFO committing changes to gitops repo...
INFO installed application: reloader
That command creates four files in the argocd repository:
One to create the tools namespace: bootstrap/cluster-resources/in-cluster/tools-ns.yaml
The kustomization.yaml file for the base definition of the application: apps/reloader/base/kustomization.yaml
The kustomization.yaml file for the test project (by default it includes the same configuration used in the
base definition, but we could make other changes if needed): apps/reloader/overlays/test/kustomization.yaml
The config.json file used to define the application on argocd for the test project (it points to the folder
that includes the previous kustomization.yaml file): apps/reloader/overlays/test/config.json
We can check that the application is working using the argocd command line application:
argocd app get argocd/test-reloader -o tree
Name: argocd/test-reloader
Project: test
Server: https://kubernetes.default.svc
Namespace: tools
URL: https://argocd.lo.mixinet.net:8443/applications/test-reloader
Source:
- Repo: https://forgejo.mixinet.net/blogops/argocd.git
Target:
Path: apps/reloader/overlays/test
SyncWindow: Sync Allowed
Sync Policy: Automated (Prune)
Sync Status: Synced to (2893b56)
Health Status: Healthy
KIND/NAME STATUS HEALTH MESSAGE
ClusterRole/reloader-reloader-role Synced
ClusterRoleBinding/reloader-reloader-role-binding Synced
ServiceAccount/reloader-reloader Synced serviceaccount/reloader-reloader created
Deployment/reloader-reloader Synced Healthy deployment.apps/reloader-reloader created
ReplicaSet/reloader-reloader-5b6dcc7b6f Healthy
Pod/reloader-reloader-5b6dcc7b6f-vwjcx Healthy
Adding flags to the reloader server
The runtime configuration flags for the reloader server are described in the project
README.md
file; in our case we want to adjust three values:
We want to enable the option to reload a workload when a ConfigMap or Secret is created,
We want to enable the option to reload a workload when a ConfigMap or Secret is deleted,
We want to use the annotations strategy for reloads, as it is the recommended mode of operation when using argocd.
To pass them we edit the apps/reloader/overlays/test/kustomization.yaml file to patch the pod container template, the
text added is the following:
patches:
# Add flags to reload workloads when ConfigMaps or Secrets are created or deleted
- target:
    kind: Deployment
    name: reloader-reloader
  patch: |-
    - op: add
      path: /spec/template/spec/containers/0/args
      value:
      - '--reload-on-create=true'
      - '--reload-on-delete=true'
      - '--reload-strategy=annotations'
After committing and pushing the updated file the system launches the application with the new options.
The dummyhttp application
To do a quick test we are going to deploy the dummyhttp web server using an
image generated using the following Dockerfile:
# Image to run the dummyhttp application <https://github.com/svenstaro/dummyhttp>

# This arg could be passed by the container build command (used with mirrors)
ARG OCI_REGISTRY_PREFIX

# Latest tested version of alpine
FROM ${OCI_REGISTRY_PREFIX}alpine:3.21.3

# Tool versions
ARG DUMMYHTTP_VERS=1.1.1

# Download binary
RUN ARCH="$(apk --print-arch)" && \
  VERS="$DUMMYHTTP_VERS" && \
  URL="https://github.com/svenstaro/dummyhttp/releases/download/v$VERS/dummyhttp-$VERS-$ARCH-unknown-linux-musl" && \
  wget "$URL" -O "/tmp/dummyhttp" && \
  install /tmp/dummyhttp /usr/local/bin && \
  rm -f /tmp/dummyhttp

# Set the entrypoint to /usr/local/bin/dummyhttp
ENTRYPOINT [ "/usr/local/bin/dummyhttp" ]
The kustomize base application is available on a monorepo that contains the following files:
A Deployment definition that uses the previous image but uses /bin/sh -c as its entrypoint (command in the
k8s Pod terminology) and passes as its argument a string that runs the eval command to be able to expand
environment variables passed to the pod (the definition includes two optional variables, one taken from a
ConfigMap and another one from a Secret):
Deploying the dummyhttp application from argocd
We could create the dummyhttp application using the argocd-autopilot command as we've done in the reloader case,
but we are going to do it manually to show how simple it is.
First we've created the apps/dummyhttp/base/kustomization.yaml file to include the application from the previous
repository:
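The file contents are not included in this extract; based on the resource URL used in the vcluster example, it presumably looks similar to this sketch:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- https://forgejo.mixinet.net/blogops/argocd-applications.git//dummyhttp/?ref=dummyhttp-v1.0.0
```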
And finally we add the apps/dummyhttp/overlays/test/config.json file to configure the application as the
ApplicationSet defined by argocd-autopilot expects:
Patching the application
Now we will add patches to the apps/dummyhttp/overlays/test/kustomization.yaml file:
One to add annotations for reloader (one to enable it and another one to set the rollout strategy to restart to
avoid touching the deployments, as that can generate issues with argocd).
Another to change the ingress hostname (not really needed, but something quite reasonable for a specific project).
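The patches themselves are not reproduced here; based on the annotations and hostname used elsewhere in the post, they presumably look like this sketch:

```yaml
patches:
# Add reloader annotations using the restart rollout strategy
- target:
    kind: Deployment
    name: dummyhttp
  patch: |-
    - op: add
      path: /metadata/annotations
      value:
        reloader.stakater.com/auto: "true"
        reloader.stakater.com/rollout-strategy: "restart"
# Change the ingress host name
- target:
    kind: Ingress
    name: dummyhttp
  patch: |-
    - op: replace
      path: /spec/rules/0/host
      value: test-dummyhttp.lo.mixinet.net
```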
After committing and pushing the changes we can use the argocd cli to check the status of the application:
argocd app get argocd/test-dummyhttp -o tree
Name: argocd/test-dummyhttp
Project: test
Server: https://kubernetes.default.svc
Namespace: default
URL: https://argocd.lo.mixinet.net:8443/applications/test-dummyhttp
Source:
- Repo: https://forgejo.mixinet.net/blogops/argocd.git
Target:
Path: apps/dummyhttp/overlays/test
SyncWindow: Sync Allowed
Sync Policy: Automated (Prune)
Sync Status: Synced to (fbc6031)
Health Status: Healthy
KIND/NAME STATUS HEALTH MESSAGE
Deployment/dummyhttp Synced Healthy deployment.apps/dummyhttp configured
ReplicaSet/dummyhttp-55569589bc Healthy
Pod/dummyhttp-55569589bc-qhnfk Healthy
Ingress/dummyhttp Synced Healthy ingress.networking.k8s.io/dummyhttp configured
Service/dummyhttp Synced Healthy service/dummyhttp unchanged
Endpoints/dummyhttp
EndpointSlice/dummyhttp-x57bl
As we can see, the Deployment and Ingress were updated, but the Service is unchanged.
To validate that the ingress is using the new hostname we can use curl:
curl -s https://dummyhttp.lo.mixinet.net:8443/
404 page not found
curl -s https://test-dummyhttp.lo.mixinet.net:8443/
"c": "", "s": ""
Adding a ConfigMap
Now that the system is adjusted to reload the application when the ConfigMap or Secret is created, deleted or
updated, we are ready to add one file and see how the system reacts.
We modify the apps/dummyhttp/overlays/test/kustomization.yaml file to create the ConfigMap using the
configMapGenerator as follows:
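The generator block is not included in this extract; given the value seen in the curl output of this section, it presumably resembles the one used in the vcluster example:

```yaml
configMapGenerator:
- name: dummyhttp-configmap
  literals:
  - CM_VAR="Default Test Value"
  behavior: create
  options:
    disableNameSuffixHash: true
```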
After committing and pushing the changes we can see that the ConfigMap is available, the pod has been deleted and
started again and the curl output includes the new value:
kubectl get configmaps,pods
NAME                          DATA   AGE
configmap/dummyhttp-configmap 1      11s
configmap/kube-root-ca.crt    1      4d7h
NAME                             READY   STATUS        RESTARTS   AGE
pod/dummyhttp-779c96c44b-pjq4d   1/1     Running       0          11s
pod/dummyhttp-fc964557f-jvpkx    1/1     Terminating   0          2m42s
curl -s https://test-dummyhttp.lo.mixinet.net:8443 | jq -M .
{
  "c": "Default Test Value",
  "s": ""
}
Using helm with argocd-autopilot
Right now there is no direct support in argocd-autopilot to manage applications using helm (see the issue
#38 on the project), but we want to use a chart in our
next example.
There are multiple ways to add the support, but the simplest one that allows us to keep using argocd-autopilot is to
use kustomize applications that call helm as described
here.
The only thing needed before being able to use this approach is to add the kustomize.buildOptions flag to the
argocd-cm ConfigMap in the bootstrap/argo-cd/kustomization.yaml file; its contents are now as follows:
bootstrap/argo-cd/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
configMapGenerator:
- behavior: merge
  literals:
  # Enable helm usage from kustomize (see https://github.com/argoproj/argo-cd/issues/2789#issuecomment-960271294)
  - kustomize.buildOptions="--enable-helm"
  - |
    repository.credentials=- passwordSecret:
        key: git_token
        name: autopilot-secret
      url: https://forgejo.mixinet.net/
      usernameSecret:
        key: git_username
        name: autopilot-secret
  name: argocd-cm
# Disable TLS for the Argo Server (see https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#traefik-v30)
- behavior: merge
  literals:
  - "server.insecure=true"
  name: argocd-cmd-params-cm
kind: Kustomization
namespace: argocd
resources:
- github.com/argoproj-labs/argocd-autopilot/manifests/base?ref=v0.4.19
- ingress_route.yaml
In the following section we will explain how the application is defined to make things work.
Installing the sealed-secrets controller
To manage secrets in our cluster we are going to use the
sealed-secrets controller and to install it we are going to use its
chart.
As we mentioned in the previous section, the idea is to create a kustomize application and use that to deploy the
chart, but we are going to create the files manually, as we are not going to import the base kustomization files from a
remote repository.
As there is no clear way to override helm Chart values using
overlays, we are going to use a generator to create the helm configuration from an external resource and include it
from our overlays (the idea has been taken from this repository,
which was referenced from a comment
on the issue #38 mentioned earlier).
The sealed-secrets application
We have created the following files and folders manually:
apps/sealed-secrets/
├── helm
│   ├── chart.yaml
│   └── kustomization.yaml
└── overlays
    └── test
        ├── config.json
        ├── kustomization.yaml
        └── values.yaml
The helm folder contains the generator template that will be included from our overlays.
The kustomization.yaml includes the chart.yaml as a resource:
apps/sealed-secrets/helm/kustomization.yaml
And the chart.yaml file defines the HelmChartInflationGenerator:
apps/sealed-secrets/helm/chart.yaml
apiVersion: builtin
kind: HelmChartInflationGenerator
metadata:
  name: sealed-secrets
releaseName: sealed-secrets
name: sealed-secrets
namespace: kube-system
repo: https://bitnami-labs.github.io/sealed-secrets
version: 2.17.2
includeCRDs: true
# Add common values to all argo-cd projects inline
valuesInline:
  fullnameOverride: sealed-secrets-controller
# Load a values.yaml file from the same directory that uses this generator
valuesFile: values.yaml
For this chart the template adjusts the namespace to kube-system and adds the fullnameOverride in the
valuesInline key because we want to use those settings on all the projects (they are the values expected by the
kubeseal command line application, so we adjust them to avoid the need to pass additional parameters to it).
We set global values inline to be able to use the valuesFile from our overlays; as we are using a generator
the path is relative to the folder that contains the kustomization.yaml file that calls it, so in our case we will need
a values.yaml file in each overlay folder (if we don't want to override any values for a project we can
create an empty file, but it has to exist).
Finally, our overlay folder contains three files, a kustomization.yaml file that includes the generator from the
helm folder, the values.yaml file needed by the chart and the config.json file used by argocd-autopilot to
install the application.
The kustomization.yaml file contents are:
apps/sealed-secrets/overlays/test/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

# Uncomment if you want to add additional resources using kustomize
#resources:
#- ../../base

generators:
- ../../helm
The values.yaml file enables the ingress for the application and adjusts its hostname:
apps/sealed-secrets/overlays/test/values.yaml
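The file contents are missing from this extract; following the bitnami-labs sealed-secrets chart value names (the hostname is an assumption on my part), it presumably looks like:

```yaml
ingress:
  enabled: true
  hostname: sealed-secrets.lo.mixinet.net
```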
Once we commit and push the files the sealed-secrets application is installed in our cluster; we can check it using
kubeseal to get the public certificate used by it:
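The invocation is not reproduced here; fetching the controller's public certificate through the Kubernetes API is done with kubeseal:

```shell
# Retrieve the controller's public certificate via the Kubernetes API
kubeseal --fetch-cert
```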
That invocation needs to have access to the cluster to do its job and in our case it works because we modified the chart
to use the kube-system namespace and set the controller name to sealed-secrets-controller as the tool expects.
If we need to create the secrets without credentials we can connect to the ingress address we added to retrieve the
public key:
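The command is missing from this extract; the sealed-secrets controller serves its certificate on the /v1/cert.pem endpoint, so (the hostname is an assumption) it could be fetched with:

```shell
# Fetch the sealing certificate through the ingress, no cluster credentials needed
curl -s https://sealed-secrets.lo.mixinet.net:8443/v1/cert.pem
```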
Or, if we don t have access to the ingress address, we can save the certificate on a file and use it instead of the URL.
The sealed version of the secret looks like this:
This file can be deployed to the cluster to create the secret (in our case we will add it to the argocd application),
but before doing that we are going to check the output of our dummyhttp service and get the list of Secrets and
SealedSecrets in the default namespace:
curl -s https://test-dummyhttp.lo.mixinet.net:8443 | jq -M .
{
  "c": "Default Test Value",
  "s": ""
}
kubectl get sealedsecrets,secrets
No resources found in default namespace.
Now we add the SealedSecret to the dummyhttp application, copying the file and adding it to the kustomization.yaml file:
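The kustomization change is not reproduced here; it presumably just adds the copied file as a resource, along these lines (the base entry is an assumption):

```yaml
resources:
- ../../base
- dummyhttp-sealed-secret.yaml
```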
Once we commit and push the files Argo CD creates the SealedSecret and the controller generates the Secret:
kubectl apply -f /tmp/dummyhttp-sealed-secret.yaml
sealedsecret.bitnami.com/dummyhttp-secret created
kubectl get sealedsecrets,secrets
NAME STATUS SYNCED AGE
sealedsecret.bitnami.com/dummyhttp-secret True 3s
NAME TYPE DATA AGE
secret/dummyhttp-secret Opaque 1 3s
If we check the command output we can see the new value of the secret:
Using sealed-secrets in production clusters
If you plan to use sealed-secrets look into its
documentation to understand how it manages the
private keys and how to back things up, and keep in mind that, as the documentation
explains, you can rotate
your sealed version of the secrets, but that doesn't change the actual secrets.
If you want to rotate your secrets you have to update them and commit the sealed version of the updates (as the
controller also rotates the encryption keys your new sealed version will also be using a newer key, so you will be doing
both things at the same time).
Final remarks
In this post we have seen how to deploy applications using the argocd-autopilot model, including the use of helm
charts inside kustomize applications, and how to install and use the sealed-secrets controller.
It has been interesting and I've learnt a lot about argocd in the process, but I believe that if I ever want to use it
in production I will also review the native helm support in argocd, using a separate repository to manage the
applications, at least to be able to compare it to the model explained here.
I was just released from the hospital after a 3-day stay for my (hopefully) last surgery. There was concern about massive blood loss and low heart rate. I have stabilized and have come home. Unfortunately, they had to prescribe many medications this round and they are extremely expensive and used up all my funds. I need gas money to get to my post-op doctor's appointments, and food would be cool. I would appreciate any help, even just a dollar!
I am already back to work, and have continued work on the crashy KDE snaps in a non-KDE environment (this also affects anyone using kde-neon extensions, such as FreeCAD). I hope to have a fix in the next day or so.
Fixed kate bug https://bugs.kde.org/show_bug.cgi?id=503285
Thanks for stopping by.
For a long time I've been wanting to try GitOps tools, but I haven't had the chance to try them for real on the projects
I was working on.
As I now have some spare time I've decided to play a little with Argo CD,
Flux and Kluctl to test them and be able to use one of them in a real project
in the future if it looks appropriate.
In this post I will use Argo CD Autopilot to install argocd on a
k3d local cluster installed using OpenTofu, to test the autopilot approach of
managing argocd and to test the tool itself (as it manages argocd using a git repository, it can be used to test argocd as
well).
Installing tools locally with arkade
Recently I've been using the arkade tool to install kubernetes related
applications on Linux servers and containers; I usually get the applications with it and install them in the
/usr/local/bin folder.
For this post I've created a simple script that checks if the tools I'll be using are available and installs them in the
$HOME/.arkade/bin folder if missing (I'm assuming that docker is already available, as it is not installable with
arkade):
#!/bin/sh

# TOOLS LIST
ARKADE_APPS="argocd argocd-autopilot k3d kubectl sops tofu"

# Add the arkade binary directory to the path if missing
case ":${PATH}:" in
  *:"${HOME}/.arkade/bin":*) ;;
  *) export PATH="${PATH}:${HOME}/.arkade/bin" ;;
esac

# Install or update arkade
if command -v arkade >/dev/null; then
  echo "Trying to update the arkade application"
  sudo arkade update
else
  echo "Installing the arkade application"
  curl -sLS https://get.arkade.dev | sudo sh
fi

echo ""
echo "Installing tools with arkade"
echo ""
for app in $ARKADE_APPS; do
  app_path="$(command -v $app)" || true
  if [ "$app_path" ]; then
    echo "The application '$app' already available on '$app_path'"
  else
    arkade get "$app"
  fi
done

cat <<EOF

Add the ~/.arkade/bin directory to your PATH if tools have been installed there
EOF
The rest of scripts will add the binary directory to the PATH if missing to make sure things work if something was
installed there.
Creating a k3d cluster with opentofu
Although using k3d directly would be a good choice for the creation of the cluster, I'm using tofu to do it because
that will probably be the tool used if we were working with Cloud Platforms like AWS or Google.
The main.tf file is as follows:
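The file contents are missing from this extract; I cannot reproduce the original, but a minimal alternative sketch that creates an equivalent cluster by shelling out to the k3d CLI (a dedicated k3d Terraform provider could be used instead; the cluster name and port mapping are assumptions based on the text) would be:

```hcl
terraform {
  required_providers {
    null = { source = "hashicorp/null" }
  }
}

# Create the k3d cluster by invoking the CLI; the load balancer's 443 port
# is published on the host's 8443 port, as described in the text
resource "null_resource" "k3d_cluster" {
  provisioner "local-exec" {
    command = "k3d cluster create argocd --port '8443:443@loadbalancer'"
  }
  provisioner "local-exec" {
    when    = destroy
    command = "k3d cluster delete argocd"
  }
}
```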
The k3d configuration is quite simple; as I plan to use the default traefik ingress controller with TLS I publish
the 443 port on the host's 8443 port. I'll explain how I add a valid certificate in the next step.
I've prepared the following script to initialize and apply the changes:
#!/bin/sh

set -e

# VARIABLES
# Default token for the argocd cluster
K3D_CLUSTER_TOKEN="argocdToken"
# Relative PATH to install the k3d cluster using terr-iaform
K3D_TF_RELPATH="k3d-tf"
# Secrets yaml file
SECRETS_YAML="secrets.yaml"
# Relative PATH to the workdir from the script directory
WORK_DIR_RELPATH=".."

# Compute WORKDIR
SCRIPT="$(readlink -f "$0")"
SCRIPT_DIR="$(dirname "$SCRIPT")"
WORK_DIR="$(readlink -f "$SCRIPT_DIR/$WORK_DIR_RELPATH")"

# Update the PATH to add the arkade bin directory
# Add the arkade binary directory to the path if missing
case ":${PATH}:" in
  *:"${HOME}/.arkade/bin":*) ;;
  *) export PATH="${PATH}:${HOME}/.arkade/bin" ;;
esac

# Go to the k3d-tf dir
cd "$WORK_DIR/$K3D_TF_RELPATH" || exit 1

# Create secrets.yaml file and encode it with sops if missing
if [ ! -f "$SECRETS_YAML" ]; then
  echo "token: $K3D_CLUSTER_TOKEN" >"$SECRETS_YAML"
  sops encrypt -i "$SECRETS_YAML"
fi

# Initialize terraform
tofu init

# Apply the configuration
tofu apply
Adding a wildcard certificate to the k3d ingress
As an optional step, after creating the k3d cluster I'm going to add a default wildcard certificate for the traefik
ingress server to be able to use everything over HTTPS without certificate issues.
As I manage my own DNS domain I've created the lo.mixinet.net and *.lo.mixinet.net DNS entries on my public and
private DNS servers (both return 127.0.0.1 and ::1) and I've created a TLS certificate for both entries using
Let's Encrypt with Certbot.
The certificate is updated automatically on one of my servers and when I need it I copy the contents of the
fullchain.pem and privkey.pem files from the /etc/letsencrypt/live/lo.mixinet.net server directory to the local
files lo.mixinet.net.crt and lo.mixinet.net.key.
After copying the files I run the following script to install or update the certificate and configure it as the default
for traefik:
#!/bin/sh
# Script to update the traefik certificate
secret="lo-mixinet-net-ingress-cert"
cert="${1:-lo.mixinet.net.crt}"
key="${2:-lo.mixinet.net.key}"
if [ -f "$cert" ] && [ -f "$key" ]; then
  kubectl -n kube-system create secret tls "$secret" \
    --key="$key" \
    --cert="$cert" \
    --dry-run=client --save-config -o yaml | kubectl apply -f -
  kubectl apply -f - <<EOF
apiVersion: traefik.containo.us/v1alpha1
kind: TLSStore
metadata:
  name: default
  namespace: kube-system
spec:
  defaultCertificate:
    secretName: $secret
EOF
else
  cat <<EOF
To add or update the traefik TLS certificate the following files are needed:
- cert: '$cert'
- key: '$key'
Note: you can pass the paths as arguments to this script.
EOF
fi
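Once the script has run, a couple of quick checks can confirm the result (these query the live cluster, so the exact output depends on the installation):

```shell
# The secret should exist in kube-system and be of type kubernetes.io/tls
kubectl -n kube-system get secret lo-mixinet-net-ingress-cert -o jsonpath='{.type}'
# The default TLSStore should reference it
kubectl -n kube-system get tlsstore default -o yaml
```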
Creating a repository and a token for autopilot

I'll be using a project on my forgejo instance to manage argocd; the repository I've created is at the URL https://forgejo.mixinet.net/blogops/argocd and I've created a private user named argocd that only has write access to that repository.

Logging in as the argocd user on forgejo, I've created a token with permission to read and write repositories, which I've saved in my pass password store under the mixinet.net/argocd@forgejo/repository-write entry.
Bootstrapping the installation

To bootstrap the installation I've used the following script (it uses the previously mentioned GIT_REPO and GIT_TOKEN values):

#!/bin/sh
set -e

# VARIABLES
# Relative PATH to the workdir from the script directory
WORK_DIR_RELPATH=".."

# Compute WORKDIR
SCRIPT="$(readlink -f "$0")"
SCRIPT_DIR="$(dirname "$SCRIPT")"
WORK_DIR="$(readlink -f "$SCRIPT_DIR/$WORK_DIR_RELPATH")"

# Add the arkade binary directory to the PATH if missing
case ":${PATH}:" in
  *:"${HOME}/.arkade/bin":*) ;;
  *) export PATH="${PATH}:${HOME}/.arkade/bin" ;;
esac

# Go to the working directory
cd "$WORK_DIR" || exit 1

# Set GIT variables
if [ -z "$GIT_REPO" ]; then
  export GIT_REPO="https://forgejo.mixinet.net/blogops/argocd.git"
fi
if [ -z "$GIT_TOKEN" ]; then
  GIT_TOKEN="$(pass mixinet.net/argocd@forgejo/repository-write)"
  export GIT_TOKEN
fi

argocd-autopilot repo bootstrap --provider gitea
The output of the execution is as follows:
bin/argocd-bootstrap.sh
INFO cloning repo: https://forgejo.mixinet.net/blogops/argocd.git
INFO empty repository, initializing a new one with specified remote
INFO using revision: "", installation path: ""
INFO using context: "k3d-argocd", namespace: "argocd"
INFO applying bootstrap manifests to cluster...
namespace/argocd created
customresourcedefinition.apiextensions.k8s.io/applications.argoproj.io created
customresourcedefinition.apiextensions.k8s.io/applicationsets.argoproj.io created
customresourcedefinition.apiextensions.k8s.io/appprojects.argoproj.io created
serviceaccount/argocd-application-controller created
serviceaccount/argocd-applicationset-controller created
serviceaccount/argocd-dex-server created
serviceaccount/argocd-notifications-controller created
serviceaccount/argocd-redis created
serviceaccount/argocd-repo-server created
serviceaccount/argocd-server created
role.rbac.authorization.k8s.io/argocd-application-controller created
role.rbac.authorization.k8s.io/argocd-applicationset-controller created
role.rbac.authorization.k8s.io/argocd-dex-server created
role.rbac.authorization.k8s.io/argocd-notifications-controller created
role.rbac.authorization.k8s.io/argocd-redis created
role.rbac.authorization.k8s.io/argocd-server created
clusterrole.rbac.authorization.k8s.io/argocd-application-controller created
clusterrole.rbac.authorization.k8s.io/argocd-applicationset-controller created
clusterrole.rbac.authorization.k8s.io/argocd-server created
rolebinding.rbac.authorization.k8s.io/argocd-application-controller created
rolebinding.rbac.authorization.k8s.io/argocd-applicationset-controller created
rolebinding.rbac.authorization.k8s.io/argocd-dex-server created
rolebinding.rbac.authorization.k8s.io/argocd-notifications-controller created
rolebinding.rbac.authorization.k8s.io/argocd-redis created
rolebinding.rbac.authorization.k8s.io/argocd-server created
clusterrolebinding.rbac.authorization.k8s.io/argocd-application-controller created
clusterrolebinding.rbac.authorization.k8s.io/argocd-applicationset-controller created
clusterrolebinding.rbac.authorization.k8s.io/argocd-server created
configmap/argocd-cm created
configmap/argocd-cmd-params-cm created
configmap/argocd-gpg-keys-cm created
configmap/argocd-notifications-cm created
configmap/argocd-rbac-cm created
configmap/argocd-ssh-known-hosts-cm created
configmap/argocd-tls-certs-cm created
secret/argocd-notifications-secret created
secret/argocd-secret created
service/argocd-applicationset-controller created
service/argocd-dex-server created
service/argocd-metrics created
service/argocd-notifications-controller-metrics created
service/argocd-redis created
service/argocd-repo-server created
service/argocd-server created
service/argocd-server-metrics created
deployment.apps/argocd-applicationset-controller created
deployment.apps/argocd-dex-server created
deployment.apps/argocd-notifications-controller created
deployment.apps/argocd-redis created
deployment.apps/argocd-repo-server created
deployment.apps/argocd-server created
statefulset.apps/argocd-application-controller created
networkpolicy.networking.k8s.io/argocd-application-controller-network-policy created
networkpolicy.networking.k8s.io/argocd-applicationset-controller-network-policy created
networkpolicy.networking.k8s.io/argocd-dex-server-network-policy created
networkpolicy.networking.k8s.io/argocd-notifications-controller-network-policy created
networkpolicy.networking.k8s.io/argocd-redis-network-policy created
networkpolicy.networking.k8s.io/argocd-repo-server-network-policy created
networkpolicy.networking.k8s.io/argocd-server-network-policy created
secret/autopilot-secret created
INFO pushing bootstrap manifests to repo
INFO applying argo-cd bootstrap application
application.argoproj.io/autopilot-bootstrap created
INFO running argocd login to initialize argocd config
Context 'autopilot' updated
INFO argocd initialized. password: XXXXXXX-XXXXXXXX
INFO run:
kubectl port-forward -n argocd svc/argocd-server 8080:80
Now that we have argocd installed and running, it can be checked by running the port-forward command above and connecting to https://localhost:8080/ (the certificate will be wrong; we are going to fix that in the next step).
Updating the argocd installation in git

Now that we have the application deployed, we can clone the argocd repository and edit the deployment to disable TLS for the argocd server (we are going to use TLS termination with traefik, which needs the server running in insecure mode; see the Argo CD documentation):

git clone ssh://git@forgejo.mixinet.net/blogops/argocd.git
cd argocd
edit bootstrap/argo-cd/kustomization.yaml
git commit -m 'Disable TLS for the argocd-server'
git push

The changes made to the kustomization.yaml file are the following:
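As a sketch of what such a change usually looks like (this assumes the standard server.insecure option from the Argo CD documentation, not necessarily the author's exact patch), the kustomization can merge the flag into the argocd-cmd-params-cm ConfigMap:

```yaml
# Hypothetical kustomize patch: run argocd-server without its own TLS
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
patches:
  - patch: |-
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: argocd-cmd-params-cm
      data:
        server.insecure: "true"
```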
As this simply changes the ConfigMap, we have to restart the argocd-server for it to be read again; to do that we delete the server pods so they are re-created using the updated resource.
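A hedged example of that restart, assuming the label selector used by the stock Argo CD manifests:

```shell
# Delete the argocd-server pods; the deployment controller re-creates them
# with the updated configuration
kubectl -n argocd delete pods -l app.kubernetes.io/name=argocd-server
```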
After doing this, the port-forward command is killed automatically; if we run it again, the connection to the argocd-server has to be made using HTTP instead of HTTPS.

Instead of testing that, we are going to add an ingress definition to be able to connect to the server using HTTPS and gRPC at the address argocd.lo.mixinet.net, using the wildcard TLS certificate we installed earlier.

To do that we edit the bootstrap/argo-cd/kustomization.yaml file to add the ingress_route.yaml file to the deployment.
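The referenced ingress_route.yaml can be sketched along the lines of the traefik example in the Argo CD documentation (the host and route details below are assumptions, not the author's exact file):

```yaml
# Hypothetical IngressRoute: HTTPS route for the web UI plus an h2c route for
# gRPC, relying on the default wildcard certificate installed earlier
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: argocd-server
  namespace: argocd
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`argocd.lo.mixinet.net`)
      priority: 10
      services:
        - name: argocd-server
          port: 80
    - kind: Rule
      match: Host(`argocd.lo.mixinet.net`) && Header(`Content-Type`, `application/grpc`)
      priority: 11
      services:
        - name: argocd-server
          port: 80
          scheme: h2c
  tls: {}
```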
After pushing the changes and waiting a little bit, the change is applied and we can access the server using HTTPS and gRPC; the former can be tested from a browser, and gRPC using the command line interface:
argocd --grpc-web login argocd.lo.mixinet.net:8443
Username: admin
Password:
'admin:login' logged in successfully
Context 'argocd.lo.mixinet.net:8443' updated
argocd app list -o name
argocd/argo-cd
argocd/autopilot-bootstrap
argocd/cluster-resources-in-cluster
argocd/root
So things are working fine, and that is all for this post, folks!
In addition to all the regular testing, I am testing our snaps in a non-KDE environment; so far it is not looking good in Xubuntu. We have kernel/glibc crashes on startup for some, and on file open for others. I am working on a hopeful fix.
Next week I will have (I hope) my final surgery. If you can spare any change to help bring me over the finish line, I will be forever grateful.
After careful consideration, I've decided to embark on a new chapter in my professional journey. I've left my position at AWS to dedicate at least the next six months to developing open source software and strengthening digital ecosystems. My focus will be on contributing to Linux distributions (primarily Debian) and other critical infrastructure components that our modern society depends on, but which may not receive adequate attention or resources.
The Evolution of Open Source
Open source won. Over the 25+ years I've been involved in the open source movement, I've witnessed its remarkable evolution. Today, Linux powers billions of devices, from tiny embedded systems and Android smartphones to massive cloud datacenters and even space stations. Examine any modern large-scale digital system and you'll discover it's built upon thousands of open source projects.
I feel the priority for the open source movement should no longer be increasing adoption, but rather solving how to best maintain the vast ecosystem of software. This requires building robust institutions and processes to secure proper resourcing and ensure the collaborative development process remains efficient and leads to ever-increasing quality of software.
What is Special About Debian?
Debian, established in 1993 by Ian Murdock, stands as one of these institutions that has demonstrated exceptional resilience. There is no single authority, but instead a complex web of various stakeholders, each with their own goals and sources of funding. Every idea needs to be championed at length to a wide audience and implemented through a process of organic evolution.
Thanks to this approach, Debian has been consistently delivering production-quality, universally useful software for over three decades. Having been a Debian Developer for more than ten years, I'm well-positioned to contribute meaningfully to this community.
If your organization relies on Debian or its derivatives such as Ubuntu, and you're interested in funding cyber infrastructure maintenance by sponsoring Debian work, please don't hesitate to reach out. This could include package maintenance and keeping versions current, improving automated upgrade testing, general quality assurance and supply chain security enhancements.
The best way to reach me is by e-mail: otto at debian.org. You can also book a 15-minute chat with me for a quick introduction.
Grow or Die
My four-year tenure as a Software Development Manager at Amazon Web Services was very interesting. I'm grateful for my time at AWS and proud of my team's accomplishments, particularly for creating an open source contribution process that took Amazon from zero to the largest external contributor to the MariaDB open source database.
During this time, I got to experience and witness a plethora of interesting things, and I will surely share some of my key learnings in future blog posts. Unfortunately, the rate of progress in this mammoth 1.5-million-employee organization was slowing down, and I didn't feel I learned much new in the last few years. This realization, combined with the opportunity cost of not spending enough time on new cutting-edge technology, motivated me to take this leap.
Being a full-time open source developer may not be financially the most lucrative idea, but I think it is an excellent way to force myself to truly assess what is important on a global scale and what areas I want to contribute to.
Working fully on open source presents a fascinating duality: you're not bound by any external resource or schedule limitations, and the progress you make is directly proportional to how much energy you decide to invest. Yet you also depend on collaboration with people you might never meet and who are not financially incentivized to collaborate. This will undoubtedly expose me to all kinds of challenges. But what could be better for fostering holistic personal growth? I know that deep down in my DNA, I am not made to stay cozy or to do easy things. I need momentum.
OK, let's get going
Icy morning in Witch Wells, AZ
Life:
Last week we were enjoying springtime; this week winter has made a comeback! Good news on the broken arm front: the infection is gone, so they can finally deal with the break again. I will have a less invasive surgery on April 25th to pull the bones back together so they can properly knit! If you can spare any change, please consider a donation to my continued healing and recovery, or just support my work.
Kubuntu:
While testing the Beta I came across some crashy apps (namely PIM) due to apparmor. I have uploaded fixed profiles for kmail, akregator, akonadiconsole, konqueror and tellico.
KDE Snaps:
Added sctp support in Qt https://invent.kde.org/neon/snap-packaging/kde-qt6-core-sdk/-/commit/bbcb1dc39044b930ab718c8ffabfa20ccd2b0f75
This will allow me to finish a pyside6 snap and fix FreeCAD build.
Changed build type to Release in the kf6-core24-sdk which will reduce the size of kf6-core24 significantly.
Fixed a few startup errors in kf5-core24 and kf6-core24 snapcraft-desktop-integration.
Soumyadeep fixed wayland icons in https://invent.kde.org/neon/snap-packaging/kf6-core-sdk/-/merge_requests/3
KDE Applications 25.03.90 RC released to candidate (I know it says 24.12.3; the version won't be updated until the 25.04.0 release).
Kasts core24 fixed in candidate
Kate now core24 with Breeze theme! candidate
Neochat: Fixed missing QML and 25.04 dependencies in candidate
Kdenlive now with Glaxnimate animations! candidate
Digikam 8.6.0 now with scanner support in stable
Kstars 3.7.6 released to stable for realz, removed store rejected plugs.
Thanks for stopping by!