Search Results: "mkd"

5 January 2026

Colin Watson: Free software activity in December 2025

About 95% of my Debian contributions this month were sponsored by Freexian. You can also support my work directly via Liberapay or GitHub Sponsors. Python packaging: I upgraded these packages to new upstream versions. Python 3.14 is now a supported version in unstable, and we're working to get that into testing. As usual this is a pretty arduous effort, because it requires going round and fixing lots of odds and ends across the whole ecosystem. We can deal with a fair number of problems by keeping up with upstream (see above), but there tends to be a long tail of packages whose upstreams are less active and where we need to chase them, or where problems only show up in Debian for one reason or another. I spent a lot of time working on this: fixes for pytest 9; I filed "lintian: Report Python egg-info files/directories" to help us track the migration to pybuild-plugin-pyproject; and I did some work on dh-python ("Normalize names in pydist lookups" and "pyproject plugin: Support headers", the latter of which allowed converting python-persistent and zope.proxy to pybuild-plugin-pyproject, although it needed a follow-up fix). I also fixed or helped to fix several other build/test failures, worked on other bugs and other bits and pieces, and did code reviews.

24 December 2025

Daniel Lange: Getting scanning to work with Gimp on Trixie

Trixie ships Gimp 3.0.4, and the 3.x series has become incompatible with XSane, the common frontend for scanners on Linux. Hence the maintainer, Jörg Frings-Fürst, has temporarily disabled the Gimp integration in response to Debian bug #1088080. There seems to be no tracking bug for getting the functionality back, but people have been commenting on Debian bug #993293 as that is ... loosely related :-). There are two options to get the scanning functionality back in Trixie until this is properly resolved by an updated XSane in Debian (e.g. via trixie-backports): Lee Yingtong Li (RunasSudo) has created a Python script that calls XSane as a cli application and published it at https://yingtongli.me/git/gimp-xsanecli/. This worked OK-ish for me, but required me to find the scan in /tmp/ a number of times. This is a good stop-gap script if you need to scan from Gimp $now and look for a quick solution. Upstream has completed the necessary steps to get XSane working as a Gimp 3.x plugin at https://gitlab.com/sane-project/frontend/xsane. Unfortunately compiling this is a bit involved, but I made a version that can be dropped into /usr/local/bin or $HOME/bin and works alongside Gimp and the system-installed XSane. So:
  1. sudo apt install gimp xsane
  2. Download xsane-1.0.0-fit-003 (752kB, AMD64 executable for Trixie) and place it in /usr/local/bin (as root)
  3. sha256sum /usr/local/bin/xsane-1.0.0-fit-003
    # result needs to be af04c1a83c41cd2e48e82d04b6017ee0b29d555390ca706e4603378b401e91b2
  4. sudo chmod +x /usr/local/bin/xsane-1.0.0-fit-003
  5. # Link the executable into the Gimp plugin directory as the user running Gimp:
    mkdir -p $HOME/.config/GIMP/3.0/plug-ins/xsane/
    ln -s /usr/local/bin/xsane-1.0.0-fit-003 $HOME/.config/GIMP/3.0/plug-ins/xsane/
  6. Restart Gimp
  7. Scan from Gimp via File → Create → Acquire → XSane
The source code for the xsane executable above is available under GPL-2 at https://gitlab.com/sane-project/frontend/xsane/-/tree/c5ac0d921606309169067041931e3b0c73436f00. This points to the last upstream commit from 27 September 2025 at the time of writing this blog article.

19 December 2025

Otto Kekäläinen: Backtesting trailing stop-loss strategies with Python and market data

In January 2024 I wrote about the insanity of the Magnificent Seven dominating the MSCI World Index, and I wondered how long the number could continue to go up. It has continued to surge upward at an accelerating pace, which makes me worry that a crash is likely closer. As a software professional, I decided to analyze whether using stop-loss orders could reliably automate avoiding deep drawdowns. As everyone with some savings in the stock market (hopefully) knows, the stock market eventually experiences crashes. It is just a matter of when and how deep the crash will be. Staying on the sidelines for years is not a good investment strategy, as inflation will erode the value of your savings. Assuming the current true inflation rate is around 7%, a restaurant dinner that costs 20 euros today will cost 24.50 euros in three years. Savings of 1000 euros today would drop in purchasing power from 50 dinners to only 40 dinners in three years. Hence, if you intend to retain the value of your hard-earned savings, they need to be invested in something that grows in value. Most people try to beat inflation by buying shares in stable companies, directly or via broad market ETFs. These historically grow faster than inflation during normal years, but likely drop in value during recessions.
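Those inflation numbers are easy to verify (a quick check, not part of the original post):
python
# 7% annual inflation compounded over three years
print(20 * 1.07**3)           # ≈ 24.50 euros for the dinner
print(1000 / (20 * 1.07**3))  # ≈ 40.8 dinners left from 1000 euros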

What is a trailing stop-loss order? What if you could buy stocks to benefit from their value increasing without having to worry about a potential crash? All modern online stock brokers have a feature called stop-loss, where you can enter a price at which your stocks automatically get sold if they drop down to that price. A trailing stop-loss order is similar, but instead of a fixed price, you enter a margin (e.g. 10%). If the stock price rises, the stop-loss price will trail upwards by that margin. For example, if you buy a share at 100 euros and it has risen to 110 euros, you can set a 10% trailing stop-loss order which automatically sells it if the price drops 10% from the peak of 110 euros, at 99 euros. Thus, no matter what happens, you only lose 1 euro. And if the stock price continues to rise to 150 euros, the trailing stop-loss would automatically readjust to 150 euros minus 10%, which is 135 euros (150-15=135). If the price dropped to 135 euros, you would lock in a gain of 35 euros, which is not the peak price of 150 euros, but still better than whatever the price fell down to as a result of a large crash. In the simple case above, it obviously makes sense in theory, but it might not make sense in practice. Prices constantly oscillate, so you don't want a margin that is too small; otherwise you exit too early. Conversely, having a large margin may result in too large a drawdown before exiting. If markets crash rapidly, it might be that nobody buys your stocks at the stop-loss price, and shares have to be sold at an even lower price. Also, what will you do once the position is sold? The reason you invested in the stock market was to avoid holding cash, so would you buy the same stock back when the crash bottoms? But how will you know when the bottom has been reached?
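The mechanics described above fit in a few lines of code. As a minimal sketch (not code from the original post; the names are illustrative):
python
def trailing_stop_exit(prices, margin=0.10):
    """Return (index, price) of the first trailing-stop trigger, or None."""
    peak = prices[0]
    for i, price in enumerate(prices):
        peak = max(peak, price)           # the stop trails the highest price seen so far
        if price <= peak * (1 - margin):  # price fell `margin` below the peak: sell
            return i, price
    return None

# The example from the text: bought at 100, peak of 110, 10% margin -> sells at 99.
print(trailing_stop_exit([100, 105, 110, 99]))  # -> (3, 99)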

Backtesting stock market strategies with Python, YFinance, Pandas and Lightweight Charts I am not a professional investor, and nobody should take investment advice from me. However, I know what backtesting is and how to leverage open source software. So, I wrote a Python script to test if the trading strategy of using trailing stop-loss orders with specific margin values would have worked for a particular stock. First you need to have data. YFinance is a handy Python library that can be used to download the historic price data for any stock ticker on Yahoo.com. Then you need to manipulate the data. Pandas is the Python data analysis library with advanced data structures for working with relational or labeled data. Finally, to visualize the results, I used Lightweight Charts, which is a fast, interactive library for rendering financial charts, allowing you to plot the stock price, the trailing stop-loss line, and the points where trades would have occurred. I really like how the zoom is implemented in Lightweight Charts, which makes drilling into the data points feel effortless. The full solution is not polished enough to be published for others to use, but you can piece together your own by reusing some of the key snippets. To avoid re-downloading the same data repeatedly, I implemented a small caching wrapper that saves the data locally (as Parquet files):
python
from datetime import datetime
from pathlib import Path

import pandas
import yfinance

# TICKER, START_DATE and CACHE_DIR are assumed to be defined near the top of
# the script, for example:
TICKER = "BNP.PA"
START_DATE = "2014-01-01"
CACHE_DIR = Path("cache")

CACHE_DIR.mkdir(parents=True, exist_ok=True)
end_date = datetime.today().strftime("%Y-%m-%d")
cache_file = CACHE_DIR / f"{TICKER}-{START_DATE}--{end_date}.parquet"

if cache_file.is_file():
    dataframe = pandas.read_parquet(cache_file)
    print(f"Loaded price data from cache: {cache_file}")
else:
    dataframe = yfinance.download(
        TICKER,
        start=START_DATE,
        end=end_date,
        progress=False,
        auto_adjust=False,
    )

    dataframe.to_parquet(cache_file)
    print(f"Fetched new price data from Yahoo Finance and cached to: {cache_file}")
The dataframe is a Pandas object with a powerful API. For example, to print a snippet from the beginning and the end of the dataframe to see what the data looks like, you can use:
python
print("First 5 rows of the raw data:")
print(df.head())
print("Last 5 rows of the raw data:")
print(df.tail())
Example output:
First 5 rows of the raw data
Price Adj Close Close High Low Open Volume
Ticker BNP.PA BNP.PA BNP.PA BNP.PA BNP.PA BNP.PA
Date
2014-01-02 29.956285 55.540001 56.910000 55.349998 56.700001 316552
2014-01-03 30.031801 55.680000 55.990002 55.290001 55.580002 210044
2014-01-06 30.080338 55.770000 56.230000 55.529999 55.560001 185142
2014-01-07 30.943321 57.369999 57.619999 55.790001 55.880001 370397
2014-01-08 31.385597 58.189999 59.209999 57.750000 57.790001 489940
Last 5 rows of the raw data
Price Adj Close Close High Low Open Volume
Ticker BNP.PA BNP.PA BNP.PA BNP.PA BNP.PA BNP.PA
Date
2025-12-11 78.669998 78.669998 78.919998 76.900002 76.919998 357918
2025-12-12 78.089996 78.089996 80.269997 78.089996 79.470001 280477
2025-12-15 79.080002 79.080002 79.449997 78.559998 78.559998 233852
2025-12-16 78.860001 78.860001 79.980003 78.809998 79.430000 283057
2025-12-17 80.080002 80.080002 80.150002 79.080002 79.199997 262818
Adding new columns to the dataframe is easy. For example, I used a custom function to calculate the Relative Strength Index (RSI). To add a new column RSI with a value for every row based on the price from that row, only one line of code is needed, without custom loops:
python
df["RSI"] = compute_rsi(df["price"], period=14)
After manipulating the data, the series can be converted into an array structure and printed as JSON into a placeholder in an HTML template:
python
import json
from pathlib import Path

import jinja2

baseline_series = [
    {"time": ts, "value": val}
    for ts, val in df_plot[["timestamp", BASELINE_LABEL]].itertuples(index=False)
]

baseline_json = json.dumps(baseline_series)
# jinja2.Template() takes the template source, so read the file first
template = jinja2.Template(Path("template.html").read_text())
rendered_html = template.render(
    title=title,
    heading=heading,
    description=description_html,
    # ...
    baseline_json=baseline_json,
    # ...
)

with open("report.html", "w", encoding="utf-8") as f:
    f.write(rendered_html)
print("Report generated!")
In the HTML template, the marker variables in Jinja syntax get replaced with the actual JSON:
html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>{{ title }}</title>
  ...
</head>
<body>
  <h1>{{ heading }}</h1>
  <div id="chart"></div>
  <script>
  // Ensure the DOM is ready before we initialise the chart
  document.addEventListener('DOMContentLoaded', () => {
    // Parse the JSON data passed from Python
    const baselineData = {{ baseline_json | safe }};
    const strategyData = {{ strategy_json | safe }};
    const markersData = {{ markers_json | safe }};

    // Create the chart
    const chart = LightweightCharts.createChart(document.getElementById('chart'), {
      width: document.getElementById('chart').clientWidth,
      height: 500,
      layout: {
        background: { color: "#222" },
        textColor: "#ccc"
      },
      grid: {
        vertLines: { color: "#555" },
        horzLines: { color: "#555" }
      }
    });

    // Add baseline series
    const baselineSeries = chart.addLineSeries({
      title: '{{ baseline_label }}',
      lastValueVisible: false,
      priceLineVisible: false,
      priceLineWidth: 1
    });
    baselineSeries.setData(baselineData);

    baselineSeries.priceScale().applyOptions({
      entireTextOnly: true
    });

    // Add strategy series
    const strategySeries = chart.addLineSeries({
      title: '{{ strategy_label }}',
      lastValueVisible: false,
      priceLineVisible: false,
      color: '#FF6D00'
    });
    strategySeries.setData(strategyData);

    // Add buy/sell markers to the strategy series
    strategySeries.setMarkers(markersData);

    // Fit the chart to show the full data range (full zoom)
    chart.timeScale().fitContent();
  });
  </script>
</body>
</html>
There are also Python libraries built specifically for backtesting investment strategies, such as Backtrader and Zipline, but they do not seem to be actively maintained, and they probably have more features and complexity than I needed for this simple test. The screenshot below shows an example of backtesting a strategy on the Waste Management Inc stock from January 2015 to December 2025. The baseline "Buy and hold" scenario is shown as the blue line and fully tracks the stock price, while the orange line shows how the strategy would have performed, with markers for the sells and buys along the way. [Screenshot: Backtest run example]

Results I experimented with multiple strategies and tested them with various parameters, but I don't think I found a strategy that was consistently and clearly better than just buy-and-hold. It basically boils down to the fact that I was not able to find any way to calculate, based on historical data, when the crash has bottomed. You can only know in hindsight that the price has stopped dropping and is on a steady path to recovery, but at that point it is already too late to buy in. In my testing, most strategies underperformed buy-and-hold because they sold when the crash started, but bought back after it recovered at a slightly higher price. In particular, when using narrow margins and selling on a 3-6% drawdown, the strategy performed very badly, as those small dips tend to recover in a few days. Essentially, the strategy kept repeating the pattern of selling 100 stocks at a 6% discount, then being able to buy back only 94 shares the next day, then again selling 94 shares at a 6% discount and only being able to buy back maybe 90 shares after recovery, and so forth, never catching up to buy-and-hold. The strategy worked better in large market crashes, as they tended to last longer and there were higher chances of buying back the shares while the price was still low. For example, in the 2020 crash, selling at a 20% drawdown was a good strategy, as the stock I tested dropped nearly 50% and remained low for several weeks; thus, the strategy bought back the stocks while the price was still low and had not yet started to climb significantly. But that was just a lucky incident, as the delta between the trailing stop-loss margin of 20% and the total crash of 50% was large enough. If the crash had been only 25%, the strategy would have missed the rebound and ended up buying back the stocks at a slightly higher price. Also, note that the simulation assumes that the trade itself is too small to affect the price formation. We should keep in mind that in reality, if many people have stop-loss orders in place, a large price drop would trigger all of them, creating a flood of sell orders, which in turn would affect the price and drive it lower even faster and deeper. Luckily, it seems that stop-loss orders are generally not a good strategy, so we don't need to fear that too many people will be using them.
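The compounding of those repeated round-trip losses is easy to illustrate (a hypothetical example, not part of the original backtest):
python
# Each sell-low/buy-back-higher round trip at a 6% loss shrinks the position.
shares = 100.0
for _ in range(5):
    shares *= 1 - 0.06
print(f"Shares left after 5 round trips: {shares:.1f}")  # ~73.4, never catching up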

Conclusion Even though using a trailing stop-loss strategy does not seem to help in getting consistently higher returns based on my backtesting, I would still say it is useful in protecting from the downside of stock investing. It can act as a kind of insurance policy that considerably decreases the chances of losing big while increasing the chances of losing a little bit. If you are risk-averse, which I think I probably am, this tradeoff can make sense. I'd rather miss out on an initial 50% loss and an overall 3% gain on recovery than have to sit through weeks or months with a 50% loss before the price recovers to prior levels. Most notably, the trailing stop-loss strategy works best if used only once. If it is repeated multiple times, the small losses in gains will compound into big losses overall. Thus, I think I might actually put this automation in place at least on the stocks in my portfolio that have had the highest gains. If they keep going up, I will ride along, but once the crash happens, I will be out of those particular stocks permanently. Do you have a favorite open source investment tool, or are you aware of any strategy that actually works? Comment below!

12 December 2025

Freexian Collaborators: Debian Contributions: Updates about DebConf Video Team Sprint, rebootstrap, SBOM tooling in Debian and more! (by Anupa Ann Joseph)

Debian Contributions: 2025-11 Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

DebConf Video Team Sprint The DebConf Video Team records, streams, and publishes talks from DebConf and many miniDebConfs. A lot of the infrastructure development happens during setup for these events, but we also try to organize a sprint once a year to work on infrastructure when there isn't a DebConf about to happen. Stefano attended the sprint in Herefordshire this year and wrote up a report.

rebootstrap, by Helmut Grohne A number of jobs were stuck in architecture-specific failures. gcc-15 and dpkg still occasionally disagree about whether PIE is enabled, and big-endian mipsen needed fixes in systemd. Beyond this, regular uploads of libxml2 and gcc-15 required fixes and rebasing of pending patches. Earlier, Loongson used rebootstrap to create the initial package set for loong64, and Miao Wang has now submitted their changes. As a result, there is now initial support for suites other than unstable and for use with derivatives.

Building the support for Software Bill Of Materials tooling in Debian, by Santiago Ruano Rincón Vendors of Debian-based products may (or should) be paying attention to the evolution of different jurisdictions (such as the CRA or updates on CISA's Minimum Elements for a Software Bill of Materials) that require making a Software Bill of Materials (SBOM) available for their products. It is therefore important to have tools in Debian that make it easier to produce such SBOMs. In this context, Santiago continued the work on packaging libraries related to SBOMs. This includes the packaging of the SPDX Python library (python-spdx-tools) and its dependencies rdflib and mkdocs-include-markdown-plugin. System Package Data Exchange (SPDX), defined by ISO/IEC 5962:2021, is an open standard capable of representing systems with software components as SBOMs along with other data and security references. SPDX and CycloneDX (whose Python library python3-cyclonedx-lib was packaged by prior efforts this year) are the two main SBOM standards available today.

Miscellaneous contributions
  • Carles improved po-debconf-manager: added automatic checking of bug report status via python-debianbts; changed some command-line option names and output based on user feedback; finished refactoring user interaction to rich; the codebase is now flake8-compliant; added type safety with mypy.
  • Carles, using po-debconf-manager, created 19 bug reports for translations where the merge requests were pending; reviewed and created merge requests for 4 packages.
  • Carles planned a second version of the tool that detects packages that Recommends or Suggests packages which are not in Debian. He is taking ideas from dumat.
  • Carles submitted a pull request to python-unidiff2 (adapted from the original pull request to python-unidiff). He also started preparing a qnetload update.
  • Stefano did miscellaneous python package updates: mkdocs-macros-plugin, python-confuse, python-pip, python-mitogen.
  • Stefano reviewed a beets upload for a new maintainer who is taking it over.
  • Stefano handled some debian.net infrastructure requests.
  • Stefano updated debian.social infrastructure for the trixie point release.
  • The update broke jitsi.debian.social. Stefano put some time into debugging it and eventually enlisted upstream assistance; upstream solved the problem!
  • Stefano worked on some patches for Python that help Debian:
    • GH-139914: The main HP PA-RISC support patch for 3.14.
    • GH-141930: We observed an unhelpful error when failing to write a .pyc file during package installation. We may have fixed the problem, and at least made the error better.
    • GH-141011: Ignore missing ifunc support on HP PA-RISC.
  • Stefano spun up a website for hamburg2026.mini.debconf.org.
  • Raphaël reviewed a merge request updating tracker.debian.org to rely on Bootstrap version 5.
  • Emilio coordinated various transitions.
  • Helmut sent patches for 26 cross build failures.
  • Helmut officially handed over the cleanup of the /usr-move transition.
  • Helmut monitored the transition moving libcrypt-dev out of build-essential and bumped the remaining bugs to rc-severity in coordination with the release team.
  • Helmut updated the Build-Profiles patch for debian-policy incorporating feedback from Sean Whitton with a lot of help from Nattie Mayer-Hutchings and Freexian colleagues.
  • Helmut discovered that the way mmdebstrap deals with start-stop-daemon may result in broken output and sent a patch.
  • As a result of armel being removed from sid, but not from forky, the multiarch hinter broke. Helmut fixed it.
  • Helmut uploaded debvm, accepting a patch from Luca Boccassi to fix it for newer systemd.
  • Colin began preparing for the second stage of the OpenSSH GSS-API key exchange package split.
  • Colin caught and fixed a devscripts regression that broke part of Debusine.
  • Colin packaged django-pgtransaction and backported it to trixie, since it looks useful for Debusine.
  • Thorsten uploaded the packages lprng, cpdb-backend-cups, cpdb-libs and ippsample to fix some RC bugs as well as other bugs that accumulated over time. He also uploaded cups-filters to all Debian releases to fix three CVEs.

7 December 2025

Vincent Bernat: Compressing embedded files in Go

Go's embed feature lets you bundle static assets into an executable, but it stores them uncompressed. This wastes space: a web interface with documentation can bloat your binary by dozens of megabytes. A proposal to optionally enable compression was declined because it is difficult to handle all use cases. One solution? Put all the assets into a ZIP archive!

Code The Go standard library includes a module to read and write ZIP archives. It contains a function that turns a ZIP archive into an io/fs.FS structure that can replace embed.FS in most contexts.1
package embed

import (
	"archive/zip"
	"bytes"
	_ "embed"
	"fmt"
	"io/fs"
	"sync"
)

//go:embed data/embed.zip
var embeddedZip []byte

var dataOnce = sync.OnceValue(func() *zip.Reader {
	r, err := zip.NewReader(bytes.NewReader(embeddedZip), int64(len(embeddedZip)))
	if err != nil {
		panic(fmt.Sprintf("cannot read embedded archive: %s", err))
	}
	return r
})

func Data() fs.FS {
	return dataOnce()
}
We can build the embed.zip archive with a rule in a Makefile. We specify the files to embed as dependencies to ensure changes are detected.
common/embed/data/embed.zip: console/data/frontend console/data/docs
common/embed/data/embed.zip: orchestrator/clickhouse/data/protocols.csv 
common/embed/data/embed.zip: orchestrator/clickhouse/data/icmp.csv
common/embed/data/embed.zip: orchestrator/clickhouse/data/asns.csv
common/embed/data/embed.zip:
    mkdir -p common/embed/data && zip --quiet --recurse-paths --filesync $@ $^
The automatic variable $@ is the rule target, while $^ expands to all the dependencies, modified or not.

Space gain Akvorado, a flow collector written in Go, embeds several static assets:
  • CSV files to translate port numbers, protocols or AS numbers, and
  • HTML, CSS, JS, and image files for the web interface, and
  • the documentation.
Breakdown of the space used by each component before (left) and after (right) the introduction of embed.zip. Displayed as a treemap, it shows many embedded files replaced by a single bigger one.
Embedding these assets into a ZIP archive reduced the size of the Akvorado executable by more than 4 MiB:
$ unzip -p common/embed/data/embed.zip | wc -c | numfmt --to=iec
7.3M
$ ll common/embed/data/embed.zip
-rw-r--r-- 1 bernat users 2.9M Dec  7 17:17 common/embed/data/embed.zip

Performance loss Reading from a compressed archive is not as fast as reading a flat file. A simple benchmark shows it is more than 4× slower. It also allocates some memory.2
goos: linux
goarch: amd64
pkg: akvorado/common/embed
cpu: AMD Ryzen 5 5600X 6-Core Processor
BenchmarkData/compressed-12     2262   526553 ns/op   610 B/op   10 allocs/op
BenchmarkData/uncompressed-12   9482   123175 ns/op     0 B/op    0 allocs/op
Each access to an asset requires a decompression step, as seen in this flame graph:
&#128444; Flame graph when reading data from embed.zip compared to reading data directly
CPU flame graph comparing the time spent on CPU when reading data from embed.zip (left) versus reading data directly (right). Because the Go testing framework executes the benchmark for uncompressed data 4 times more often, it uses the same horizontal space as the benchmark for compressed data. The graph is interactive.
While a ZIP archive has an index to quickly find the requested file, seeking inside a compressed file is currently not possible.3 Therefore, the files from a compressed archive do not implement the io.ReaderAt or io.Seeker interfaces, unlike directly embedded files. This prevents some features, like serving partial files or detecting MIME types when serving files over HTTP.
For Akvorado, this is an acceptable compromise to save a few mebibytes from an executable of almost 100 MiB. Next week, I will continue this futile adventure by explaining how I prevented Go from disabling dead code elimination!

  1. You can safely read multiple files concurrently. However, it does not implement ReadDir() and ReadFile() methods.
  2. You could keep frequently accessed assets in memory. This reduces CPU usage and trades cached memory for resident memory.
  3. SOZip is a profile that enables fast random access in a compressed file. However, Go's archive/zip module does not support it.

2 December 2025

Simon Josefsson: Guix on Trisquel & Ubuntu for Reproducible CI/CD Artifacts

Last week I published Guix on Debian container images that prepared for today's announcement of Guix on Trisquel/Ubuntu container images. I have published images with a reasonably modern Guix for Trisquel 11 aramo, Trisquel 12 ecne, Ubuntu 22.04 and Ubuntu 24.04. The Ubuntu images are available for both amd64 and arm64, but unfortunately Trisquel arm64 containers aren't available yet, so they are only for amd64. Images for ppc64el and riscv64 are work in progress. The currently supported container names:
registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel11-guix
registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel12-guix
registry.gitlab.com/debdistutils/guix/guix-on-dpkg:ubuntu22.04-guix
registry.gitlab.com/debdistutils/guix/guix-on-dpkg:ubuntu24.04-guix
Or, if you prefer, guix-on-dpkg on Docker Hub:
docker.io/jas4711/guix-on-dpkg:trisquel11-guix
docker.io/jas4711/guix-on-dpkg:trisquel12-guix
docker.io/jas4711/guix-on-dpkg:ubuntu22.04-guix
docker.io/jas4711/guix-on-dpkg:ubuntu24.04-guix
You may use them as follows. See the guix-on-dpkg README for how to start guix-daemon and install packages.
jas@kaka:~$ podman run -it --hostname guix --rm registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel11-guix
root@guix:/# head -1 /etc/os-release 
NAME="Trisquel GNU/Linux"
root@guix:/# guix describe
  guix 136fc8b
    repository URL: https://gitlab.com/debdistutils/guix/mirror.git
    branch: master
    commit: 136fc8bfe91a64d28b6c54cf8f5930ffe787c16e
root@guix:/# 
You may now be asking yourself: why? Fear not, gentle reader, because having two container images of roughly similar software is a great tool for attempting to build software artifacts reproducibly, and comparing the results to spot differences. Obviously. I have been using this pattern to get reproducible tarball artifacts of several software releases for around a year and a half, since libntlm 1.8. Let's walk through how to set up a CI/CD pipeline that will build a piece of software in four different jobs, for Trisquel 11/12 and Ubuntu 22.04/24.04. I am in the process of learning Codeberg/Forgejo CI/CD, so I am still using GitLab CI/CD here, but the concepts should be the same regardless of platform. Let's start by defining a job skeleton:
.guile-gnutls: &guile-gnutls
  before_script:
  - /root/.config/guix/current/bin/guix-daemon --version
  - env LC_ALL=C.UTF-8 /root/.config/guix/current/bin/guix-daemon --build-users-group=guixbuild $GUIX_DAEMON_ARGS &
  - GUIX_PROFILE=/root/.config/guix/current; . "$GUIX_PROFILE/etc/profile"
  - type guix
  - guix --version
  - guix describe
  - time guix install --verbosity=0 wget gcc-toolchain autoconf automake libtool gnutls guile pkg-config
  - time apt-get update
  - time apt-get install -y make git texinfo
  - GUIX_PROFILE="/root/.guix-profile"; . "$GUIX_PROFILE/etc/profile"
  script:
  - git clone https://codeberg.org/guile-gnutls/guile-gnutls.git
  - cd guile-gnutls
  - git checkout v5.0.1
  - ./bootstrap
  - ./configure
  - make V=1
  - make V=1 check VERBOSE=t
  - make V=1 dist
  after_script:
  - mkdir -pv out/$CI_JOB_NAME_SLUG/src
  - mv -v guile-gnutls/*-src.tar.* out/$CI_JOB_NAME_SLUG/src/
  - mv -v guile-gnutls/*.tar.* out/$CI_JOB_NAME_SLUG/
  artifacts:
    paths:
    - out/**
This installs some packages, clones guile-gnutls (it could be any project, that's just an example), builds it, and returns tarball artifacts. The artifacts are the git-archive and make dist tarballs. Let's instantiate the skeleton into four jobs, running the Trisquel 11/12 jobs on amd64 and the Ubuntu 22.04/24.04 jobs on arm64 for fun.
guile-gnutls-trisquel11-amd64:
  tags: [ saas-linux-medium-amd64 ]
  image: registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel11-guix
  extends: .guile-gnutls
guile-gnutls-ubuntu22.04-arm64:
  tags: [ saas-linux-medium-arm64 ]
  image: registry.gitlab.com/debdistutils/guix/guix-on-dpkg:ubuntu22.04-guix
  extends: .guile-gnutls
guile-gnutls-trisquel12-amd64:
  tags: [ saas-linux-medium-amd64 ]
  image: registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel12-guix
  extends: .guile-gnutls
guile-gnutls-ubuntu24.04-arm64:
  tags: [ saas-linux-medium-arm64 ]
  image: registry.gitlab.com/debdistutils/guix/guix-on-dpkg:ubuntu24.04-guix
  extends: .guile-gnutls
Running this pipeline will result in artifacts that you want to confirm for reproducibility. Let's add a pipeline job to do the comparison:
guile-gnutls-compare:
  image: alpine:latest
  needs: [ guile-gnutls-trisquel11-amd64,
           guile-gnutls-trisquel12-amd64,
           guile-gnutls-ubuntu22.04-arm64,
           guile-gnutls-ubuntu24.04-arm64 ]
  script:
  - cd out
  - sha256sum */*.tar.* */*/*.tar.* | sort | grep -- -src.tar.
  - sha256sum */*.tar.* */*/*.tar.* | sort | grep -v -- -src.tar.
  - sha256sum */*.tar.* */*/*.tar.* | sort | uniq -c -w64 | sort -rn
  - sha256sum */*.tar.* */*/*.tar.* | grep -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
  - sha256sum */*.tar.* */*/*.tar.* | grep -v -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
# Confirm modern git-archive tarball reproducibility
  - cmp guile-gnutls-trisquel12-amd64/src/*.tar.gz guile-gnutls-ubuntu24-04-arm64/src/*.tar.gz
# Confirm old git-archive (export-subst but long git describe) tarball reproducibility
  - cmp guile-gnutls-trisquel11-amd64/src/*.tar.gz guile-gnutls-ubuntu22-04-arm64/src/*.tar.gz
# Confirm 'make dist' generated tarball reproducibility
  - cmp guile-gnutls-trisquel11-amd64/*.tar.gz guile-gnutls-ubuntu22-04-arm64/*.tar.gz
  - cmp guile-gnutls-trisquel12-amd64/*.tar.gz guile-gnutls-ubuntu24-04-arm64/*.tar.gz
  artifacts:
    when: always
    paths:
    - ./out/**
Look how beautiful, almost like ASCII art! The commands print SHA256 checksums of the artifacts, sorted in a couple of ways, and then proceed to compare relevant artifacts. What would the output of such a run be, you may wonder? You can look for yourself in the guix-on-dpkg pipeline, but here is the gist of it:
$ cd out
$ sha256sum */*.tar.* */*/*.tar.* | sort | grep -- -src.tar.
79bc24143ba083819b36822eacb8f9e15a15a543e1257c53d30204e9ffec7aca  guile-gnutls-trisquel11-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
79bc24143ba083819b36822eacb8f9e15a15a543e1257c53d30204e9ffec7aca  guile-gnutls-ubuntu22-04-arm64/src/guile-gnutls-v5.0.1-src.tar.gz
b190047cee068f6b22a5e8d49ca49a2425ad4593901b9ac8940f8842ba7f164f  guile-gnutls-trisquel12-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
b190047cee068f6b22a5e8d49ca49a2425ad4593901b9ac8940f8842ba7f164f  guile-gnutls-ubuntu24-04-arm64/src/guile-gnutls-v5.0.1-src.tar.gz
$ sha256sum */*.tar.* */*/*.tar.* | sort | grep -v -- -src.tar.
1e8d107ad534b85f30e432d5c98bf599aab5d8db5f996c2530aabe91f203018a  guile-gnutls-trisquel11-amd64/guile-gnutls-5.0.1.tar.gz
1e8d107ad534b85f30e432d5c98bf599aab5d8db5f996c2530aabe91f203018a  guile-gnutls-ubuntu22-04-arm64/guile-gnutls-5.0.1.tar.gz
bc2df2d868f141bca5f3625aa146aa0f24871f6dcf0b48ff497eba3bb5219b84  guile-gnutls-trisquel12-amd64/guile-gnutls-5.0.1.tar.gz
bc2df2d868f141bca5f3625aa146aa0f24871f6dcf0b48ff497eba3bb5219b84  guile-gnutls-ubuntu24-04-arm64/guile-gnutls-5.0.1.tar.gz
$ sha256sum */*.tar.* */*/*.tar.* | sort | uniq -c -w64 | sort -rn
      2 bc2df2d868f141bca5f3625aa146aa0f24871f6dcf0b48ff497eba3bb5219b84  guile-gnutls-trisquel12-amd64/guile-gnutls-5.0.1.tar.gz
      2 b190047cee068f6b22a5e8d49ca49a2425ad4593901b9ac8940f8842ba7f164f  guile-gnutls-trisquel12-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
      2 79bc24143ba083819b36822eacb8f9e15a15a543e1257c53d30204e9ffec7aca  guile-gnutls-trisquel11-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
      2 1e8d107ad534b85f30e432d5c98bf599aab5d8db5f996c2530aabe91f203018a  guile-gnutls-trisquel11-amd64/guile-gnutls-5.0.1.tar.gz
$ sha256sum */*.tar.* */*/*.tar.* | grep -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
      2 79bc24143ba083819b36822eacb8f9e15a15a543e1257c53d30204e9ffec7aca  guile-gnutls-trisquel11-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
      2 b190047cee068f6b22a5e8d49ca49a2425ad4593901b9ac8940f8842ba7f164f  guile-gnutls-trisquel12-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
$ sha256sum */*.tar.* */*/*.tar.* | grep -v -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
      2 1e8d107ad534b85f30e432d5c98bf599aab5d8db5f996c2530aabe91f203018a  guile-gnutls-trisquel11-amd64/guile-gnutls-5.0.1.tar.gz
      2 bc2df2d868f141bca5f3625aa146aa0f24871f6dcf0b48ff497eba3bb5219b84  guile-gnutls-trisquel12-amd64/guile-gnutls-5.0.1.tar.gz
$ cmp guile-gnutls-trisquel12-amd64/src/*.tar.gz guile-gnutls-ubuntu24-04-arm64/src/*.tar.gz
$ cmp guile-gnutls-trisquel11-amd64/src/*.tar.gz guile-gnutls-ubuntu22-04-arm64/src/*.tar.gz
$ cmp guile-gnutls-trisquel11-amd64/*.tar.gz guile-gnutls-ubuntu22-04-arm64/*.tar.gz
$ cmp guile-gnutls-trisquel12-amd64/*.tar.gz guile-gnutls-ubuntu24-04-arm64/*.tar.gz
That's it for today, but stay tuned for more updates on using Guix in containers, and remember: Happy Hacking!

19 November 2025

Michael Ablassmeier: building SLES 16 vagrant/libvirt images using guestfs tools

SLES 16 has been released. In the past, SUSE offered ready-built Vagrant images. Unfortunately that's not the case anymore; with more recent SLES 15 releases the official images were gone. In the past it was possible to clone existing projects on the openSUSE Build Service to build the images yourself, but I couldn't find any templates for SLES 16. Naturally, there are several ways to build images, and the tooling around them involves kiwi-ng, the openSUSE Build Service, Packer recipes, etc. (existing Packer recipes won't work anymore, as YaST has been replaced by a new installer, called Agama). All pretty complicated. So my current take on creating a Vagrant image for SLES 16 has been the following: two guestfs-tools that can now be used to modify the created qcow2 image:
 virt-sysprep -a sles16.qcow2
#!/bin/bash
useradd vagrant
mkdir -p /home/vagrant/.ssh/
chmod 0700 /home/vagrant/.ssh/
echo "ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIF
o9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9W
hQ== vagrant insecure public key" > /home/vagrant/.ssh/authorized_keys
chmod 0600 /home/vagrant/.ssh/authorized_keys
chown -R vagrant:vagrant /home/vagrant/
# apply recommended ssh settings for vagrant boxes
SSHD_CONFIG=/etc/ssh/sshd_config.d/99-vagrant.conf
if [[ ! -d "$(dirname ${SSHD_CONFIG})" ]]; then
    SSHD_CONFIG=/etc/ssh/sshd_config
    # prepend the settings, so that they take precedence
    echo -e "UseDNS no\nGSSAPIAuthentication no\n$(cat ${SSHD_CONFIG})" > ${SSHD_CONFIG}
else
    echo -e "UseDNS no\nGSSAPIAuthentication no" > ${SSHD_CONFIG}
fi
SUDOERS_LINE="vagrant ALL=(ALL) NOPASSWD: ALL"
if [ -d /etc/sudoers.d ]; then
    echo "$SUDOERS_LINE" >  /etc/sudoers.d/vagrant
    visudo -cf /etc/sudoers.d/vagrant
    chmod 0440 /etc/sudoers.d/vagrant
else
    echo "$SUDOERS_LINE" >> /etc/sudoers
    visudo -cf /etc/sudoers
fi
 
mkdir -p /vagrant
chown -R vagrant:vagrant /vagrant
systemctl enable sshd
 virt-customize -a sle16.qcow2 --upload vagrant.sh:/tmp/vagrant.sh
 virt-customize -a sle16.qcow2 --run-command "/tmp/vagrant.sh"
After this, use the create-box.sh script from the vagrant-libvirt project to create a box image: https://github.com/vagrant-libvirt/vagrant-libvirt/blob/main/tools/create_box.sh and add the image to your environment:
 create_box.sh sle16.qcow2 sle16.box
 vagrant box add --name my/sles16 sle16.box
The resulting box is working well within my CI environment, as far as I can tell.

13 November 2025

Freexian Collaborators: Debian Contributions: Upstreaming cPython patches, ansible-core autopkgtest robustness and more! (by Anupa Ann Joseph)

Debian Contributions: 2025-10 Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Upstreaming cPython patches, by Stefano Rivera Python 3.14.0 (final) released in early October, and Stefano uploaded it to Debian unstable. The transition to support 3.14 has begun in Ubuntu, but hasn't started in Debian yet. While build failures in Debian's non-release ports are typically not a concern for package maintainers, Python is fairly low in the stack. If a new minor version has never successfully been built for a Debian port by the time we start supporting it, it will quickly become a problem for the port. Python 3.14 had been failing to build on two Debian ports architectures (hppa and m68k), but thankfully their porters provided patches. These were applied and uploaded, and Stefano forwarded the hppa one upstream. Getting it into shape for upstream approval took some work, and shook out several other regressions for the Python hppa port. Debugging these on slow hardware takes a while. These two ports aren't successfully autobuilding 3.14 yet (they're both timing out in tests), but they're at least manually buildable, which unblocks the ports. Docutils 0.22 also landed in Debian around this time, and Python needed some work to build its docs with it. The upstream isn't quite comfortable with distros using newer docutils, so there isn't a clear path forward for these patches yet. The start of the Python 3.15 cycle was also a good time to renew submission attempts on our other outstanding Python patches, most importantly multiarch tuples for stable ABI extension filenames.

ansible-core autopkgtest robustness, by Colin Watson The ansible-core package runs its integration tests via autopkgtest. For some time, we've seen occasional failures in the expect, pip, and template_jinja2_non_native tests that usually go away before anyone has a chance to look into them properly. Colin found that these were blocking an openssh upgrade and so decided to track them down. It turns out that these failures happened exactly when the libpython3.13-stdlib package had different versions in testing and unstable. A setup script removed /usr/lib/python3*/EXTERNALLY-MANAGED so that pip could install system packages for some of the tests, but if a package shipping that file were ever upgraded then that customization would be undone; the same setup script also removed apt pins in a way that caused problems when autopkgtest was invoked in certain ways. In combination with this, one of the integration tests attempted to disable system apt sources while testing the behaviour of the ansible.builtin.apt module, but it failed to do so comprehensively enough, and so that integration test accidentally upgraded the testbed from testing to unstable in the middle of the test. Chaos ensued. Colin fixed this in Debian and contributed the relevant part upstream.

Miscellaneous contributions
  • Carles kept working on the missing-relations (packages which Recommends or Suggests packages that are not available in Debian). He improved the tooling to detect Suggested packages that are not available in Debian because they were removed (or changed names).
  • Carles improved po-debconf-manager to send translations for packages that are not in Salsa. He also improved the UI of the tool (using rich for some of the output).
  • Carles, using po-debconf-manager, reviewed and submitted 38 debconf template translations.
  • Carles created a merge request for distro-tracker to align text and input-field (postponed until distro-tracker uses Bootstrap 5).
  • Raphaël updated gnome-shell-extension-hamster for GNOME 49. It is a GNOME Shell integration for the Hamster time tracker.
  • Raphaël merged a couple of trivial merge requests, but he did not yet find the time to properly review and test the Bootstrap 5 related merge requests that are still waiting on Salsa.
  • Helmut sent patches for 20 cross build failures.
  • Helmut refactored debvm, dropping support for running on bookworm. Two trixie features improve its operation: mkfs.ext4 can now consume a tar archive to populate the filesystem via libarchive, and dash now supports set -o pipefail. Beyond this change in operation, a number of robustness and quality issues have been resolved.
  • Thorsten fixed some bugs in the printing software and uploaded improved versions of brlaser and ifhp. Moreover he uploaded a new upstream version of cups.
  • Emilio updated xorg-server to the latest security release and helped with various transitions.
  • Santiago worked on and reviewed different Salsa CI MRs to address some regressions introduced by the move to sbuild+unshare. Those MRs included: stop adding the salsa-ci user in the build image to the sbuild group, fix the suffix path used by mmdebstrap to create the chroot, and update the documentation about how to use aptly repos in another project.
  • Santiago supported the work on the DebConf 26 organisation, particularly helping implement a method to count the votes to choose the conference logo.
  • Stefano reviewed Python PEP-725 and PEP-804, which hope to provide a mechanism to declare external (e.g. APT) dependencies in Python packages. Stefano engaged in discussion and provided feedback to the authors.
  • Stefano prepared for Berkeley DB removal in Python.
  • Stefano ported the backend of reverse-depends to Python 3 (yes, it had been running on 2.7) and migrated it from bzr to git.
  • Stefano updated miscellaneous packages, including beautifulsoup4, mkdocs-macros-plugin, python-pipx.
  • Stefano applied an upstream patch to pypy3, fixing an AST Compiler Assertion error.
  • Stefano uploaded an update to distro-info-data, including data for two additional Debian derivatives: eLxr and Devuan.
  • Stefano prepared an update to dh-python, the python packaging tool, merging several contributed patches and resolving some bugs.
  • Colin upgraded OpenSSH to 10.1p1, helped upstream to chase down some regressions, and further upgraded to 10.2p1. This is also now in trixie-backports.
  • Colin fixed several build regressions with Python 3.14, scikit-learn 1.7, and other transitions.
  • Colin investigated a malware report against tini, making use of reproducible builds to help demonstrate that this is highly likely to be a false positive.
  • Anupa prepared questions and collected interview responses from women contributors in Debian to publish the post as part of Ada Lovelace day 2025.

12 October 2025

Dirk Eddelbuettel: RcppSpdlog 0.0.23 on CRAN: New Upstream

Version 0.0.23 of RcppSpdlog arrived on CRAN today (after a slight delay) and has been uploaded to Debian. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want that was written by Gabi Melman, and also includes fmt by Victor Zverovich. You can learn more at the nice package documentation site. This release updates the code to version 1.16.0 of spdlog, which was released yesterday morning, and includes version 12.0 of fmt. We also converted the documentation site to now generating mkdocs-material via altdoc (plus local style and production tweaks) rather than directly. I updated the package yesterday morning when spdlog was updated. But the passage was delayed for a day at CRAN as their machines still time out hitting the GPL-2 URL from the README.md badge, leading a human to manually check the log and assert the nothingburgerness of it. This timeout does not happen to me locally using the corresponding URL checker package. I pondered this in a r-package-devel thread and may just have to switch to using the R Project URL for the GPL-2, as this is in fact recurring. The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.23 (2025-10-11)
  • Upgraded to upstream release spdlog 1.16.0 (including fmt 12.0)
  • The mkdocs-material documentation site is now generated via altdoc

Courtesy of my CRANberries, there is also a diffstat report detailing changes. More detailed information is on the RcppSpdlog page, or the package documentation site.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

9 October 2025

Dirk Eddelbuettel: xptr 1.2.0 on CRAN: New(ly Adopted) Package!

Excited to share that xptr is back on CRAN! The xptr package helps to create, check, modify, use, and share external pointer objects. External pointers are used quite extensively throughout R to manage external resources such as database connection objects and the like, and can be very useful to pass pointers to just about any C / C++ data structure around. While described in Writing R Extensions (notably Section 5.13), they can be a little bare-bones, and so this package can be useful. It had been created by Randy Lai and maintained by him from 2017 to 2020, but then fell off CRAN. In work with nanoarrow and its clean and minimal Arrow interface, xptr came in handy, so I adopted it. Several extensions and updates have been added: (compiled) function registration, continuous integration, tests, refreshed and extended documentation, as well as a print format extension useful for PyCapsule objects when passing via reticulate. The package documentation site was switched to altdoc driving the most excellent Material for MkDocs framework (providing my first test case of altdoc replacing my older local scripts; I should post some more about that). The first NEWS entry follows.

Changes in version 1.2.0 (2025-10-03)
  • New maintainer
  • Compiled functions are now registered, .Call() adjusted
  • README.md and DESCRIPTION edited and updated
  • Simple unit tests and continuous integration have been added
  • The package documentation site has been recreated using altdoc
  • All manual pages for functions now contain \value sections

For more, see the package page, the git repo or the documentation site.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Charles: How to Build an Incus Buster Image

It's always nice to have container images of Debian releases to test things, run applications or explore a bit without polluting your host machine. From some Brazilian friends (you know who you are ;-), I've learned the best way to debug a problem or test a fix is spinning up an incus container, getting into it and finding the minimum reproducer. So the combination incus + Debian is something that I'm very used to, but the problem is there are no images for Debian ELTS, and testing security fixes to see if they actually fix the vulnerability and don't break anything else is very important. Well, the regular images don't materialize out of thin air, right? So we can learn how they are made and try to generate ELTS images in the same way - shouldn't be that difficult, right? Well, kinda ;-) The images available by default in incus come from images.linuxcontainers.org and are built by Jenkins using distrobuilder. If you follow the links, you will find the repository containing the yaml image definitions used by distrobuilder at github.com/lxc/lxc-ci. With a bit of investigation work, a fork, an incus VM with distrobuilder installed and some magic (also called trial and error) I was able to build a buster image! Whooray! But VM and stretch images are still work in progress. Anyway, I wanted to share how you can build your images and document this process so I don't forget, so here we are.

Building Instructions We will use an incus trixie VM to perform the build so we don't clutter our own machine.
incus launch images:debian/trixie <instance-name> --vm
Then let's hop into the machine and install the dependencies.
incus shell <instance-name>
And
apt install git distrobuilder
Let's clone the repository with the yaml definition to build a buster container.
git clone --branch support-debian-buster https://github.com/charles2910/lxc-ci.git
# and cd into it
cd lxc-ci
Then all we need is to pass the correct arguments to distrobuilder so it can build the image. It can output the image in the current directory or in a pre-defined place, so let's create an easy place for the images.
mkdir -p /tmp/images/buster/container
# and perform the build
distrobuilder build-incus images/debian.yaml /tmp/images/buster/container/ \
            -o image.architecture=amd64 \
            -o image.release=buster \
            -o image.variant=default  \
            -o source.url="http://archive.debian.org/debian"
It requires a build definition written in yaml format to perform the build. If you are curious, check the images/ subdir. If all worked correctly, you should have two files in your pre-defined target directory. In our case, /tmp/images/buster/container/ contains:
incus.tar.xz  rootfs.squashfs
Let s copy it to our host so we can add the image to our incus server.
incus file pull <instance-name>/tmp/images/buster/container/incus.tar.xz .
incus file pull <instance-name>/tmp/images/buster/container/rootfs.squashfs .
# and import it as debian/10
incus image import incus.tar.xz rootfs.squashfs --alias debian/10
If we are lucky, we can run our Debian buster container now!
incus launch local:debian/10 <debian-buster-instance>
incus shell <debian-buster-instance>
Well, now all that is left is to install Freexian's ELTS package repository and update the image to get a lot of CVE fixes.
apt install --assume-yes wget
wget https://deb.freexian.com/extended-lts/archive-key.gpg -O /etc/apt/trusted.gpg.d/freexian-archive-extended-lts.gpg
cat <<EOF >/etc/apt/sources.list.d/extended-lts.list
deb http://deb.freexian.com/extended-lts buster-lts main contrib non-free
EOF
apt update
apt --assume-yes upgrade

8 October 2025

Dirk Eddelbuettel: RPushbullet 0.3.5: Mostly Maintenance

RPushbullet demo A new version 0.3.5 of the RPushbullet package arrived on CRAN. It marks the first release in 4 1/2 years for this mature and feature-stable package. RPushbullet interfaces the neat Pushbullet service for inter-device messaging, communication, and more. It lets you easily send (programmatic) alerts like the one to the left to your browser, phone, tablet, or all at once. This release reflects mostly internal maintenance and updates to the documentation site, to continuous integration, to package metadata, and one code robustification. See below for more details.

Changes in version 0.3.5 (2025-10-08)
  • URL and BugReports fields have been added to DESCRIPTION
  • The pbPost function deals more robustly with the case of multiple target emails
  • The continuous integration and the README badge have been updated
  • The DESCRIPTION file now use Authors@R
  • The (encrypted) unit test configuration has been adjusted to reflect the current set of active devices
  • The mkdocs-material documentation site is now generated via altdoc

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the repo where comments and suggestions are welcome.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Sven Hoexter: Backstage Render Markdown in a Collapsible Block

Brief note to maybe spare someone else the trouble. If you want to hide e.g. a huge table in Backstage (techdocs/mkdocs) behind a collapsible element, you need the md_in_html extension and must use the markdown attribute for it to kick in on the <details> html tag. Add the extension to your mkdocs.yaml:
markdown_extensions:
  - md_in_html
Hide the table in your markdown document in a collapsible element like this:
<details markdown>
<summary>Long Table</summary>

| Foo | Bar |
| - | - |
| Fizz | Buzz |
</details>
It's also required to have an empty line between the HTML tag and the start of the markdown part. It rendered that way for me in VSCode, GitHub and Backstage.

3 October 2025

Dirk Eddelbuettel: #053: Adding llvm Snapshots for R Package Testing

Welcome to post 53 in the R4 series. Continuing with posts #51 from Tuesday and #52 from Wednesday and their stated intent of posting some more, here is another quick one. Earlier today I helped another package developer who came to the r-package-devel list asking for help with a build error on the Fedora machine at CRAN running recent / development clang. In such situations, the best first step is often to replicate the issue. As I pointed out on the list, the LLVM team behind clang maintains an apt repo at apt.llvm.org, making it a good resource to add to a Debian-based container such as Rocker r-base or the official r-base (the two are in fact interchangeable, and I take care of both). A small pothole, however, is that the documentation at the top of the apt.llvm.org site is a bit stale and behind on two aspects that changed on current Debian systems (i.e. unstable/testing as used for r-base). First, apt now prefers files ending in .sources (in a nicer format) and second, it now really requires a key (which is good practice). As it took me a few minutes to regather how to meet both requirements, I reckoned I might as well script this. Et voilà, the following script does that:
#!/bin/sh

## Update does not hurt but is not strictly needed
#apt update --quiet --quiet
#apt upgrade --yes

## wget -qO- https://apt.llvm.org/llvm-snapshot.gpg.key | sudo tee /etc/apt/trusted.gpg.d/apt.llvm.org.asc
## or as we are root in container
wget -qO- https://apt.llvm.org/llvm-snapshot.gpg.key > /etc/apt/trusted.gpg.d/apt.llvm.org.asc

cat <<EOF >/etc/apt/sources.list.d/llvm-dev.sources
Types: deb
URIs: http://apt.llvm.org/unstable/
# for clang-21 
#   Suites: llvm-toolchain-21
# for current clang
Suites: llvm-toolchain
Components: main
Signed-By: /etc/apt/trusted.gpg.d/apt.llvm.org.asc
EOF

test -d ~/.R || mkdir ~/.R
cat <<EOF >~/.R/Makevars
CLANGVER=-22
# CLANGLIB=-stdlib=libc++
CXX=clang++\$(CLANGVER) \$(CLANGLIB)
CXX11=clang++\$(CLANGVER) \$(CLANGLIB)
CXX14=clang++\$(CLANGVER) \$(CLANGLIB)
CXX17=clang++\$(CLANGVER) \$(CLANGLIB)
CXX20=clang++\$(CLANGVER) \$(CLANGLIB)
CC=clang\$(CLANGVER)
SHLIB_CXXLD=clang++\$(CLANGVER) \$(CLANGLIB)
EOF

apt update
apt install --yes clang-22
Once the script is run, one can test a package (or set of packages) against clang-22 and clang++-22. This may help R package developers. The script is also generic enough for other development communities, who can ignore (or comment out / delete) the bit about ~/.R/Makevars and deploy the compiler differently. Updating the softlink, as apt-preferences does, is one way and is done in many GitHub Actions recipes. As we only need wget here, a basic Debian container should work, possibly after adding wget. For R users, r-base hits a decent sweet spot.
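As a quick illustration (the package choice here is mine, not from the post), one can then check that the snapshot compiler is present and build an R package from source with it:
clang++-22 --version
## R builds packages from source using the compilers set in ~/.R/Makevars
apt install --yes r-base-dev
Rscript -e 'install.packages("Rcpp", repos="https://cloud.r-project.org")'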

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

11 September 2025

Freexian Collaborators: Debian Contributions: Preparing for setup.py install deprecation, Salsa CI, Debian 13 "trixie" release and more! (by Anupa Ann Joseph)

Debian Contributions: 2025-08 Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Preparing for setup.py install deprecation, by Colin Watson
setuptools upstream will be removing the setup.py install command on 31 October. While this may not trickle down immediately into Debian, it does mean that in the near future nearly all Python packages will have to use pybuild-plugin-pyproject (though they don't necessarily have to use pyproject.toml; this is just a question of how the packaging runs the build system). Some of the Python team talked about this a bit at DebConf, and Colin volunteered to write up some notes on cases where this isn't straightforward. This page will likely grow as the team works on this problem.

Salsa CI, by Santiago Ruano Rincón
Santiago fixed some pending issues in the MR that moves the pipeline to sbuild+unshare, and after several months, Santiago was able to mark the MR as ready. The recent fixes include handling external repositories, honoring the RELEASE autodetection from d/changelog (thanks to Ahmed Siam for spotting the root cause of the issue), and fixing a regression in the apt resolver for *-backports releases. Santiago is currently waiting for a final review and approval from other members of the Salsa CI team before merging it. Thanks to all the folks who have helped testing the changes or provided feedback so far. If you want to test the current MR, you need to include the following pipeline definition in your project's CI config file:
---
include:
  - https://salsa.debian.org/santiago/pipeline/raw/sbuild-unshare-02-salsa-ci/salsa-ci.yml
  - https://salsa.debian.org/santiago/pipeline/raw/sbuild-unshare-02-salsa-ci/pipeline-jobs.yml
As a reminder, this MR will make the Salsa CI pipeline build packages more similarly to how they are built by the Debian official builders. This will also save some resources, since the default pipeline will have one stage fewer (the provisioning stage), and it will make it possible for more projects to be built on salsa.debian.org (including large projects and those from the OCaml ecosystem), etc. See the different issues being fixed in the MR description.

Debian 13 "trixie" release, by Emilio Pozuelo Monfort
On August 9th, Debian 13 "trixie" was released, building on two years' worth of updates and bug fixes from hundreds of developers. Emilio helped coordinate the release, communicating with several teams involved in the process.

DebConf 26 Site Visit, by Stefano Rivera
Stefano visited Santa Fe, Argentina, the site for DebConf 26 next year. The aim of the visit was to help build a local team and see the conference venue first-hand. Stefano and Nattie represented the DebConf Committee. The local team organized Debian meetups in Buenos Aires and Santa Fe, where Stefano presented a talk on Debian and DebConf. Venues were scouted and the team met with the university management and local authorities.

Miscellaneous contributions
  • Raphaël updated tracker.debian.org after the trixie release to add the new forky release to the set of monitored distributions. He also reviewed and deployed the work of Scott Talbert showing open merge requests from Salsa in the "action needed" panel.
  • Raphaël reviewed some DEP-3 changes to modernize the embedded examples in light of the broad git adoption.
  • Raphaël configured new workflows on debusine.debian.net to upload to trixie and trixie-security, and officially announced the service on debian-devel-announce, inviting Debian developers to try the service for their next upload to unstable.
  • Carles created a merge request for django-compressor upstream to fix an error when concurrent node processing happened. This will allow removing a workaround added in openstack-dashboard and avoid the same bug in other projects that use django-compressor.
  • Carles prepared a system to detect packages that declare Recommends on packages which don't exist in unstable. He processed 16% of the reports (either reporting them or ignoring them due to mis-detections or temporary problems), and will continue next month.
  • Carles got familiar with and gave feedback on the freedict-wikdict package, and planned contributions with the maintainer to improve the package.
  • Helmut responded to queries related to /usr-move.
  • Helmut adapted crossqa.d.n to the release of trixie.
  • Helmut diagnosed sufficient failures in rebootstrap to make it work with gcc-15.
  • Helmut fixed the CI pipeline of debvm.
  • Helmut sent patches for 19 cross build problems.
  • Faidon discovered that the Multi-Arch hinter would emit confusing hints about :any annotations. Helmut identified the root cause to be the handling of virtual packages and fixed it.
  • Enrico dusted off python-debiancontributors and prototyped a receiving end for Salsa webpings, to start follow-up work on the contributors.debian.org discussions at DebConf25.
  • Colin upgraded about 70 Python packages to new upstream versions, which is around 10% of the backlog; this included a complicated Pydantic upgrade in collaboration with the Rust team.
  • Colin fixed a bug in debbugs that caused incoming emails to bugs.debian.org with certain header contents to go missing.
  • Thorsten uploaded sane-airscan, which was already in experimental, to unstable.
  • Thorsten created a script to automate the upload of new upstream versions of foomatic-db. The database contains information about printers and regularly gets an update. Now it is possible to keep the package more up to date in Debian.
  • Stefano prepared updates to almost all of his packages that had new versions waiting to upload to unstable. (beautifulsoup4, hatch-vcs, mkdocs-macros-plugin, pypy3, python-authlib, python-cffi, python-mitogen, python-pip, python-pipx, python-progress, python-truststore, python-virtualenv, re2, snowball, soupsieve).
  • Stefano uploaded two new python3.13 point releases to unstable.
  • Stefano updated distro-info-data in stable releases, to document the trixie release and expected EoL dates.
  • Stefano did some debian.social sysadmin work (keeping up quotas with growing databases and filesystems).
  • Stefano supported the Debian treasurers in processing some of the DebConf 25 reimbursements.
  • Lucas uploaded ruby3.4 to experimental. It was already approved by FTP masters.
  • Lucas uploaded ruby-defaults to experimental to add support for ruby3.4. It will allow us to start triggering test rebuilds and catch any FTBFS with ruby3.4.
  • Lucas did some administrative work for Google Summer of Code (GSoC) and replied to some queries from mentors and students.
  • Anupa helped to organize release parties for Debian 13 and Debian Day events.
  • Anupa did the live coverage for the Debian 13 release and prepared the Bits post for the release announcement and 32nd Debian Day as part of the Debian Publicity team.
  • Anupa attended a Debian Day event organized by FOSS club SSET as a speaker.

29 August 2025

Ravi Dwivedi: Installing Debian With Btrfs and Encryption

Motivation
On the 8th of August 2025 (a day before the Debian Trixie release), I was upgrading my personal laptop from Debian Bookworm to Trixie. It was a major update. However, the update didn't go smoothly, and I ran into some errors. From the Debian support IRC channel, I got to know that it would be best if I removed the texlive packages. However, it was not so easy to just remove texlive with a simple apt remove command. I had to remove the texlive packages from /usr/bin. Then I ran into other errors. Hours after I started the upgrade, I realized I preferred having my system as it was before, as I had to travel to Noida the next day. Needless to say, I wanted to go to sleep rather than fix my broken system. If only I had a way to go back to the state before the upgrade, it would have saved me a lot of trouble. I ended up installing Trixie from scratch. It turns out that there was a way to recover the pre-upgrade state: using Timeshift to roll back the system to a point in the past (in our example, the state before the upgrade process started). However, it needs the Btrfs filesystem with appropriate subvolumes, which the Debian installer does not provide in its guided partitioning menu. I set this up a few weeks after the above-mentioned incident. Let me demonstrate how it works.
Check the screenshot above. It shows a list of snapshots made by Timeshift. Some of them were made by me manually; others were made by Timeshift automatically as per the routine I have set up (hourly backups, weekly backups, etc.). In the above-mentioned major update, I could have just taken a snapshot using Timeshift before performing the upgrade and rolled back to it when I found I could not spend more time fixing my installation errors. Then I could have performed the upgrade later.

Installation
In this tutorial, I will cover how I installed Debian with Btrfs and disk encryption, along with creating the subvolumes @ for root and @home for /home, so that I can use Timeshift to create snapshots. These snapshots are kept on the same disk where Debian is installed, and the use-case is to roll back to a working system in case I mess something up, or to recover an accidentally deleted file. I went through countless tutorials on the Internet, but I didn't find a single tutorial covering both the disk encryption and the above-mentioned subvolumes (on Debian). Debian doesn't create the desired subvolumes by default, therefore the process requires some manual steps, which beginners may not be comfortable performing. Beginners can try distros such as Fedora and Linux Mint, as their installation includes Btrfs with the required subvolumes. Furthermore, it is pertinent to note that I used Debian Trixie's DVD ISO on a real laptop (not a virtual machine) for my installation. Debian Trixie is the codename for the current stable version of Debian. I then took the screenshots in a virtual machine by repeating the process; a couple of screenshots are from the installation I did on the real laptop. Let's start the tutorial by booting up the Debian installer.
The above screenshot shows the first screen we see on the installer. Since we want to choose Expert Install, we select Advanced Options in the screenshot above.
Let's select the Expert Install option in the above screenshot. This is because we want to create subvolumes after the installer is done with the partitioning, and only then proceed to installing the base system. Non-expert install modes proceed directly to installing the system right after creating partitions, without pausing for us to create the subvolumes.
After selecting the Expert Install option, you will get the screen above. I will skip to partitioning from here and leave the intermediate steps such as choosing language, region, connecting to Wi-Fi, etc. For your reference, I did create the root user.
Let s jump right to the partitioning step. Select the Partition disks option from the menu as shown above.
Choose Manual.
Select your disk where you would like to install Debian.
Select Yes when asked about creating a new empty partition table.
I chose the msdos option as I am not using UEFI. If you are using UEFI, then you need to choose the gpt option. Also, your steps will (slightly) differ from mine if you are using UEFI. In that case, you can watch this video by the YouTube channel EF Linux, in which he creates an EFI partition. As he doesn't cover disk encryption, you can continue reading this post after following the steps corresponding to EFI.
Select the free space option as shown above.
Choose Create a new partition.
I chose the partition size to be 1 GB.
Choose Primary.
Choose Beginning.
Now, I got to this screen.
I changed mount point to /boot and turned on the bootable flag and then selected Done setting up the partition.
Now select free space.
Choose the Create a new partition option.
I made the partition size equal to the remaining space on my disk. I do not intend to create a swap partition, so I do not need more space.
Select Primary.
Select the Use as option to change its value.
Select physical volume for encryption.
Select Done setting up the partition.
Now select Configure encrypted volumes.
Select Yes.
Select Finish.
Selecting Yes here erases the data, which can take a lot of time. So if you have hours to spare for this step (e.g. if your SSD is around 1 TB), I recommend selecting Yes; otherwise, you could select No and compromise on the quality of the encryption. After this, you will be asked to enter a passphrase for disk encryption and confirm it. Please do so. I forgot to take the screenshot for that step.
Now select that encrypted volume as shown in the screenshot above.
Here we will change a couple of options which will be shown in the next screenshot.
In the Use as menu, select btrfs journaling file system.
Now, click on the mount point option.
Change it to / - the root file system.
Select Done setting up the partition.
This is a preview of the partitioning after performing the above-mentioned steps.
If everything is okay, proceed with the Finish partitioning and write changes to disk option.
The installer is reminding us to create a swap partition. I proceeded without it as I planned to add swap after the installation.
If everything looks fine, choose Yes to write the changes to disk.
Now we are done with partitioning, and we are shown the screen in the screenshot above. If we had not selected the Expert Install option, the installer would have proceeded to install the base system without asking us. However, we want to create subvolumes before proceeding to install the base system; this is the reason we chose Expert Install. Now press Ctrl + Alt + F2.
You will see the screen as in the above screenshot. It says Please press Enter to activate this console. So, let s press Enter.
After pressing Enter, we see the above screen.
The screenshot above shows the steps I performed in the console. I followed the already mentioned video by EF Linux for this part and adapted it to my situation (he doesn't encrypt the disk in his tutorial). First we run df -h to have a look at how our disk is partitioned. In my case, the output was:
# df -h
Filesystem              Size  Used  Avail   Use% Mounted on
tmpfs                   1.6G  344.0K  1.6G    0% /run
devtmpfs                7.7G       0  7.7G   0% /dev
/dev/sdb1               3.7G    3.7G    0   100% /cdrom
/dev/mapper/sda2_crypt  952.9G  5.8G  950.9G  0% /target
/dev/sda1               919.7M  260.0K  855.8M  0% /target/boot
df -h shows us that /dev/mapper/sda2_crypt and /dev/sda1 are mounted on /target and /target/boot respectively. Let's unmount them, the nested mount first (unmounting /target would fail with "target is busy" while /target/boot is still mounted):
# umount /target/boot
# umount /target
Next, let's mount our root filesystem to /mnt.
# mount /dev/mapper/sda2_crypt /mnt
Let's go into the /mnt directory.
# cd /mnt
Upon listing the contents of this directory, we get:
/mnt # ls
@rootfs
The Debian installer has created a subvolume @rootfs automatically. However, we need the subvolumes to be @ and @home. Therefore, let's rename the @rootfs subvolume to @.
/mnt # mv @rootfs @
Listing the contents of the directory again, we get:
/mnt # ls
@
We have only one subvolume right now. Therefore, let us go ahead and create another subvolume, @home.
/mnt # btrfs subvolume create @home
Create subvolume './@home'
If we perform ls now, we will see there are two subvolumes:
/mnt # ls
@ @home
Let us mount /dev/mapper/sda2_crypt to /target, this time with the subvol=@ option:
/mnt # mount -o noatime,space_cache=v2,compress=zstd,ssd,discard=async,subvol=@ /dev/mapper/sda2_crypt /target/
Now we need to create a directory for /home.
/mnt # mkdir /target/home/
Now we mount the /home directory with subvol=@home option.
/mnt # mount -o noatime,space_cache=v2,compress=zstd,ssd,discard=async,subvol=@home /dev/mapper/sda2_crypt /target/home/
Now mount /dev/sda1 to /target/boot.
/mnt # mount /dev/sda1 /target/boot/
Now we need to add these options to the fstab file, which is located at /target/etc/fstab. Unfortunately, vim is not installed in this console; the only editor available is nano.
nano /target/etc/fstab
Edit your fstab file to look similar to the one in the screenshot above. I am pasting the fstab file contents below for easy reference.
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# systemd generates mount units based on this file, see systemd.mount(5).
# Please run 'systemctl daemon-reload' after making changes here.
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
/dev/mapper/sda2_crypt /        btrfs   noatime,compress=zstd,ssd,discard=async,space_cache=v2,subvol=@ 0       0
/dev/mapper/sda2_crypt /home    btrfs   noatime,compress=zstd,ssd,discard=async,space_cache=v2,subvol=@home 0       0
# /boot was on /dev/sda1 during installation
UUID=12842b16-d3b3-44b4-878a-beb1e6362fbc /boot           ext4    defaults        0       2
/dev/sr0        /media/cdrom0   udf,iso9660 user,noauto     0       0
Please double check the fstab file before saving it. In Nano, you can press Ctrl+O followed by pressing Enter to save the file. Then press Ctrl+X to quit Nano. Now, preview the fstab file by running
cat /target/etc/fstab
and verify that the entries are correct, otherwise you will be booted into an unusable, broken system after the installation is complete. Next, press Ctrl + Alt + F1 to go back to the installer.
Proceed to Install the base system.
Screenshot of Debian installer installing the base system. Screenshot of Debian installer installing the base system.
I chose the default option here - linux-image-amd64. After this, the installer will ask you a few more questions. For desktop environment, I chose KDE Plasma. You can choose the desktop environment as per your liking. I will not cover the rest of the installation process and assume that you were able to install from here.

Post installation
Let's jump to our freshly installed Debian system. Since I created a root user, I added the user ravi to the sudoers file (/etc/sudoers) so that ravi can run commands with sudo. Follow this if you would like to do the same. Now we set up zram as swap. First, install zram-tools.
sudo apt install zram-tools
Now edit the file /etc/default/zramswap and make sure the following lines are uncommented:
ALGO=lz4
PERCENT=50
Now, run
sudo systemctl restart zramswap
If you run lsblk now, you should see the below-mentioned entry in the output:
zram0          253:0    0   7.8G  0 disk  [SWAP]
This shows us that zram has been activated as swap. Now we install timeshift, which can be done by running
sudo apt install timeshift
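Once installed and configured, Timeshift can also be driven from the command line. As a rough sketch (the flags come from timeshift's own CLI help, not from this tutorial; the comment text is arbitrary):
# create an on-demand snapshot tagged as a daily one, then list all snapshots
sudo timeshift --create --comments "fresh install" --tags D
sudo timeshift --list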
Run Timeshift and schedule snapshots as you please. We are done now. I hope the tutorial was helpful. See you in the next post, and let me know if you have any suggestions or questions about this tutorial.

Noah Meyerhans: Determining Network Online Status of Dualstack Cloud VMs

When a Debian cloud VM boots, it typically runs cloud-init at various points in the boot process. Each invocation can perform certain operations based on the host's static configuration passed by the user, typically either through a well-known link-local network service or an attached iso9660 drive image. Some of the cloud-init steps execute before the network comes up, and others at a couple of different points after the network is up. I recently encountered an unexpected issue when configuring a dualstack (uses both IPv6 and legacy IPv4 networking) VM to use a custom apt server accessible only via IPv6. VM provisioning failed because it was unable to access the server in question, yet when I logged in to investigate, it was able to access the server without any problem. The boot had apparently gone smoothly right up until cloud-init's Package Update Upgrade Install module called apt-get update, which failed and broke subsequent provisioning steps. The errors reported by apt-get indicated that there was no route to the service in question, which more accurately probably meant that there was not yet a route to the service. But there was shortly after, when I investigated. This was surprising because the apt-get invocations occur in a cloud-init sequence that's explicitly ordered after the network is configured according to systemd-networkd-wait-online. Investigation eventually led to similar issues encountered in other environments reported in Debian bug #1111791, "systemd: network-online.target reached before IPv6 address is ready". The issue described in that bug is identical to mine, but the bug is tagged wontfix. The behavior is considered correct.

Why the default behavior is the correct one
While it's a bit counterintuitive, the systemd-networkd behavior is correct, and it's also not something we'd want to override in the cloud images. Without explicit configuration, systemd can't accurately infer the intended network configuration of a given system. If a system is IPv6-only, systemd-networkd-wait-online will introduce unexpected delays in the boot process if it waits for IPv4, and vice-versa. If it assumes dualstack, things are even worse, because it would block for a long time (approximately two minutes) in any single-stack network before failing, leaving the host in a degraded state. So the most reasonable default behavior is to block until any protocol is configured. For these same reasons, we can't change the systemd-networkd-wait-online configuration in our cloud images. All of the cloud environments we support offer both single-stack and dualstack networking, so we preserve systemd's default behavior. What's causing problems here is that IPv6 takes significantly longer to configure due to its more complex router solicitation + router advertisement + DHCPv6 setup process. So in this particular case, where I've got a dualstack VM that needs to access a v6-only apt server during the provisioning process, I need to find some mechanism to override systemd's default behavior and wait for IPv6 connectivity specifically.

What won't work
Cloud-init offers the ability to write out arbitrary files during provisioning, so writing a drop-in for systemd-networkd-wait-online.service is trivial. Unfortunately, this doesn't give us everything we actually need. We still need to invoke systemctl daemon-reload to get systemd to actually apply the changes after we've written them, and of course we need to do that before the service actually runs. Cloud-init provides a bootcmd module that lets us run shell commands "very early in the boot process", but it runs too early: it runs before we've written out our configuration files. Similarly, it provides a runcmd module, but scripts there run towards the end of the boot process, far too late to be useful. Rather than using the bootcmd facility simply to reload systemd's config, it seemed possible that we could both write the config and trigger the reload there, similar to the following:
bootcmd:
- mkdir -p /etc/systemd/system/systemd-networkd-wait-online.service.d
- echo "[Service]" > /etc/systemd/system/systemd-networkd-wait-online.service.d/10-netplan.conf
- echo "ExecStart=" >> /etc/systemd/system/systemd-networkd-wait-online.service.d/10-netplan.conf
- echo "ExecStart=/usr/lib/systemd/systemd-networkd-wait-online --operational-state=routable --any --ipv6" >> /etc/systemd/system/systemd-networkd-wait-online.service.d/10-netplan.conf
- systemctl daemon-reload
But even that runs too late, as we can see in the logs that systemd-networkd-wait-online.service has completed before bootcmd is executed:
root@sid-tmp2:~# journalctl --no-pager -l -u systemd-networkd-wait-online.service
Aug 29 17:02:12 sid-tmp2 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 29 17:02:13 sid-tmp2 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
root@sid-tmp2:~# grep -F 'config-bootcmd ran' /var/log/cloud-init.log
2025-08-29 17:02:14,766 - handlers.py[DEBUG]: finish: init-network/config-bootcmd: SUCCESS: config-bootcmd ran successfully and took 0.467 seconds
At this point, it's looking like there are few options left!

What eventually worked
I ended up identifying two solutions to the issue, both of which involve getting some other component of the provisioning process to run systemd-networkd-wait-online.

Solution 1
The first involves getting apt-get itself to wait for IPv6 configuration. The apt.conf configuration interface allows the definition of an APT::Update::Pre-Invoke hook that's executed just before apt's update operation. By writing the following to a file in /etc/apt/apt.conf.d/, we're able to ensure that we have IPv6 connectivity before apt-get tries accessing the network. This cloud-config snippet accomplishes that:
write_files:
 - path: /etc/apt/apt.conf.d/99-wait-for-ipv6
   content: |
     APT::Update::Pre-Invoke { "/usr/lib/systemd/systemd-networkd-wait-online --operational-state=routable --any --ipv6"; };
This is safe to leave in place after provisioning, because the delay will be negligible once IPv6 connectivity is established. It's only during address configuration that it'll block for a noticeable amount of time, but that's what we want. This solution isn't entirely correct, though, because it's only apt-get that's actually affected by it. Other services that start after the system is ostensibly online might only see IPv4 connectivity when they start. This seems acceptable at the moment, though.

Solution 2
The second solution is to simply invoke systemd-networkd-wait-online directly from a cloud-init bootcmd. Similar to the first solution, it's not exactly correct, because the host has already reached network-online.target, but it does block enough of cloud-init that package installation happens only after it completes. The cloud-config snippet for this is
bootcmd:
- [/usr/lib/systemd/systemd-networkd-wait-online, --operational-state=routable, --any, --ipv6]
In either case, we still want to write out a snippet to configure systemd-networkd-wait-online to wait for IPv6 connectivity for future reboots. Even though cloud-init won't necessarily run in those cases, and many cloud VMs never reboot at all, it does complete the solution. Additionally, it solves the problem for any derivative images that may be created based on the running VM's state. (At least if we can be certain that instances of those derivative images will never run in an IPv4-only network!)
write_files:
- path: /run/systemd/system/systemd-networkd-wait-online.service.d/99-ipv6-wait.conf
  content: |
    [Service]
    ExecStart=
    ExecStart=/lib/systemd/systemd-networkd-wait-online --any --operational-state=routable --ipv6
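To confirm the drop-in is picked up, one can inspect the merged unit definition (my addition, not from the original post):
systemctl cat systemd-networkd-wait-online.service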

How to properly solve it
One possible improvement would be for cloud-init to support a configuration key allowing the admin to specify the required protocols. Based on the presence of this key, cloud-init could reconfigure systemd-networkd-wait-online.service accordingly. Alternatively, it could set the appropriate RequiredFamilyForOnline= value in the generated .network file. cloud-init supports multiple network configuration backends, so each of those would need to be updated. If using the systemd-networkd configuration renderer, this should be straightforward, but Debian uses the netplan renderer, so that tool might also need to be taught to pass such a configuration along to systemd-networkd.
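For illustration only: if a renderer set RequiredFamilyForOnline= in a generated .network file, the result might look roughly like this sketch (the interface match and DHCP settings are placeholders of mine, not output from any actual renderer):
[Match]
Name=en*

[Network]
DHCP=yes

[Link]
RequiredFamilyForOnline=ipv6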

28 May 2025

Jonathan Dowland: Linux Mount Namespaces

I've been refreshing myself on the low-level guts of Linux container technology. Here's some notes on mount namespaces. In the below examples, I will use more than one root shell simultaneously. To disambiguate them, the examples will feature a numbered shell prompt: 1# for the first shell, and 2# for the second.
Preliminaries
Namespaces are normally associated with processes and are removed when the last associated process terminates. To make them persistent, you have to bind-mount the corresponding virtual file from an associated process's entry in /proc to another path¹. The receiving path needs to have its "propagation" property set to "private". Most likely your system's existing mounts are mostly "public". You can check the propagation setting for mounts with
1# findmnt -o+PROPAGATION
We'll create a new directory to hold mount namespaces we create, and set its Propagation to private, via a bind-mount of itself to itself.
1# mkdir /root/mntns
1# mount --bind --make-private /root/mntns /root/mntns
The namespace itself needs to be bind-mounted over a file rather than a directory, so we'll create one.
1# touch /root/mntns/1
Creating and persisting a new mount namespace
1# unshare --mount=/root/mntns/1
We are now 'inside' the new namespace in a new shell process. We'll change the shell prompt to make this clearer
PS1='inside# '
We can make a filesystem change, such as mounting a tmpfs
inside# mount -t tmpfs /mnt /mnt
inside# touch /mnt/hi-there
And observe it is not visible outside that namespace
2# findmnt /mnt
2# stat /mnt/hi-there
stat: cannot statx '/mnt/hi-there': No such file or directory
Back to the namespace shell, we can find an integer identifier for the namespace via the shell processes /proc entry:
inside# readlink /proc/$$/ns/mnt
It will be something like mnt:[4026533646]. From another shell, we can list namespaces and see that it exists:
2# lsns -t mnt
        NS TYPE NPROCS   PID USER             COMMAND
…
4026533646 mnt       1 52525 root             -bash
If we exit the shell that unshare created,
inside# exit
running lsns again should² still list the namespace, albeit with the NPROCS column now reading 0.
2# lsns -t mnt
We can see that a virtual filesystem of type nsfs is mounted at the path we selected when we ran unshare:
2# grep /root/mntns/1 /proc/mounts 
nsfs /root/mntns/1 nsfs rw 0 0
Entering the namespace from another process
This is relatively easy:
1# nsenter --mount=/root/mntns/1
1# stat /mnt/hi-there
  File: /mnt/hi-there
…
More to come in future blog posts!
References
These were particularly useful in figuring this out:

  1. This feels really weird to me. At least at first. I suppose it fits with the "everything is a file" philosophy.
  2. I've found lsns in util-linux 2.38.1 (from 2022-08-04) doesn't list mount namespaces with no associated processes; but 2.41 (from 2025-03-18) does. The fix landed in 2022-11-08. For extra fun, I notice that a namespace can be held persistent with a file descriptor which is unlinked from the filesystem.

26 May 2025

Otto Kekäläinen: Creating Debian packages from upstream Git

In this post, I demonstrate the optimal workflow for creating new Debian packages in 2025, preserving the upstream git history. The motivation for this is to lower the barrier for sharing improvements to and from upstream, and to improve software provenance and supply-chain security by making it easy to inspect every change at any level using standard git tooling. To make the instructions so concrete that anyone can repeat all the steps themselves on a real package, I demonstrate the steps by packaging the command-line tool Entr. It is written in C, has very few dependencies, and its final Debian source package structure is simple, yet exemplifies all the important parts that go into a complete Debian package:
  1. Creating a new packaging repository and publishing it under your personal namespace on salsa.debian.org.
  2. Using dh_make to create the initial Debian packaging.
  3. Posting the first draft of the Debian packaging as a Merge Request (MR) and using Salsa CI to verify Debian packaging quality.
  4. Running local builds efficiently and iterating on the packaging process.

Create new Debian packaging repository from the existing upstream project git repository
First, create a new empty directory, then clone the upstream Git repository inside it:
shell
mkdir debian-entr
cd debian-entr
git clone --origin upstreamvcs --branch master \
 --single-branch https://github.com/eradman/entr.git
Using a clean directory makes it easier to inspect the build artifacts of a Debian package, which will be output in the parent directory of the Debian source directory. The extra parameters given to git clone lay the foundation for the Debian packaging git repository structure where the upstream git remote name is upstreamvcs. Only the upstream main branch is tracked to avoid cluttering git history with upstream development branches that are irrelevant for packaging in Debian. Next, enter the git repository directory and list the git tags. Pick the latest upstream release tag as the commit to start the branch upstream/latest. This latest refers to the upstream release, not the upstream development branch. Immediately after, branch off the debian/latest branch, which will have the actual Debian packaging files in the debian/ subdirectory.
shell
cd entr
git tag # shows the latest upstream release tag was '5.6'
git checkout -b upstream/latest 5.6
git checkout -b debian/latest
%%{init: { 'gitGraph': { 'mainBranchName': 'master' } } }%%
gitGraph
checkout master
commit id: "Upstream 5.6 release" tag: "5.6"
branch upstream/latest
checkout upstream/latest
commit id: "New upstream version 5.6" tag: "upstream/5.6"
branch debian/latest
checkout debian/latest
commit id: "Initial Debian packaging"
commit id: "Additional change 1"
commit id: "Additional change 2"
commit id: "Additional change 3"
At this point, the repository is structured according to DEP-14 conventions, ensuring a clear separation between upstream and Debian packaging changes, but there are no Debian changes yet. Next, add the Salsa repository as a new remote called origin, the same as the default remote name in git.
shell
git remote add origin git@salsa.debian.org:otto/entr-demo.git
git push --set-upstream origin debian/latest
This is an important preparation step to later be able to create a Merge Request on Salsa that targets the debian/latest branch, which does not yet have any debian/ directory.

Launch a Debian Sid (unstable) container to run builds in
To ensure that all packaging tools are of the latest versions, run everything inside a fresh Sid container. This has two benefits: you are guaranteed to have the most up-to-date toolchain, and your host system stays clean without getting polluted by various extra packages. Additionally, this approach works even if your host system is not Debian/Ubuntu.
shell
cd ..
podman run --interactive --tty --rm --shm-size=1G --cap-add SYS_PTRACE \
 --env='DEB*' --volume=$PWD:/tmp/test --workdir=/tmp/test debian:sid bash
Note that the container should be started from the parent directory of the git repository, not inside it. The --volume parameter will bind-mount the current directory inside the container. Thus all files created and modified are on the host system, and will persist after the container shuts down. Once inside the container, install the basic dependencies:
shell
apt update -q && apt install -q --yes git-buildpackage dpkg-dev dh-make

Automate creating the debian/ files with dh-make
To create the files needed for the actual Debian packaging, use dh_make:
shell
# dh_make --packagename entr_5.6 --single --createorig
Maintainer Name : Otto Kekäläinen
Email-Address : otto@debian.org
Date : Sat, 15 Feb 2025 01:17:51 +0000
Package Name : entr
Version : 5.6
License : blank
Package Type : single
Are the details correct? [Y/n/q]

Done. Please edit the files in the debian/ subdirectory now.
Due to how dh_make works, the package name and version need to be written as a single underscore-separated string. In this case, you should choose --single to specify that the package type is a single binary package. Other options would be --library for library packages (see libgda5 sources as an example) or --indep (see dns-root-data sources as an example). The --createorig option will create a mock upstream release tarball (entr_5.6.orig.tar.xz) from the current release directory, which is necessary due to historical reasons and how dh_make worked before git repositories became common and Debian source packages were based off upstream release tarballs (e.g. *.tar.gz). At this stage, a debian/ directory has been created with template files, and you can start modifying the files and iterating towards actual working packaging.
shell
git add debian/
git commit -a -m "Initial Debian packaging"

Review the files
The full list of files after the above steps with dh_make would be:
 -- entr
   -- LICENSE
   -- Makefile.bsd
   -- Makefile.linux
   -- Makefile.linux-compat
   -- Makefile.macos
   -- NEWS
   -- README.md
   -- configure
   -- data.h
   -- debian
     -- README.Debian
     -- README.source
     -- changelog
     -- control
     -- copyright
     -- gbp.conf
     -- entr-docs.docs
     -- entr.cron.d.ex
     -- entr.doc-base.ex
     -- manpage.1.ex
     -- manpage.md.ex
     -- manpage.sgml.ex
     -- manpage.xml.ex
     -- postinst.ex
     -- postrm.ex
     -- preinst.ex
     -- prerm.ex
     -- rules
     -- salsa-ci.yml.ex
     -- source
       -- format
     -- upstream
       -- metadata.ex
     -- watch.ex
   -- entr.1
   -- entr.c
   -- missing
     -- compat.h
     -- kqueue_inotify.c
     -- strlcpy.c
     -- sys
     -- event.h
   -- status.c
   -- status.h
   -- system_test.sh
 -- entr_5.6.orig.tar.xz
You can browse these files in the demo repository. The mandatory files in the debian/ directory are:
  • changelog,
  • control,
  • copyright,
  • and rules.
All the other files have been created for convenience so the packager has template files to work from. The files with the suffix .ex are example files that won't have any effect until their content is adjusted and the suffix removed. For detailed explanations of the purpose of each file in the debian/ subdirectory, see the following resources:
  • The Debian Policy Manual: Describes the structure of the operating system, the package archive and requirements for packages to be included in the Debian archive.
  • The Developer's Reference: A collection of best practices and process descriptions Debian packagers are expected to follow while interacting with one another.
  • Debhelper man pages: Detailed information of how the Debian package build system works, and how the contents of the various files in debian/ affect the end result.
As Entr, the package used in this example, is a real package that already exists in the Debian archive, you may want to browse the actual Debian packaging source at https://salsa.debian.org/debian/entr/-/tree/debian/latest/debian for reference. Most of these files have standardized formatting conventions to make collaboration easier. To automatically format the files following the most popular conventions, simply run wrap-and-sort -vast or debputy reformat --style=black.

Identify build dependencies
The most common reason for builds to fail is missing dependencies. The easiest way to identify which Debian package ships a required dependency is using apt-file. If, for example, a build fails complaining that pcre2posix.h cannot be found or that libpcre2-posix.so is missing, you can use these commands:
shell
$ apt install -q --yes apt-file && apt-file update
$ apt-file search pcre2posix.h
libpcre2-dev: /usr/include/pcre2posix.h
$ apt-file search libpcre2-posix.so
libpcre2-dev: /usr/lib/x86_64-linux-gnu/libpcre2-posix.so
libpcre2-posix3: /usr/lib/x86_64-linux-gnu/libpcre2-posix.so.3
libpcre2-posix3: /usr/lib/x86_64-linux-gnu/libpcre2-posix.so.3.0.6
The output above implies that the debian/control should be extended to define a Build-Depends: libpcre2-dev relationship. There is also dpkg-depcheck that uses strace to trace the files the build process tries to access, and lists what Debian packages those files belong to. Example usage:
shell
dpkg-depcheck -b debian/rules build
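For reference, declaring the dependency identified above in debian/control could then look roughly like this (a trimmed sketch continuing the hypothetical pcre2 example; the other fields are illustrative):
Source: entr
Section: utils
Priority: optional
Build-Depends: debhelper-compat (= 13), libpcre2-dev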

Build the Debian sources to generate the .deb package
After the first pass of refining the contents of the files in debian/, test the build by running dpkg-buildpackage inside the container:
shell
dpkg-buildpackage -uc -us -b
The options -uc -us will skip signing the resulting Debian source package and other build artifacts. The -b option will skip creating a source package and only build the (binary) *.deb packages. The output is very verbose and gives a large amount of context about what is happening during the build to make debugging build failures easier. In the build log of entr you will see for example the line dh binary --buildsystem=makefile. This and other dh commands can also be run manually if there is a need to quickly repeat only a part of the build while debugging build failures. To see what files were generated or modified by the build simply run git status --ignored:
shell
$ git status --ignored
On branch debian/latest

Untracked files:
 (use "git add <file>..." to include in what will be committed)
 debian/debhelper-build-stamp
 debian/entr.debhelper.log
 debian/entr.substvars
 debian/files

Ignored files:
 (use "git add -f <file>..." to include in what will be committed)
 Makefile
 compat.c
 compat.o
 debian/.debhelper/
 debian/entr/
 entr
 entr.o
 status.o
Re-running dpkg-buildpackage will include running the command dh clean, which, assuming it is configured correctly in the debian/rules file, will reset the source directory to the original pristine state. The same can of course also be done with regular git commands: git reset --hard; git clean -fdx. To avoid accidentally committing unnecessary build artifacts in git, a debian/.gitignore can be useful, and it would typically include all four files listed as untracked above.
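A minimal debian/.gitignore along those lines (my sketch, simply mirroring the four untracked files from the git status output above) could be:
debhelper-build-stamp
entr.debhelper.log
entr.substvars
files
After a successful build you would have the following files: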
shell
 -- entr
   -- LICENSE
   -- Makefile -> Makefile.linux
   -- Makefile.bsd
   -- Makefile.linux
   -- Makefile.linux-compat
   -- Makefile.macos
   -- NEWS
   -- README.md
   -- compat.c
   -- compat.o
   -- configure
   -- data.h
   -- debian
     -- README.source.md
     -- changelog
     -- control
     -- copyright
     -- debhelper-build-stamp
     -- docs
     -- entr
       -- DEBIAN
         -- control
         -- md5sums
       -- usr
       -- bin
         -- entr
       -- share
       -- doc
         -- entr
         -- NEWS.gz
         -- README.md
         -- changelog.Debian.gz
         -- copyright
       -- man
       -- man1
       -- entr.1.gz
     -- entr.debhelper.log
     -- entr.substvars
     -- files
     -- gbp.conf
     -- patches
       -- PR149-expand-aliases-in-system-test-script.patch
       -- series
       -- system-test-skip-no-tty.patch
       -- system-test-with-system-binary.patch
     -- rules
     -- salsa-ci.yml
     -- source
       -- format
     -- tests
       -- control
     -- upstream
       -- metadata
       -- signing-key.asc
     -- watch
   -- entr
   -- entr.1
   -- entr.c
   -- entr.o
   -- missing
     -- compat.h
     -- kqueue_inotify.c
     -- strlcpy.c
     -- sys
     -- event.h
   -- status.c
   -- status.h
   -- status.o
   -- system_test.sh
 -- entr-dbgsym_5.6-1_amd64.deb
 -- entr_5.6-1.debian.tar.xz
 -- entr_5.6-1.dsc
 -- entr_5.6-1_amd64.buildinfo
 -- entr_5.6-1_amd64.changes
 -- entr_5.6-1_amd64.deb
 -- entr_5.6.orig.tar.xz
The contents of debian/entr are essentially what goes into the resulting entr_5.6-1_amd64.deb package. Familiarizing yourself with the majority of the files in the original upstream source as well as all the resulting build artifacts is time consuming, but it is a necessary investment to get high-quality Debian packages. There are also tools such as Debcraft that automate generating the build artifacts in separate output directories for each build, thus making it easy to compare the changes to correlate what change in the Debian packaging led to what change in the resulting build artifacts.

Re-run the initial import with git-buildpackage
When upstreams publish releases as tarballs, they should also be imported for optimal software supply-chain security, in particular if upstream also publishes cryptographic signatures that can be used to verify the authenticity of the tarballs. To achieve this, the files debian/watch, debian/upstream/signing-key.asc, and debian/gbp.conf need to be present with the correct options. In the gbp.conf file, ensure you have the correct options (see the sketch after this list) based on:
  1. Does upstream release tarballs? If so, enforce pristine-tar = True.
  2. Does upstream sign the tarballs? If so, configure explicit signature checking with upstream-signatures = on.
  3. Does upstream have a git repository, and does it have release git tags? If so, configure the release git tag format, e.g. upstream-vcs-tag = %(version%~%.)s.
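Putting those options together, a debian/gbp.conf might look roughly like the following sketch (section placement and values are illustrative, not copied from the demo repository):
[DEFAULT]
debian-branch = debian/latest
upstream-branch = upstream/latest
pristine-tar = True

[import-orig]
upstream-signatures = on
upstream-vcs-tag = %(version%~%.)s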
To validate that the above files are working correctly, run gbp import-orig with the current version explicitly defined:
shell
$ gbp import-orig --uscan --upstream-version 5.6
gbp:info: Launching uscan...
gpgv: Signature made 7. Aug 2024 07.43.27 PDT
gpgv: using RSA key 519151D83E83D40A232B4D615C418B8631BC7C26
gpgv: Good signature from "Eric Radman <ericshane@eradman.com>"
gbp:info: Using uscan downloaded tarball ../entr_5.6.orig.tar.gz
gbp:info: Importing '../entr_5.6.orig.tar.gz' to branch 'upstream/latest'...
gbp:info: Source package is entr
gbp:info: Upstream version is 5.6
gbp:info: Replacing upstream source on 'debian/latest'
gbp:info: Running Postimport hook
gbp:info: Successfully imported version 5.6 of ../entr_5.6.orig.tar.gz
As the original packaging was done based on the upstream release git tag, the above command will fetch the tarball release, create the pristine-tar branch, and store the tarball delta on it. This command will also attempt to create the tag upstream/5.6 on the upstream/latest branch.

Import new upstream versions in the future
Forking the upstream git repository, creating the initial packaging, and creating the DEP-14 branch structure are all one-off work needed only when creating the initial packaging. Going forward, to import new upstream releases, one would simply run git fetch upstreamvcs; gbp import-orig --uscan, which fetches the upstream git tags, checks for new upstream tarballs, and automatically downloads, verifies, and imports the new version. See the galera-4-demo example in the Debian source packages in git explained post as a demo you can try running yourself and examine in detail. You can also try running gbp import-orig --uscan without specifying a version: it will notice that Entr version 5.7 is now available, and fetch and import it.

Build using git-buildpackage
From this stage onwards you should build the package using gbp buildpackage, which will do a more comprehensive build.
shell
gbp buildpackage -uc -us
The git-buildpackage build also includes running Lintian to find potential Debian policy violations in the sources or in the resulting .deb binary packages. Many Debian Developers run lintian -EviIL +pedantic after every build to check that there are no new nags, and to validate that changes intended to address previous Lintian nags were correct.

Open a Merge Request on Salsa for Debian packaging review
Getting everything perfectly right takes a lot of effort, and may require reaching out to an experienced Debian Developer for review and guidance. Thus, you should aim to publish your initial packaging work on Salsa, Debian's GitLab instance, for review and feedback as early as possible. For somebody to be able to easily see what you have done, you should rename your debian/latest branch to another name, for example next/debian/latest, and open a Merge Request that targets the debian/latest branch on your Salsa fork, which still has only the unmodified upstream files. If you have followed the workflow in this post so far, you can simply run:
  1. git checkout -b next/debian/latest
  2. git push --set-upstream origin next/debian/latest
  3. Open in a browser the URL visible in the git remote response
  4. Write the Merge Request description in case the default text from your commit is not enough
  5. Mark the MR as Draft using the checkbox
  6. Publish the MR and request feedback
Once a Merge Request exists, discussion regarding what additional changes are needed can be conducted as MR comments. With an MR, you can easily iterate on the contents of next/debian/latest, rebase, force push, and request re-review as many times as you want. While at it, make sure that on the Settings > CI/CD page the CI/CD configuration file is set to debian/salsa-ci.yml, so that the CI can run and give you immediate automated feedback. For an example of an initial packaging Merge Request, see https://salsa.debian.org/otto/entr-demo/-/merge_requests/1.

Open a Merge Request / Pull Request to fix upstream code
Due to the high quality requirements in Debian, it is fairly common that while doing the initial Debian packaging of an open source project, issues are found that stem from the upstream source code. While it is possible to carry extra patches in Debian, it is not good practice to deviate too much from upstream code with custom Debian patches. Instead, the Debian packager should try to get the fixes applied directly upstream. Using git-buildpackage patch queues is the most convenient way to make modifications to the upstream source code so that they automatically convert into Debian patches (stored at debian/patches), and can also easily be submitted upstream as any regular git commit (and rebased and resubmitted many times over). First, decide if you want to work out of the upstream development branch and later cherry-pick to the Debian packaging branch, or work out of the Debian packaging branch and cherry-pick to an upstream branch. The example below starts from the upstream development branch and then cherry-picks the commit into the git-buildpackage patch queue:
shell
git checkout -b bugfix-branch master
nano entr.c
make
./entr # verify change works as expected
git commit -a -m "Commit title" -m "Commit body"
git push # submit upstream
gbp pq import --force --time-machine=10
git cherry-pick <commit id>
git commit --amend # extend commit message with DEP-3 metadata
gbp buildpackage -uc -us -b
./entr # verify change works as expected
gbp pq export --drop --commit
git commit --amend # Write commit message along lines "Add patch to .."
The example below starts by making the fix on a git-buildpackage patch queue branch, and then cherry-picking it onto the upstream development branch:
shell
gbp pq import --force --time-machine=10
nano entr.c
git commit -a -m "Commit title" -m "Commit body"
gbp buildpackage -uc -us -b
./entr # verify change works as expected
gbp pq export --drop --commit
git commit --amend # Write commit message along lines "Add patch to .."
git checkout -b bugfix-branch master
git cherry-pick <commit id>
git commit --amend # prepare commit message for upstream submission
git push # submit upstream
The key git-buildpackage commands to enter and exit the patch-queue mode are:
shell
gbp pq import --force --time-machine=10
gbp pq export --drop --commit
%%{init: { 'gitGraph': { 'mainBranchName': 'debian/latest' } } }%%
gitGraph
checkout debian/latest
commit id: "Initial packaging"
branch patch-queue/debian/latest
checkout patch-queue/debian/latest
commit id: "Delete debian/patches/..."
commit id: "Patch 1 title"
commit id: "Patch 2 title"
commit id: "Patch 3 title"
These can be run at any time, regardless of whether any debian/patches existed prior, whether existing patches applied cleanly, or whether there were old patch queue branches around. Note that the extra -b in gbp buildpackage -uc -us -b instructs it to build only binary packages, avoiding any nags from dpkg-source about modifications in the upstream sources while building in patches-applied mode.

Programming-language specific dh-make alternatives
As each programming language has its specific way of building the source code, and many other conventions regarding the file layout and more, Debian has multiple custom tools to create new Debian source packages for specific programming languages. Notably, Python does not have its own tool, but there is a dh_make --python option for Python support directly in dh_make itself. The list is not complete and many more tools exist. For some languages, there are even competing options, such as for Go, where in addition to dh-make-golang there is also Gophian. When learning Debian packaging, there is no need to learn these tools upfront. Being aware that they exist is enough, and one can learn them only if and when one starts to package a project in a new programming language.

The difference between source git repository vs source packages vs binary packages
As seen in the earlier example, running gbp buildpackage on the Entr packaging repository above will result in several files:
entr_5.6-1_amd64.changes
entr_5.6-1_amd64.deb
entr_5.6-1.debian.tar.xz
entr_5.6-1.dsc
entr_5.6.orig.tar.gz
entr_5.6.orig.tar.gz.asc
The entr_5.6-1_amd64.deb is the binary package, which can be installed on a Debian/Ubuntu system. The rest of the files constitute the source package. To do a source-only build, run gbp buildpackage -S and note the files produced:
entr_5.6-1_source.changes
entr_5.6-1.debian.tar.xz
entr_5.6-1.dsc
entr_5.6.orig.tar.gz
entr_5.6.orig.tar.gz.asc
The source package files can be used to build the binary .deb for amd64, or any architecture that the package supports. It is important to grasp that the Debian source package is the preferred form to be able to build the binary packages on various Debian build systems, and the Debian source package is not the same thing as the Debian packaging git repository contents.
flowchart LR
git[Git repository<br>branch debian/latest] -->|gbp buildpackage -S| src[Source Package<br>.dsc + .tar.xz]
src -->|dpkg-buildpackage| bin[Binary Packages<br>.deb]
If the package is large and complex, the build could result in multiple binary packages. One set of package definition files in debian/ will however only ever result in a single source package.

Option to repackage source packages with Files-Excluded lists in the debian/copyright file
Some upstream projects may include binary files in their release, or other undesirable content that needs to be omitted from the source package in Debian. The easiest way to filter them out is by adding to the debian/copyright file a Files-Excluded field listing the undesired files. The debian/copyright file is read by uscan, which will repackage the upstream sources on-the-fly when importing new upstream releases. For a real-life example, see the debian/copyright file in the Godot package, which lists:
debian
Files-Excluded: platform/android/java/gradle/wrapper/gradle-wrapper.jar
The resulting repackaged upstream source tarball, as well as the upstream version component, will have an extra +ds to signify that it is not the true original upstream source but has been modified by Debian:
godot_4.3+ds.orig.tar.xz
godot_4.3+ds-1_amd64.deb

Creating one Debian source package from multiple upstream source packages is also possible
In some rare cases the upstream project may be split across multiple git repositories, or the upstream release may consist of multiple components, each in their own separate tarball. Usually these are very large projects that get some benefit from releasing components separately. If in Debian these are deemed to go into a single source package, it is technically possible using the component system in git-buildpackage and uscan. For an example, see the gbp.conf and watch files in the node-cacache package. Using this type of structure should be a last resort, as it creates complexity and inter-dependencies that are bound to cause issues later on. It is usually better to work with upstream and champion universal best practices with clear releases and version schemes.

When not to start the Debian packaging repository as a fork of the upstream one Not all upstreams use Git for version control. It is by far the most popular, but there are still some that use e.g. Subversion or Mercurial. Who knows maybe in the future some new version control systems will start to compete with Git. There are also projects that use Git in massive monorepos and with complex submodule setups that invalidate the basic assumptions required to map an upstream Git repository into a Debian packaging repository. In those cases one can t use a debian/latest branch on a clone of the upstream git repository as the starting point for the Debian packaging, but one must revert the traditional way of starting from an upstream release tarball with gbp import-orig package-1.0.tar.gz.

Conclusion Created in August 1993, Debian is one of the oldest Linux distributions. In the 32 years since inception, the .deb packaging format and the tooling to work with it have evolved several generations. In the past 10 years, more and more Debian Developers have converged on certain core practices evidenced by https://trends.debian.net/, but there is still a lot of variance in workflows even for identical tasks. Hopefully, you find this post useful in giving practical guidance on how exactly to do the most common things when packaging software for Debian. Happy packaging!

25 May 2025

Otto Kekäläinen: New Debian package creation from upstream git repository

In this post, I demonstrate the optimal workflow for creating new Debian packages in 2025, preserving the upstream git history. The motivation for this is to lower the barrier for sharing improvements to and from upstream, and to improve software provenance and supply-chain security by making it easy to inspect every change at any level using standard git tooling. To make the instructions so concrete that anyone can repeat all the steps themselves on a real package, I demonstrate them by packaging the command-line tool Entr. It is written in C, has very few dependencies, and its final Debian source package structure is simple, yet it exemplifies all the important parts that go into a complete Debian package. Key elements of this workflow include:
  1. Creating a new packaging repository and publishing it under your personal namespace on salsa.debian.org.
  2. Using dh_make to create the initial Debian packaging.
  3. Posting the first draft of the Debian packaging as a Merge Request (MR) and using Salsa CI to verify Debian packaging quality.
  4. Running local builds efficiently and iterating on the packaging process.

Create new Debian packaging repository from the existing upstream project git repository First, create a new empty directory, then clone the upstream Git repository inside it:
shell
mkdir debian-entr
cd debian-entr
git clone --origin upstreamvcs --branch master \
 --single-branch https://github.com/eradman/entr.git
Using a clean directory makes it easier to inspect the build artifacts of a Debian package, which will be output in the parent directory of the Debian source directory. The extra parameters given to git clone lay the foundation for the Debian packaging git repository structure where the upstream git remote name is upstreamvcs. Only the upstream main branch is tracked to avoid cluttering git history with upstream development branches that are irrelevant for packaging in Debian. Next, enter the git repository directory and list the git tags. Pick the latest upstream release tag as the commit to start the branch upstream/latest. This latest refers to the upstream release, not the upstream development branch. Immediately after, branch off the debian/latest branch, which will have the actual Debian packaging files in the debian/ subdirectory.
shell
cd entr
git tag # shows the latest upstream release tag was '5.6'
git checkout -b upstream/latest 5.6
git checkout -b debian/latest
%%{init: { 'gitGraph': { 'mainBranchName': 'master' } } }%%
gitGraph
checkout master
commit id: "Upstream 5.6 release" tag: "5.6"
branch upstream/latest
checkout upstream/latest
commit id: "New upstream version 5.6" tag: "upstream/5.6"
branch debian/latest
checkout debian/latest
commit id: "Initial Debian packaging"
commit id: "Additional change 1"
commit id: "Additional change 2"
commit id: "Additional change 3"
At this point, the repository is structured according to DEP-14 conventions, ensuring a clear separation between upstream and Debian packaging changes, but there are no Debian changes yet. Next, add the Salsa repository as a new remote called origin, the same as the default remote name in git.
shell
git remote add origin git@salsa.debian.org:otto/entr-demo.git
git push --set-upstream origin debian/latest
This is an important preparation step to later be able to create a Merge Request on Salsa that targets the debian/latest branch, which does not yet have any debian/ directory.

Launch a Debian Sid (unstable) container to run builds in To ensure that all packaging tools are of the latest versions, run everything inside a fresh Sid container. This has two benefits: you are guaranteed to have the most up-to-date toolchain, and your host system stays clean without getting polluted by various extra packages. Additionally, this approach works even if your host system is not Debian/Ubuntu.
shell
cd ..
podman run --interactive --tty --rm --shm-size=1G --cap-add SYS_PTRACE \
 --env='DEB*' --volume=$PWD:/tmp/test --workdir=/tmp/test debian:sid bash
Note that the container should be started from the parent directory of the git repository, not inside it. The --volume parameter will bind-mount the current directory inside the container. Thus all files created and modified are on the host system, and will persist after the container shuts down. Once inside the container, install the basic dependencies:
shell
apt update -q && apt install -q --yes git-buildpackage dpkg-dev dh-make

Automate creating the debian/ files with dh-make To create the files needed for the actual Debian packaging, use dh_make:
shell
# dh_make --packagename entr_5.6 --single --createorig
Maintainer Name : Otto Kekäläinen
Email-Address : otto@debian.org
Date : Sat, 15 Feb 2025 01:17:51 +0000
Package Name : entr
Version : 5.6
License : blank
Package Type : single
Are the details correct? [Y/n/q]

Done. Please edit the files in the debian/ subdirectory now.
Due to how dh_make works, the package name and version need to be written as a single underscore-separated string. In this case, you should choose --single to specify that the package type is a single binary package. Other options would be --library for library packages (see libgda5 sources as an example) or --indep (see dns-root-data sources as an example). The --createorig option will create a mock upstream release tarball (entr_5.6.orig.tar.xz) from the current source directory. This is necessary for historical reasons: dh_make predates the era when packaging was done in git repositories, and Debian source packages were traditionally based on upstream release tarballs (e.g. *.tar.gz). At this stage, a debian/ directory has been created with template files, and you can start modifying the files and iterating towards actual working packaging.
shell
git add debian/
git commit -a -m "Initial Debian packaging"

Review the files The full list of files after the above steps with dh_make would be:
 -- entr
   -- LICENSE
   -- Makefile.bsd
   -- Makefile.linux
   -- Makefile.linux-compat
   -- Makefile.macos
   -- NEWS
   -- README.md
   -- configure
   -- data.h
   -- debian
     -- README.Debian
     -- README.source
     -- changelog
     -- control
     -- copyright
     -- gbp.conf
     -- entr-docs.docs
     -- entr.cron.d.ex
     -- entr.doc-base.ex
     -- manpage.1.ex
     -- manpage.md.ex
     -- manpage.sgml.ex
     -- manpage.xml.ex
     -- postinst.ex
     -- postrm.ex
     -- preinst.ex
     -- prerm.ex
     -- rules
     -- salsa-ci.yml.ex
     -- source
       -- format
     -- upstream
       -- metadata.ex
     -- watch.ex
   -- entr.1
   -- entr.c
   -- missing
     -- compat.h
     -- kqueue_inotify.c
     -- strlcpy.c
     -- sys
       -- event.h
   -- status.c
   -- status.h
   -- system_test.sh
 -- entr_5.6.orig.tar.xz
You can browse these files in the demo repository. The mandatory files in the debian/ directory are:
  • changelog,
  • control,
  • copyright,
  • and rules.
All the other files have been created for convenience so the packager has template files to work from. The files with the suffix .ex are example files that won't have any effect until their content is adjusted and the suffix removed. For detailed explanations of the purpose of each file in the debian/ subdirectory, see the following resources:
  • The Debian Policy Manual: Describes the structure of the operating system, the package archive and requirements for packages to be included in the Debian archive.
  • The Developer's Reference: A collection of best practices and process descriptions Debian packagers are expected to follow while interacting with one another.
  • Debhelper man pages: Detailed information of how the Debian package build system works, and how the contents of the various files in debian/ affect the end result.
As Entr, the package used in this example, is a real package that already exists in the Debian archive, you may want to browse the actual Debian packaging source at https://salsa.debian.org/debian/entr/-/tree/debian/latest/debian for reference. Most of these files have standardized formatting conventions to make collaboration easier. To automatically format the files following the most popular conventions, simply run wrap-and-sort -vast or debputy reformat --style=black.

Identify build dependencies The most common reason for builds to fail is missing dependencies. The easiest way to identify which Debian package ships the required dependency is using apt-file. If, for example, a build fails complaining that pcre2posix.h cannot be found or that libpcre2-posix.so is missing, you can use these commands:
shell
$ apt install -q --yes apt-file && apt-file update
$ apt-file search pcre2posix.h
libpcre2-dev: /usr/include/pcre2posix.h
$ apt-file search libpcre2-posix.so
libpcre2-dev: /usr/lib/x86_64-linux-gnu/libpcre2-posix.so
libpcre2-posix3: /usr/lib/x86_64-linux-gnu/libpcre2-posix.so.3
libpcre2-posix3: /usr/lib/x86_64-linux-gnu/libpcre2-posix.so.3.0.6
The output above implies that debian/control should be extended to define a Build-Depends: libpcre2-dev relationship; a sketch of this follows below. There is also dpkg-depcheck, which uses strace to trace the files the build process tries to access and lists which Debian packages those files belong to. Example usage:
shell
dpkg-depcheck -b debian/rules build
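Circling back to the apt-file example: the fix is to extend the Build-Depends field in debian/control. A minimal sketch (all other fields elided; the debhelper-compat level is just a typical current value, and libpcre2-dev is the illustrative dependency from above, not something Entr actually needs):
debian
Source: entr
Build-Depends: debhelper-compat (= 13),
               libpcre2-dev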

Build the Debian sources to generate the .deb package After the first pass of refining the contents of the files in debian/, test the build by running dpkg-buildpackage inside the container:
shell
dpkg-buildpackage -uc -us -b
The options -uc -us will skip signing the resulting Debian source package and other build artifacts. The -b option will skip creating a source package and only build the (binary) *.deb packages. The output is very verbose and gives a large amount of context about what is happening during the build to make debugging build failures easier. In the build log of entr you will see for example the line dh binary --buildsystem=makefile. This and other dh commands can also be run manually if there is a need to quickly repeat only a part of the build while debugging build failures. To see what files were generated or modified by the build simply run git status --ignored:
shell
$ git status --ignored
On branch debian/latest

Untracked files:
 (use "git add <file>..." to include in what will be committed)
 debian/debhelper-build-stamp
 debian/entr.debhelper.log
 debian/entr.substvars
 debian/files

Ignored files:
 (use "git add -f <file>..." to include in what will be committed)
 Makefile
 compat.c
 compat.o
 debian/.debhelper/
 debian/entr/
 entr
 entr.o
 status.o
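As mentioned earlier, individual dh commands from the build log can be re-run manually when debugging, instead of repeating the whole dpkg-buildpackage run. A sketch, assuming the makefile buildsystem seen in the Entr build log:
shell
dh clean --buildsystem=makefile    # reset the source tree to its pristine state
dh build --buildsystem=makefile    # re-run only the build steps
dh binary --buildsystem=makefile   # re-assemble the .deb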
Re-running dpkg-buildpackage will include running the command dh clean, which, assuming it is configured correctly in the debian/rules file, will reset the source directory to its original pristine state. The same can of course also be done with regular git commands: git reset --hard; git clean -fdx. To avoid accidentally committing unnecessary build artifacts in git, a debian/.gitignore can be useful; it would typically include all four files listed as untracked above. After a successful build you would have the following files:
shell
 -- entr
   -- LICENSE
   -- Makefile -> Makefile.linux
   -- Makefile.bsd
   -- Makefile.linux
   -- Makefile.linux-compat
   -- Makefile.macos
   -- NEWS
   -- README.md
   -- compat.c
   -- compat.o
   -- configure
   -- data.h
   -- debian
     -- README.source.md
     -- changelog
     -- control
     -- copyright
     -- debhelper-build-stamp
     -- docs
     -- entr
       -- DEBIAN
         -- control
         -- md5sums
       -- usr
         -- bin
           -- entr
         -- share
           -- doc
             -- entr
               -- NEWS.gz
               -- README.md
               -- changelog.Debian.gz
               -- copyright
           -- man
             -- man1
               -- entr.1.gz
     -- entr.debhelper.log
     -- entr.substvars
     -- files
     -- gbp.conf
     -- patches
       -- PR149-expand-aliases-in-system-test-script.patch
       -- series
       -- system-test-skip-no-tty.patch
       -- system-test-with-system-binary.patch
     -- rules
     -- salsa-ci.yml
     -- source
       -- format
     -- tests
       -- control
     -- upstream
       -- metadata
       -- signing-key.asc
     -- watch
   -- entr
   -- entr.1
   -- entr.c
   -- entr.o
   -- missing
     -- compat.h
     -- kqueue_inotify.c
     -- strlcpy.c
     -- sys
       -- event.h
   -- status.c
   -- status.h
   -- status.o
   -- system_test.sh
 -- entr-dbgsym_5.6-1_amd64.deb
 -- entr_5.6-1.debian.tar.xz
 -- entr_5.6-1.dsc
 -- entr_5.6-1_amd64.buildinfo
 -- entr_5.6-1_amd64.changes
 -- entr_5.6-1_amd64.deb
 -- entr_5.6.orig.tar.xz
The contents of debian/entr are essentially what goes into the resulting entr_5.6-1_amd64.deb package. Familiarizing yourself with the majority of the files in the original upstream source as well as all the resulting build artifacts is time-consuming, but it is a necessary investment to get high-quality Debian packages. There are also tools such as Debcraft that automate generating the build artifacts in separate output directories for each build, making it easy to compare builds and correlate which change in the Debian packaging led to which change in the resulting build artifacts.

Re-run the initial import with git-buildpackage When upstreams publish releases as tarballs, they should also be imported for optimal software supply-chain security, in particular if upstream also publishes cryptographic signatures that can be used to verify the authenticity of the tarballs. To achieve this, the files debian/watch, debian/upstream/signing-key.asc, and debian/gbp.conf need to be present with the correct options. In the gbp.conf file, ensure you have the correct options based on:
  1. Does upstream release tarballs? If so, enforce pristine-tar = True.
  2. Does upstream sign the tarballs? If so, configure explicit signature checking with upstream-signatures = on.
  3. Does upstream have a git repository, and does it have release git tags? If so, configure the release git tag format, e.g. upstream-vcs-tag = %(version%~%.)s.
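Putting the answers together, a hedged sketch of what the resulting debian/gbp.conf could contain (branch names follow the DEP-14 layout used in this post; adapt the rest to the upstream in question):
[DEFAULT]
debian-branch = debian/latest
upstream-branch = upstream/latest
pristine-tar = True

[import-orig]
upstream-signatures = on
upstream-vcs-tag = %(version%~%.)s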
To validate that the above files are working correctly, run gbp import-orig with the current version explicitly defined:
shell
$ gbp import-orig --uscan --upstream-version 5.6
gbp:info: Launching uscan...
gpgv: Signature made 7. Aug 2024 07.43.27 PDT
gpgv: using RSA key 519151D83E83D40A232B4D615C418B8631BC7C26
gpgv: Good signature from "Eric Radman <ericshane@eradman.com>"
gbp:info: Using uscan downloaded tarball ../entr_5.6.orig.tar.gz
gbp:info: Importing '../entr_5.6.orig.tar.gz' to branch 'upstream/latest'...
gbp:info: Source package is entr
gbp:info: Upstream version is 5.6
gbp:info: Replacing upstream source on 'debian/latest'
gbp:info: Running Postimport hook
gbp:info: Successfully imported version 5.6 of ../entr_5.6.orig.tar.gz
As the original packaging was done based on the upstream release git tag, the above command will fetch the tarball release, create the pristine-tar branch, and store the tarball delta on it. This command will also attempt to create the tag upstream/5.6 on the upstream/latest branch.

Import new upstream versions in the future Forking the upstream git repository, creating the initial packaging, and creating the DEP-14 branch structure are all one-off work needed only when creating the initial packaging. Going forward, to import new upstream releases, one would simply run git fetch upstreamvcs; gbp import-orig --uscan, which fetches the upstream git tags, checks for new upstream tarballs, and automatically downloads, verifies, and imports the new version. See the galera-4-demo example in the Debian source packages in git explained post as a demo you can try running yourself and examine in detail. You can also try running gbp import-orig --uscan without specifying a version: it will notice that Entr version 5.7 is now available, then fetch and import it.
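In practice, each new upstream release then boils down to a few commands; a sketch (the 5.7-1 version string in the dch call is only an example, use whatever version gbp import-orig reported):
shell
git fetch upstreamvcs
gbp import-orig --uscan
dch --newversion 5.7-1 "New upstream release."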

Build using git-buildpackage From this stage onwards you should build the package using gbp buildpackage, which will do a more comprehensive build.
shell
gbp buildpackage -uc -us
The git-buildpackage build also includes running Lintian to find potential Debian policy violations in the sources or in the resulting .deb binary packages. Many Debian Developers run lintian -EviIL +pedantic after every build to check that there are no new nags, and to validate that changes intended to fix previous Lintian nags actually did so.
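For example, to check all artifacts of the latest build (the path assumes the build output landed in the parent directory, as in this walkthrough):
shell
lintian -EviIL +pedantic ../entr_5.6-1_amd64.changes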

Open a Merge Request on Salsa for Debian packaging review Getting everything perfectly right takes a lot of effort, and may require reaching out to an experienced Debian Developer for review and guidance. Thus, you should aim to publish your initial packaging work on Salsa, Debian's GitLab instance, for review and feedback as early as possible. For somebody to be able to easily see what you have done, you should rename your debian/latest branch to another name, for example next/debian/latest, and open a Merge Request that targets the debian/latest branch on your Salsa fork, which still has only the unmodified upstream files. If you have followed the workflow in this post so far, you can simply run:
  1. git checkout -b next/debian/latest
  2. git push --set-upstream origin next/debian/latest
  3. Open in a browser the URL visible in the git remote response
  4. Write the Merge Request description in case the default text from your commit is not enough
  5. Mark the MR as Draft using the checkbox
  6. Publish the MR and request feedback
Once a Merge Request exists, discussion regarding what additional changes are needed can be conducted as MR comments. With an MR, you can easily iterate on the contents of next/debian/latest, rebase, force push, and request re-review as many times as you want. While at it, make sure that on the Settings > CI/CD page the CI/CD configuration file field has the value debian/salsa-ci.yml, so that the CI can run and give you immediate automated feedback. For an example of an initial packaging Merge Request, see https://salsa.debian.org/otto/entr-demo/-/merge_requests/1.
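For reference, the debian/salsa-ci.yml itself is typically nothing more than an include of the shared pipeline definition (check the Salsa CI documentation for the currently recommended recipe):
yaml
include:
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/recipes/debian.yml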

Open a Merge Request / Pull Request to fix upstream code Due to the high quality requirements in Debian, it is fairly common that while doing the initial Debian packaging of an open source project, issues are found that stem from the upstream source code. While it is possible to carry extra patches in Debian, it is not good practice to deviate too much from upstream code with custom Debian patches. Instead, the Debian packager should try to get the fixes applied directly upstream. Using git-buildpackage patch queues is the most convenient way to make modifications to the upstream source code so that they automatically convert into Debian patches (stored at debian/patches), and can also easily be submitted upstream like any regular git commit (and rebased and resubmitted many times over). First, decide whether you want to work out of the upstream development branch and later cherry-pick to the Debian packaging branch, or work out of the Debian packaging branch and cherry-pick to an upstream branch. The example below starts from the upstream development branch and then cherry-picks the commit into the git-buildpackage patch queue:
shell
git checkout -b bugfix-branch master
nano entr.c
make
./entr # verify change works as expected
git commit -a -m "Commit title" -m "Commit body"
git push # submit upstream
gbp pq import --force --time-machine=10
git cherry-pick <commit id>
git commit --amend # extend commit message with DEP-3 metadata
gbp buildpackage -uc -us -b
./entr # verify change works as expected
gbp pq export --drop --commit
git commit --amend # Write commit message along lines "Add patch to .."
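The DEP-3 metadata mentioned above is a set of pseudo-headers placed in the commit message, which gbp pq export then carries over into the generated file under debian/patches. A hedged example (the angle-bracketed values are placeholders):
Description: Skip system test when no TTY is available
Author: <name and email of the patch author>
Forwarded: <URL of the upstream pull request>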
The example below starts by making the fix on a git-buildpackage patch queue branch, and then cherry-picking it onto the upstream development branch:
shell
gbp pq import --force --time-machine=10
nano entr.c
git commit -a -m "Commit title" -m "Commit body"
gbp buildpackage -uc -us -b
./entr # verify change works as expected
gbp pq export --drop --commit
git commit --amend # Write commit message along lines "Add patch to .."
git checkout -b bugfix-branch master
git cherry-pick <commit id>
git commit --amend # prepare commit message for upstream submission
git push # submit upstream
The key git-buildpackage commands to enter and exit the patch-queue mode are:
shell
gbp pq import --force --time-machine=10
gbp pq export --drop --commit
%%{init: { 'gitGraph': { 'mainBranchName': 'debian/latest' } } }%%
gitGraph
checkout debian/latest
commit id: "Initial packaging"
branch patch-queue/debian/latest
checkout patch-queue/debian/latest
commit id: "Delete debian/patches/..."
commit id: "Patch 1 title"
commit id: "Patch 2 title"
commit id: "Patch 3 title"
These can be run at any time, regardless of whether any debian/patches existed before, whether existing patches applied cleanly, or whether old patch queue branches were lying around. Note that the extra -b in gbp buildpackage -uc -us -b instructs it to build only the binary packages, avoiding any nags from dpkg-source about modifications in the upstream sources while building in patches-applied mode.

Programming-language specific dh-make alternatives As each programming language has its own way of building source code, along with many other conventions regarding file layout and more, Debian has multiple custom tools to create new Debian source packages for specific programming languages (dh-make-perl and dh-make-golang being well-known examples). Notably, Python does not have its own tool, but there is a dh_make --python option for Python support directly in dh_make itself. Many more such tools exist, and for some languages there are even competing options: for Go, for example, there is Gophian in addition to dh-make-golang. When learning Debian packaging, there is no need to learn these tools upfront. Being aware that they exist is enough, and one can learn them if and when one starts to package a project in a new programming language.
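To give a flavor of these tools: dh-make-golang, for instance, can generate the initial packaging skeleton directly from a Go import path (the import path below is hypothetical):
shell
dh-make-golang make github.com/example/project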

The difference between source git repository vs source packages vs binary packages As seen in the earlier example, running gbp buildpackage on the Entr packaging repository above will result in several files:
entr_5.6-1_amd64.changes
entr_5.6-1_amd64.deb
entr_5.6-1.debian.tar.xz
entr_5.6-1.dsc
entr_5.6.orig.tar.gz
entr_5.6.orig.tar.gz.asc
The entr_5.6-1_amd64.deb is the binary package, which can be installed on a Debian/Ubuntu system. The rest of the files constitute the source package. To do a source-only build, run gbp buildpackage -S and note the files produced:
entr_5.6-1_source.changes
entr_5.6-1.debian.tar.xz
entr_5.6-1.dsc
entr_5.6.orig.tar.gz
entr_5.6.orig.tar.gz.asc
The source package files can be used to build the binary .deb for amd64, or any architecture that the package supports. It is important to grasp that the Debian source package is the preferred form to be able to build the binary packages on various Debian build systems, and the Debian source package is not the same thing as the Debian packaging git repository contents.
flowchart LR
git[Git repository<br>branch debian/latest] -->|gbp buildpackage -S| src[Source Package<br>.dsc + .tar.xz]
src -->|dpkg-buildpackage| bin[Binary Packages<br>.deb]
If the package is large and complex, the build could result in multiple binary packages. One set of package definition files in debian/ will, however, only ever result in a single source package.
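To make the distinction concrete, anyone holding only the source package files can reproduce the binary build; a sketch, assuming the build dependencies are installed:
shell
dpkg-source -x entr_5.6-1.dsc   # unpacks the sources to entr-5.6/
cd entr-5.6
dpkg-buildpackage -uc -us -b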

Option to repackage source packages with Files-Excluded lists in the debian/copyright file Some upstream projects may include binary files in their release, or other undesirable content that needs to be omitted from the source package in Debian. The easiest way to filter them out is by adding to the debian/copyright file a Files-Excluded field listing the undesired files. The debian/copyright file is read by uscan, which will repackage the upstream sources on the fly when importing new upstream releases. For a real-life example, see the debian/copyright file in the Godot package, which lists:
debian
Files-Excluded: platform/android/java/gradle/wrapper/gradle-wrapper.jar
The resulting repackaged upstream source tarball, as well as the upstream version component, will have an extra +ds to signify that it is not the true original upstream source but has been modified by Debian:
godot_4.3+ds.orig.tar.xz
godot_4.3+ds-1_amd64.deb
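Since uscan performs the filtering, the repacking is easy to verify by re-downloading the current upstream version inside the packaging repository:
shell
uscan --verbose --force-download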

Creating one Debian source package from multiple upstream source packages is also possible In some rare cases the upstream project may be split across multiple git repositories, or the upstream release may consist of multiple components, each in its own separate tarball. Usually these are very large projects that get some benefit from releasing components separately. If in Debian these are deemed to go into a single source package, it is technically possible using the component system in git-buildpackage and uscan. For an example, see the gbp.conf and watch files in the node-cacache package. Using this type of structure should be a last resort, as it creates complexity and inter-dependencies that are bound to cause issues later on. It is usually better to work with upstream and champion universal best practices with clear releases and version schemes.

When not to start the Debian packaging repository as a fork of the upstream one Not all upstreams use Git for version control. It is by far the most popular, but some projects still use e.g. Subversion or Mercurial. Who knows, maybe in the future some new version control system will start to compete with Git. There are also projects that use Git in massive monorepos or with complex submodule setups that invalidate the basic assumptions required to map an upstream Git repository into a Debian packaging repository. In those cases one can't use a debian/latest branch on a clone of the upstream git repository as the starting point for the Debian packaging, but must revert to the traditional way of starting from an upstream release tarball with gbp import-orig package-1.0.tar.gz.

Conclusion Created in August 1993, Debian is one of the oldest Linux distributions. In the 32 years since its inception, the .deb packaging format and the tooling to work with it have evolved several generations. In the past 10 years, more and more Debian Developers have converged on certain core practices, as evidenced by https://trends.debian.net/, but there is still a lot of variance in workflows even for identical tasks. Hopefully you find this post useful in giving practical guidance on how exactly to do the most common things when packaging software for Debian. Happy packaging!
