Search Results: "ben"

5 November 2025

Reproducible Builds: Reproducible Builds in October 2025

Welcome to the October 2025 report from the Reproducible Builds project! Our monthly reports outline what we've been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. As ever, if you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website. In this report:

  1. Farewell from the Reproducible Builds Summit 2025
  2. Google's Play Store breaks reproducible builds for Signal
  3. Mailing list updates
  4. The Original Sin of Computing that no one can fix
  5. Reproducible Builds at the Transparency.dev summit
  6. Supply Chain Security for Go
  7. Three new academic papers published
  8. Distribution work
  9. Upstream patches
  10. Website updates
  11. Tool development

Farewell from the Reproducible Builds Summit 2025 Thank you to everyone who joined us at the Reproducible Builds Summit in Vienna, Austria! We were thrilled to host the eighth edition of this exciting event, following the success of previous summits in various iconic locations around the world, including Venice, Marrakesh, Paris, Berlin, Hamburg and Athens. During this event, participants had the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. Our aim was to create an inclusive space that fosters collaboration, innovation and problem-solving. The agenda of the three main days is available online; however, some working sessions may still lack notes at the time of publication. One tangible outcome of the summit is that Johannes Starosta finished their rebuilderd tutorial, which is now available online, and Johannes is actively seeking feedback.

Google's Play Store breaks reproducible builds for Signal On the issue tracker for the popular Signal messenger app, developer Greyson Parrelli reports that updates to the Google Play Store have, in effect, broken reproducible builds:
The most recent issues have to do with changes to the APKs that are made by the Play Store. Specifically, they add some attributes to some .xml files around languages and resources, which is not unexpected because of how the whole bundle system works. This is trickier to resolve, because unlike current expected differences (like signing information), we can't just exclude a whole file from the comparison. We have to take a more nuanced look at the diff. I've been hesitant to do that because it'll complicate our currently-very-readable comparison script, but I don't think there's any other reasonable option here.
The full thread with additional context is available on GitHub.
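To illustrate the kind of comparison involved (a rough sketch, not Signal's actual verification script; the APK filenames here are hypothetical), one can unpack two APKs and diff their contents while excluding the signing metadata:

$ # unpack a Play Store APK and a locally built APK into separate directories
$ unzip -q Signal-play.apk -d play
$ unzip -q Signal-local.apk -d local
$ # compare everything except the signature directory; the new resource .xml
$ # differences described above would show up here and need case-by-case review
$ diff -r --exclude=META-INF play local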

Mailing list updates On our mailing list this month:
  • kpcyrd forwarded a fascinating tidbit regarding so-called ninja and samurai build ordering, which uses data structures in which the pointer values returned from malloc are used to determine some order of execution.
  • Arnout Engelen, Justin Cappos, Ludovic Courtès and kpcyrd continued a conversation started in September regarding the "Minimum Elements for a Software Bill of Materials". (Full thread)
  • Felix Moessbauer of Siemens posted to the list reporting that he had recently stumbled upon "a couple of Debian source packages on the snapshot mirrors that are listed multiple times (same name and version), but each time with a different checksum". The thread, which Felix titled "Debian: what precisely identifies a source package", is about precisely that: what can be axiomatically relied upon by consumers of the Debian archives, as well as indicating an issue where "we can't exactly say which packages were used during build time (even when having the .buildinfo files)".
  • Luca DiMaio posted to the list announcing the release of xfsprogs 6.17.0, which specifically includes a commit that "implements the functionality to populate a newly created XFS filesystem directly from an existing directory structure", which "makes it easier to create populated filesystems without having to mount them [and thus is] particularly useful for reproducible builds". Luca asked the list how they might contribute to the docs of the System images page.

The Original Sin of Computing that no one can fix Popular YouTuber @laurewired published a video this month with an engaging take on the Trusting Trust problem. Titled The Original Sin of Computing that no one can fix, the video touches on David A. Wheeler's Diverse Double-Compiling dissertation. GNU developer Janneke Nieuwenhuizen followed up with an email (additionally sent to our mailing list) as well, underscoring that "GNU Mes's current solution [to this issue] uses ancient softwares in its bootstrap path, such as gcc-2.95.3 and glibc-2.2.5". (According to Colby Russell, the GNU Mes bootstrapping sequence is shown at 18m54s in the video.)

Reproducible Builds at the Transparency.dev summit Holger Levsen gave a talk at this year's Transparency.dev summit in Gothenburg, Sweden, outlining the achievements of the Reproducible Builds project in the last 12 years, covering both upstream developments as well as some distribution-specific details. As mentioned on the talk's page, Holger's presentation concluded with "an outlook into the future and an invitation to collaborate to bring transparency logs into Reproducible Builds projects". The slides of the talk are available, although a video has yet to be released. Nevertheless, as a result of the discussions at Transparency.dev there is a new page on the Debian wiki with the aim of describing a potential transparency log setup for Debian.

Supply Chain Security for Go Andrew Ayer has set up a new service at sourcespotter.com that aims to monitor supply chain security for Go releases. It consists of four separate trackers:
  1. A tool to verify that the Go Module Mirror and Checksum Database is behaving honestly and has not presented inconsistent information to clients.
  2. A module monitor that records every module version served by the Go Module Mirror and Checksum Database, allowing you to monitor for unexpected versions of your modules.
  3. A tool that verifies that the Go toolchains published in the Go Module Mirror can be reproduced from source code, making it difficult to hide backdoors in the binaries downloaded by the go command.
  4. A telemetry config tracker that tracks the names of telemetry counters uploaded by the Go toolchain, to ensure that Go telemetry is not violating users' privacy.
As the homepage of the service mentions, the trackers are free software and do not rely on Google infrastructure.
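For context, the Checksum Database these trackers audit is the same one the go command consults on every download; a quick local counterpart using only standard Go tooling (not part of sourcespotter) is to re-verify the module cache against the hashes recorded in go.sum:

$ # run inside any Go module; recomputes hashes of downloaded dependencies
$ go mod verify
all modules verified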

Three new academic papers published Julien Malka of the Institut Polytechnique de Paris published an exciting paper this month on "How NixOS could have detected the XZ supply-chain attack for the benefit of all thanks to reproducible-builds". Julien outlines his paper as follows:
In March 2024, a sophisticated backdoor was discovered in xz, a core compression library in Linux distributions, covertly inserted over three years by a malicious maintainer, Jia Tan. The attack, which enabled remote code execution via ssh, was only uncovered by chance when Andres Freund investigated a minor performance issue. This incident highlights the vulnerability of the open-source supply chain and the effort attackers are willing to invest in gaining trust and access. In this article, I analyze the backdoor's mechanics and explore how bitwise build reproducibility could have helped detect it.
A PDF of the paper is available online.
Iyán Méndez Veiga and Esther Hänggi (of the Lucerne University of Applied Sciences and Arts and ETH Zurich) published a paper this month on the topic of Reproducible Builds for Quantum Computing. The abstract of their paper mentions the following:
Although quantum computing is a rapidly evolving field of research, it can already benefit from adopting reproducible builds. This paper aims to bridge the gap between the quantum computing and reproducible builds communities. We propose a generalization of the definition of reproducible builds in the quantum setting, motivated by two threat models: one targeting the confidentiality of end users data during circuit preparation and submission to a quantum computer, and another compromising the integrity of quantum computation results. This work presents three examples that show how classical information can be hidden in transpiled quantum circuits, and two cases illustrating how even minimal modifications to these circuits can lead to incorrect quantum computation results.
A full PDF of their paper is available.
Congratulations to Georg Kofler, who submitted their Master's thesis at the Johannes Kepler University Linz, Austria on the topic of Reproducible builds of E2EE-messengers for Android using Nix hermetic builds:
The thesis focuses on providing a reproducible build process for two open-source E2EE messaging applications: Signal and Wire. The motivation to ensure reproducibility and thereby the integrity of E2EE messaging applications stems from their central role as essential tools for modern digital privacy. These applications provide confidentiality for private and sensitive communications, and their compromise could undermine encryption mechanisms, potentially leaking sensitive data to third parties.
A full PDF of their thesis is available online.
Shawkot Hossain of Aalto University, Finland has also submitted their Master's thesis on The Role of SBOM in Modern Development, with a focus on the extant tooling:
Currently, there are numerous solutions and techniques available in the market to tackle supply chain security, and all claim to be the best solution. This thesis delves deeper by implementing those solutions and evaluates them for better understanding. Some of the tools that this thesis implemented are Syft, Trivy, Grype, FOSSA, dependency-check, and Gemnasium. Software dependencies are generated in a Software Bill of Materials (SBOM) format by using these open-source tools, and the corresponding results have been analyzed. Among these tools, Syft and Trivy outperform others as they provide relevant and accurate information on software dependencies.
A PDF of the thesis is also available.

Distribution work Michael Plura published an interesting article on Heise.de on the topic of "Trust is good, reproducibility is better":
In the wake of growing supply chain attacks, the FreeBSD developers are relying on a transparent build concept in the form of Zero-Trust Builds. The approach builds on the established Reproducible Builds, where binary files can be rebuilt bit-for-bit from the published source code. While reproducible builds primarily ensure verifiability, the zero-trust model goes a step further and removes trust from the build process itself. No single server, maintainer, or compiler can be considered more than potentially trustworthy.
The article mentions that this goal "has now been achieved with a slight delay and can be used in the current development branch for FreeBSD 15".
In Debian this month, 7 reviews of Debian packages were added, 5 were updated and 11 were removed, adding to our knowledge about identified issues. For the Debian CI tests, Holger fixed #786644 and set nocheck in DEB_BUILD_OPTIONS for the second build, as sketched below.
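As a rough sketch of what that means in practice (the flags are illustrative; the actual CI uses its own wrapper scripts), a package can be built twice with the test suite disabled on the second run:

$ # first build, test suite enabled
$ dpkg-buildpackage -us -uc
$ # second build for comparison, skipping tests via the nocheck option
$ DEB_BUILD_OPTIONS=nocheck dpkg-buildpackage -us -uc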
Lastly, Bernhard M. Wiedemann posted another openSUSE monthly update for their work there.

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Website updates Once again, a number of improvements were made to our website this month. In addition, a number of contributors added a series of notes from our recent summit to the website, including Alexander Couzens, Robin Candau and kpcyrd.

Tool development diffoscope version 307 was uploaded to Debian unstable by Chris Lamb, who made a number of changes including fixing compatibility with LLVM version 21 and attempting to automatically deploy releases to PyPI by liaising with the PyPI developers/maintainers (an experimental feature). In addition, Vagrant Cascadian updated diffoscope in GNU Guix to version 307.

Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:

4 November 2025

Dirk Eddelbuettel: RcppCNPy 0.2.14 on CRAN: Minor Maintenance

Another (again somewhat minor) maintenance release of the RcppCNPy package arrived on CRAN just now. RcppCNPy provides R with read and write access to NumPy files thanks to the cnpy library by Carl Rogers, along with Rcpp for the glue to R. The changes are all minor chores. As R now checks usage of packages in demos, we added rbenchmark to Suggests: in DESCRIPTION. We refreshed the main continuous integration script for a minor update, and also replaced one URL in a badge to avoid a timeout during checks at CRAN. So nothing user-facing this time! Full details are below.

Changes in version 0.2.14 (2025-11-03)
  • The rbenchmark package is now a Suggests: as it appears in demo
  • The continuous integration setup now uses r-ci with its embedded setup step
  • The URL used for the GPL-2 is now the R Project copy

CRANberries also provides a diffstat report for the latest release. As always, feedback is welcome and the best place to start a discussion may be the GitHub issue tickets page. If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

3 November 2025

Russ Allbery: Review: The Raven Scholar

Review: The Raven Scholar, by Antonia Hodgson
Series: Eternal Path Trilogy #1
Publisher: Orbit
Copyright: April 2025
ISBN: 0-316-57723-5
Format: Kindle
Pages: 651
The Raven Scholar is an epic fantasy and the first book of a projected trilogy. It is Antonia Hodgson's first published fantasy novel; her previous published novels are historical mysteries. I would classify this as adult fantasy (the main character is thirty-four with a stable court position), but it has strong YA vibes because of the generational turnover feel of the main plot. Eight years before the start of this book, Andren Valit attempted to assassinate the emperor and failed. Since then, his widow and three children (twins Yana and Ruko, and the infant Nisthala) have been living in disgrace in a cramped apartment, subject to constant inspections and suspicion. As the story opens, they have been summoned to appear before the emperor, escorted by a young and earnest Hound (essentially the state security services) named Shal Worthy. The resulting interrogation is full of dangerous traps. Not all of them will be avoided. The formalization of the consequences of that imperial summons falls to an unpopular Junior Archivist (Third Class) whose one notable skill is her penmanship. A meeting that was disastrous for the Valits becomes unexpectedly fortunate for the archivist, albeit with a poisonous core. Eight years later, Neema Kraa is High Scholar, and Emperor Bersun's twenty-four years of permitted reign are coming to an end. The Festival is about to begin. One representative from each of the empire's eight anats (religious schools) will compete in seven days of Trials, save for the Dragons, who do not want the throne and will send a proxy. The victor according to the Trials scoring system will become emperor and reign unquestioned for twenty-four years or until resignation. This is the system that put an end to the era of chaos and has been in place for over a thousand years. On the eve of the Trials, the Raven contender is found murdered. Neema is immediately a suspect; she even has reasons to suspect herself. She volunteers to lead the investigation because she has to know what happened. She is also volunteered to be the replacement Raven contender. There is no chance that she will become emperor; she doesn't even know how to fight. But agnostic Neema has a rather unexpected ally.
As the last chime fades we drop neatly on to the balcony's rusting hand rail, folding our wings with a soft shuffle. Noon, on the ninth day of the eighth month, 1531. Neema Kraa's lodgings. We are here, exactly where we should be, at exactly the right moment, because we are the Raven, and we are magnificent.
The Raven Scholar is a rather good epic fantasy, with some caveats that I'll get to in a moment, but I found it even more fascinating as a genre artifact. I've read my share of epic fantasy over the years, although most of my familiarity with the current wave of new adult fairy epics comes from reviews rather than personal experience. The Raven Scholar is epic fantasy, through and through. There is court intrigue, a main character who is a court functionary unexpectedly thrown into the middle of some problem, civilization-wide stakes, dramatic political alliances, detailed magic and mythological systems, and gods. There were moments that reminded me of a Guy Gavriel Kay novel, although Hodgson's characters tend more towards disarming moments of humanization instead of Kay's operatic scenes of emotional intensity. But The Raven Scholar is also a murder mystery, complete with a crime scene, clues, suspects, evidence, an investigation, a possibly compromised detective, and a morass of possible motives and red herrings. I'm not much of a mystery reader, but this didn't feel like the sort of ancillary mystery that might crop up in the course of a typical epic fantasy. It felt like a full-fledged investigation with an amateur detective; one can tell that Hodgson's previous four books were historical mysteries. And then there's the Trials, which are the centerpiece of the book. This book helped me notice that people (okay, me, I'm the people) have been sleeping on the influence of The Hunger Games, Battle Royale, and reality TV (specifically Survivor) on genre fiction, possibly because the more obvious riffs on the idea (Powerless, The Selection) have been young adult or new adult. Once I started looking, I realized this idea is everywhere now: Throne of Glass, Fourth Wing, even The Night Circus to some extent. Competitions with consequences are having a moment. I suspect having a competition to decide the next emperor is going to strike some traditional fantasy readers as sufficiently absurd and unbelievable that it will kick them out of the book. I had a moment of "okay, this is weird, why would anyone stick with this system for so long" myself. But I would encourage such readers to interrogate whether that's only a response from unfamiliarity; after all, strange women lying in ponds distributing swords is no basis for a system of government either. This is hardly the most unrealistic epic fantasy trope, and it has the advantage of being a hell of a plot generator when handled well. Hodgson handles it well. Society in this novel is structured around the anats and the eight Guardians, gods who, according to myth, had returned seven times previously to save the world, but who will destroy the world when they return again. Each Guardian represents a group of characteristics and useful societal functions: the Ox is trustworthy, competent and hard-working; the Fox is a trickster and a rule-bender; the Raven is shrewd and careful and is the Guardian of scholars and lawyers. Each Trial is organized by one of the anats and tests the contenders for the skills most valued by that Guardian, often in subtle and rather ingenious ways. There are flaws here that you could poke at if you wanted to, but I was charmed and thoroughly entertained by how well Hodgson weaves the story around the Trials and uses the conflicting values to create character conflict, unexpected alliances, and an engrossing plot. Most importantly for a book of this sort, I liked Neema.
She has a charming combination of competence, quirks (she is almost physically unable to not correct people's factual errors), insecurity, imposter syndrome, and determination. She is way out of her depth and knows it, but she has an ethical core and an insatiable curiosity that won't let her leave the central mysteries of the book alone. And the character dynamics are great; there are a lot of characters, including the competition problem of having to juggle eight contenders and give them all sufficient characterization to be meaningful, but this book uses its length to give each character some room to breathe. This is a long book, well over 600 pages, but it felt packed with events and plot twists. After every chapter I had to fight the urge to read just one more. The biggest drawback of this book is that it is very much the first book of a trilogy, none of the other volumes are out yet, and the ending is rather nasty. This is the sort of trilogy that opens with a whole lot of bad things happening, and while I am thoroughly hooked and will purchase the next volume as soon as it's available, I wish Hodgson had found a way to end the book on a somewhat more positive or hopeful note. The middle of the book was great; the end was a bit of an emotional slog, alas. The writing is good enough here that I'm fairly sure the depression will be worth it, but if you need your endings to be triumphant (and who could blame you in this moment in history), you may want to wait on this one until more volumes are out. Apart from that, though, this was a lot of fun. The Guardians felt like they came from a different strand of fantasy than you usually see in epic, more of a traditional folk tale vibe, which adds an intriguing twist to the epic fantasy setting. The characters all work, and Hodgson even pulls off some Game of Thrones style twists that make you sympathetic to characters you previously hated. The magic system apart from the Guardians felt underbaked, but the politics had more depth than a lot of fantasy novels. If you want the truly complex and twisty politics you would get from one of Guy Gavriel Kay's historical rewrites, you will come away disappointed, but it was good enough for me. And I did enjoy the Raven.
Respect, that's all we demand. Recognition of our magnificence. Offerings. Love. Fear. Trembling awe. Worship. Shiny things. Blood sacrifice, some of us very much enjoy blood sacrifice. Truly, we ask for so little.
Followed by an as-yet untitled sequel that I hope will materialize. Rating: 7 out of 10

2 November 2025

Guido Günther: Free Software Activities October 2025

Quite some things made progress last month: We put out the Phosh 0.50 release, got closer to enabling media roles for audio by default in Phosh (see related post) and reworked our image builds. You should also (hopefully) notice some nice quality-of-life improvements once changes land in a distro near you and you're using Phosh. See below for details: phosh, phoc, phosh-mobile-settings, stevia (formerly phosh-osk-stub), phosh-tour, meta-phosh, xdg-desktop-portal-phosh, libphosh-rs, Calls, Phrog, phosh-recipes, feedbackd, feedbackd-device-themes, Chatty, Squeekboard, Debian, Cellbroadcastd, gnome-settings-daemon, gnome-control-center, gnome-initial-setup, sdm845-mainline, gnome-session, alpine, droid-juicer, phosh-site, phosh-debs, Linux, Wireplumber, Phosh.mobi e.V., demo-phones. Reviews: this is not code by me but reviews of other people's code. The list is (as usual) slightly incomplete. Thanks for the contributions! Comments? Join the Fediverse thread

Ben Hutchings: FOSS activity in October 2025

28 October 2025

Russell Coker: Internode NBN500

I have just converted to the Internode NBN500 plan, which is now the same price as the NBN100 plan. I'm in an HFC area so they won't let me get fiber to the home (due to Malcolm Turnbull breaking the NBN to help Murdoch), so I'm limited to what HFC can do. I first tried it out on a 100Mbit card and got speeds of 96/47 Mbit/s according to speedtest.net. I've always had the MTU set to 1492 for the PPPoE connection (something I forgot to mention in my blog post about connecting to the Arris CM8200 on Debian [1]), but when running on the 100Mbit card I had to set it to 1488. Apparently 1488 is the number because 4 bytes are taken for the VLAN header and 8 bytes for the PPPoE header. But it seems that when using gigabit Ethernet it doesn't take 4 bytes for the VLAN (comments explaining that would be appreciated). When connected via gigabit with an MTU of 1492 I got speeds of 534/46, which are quite good. When I tested with my laptop on a Wifi link while sitting next to the main node of my Kogan Wifi6 mesh [2] via 2.4GHz Wifi I got 172/45. When using 5GHz I got 514/41. When using 5GHz at the far end of my home over the mesh I got 200/45. Here's a table summarising the speeds. I rounded all speeds off to 1Mbit/s because I don't think that the results are even that accurate. I think that Wifi5 over mesh reporting a faster upload speed than Wifi5 near the AP is due to random factors rather than an actual benefit to being further away, but I will do more tests later on.
Connection Receive Mbit/s Send Mbit/s
100baseT 96 47
Gigabit 535 46
2.4GHz Wifi 172 45
Wifi5 514 41
Wifi5 Over Mesh 200 45
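A quick way to check which MTU actually fits on a PPPoE link is a don't-fragment ping; a minimal sketch assuming Linux iputils and a PPP interface named ppp0 (interface name and target address are illustrative):

$ # set the candidate MTU on the PPP interface
$ ip link set dev ppp0 mtu 1492
$ # 1464 bytes of ICMP payload + 28 bytes of IP/ICMP headers = 1492; -M do forbids fragmentation
$ ping -M do -s 1464 -c 3 8.8.8.8

If the pings succeed, 1492 fits end-to-end; to test an MTU of 1488 the payload would be 1460 instead.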

25 October 2025

Sam Hartman: My First Successful AI Coding Experience

Yesterday, I had my first successful AI coding experience. I've used AI coding tools before and come away disappointed. The results were underwhelming: low-quality code, inconsistent abstraction levels, and subtle bugs that take longer to fix than it would take to write the whole thing from scratch. Those problems haven't vanished. The code quality this time was still disappointing. As I asked the AI to refine its work, it would randomly drop important constraints or refactor things in unhelpful ways. And yet, this experience was different and genuinely valuable for two reasons. The first benefit was the obvious one: the AI helped me get over the blank-page problem. It produced a workable skeleton for the project: imperfect, but enough to start building on. The second benefit was more surprising. I was working on a problem in odds-ratio preference optimization: specifically, finding a way to combine similar examples in datasets for AI training. I wanted an ideal algorithm, one that extracted every ounce of value from the data. The AI misunderstood my description. Its first attempt was laughably simple: it just concatenated two text strings. Thanks, but I can call strcat or the Python equivalent without help. However, the second attempt was different. It was still not what I had asked for, but as I thought about it, I realized it was good enough. The AI had created a simpler algorithm that would probably solve my problem in practice. In trying too hard to make the algorithm perfect, I'd overlooked that the simpler approach might be the right one. The AI, by misunderstanding, helped me see that. This experience reminded me of something that happened years ago when I was mentoring a new developer. They came to me asking how to solve a difficult problem. Rather than telling them it was impossible, I explained what would be required: a complex authorization framework, intricate system interactions, and a series of political and organizational hurdles that would make deployment nearly impossible. A few months later, they returned and said they'd found a solution. I was astonished until I looked more closely. What they'd built wasn't the full, organization-wide system I had envisioned. Instead, they'd reframed the problem. By narrowing the scope (reducing the need for global trust and deep integration), they'd built a local solution that worked well enough within their project. They succeeded precisely because they didn't see all the constraints I did. Their inexperience freed them from assumptions that had trapped me. That's exactly what happened with the AI. It didn't know which boundaries not to cross. In its simplicity, it found a path forward that I had overlooked. My conclusion isn't that AI coding is suddenly great. It's that working with someone or something that thinks differently can open new paths forward. Whether it's an AI, a peer, or a less experienced engineer, that collaboration can bring fresh perspectives that challenge your assumptions and reveal simpler, more practical ways to solve problems.


23 October 2025

Russ Allbery: Review: Politics on the Edge

Review: Politics on the Edge, by Rory Stewart
Publisher: Penguin Books
Copyright: 2023, 2025
Printing: 2025
ISBN: 979-8-217-06167-9
Format: Kindle
Pages: 429
Rory Stewart is a former British diplomat, non-profit executive, member of Parliament, and cabinet minister. Politics on the Edge is a memoir of his time in the UK Parliament from 2010 to 2019 as a Tory (Conservative) representing the Penrith and The Border constituency in northern England. It ends with his failed run against Boris Johnson for leader of the Conservative Party and Prime Minister. This book provoked many thoughts, only some of which are about the book. You may want to get a beverage; this review will be long. Since this is a memoir told in chronological order, a timeline may be useful. After Stewart's time as a regional governor in occupied Iraq (see The Prince of the Marshes), he moved to Kabul to found and run an NGO to preserve traditional Afghan arts and buildings (the Turquoise Mountain Foundation, about which I know nothing except what Stewart wrote in this book). By his telling, he found that work deeply rewarding but thought the same politicians who turned Iraq into a mess were going to do the same to Afghanistan. He started looking for ways to influence the politics more directly, which led him first to Harvard and then to stand for Parliament. The bulk of this book covers Stewart's time as MP for Penrith and The Border. The choice of constituency struck me as symbolic of Stewart's entire career: He was not a resident and had no real connection to the district, which he chose for political reasons and because it was the nearest viable constituency to his actual home in Scotland. But once he decided to run, he moved to the district and seems sincerely earnest in his desire to understand it and become part of its community. After five years as a backbencher, he joined David Cameron's government in a minor role as Minister of State in the Department for Environment, Food, and Rural Affairs. He then bounced through several minor cabinet positions (more on this later) before being elevated to Secretary of State for International Development under Theresa May. When May's government collapsed during the fight over the Brexit agreement, he launched a quixotic challenge to Boris Johnson for leader of the Conservative Party. I have enjoyed Rory Stewart's writing ever since The Places in Between. This book is no exception. Whatever one's other feelings about Stewart's politics (about which I'll have a great deal more to say), he's a talented memoir writer with an understated and contemplative style and a deft ability to shift from concrete description to philosophical debate without bogging down a story. Politics on the Edge is compelling reading at the prose level. I spent several afternoons happily engrossed in this book and had great difficulty putting it down. I find Stewart intriguing since, despite being a political conservative, he's neither a neoliberal nor any part of the new right. He is instead an apparently-sincere throwback to a conservatism based on epistemic humility, a veneration of rural life and long-standing traditions, and a deep commitment to the concept of public service. Some of his principles are baffling to me, and I think some of his political views are obvious nonsense, but there were several things that struck me throughout this book that I found admirable and depressingly rare in politics. First, Stewart seems to learn from his mistakes. This goes beyond admitting when he was wrong and appears to include a willingness to rethink entire philosophical positions based on new experience.
I had entered Iraq supporting the war on the grounds that we could at least produce a better society than Saddam Hussein's. It was one of the greatest mistakes in my life. We attempted to impose programmes made up by Washington think tanks, and reheated in air-conditioned palaces in Baghdad a new taxation system modelled on Hong Kong; a system of ministers borrowed from Singapore; and free ports, modelled on Dubai. But we did it ultimately at the point of a gun, and our resources, our abstract jargon and optimistic platitudes could not conceal how much Iraqis resented us, how much we were failing, and how humiliating and degrading our work had become. Our mission was a grotesque satire of every liberal aspiration for peace, growth and democracy.
This quote comes from the beginning of this book and is a sentiment Stewart already expressed in The Prince of the Marshes, but he appears to have taken this so seriously that it becomes a theme of his political career. He not only realized how wrong he was on Iraq, he abandoned the entire neoliberal nation-building project without abandoning his belief in the moral obligation of international aid. And he, I think correctly, identified a key source of the error: an ignorant, condescending superiority that dismissed the importance of deep expertise.
Neither they, nor indeed any of the 12,000 peacekeepers and policemen who had been posted to South Sudan from sixty nations, had spent a single night in a rural house, or could complete a sentence in Dinka, Nuer, Azande or Bande. And the international development strategy written jointly between the donor nations resembled a fading mission statement found in a new space colony, whose occupants had all been killed in an alien attack.
Second, Stewart sincerely likes ordinary people. This shone through The Places in Between and recurs here in his descriptions of his constituents. He has a profound appreciation for individual people who have spent their life learning some trade or skill, expresses thoughtful and observant appreciation for aspects of local culture, and appears to deeply appreciate time spent around people from wildly different social classes and cultures than his own. Every successful politician can at least fake gregariousness, and perhaps that's all Stewart is doing, but there is something specific and attentive about his descriptions of other people, including long before he decided to enter politics, that makes me think it goes deeper than political savvy. Third, Stewart has a visceral hatred of incompetence. I think this is the strongest through-line of his politics in this book: Jobs in government are serious, important work; they should be done competently and well; and if one is not capable of doing that, one should not be in government. Stewart himself strikes me as an insecure overachiever: fiercely ambitious, self-critical, a bit of a micromanager (I suspect he would be difficult to work for), but holding himself to high standards and appalled when others do not do the same. This book is scathing towards multiple politicians, particularly Boris Johnson whom Stewart clearly despises, but no one comes off worse than Liz Truss.
David Cameron, I was beginning to realise, had put in charge of environment, food and rural affairs a Secretary of State who openly rejected the idea of rural affairs and who had little interest in landscape, farmers or the environment. I was beginning to wonder whether he could have given her any role she was less suited to apart perhaps from making her Foreign Secretary. Still, I could also sense why Cameron was mesmerised by her. Her genius lay in exaggerated simplicity. Governing might be about critical thinking; but the new style of politics, of which she was a leading exponent, was not. If critical thinking required humility, this politics demanded absolute confidence: in place of reality, it offered untethered hope; instead of accuracy, vagueness. While critical thinking required scepticism, open-mindedness and an instinct for complexity, the new politics demanded loyalty, partisanship and slogans: not truth and reason but power and manipulation. If Liz Truss worried about the consequences of any of this for the way that government would work, she didn't reveal it.
And finally, Stewart has a deeply-held belief in state capacity and capability. He and I may disagree on the appropriate size and role of the government in society, but no one would be more disgusted by an intentional project to cripple government in order to shrink it than Stewart. One of his most-repeated criticisms of the UK political system in this book is the way the cabinet is formed. All ministers and secretaries come from members of Parliament and therefore branches of government are led by people with no relevant expertise. This is made worse by constant cabinet reshuffles that invalidate whatever small amounts of knowledge a minister was able to gain in nine months or a year in post. The center portion of this book records Stewart's time being shuffled from rural affairs to international development to Africa to prisons, with each move representing a complete reset of the political office and no transfer of knowledge whatsoever.
A month earlier, they had been anticipating every nuance of Minister Rogerson's diary, supporting him on shifts twenty-four hours a day, seven days a week. But it was already clear that there would be no pretence of a handover no explanation of my predecessor's strategy, and uncompleted initiatives. The arrival of a new minister was Groundhog Day. Dan Rogerson was not a ghost haunting my office, he was an absence, whose former existence was suggested only by the black plastic comb.
After each reshuffle, Stewart writes of trying to absorb briefings, do research, and learn enough about his new responsibilities to have the hope of making good decisions, while growing increasingly frustrated with the system and the lack of interest by most of his colleagues in doing the same. He wants government programs to be successful and believes success requires expertise and careful management by the politicians, not only by the civil servants, a position that to me both feels obviously correct and entirely at odds with politics as currently practiced. I found this a fascinating book to read during the accelerating collapse of neoliberalism in the US and, to judge by current polling results, the UK. I have a theory that the political press are so devoted to a simplistic left-right political axis based on seating arrangements during the French Revolution that they are missing a significant minority whose primary political motivation is contempt for arrogant incompetence. They could be convinced to vote for Sanders or Trump, for Polanski or Farage, but will never vote for Biden, Starmer, Romney, or Sunak. Such voters are incomprehensible to those who closely follow and debate policies because their hostile reaction to the center is not about policies. It's about lack of trust and a nebulous desire for justice. They've been promised technocratic competence and the invisible hand of market forces for most of their lives, and all of it looks like lies. Everyday living is more precarious, more frustrating, more abusive and dehumanizing, and more anxious, despite (or because of) this wholehearted embrace of economic "freedom." They're sick of every complaint about the increasing difficulty of life being met with accusations about their ability and work ethic, and of being forced to endure another round of austerity by people who then catch a helicopter ride to a party on some billionaire's yacht. Some of this is inherent in the deep structural weaknesses in neoliberal ideology, but this is worse than an ideological failure. The degree to which neoliberalism started as a project of sincere political thinkers is arguable, but that is clearly not true today. The elite class in politics and business is now thoroughly captured by people whose primary skill is the marginal manipulation of complex systems for their own power and benefit. They are less libertarian ideologues than narcissistic mediocrities. We are governed by management consultants. They are firmly convinced their organizational expertise is universal, and consider the specific business of the company, or government department, irrelevant. Given that context, I found Stewart's instinctive revulsion towards David Cameron quite revealing. Stewart, later in the book, tries to give Cameron some credit by citing several policy accomplishments and comparing him favorably to Boris Johnson (which, true, is a bar Cameron probably flops over). But I think Stewart's baffled astonishment at Cameron's vapidity says a great deal about how we have ended up where we are. This last quote is long, but I think it provides a good feel for Stewart's argument in this book.
But Cameron, who was rumoured to be sceptical about nation-building projects, only nodded, and then looking confidently up and down the table said, "Well, at least we all agree on one extremely straightforward and simple point, which is that our troops are doing very difficult and important work and we should all support them." It was an odd statement to make to civilians running humanitarian operations on the ground. I felt I should speak. "No, with respect, we do not agree with that. Insofar as we have focused on the troops, we have just been explaining that what the troops are doing is often futile, and in many cases making things worse." Two small red dots appeared on his cheeks. Then his face formed back into a smile. He thanked us, told us he was out of time, shook all our hands, and left the room. Later, I saw him repeat the same line in interviews: "the purpose of this visit is straightforward... it is to show support for what our troops are doing in Afghanistan". The line had been written, in London, I assumed, and tested on focus groups. But he wanted to convince himself it was also a position of principle. "David has decided," one of his aides explained, when I met him later, "that one cannot criticise a war when there are troops on the ground." "Why?" "Well... we have had that debate. But he feels it is a principle of British government." "But Churchill criticised the conduct of the Boer War; Pitt the war with America. Why can't he criticise wars?" "British soldiers are losing their lives in this war, and we can't suggest they have died in vain." "But more will die, if no one speaks up..." "It is a principle thing. And he has made his decision. For him and the party." "Does this apply to Iraq too?" "Yes. Again he understands what you are saying, but he voted to support the Iraq War, and troops are on the ground." "But surely he can say he's changed his mind?" The aide didn't answer, but instead concentrated on his food. "It is so difficult," he resumed, "to get any coverage of our trip." He paused again. "If David writes a column about Afghanistan, we will struggle to get it published." "But what would he say in an article anyway?" I asked. "We can talk about that later. But how do you get your articles on Afghanistan published?" I remembered how the US politicians and officials had shown their mastery of strategy and detail. I remembered the earnestness of Gordon Brown when I had briefed him on Iraq. Cameron seemed somehow less serious. I wrote as much in a column in the New York Times, saying that I was afraid the party of Churchill was becoming the party of Bertie Wooster.
I don't know Stewart's reputation in Britain, or in the constituency that he represented. I know he's been accused of being a self-aggrandizing publicity hound, and to some extent this is probably true. It's hard to find an ambitious politician who does not have that instinct. But whatever Stewart's flaws, he can, at least, defend his politics with more substance than a corporate motto. One gets the impression that he would respond favorably to demonstrated competence linked to a careful argument, even if he disagreed. Perhaps this is an illusion created by his writing, but even if so, it's a step in the right direction. When people become angry enough at a failing status quo, any option that promises radical change and punishment for the current incompetents will sound appealing. The default collapse is towards demagogues who are skilled at expressing anger and disgust and are willing to promise simple cures because they are indifferent to honesty. Much of the political establishment in the US, and possibly (to the small degree that I can analyze it from an occasional news article) in the UK, can identify the peril of the demagogue, but they have no solution other than a return to "politics as usual," represented by the amoral mediocrity of a McKinsey consultant. The rare politicians who seem to believe in something, who will argue for personal expertise and humility, who are disgusted by incompetence and have no patience for facile platitudes, are a breath of fresh air. There are a lot of policies on which Stewart and I would disagree, and perhaps some of his apparent humility is an affectation from the rhetorical world of the 1800s that he clearly wishes he were inhabiting, but he gives the strong impression of someone who would shoulder a responsibility and attempt to execute it with competence and attention to detail. He views government as a job, where coworkers should cooperate to achieve defined goals, rather than a reality TV show. The arc of this book, like the arc of current politics, is the victory of the reality TV show over the workplace, and the story of Stewart's run against Boris Johnson is hard reading because of it, but there's a portrayal here of a different attitude towards politics that I found deeply rewarding. If you liked Stewart's previous work, or if you want an inside look at parliamentary politics, highly recommended. I will be thinking about this book for a long time. Rating: 9 out of 10

21 October 2025

Gunnar Wolf: LLM Hallucinations in Practical Code Generation: Phenomena, Mechanism, and Mitigation

This post is a review for Computing Reviews of "LLM Hallucinations in Practical Code Generation: Phenomena, Mechanism, and Mitigation", an article published in Proceedings of the ACM on Software Engineering, Volume 2, Issue ISSTA.
How good can large language models (LLMs) be at generating code? This may not seem like a very novel question, as several benchmarks (for example, HumanEval and MBPP, published in 2021) existed before LLMs burst into public view and started the current artificial intelligence (AI) inflation. However, as the paper's authors point out, code generation is very seldom done as an isolated function, but instead must be deployed in a coherent fashion together with the rest of the project or repository it is meant to be integrated into. Today, several benchmarks (for example, CoderEval or EvoCodeBench) measure the functional correctness of LLM-generated code via test case pass rates. This paper brings a new proposal to the table: comparing LLM-generated repository-level evaluated code by examining the hallucinations generated. The authors begin by running the Python code generation tasks proposed in the CoderEval benchmark against six code-generating LLMs. Next, they analyze the results and build a taxonomy to describe code-based LLM hallucinations, with three types of conflicts (task requirement, factual knowledge, and project context) as first-level categories and eight subcategories within them. The authors then compare the results of each of the LLMs per the main hallucination category. Finally, they try to find the root causes of the hallucinations. The paper is structured very clearly, not only presenting the three research questions (RQ) but also referring to them as needed to explain why and how each partial result is interpreted. RQ1 (establishing a hallucination taxonomy) is the most thoroughly explored. While RQ2 (LLM comparison) is clear, it just presents straightforward results without much analysis. RQ3 (root cause discussion) is undoubtedly interesting, but I feel it is much more speculative and not directly related to the analysis performed. After tackling their research questions, Zhang et al. propose a possible mitigation to counter the effect of hallucinations: enhance the LLM with retrieval-augmented generation (RAG) so it better understands task requirements, factual knowledge, and project context. The presented results show that all of the models are clearly (though modestly) improved by the proposed RAG-based mitigation. The paper is clearly written and easy to read. It should provide its target audience with interesting insights and discussions. I would have liked more details on their RAG implementation, but I suppose that's for a follow-up work.

19 October 2025

Otto Kekäläinen: Could the XZ backdoor have been detected with better Git and Debian packaging practices?

The discovery of a backdoor in XZ Utils in the spring of 2024 shocked the open source community, raising critical questions about software supply chain security. This post explores whether better Debian packaging practices could have detected this threat, offering a guide to auditing packages and suggesting future improvements. The XZ backdoor in versions 5.6.0/5.6.1 made its way briefly into many major Linux distributions such as Debian and Fedora, but luckily didn't reach that many actual users, as the backdoored releases were quickly removed thanks to the heroic diligence of Andres Freund. We are all extremely lucky that he detected a half-second performance regression in SSH, cared enough to trace it down, discovered malicious code in the XZ library loaded by SSH, and reported promptly to various security teams for quick coordinated action. This episode leaves software engineers pondering several questions. As a Debian Developer, I decided to audit the xz package in Debian, share my methodology and findings in this post, and also suggest some improvements on how software supply-chain security could be tightened in Debian specifically. Note that the scope here is only to inspect how Debian imports software from its upstreams, and how it is distributed to Debian's users. This excludes the whole story of how to assess whether an upstream project is following software development security best practices. This post doesn't discuss how to operate an individual computer running Debian to ensure it remains untampered, as there are plenty of guides on that already.

Downloading Debian and upstream source packages Let's start by working backwards from what the Debian package repositories offer for download. As auditing binaries is extremely complicated, we skip that, assume the Debian build hosts are trustworthy and reliably build binaries from the source packages, and focus on auditing the source packages. As with everything in Debian, there are multiple tools and ways to do the same thing, but in this post only one (and hopefully the best) way to do something is presented, for brevity. The first step is to download the latest version and some past versions of the package from the Debian archive, which is most easily done with debsnap. The following command will download all Debian source packages of xz-utils from Debian release 5.2.4-1 onwards:
$ debsnap --verbose --first 5.2.4-1 xz-utils
Getting json https://snapshot.debian.org/mr/package/xz-utils/
...
Getting dsc file xz-utils_5.2.4-1.dsc: https://snapshot.debian.org/file/a98271e4291bed8df795ce04d9dc8e4ce959462d
Getting file xz-utils_5.2.4.orig.tar.xz.asc: https://snapshot.debian.org/file/59ccbfb2405abe510999afef4b374cad30c09275
Getting file xz-utils_5.2.4-1.debian.tar.xz: https://snapshot.debian.org/file/667c14fd9409ca54c397b07d2d70140d6297393f
source-xz-utils/xz-utils_5.2.4-1.dsc:
Good signature found
validating xz-utils_5.2.4.orig.tar.xz
validating xz-utils_5.2.4.orig.tar.xz.asc
validating xz-utils_5.2.4-1.debian.tar.xz
All files validated successfully.
Once debsnap completes there will be a subfolder source-<package name> with the following types of files:
  • *.orig.tar.xz: source code from upstream
  • *.orig.tar.xz.asc: detached signature (if upstream signs their releases)
  • *.debian.tar.xz: Debian packaging source, i.e. the debian/ subdirectory contents
  • *.dsc: Debian source control file, including signature by Debian Developer/Maintainer
Example:
$ ls -1 source-xz-utils/
...
xz-utils_5.6.4.orig.tar.xz
xz-utils_5.6.4.orig.tar.xz.asc
xz-utils_5.6.4-1.debian.tar.xz
xz-utils_5.6.4-1.dsc
xz-utils_5.8.0.orig.tar.xz
xz-utils_5.8.0.orig.tar.xz.asc
xz-utils_5.8.0-1.debian.tar.xz
xz-utils_5.8.0-1.dsc
xz-utils_5.8.1.orig.tar.xz
xz-utils_5.8.1.orig.tar.xz.asc
xz-utils_5.8.1-1.1.debian.tar.xz
xz-utils_5.8.1-1.1.dsc
xz-utils_5.8.1-1.debian.tar.xz
xz-utils_5.8.1-1.dsc
xz-utils_5.8.1-2.debian.tar.xz
xz-utils_5.8.1-2.dsc

Verifying authenticity of upstream and Debian sources using OpenPGP signatures As seen in the output of debsnap, it already automatically verifies that the downloaded files match the OpenPGP signatures. To have full clarity on which files were authenticated with which keys, we can verify the Debian packager's signature with:
$ gpg --verify --auto-key-retrieve --keyserver hkps://keyring.debian.org xz-utils_5.8.1-2.dsc
gpg: Signature made Fri Oct 3 22:04:44 2025 UTC
gpg: using RSA key 57892E705233051337F6FDD105641F175712FA5B
gpg: requesting key 05641F175712FA5B from hkps://keyring.debian.org
gpg: key 7B96E8162A8CF5D1: public key "Sebastian Andrzej Siewior" imported
gpg: Total number processed: 1
gpg: imported: 1
gpg: Good signature from "Sebastian Andrzej Siewior" [unknown]
gpg: aka "Sebastian Andrzej Siewior <bigeasy@linutronix.de>" [unknown]
gpg: aka "Sebastian Andrzej Siewior <sebastian@breakpoint.cc>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: 6425 4695 FFF0 AA44 66CC 19E6 7B96 E816 2A8C F5D1
Subkey fingerprint: 5789 2E70 5233 0513 37F6 FDD1 0564 1F17 5712 FA5B
The upstream tarball signature (if available) can be verified with:
$ gpg --verify --auto-key-retrieve xz-utils_5.8.1.orig.tar.xz.asc
gpg: assuming signed data in 'xz-utils_5.8.1.orig.tar.xz'
gpg: Signature made Thu Apr 3 11:38:23 2025 UTC
gpg: using RSA key 3690C240CE51B4670D30AD1C38EE757D69184620
gpg: key 38EE757D69184620: public key "Lasse Collin <lasse.collin@tukaani.org>" imported
gpg: Total number processed: 1
gpg: imported: 1
gpg: Good signature from "Lasse Collin <lasse.collin@tukaani.org>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: 3690 C240 CE51 B467 0D30 AD1C 38EE 757D 6918 4620
Note that this only proves that there is a key that created a valid signature for this content. The authenticity of the keys themselves needs to be validated separately before trusting that they are in fact the keys of these people. That can be done by checking e.g. the upstream website for the key fingerprints they published, or the Debian keyring for Debian Developers and Maintainers, or by relying on the OpenPGP web of trust.
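As a minimal sketch of that check against the Debian keyring (assuming the debian-keyring package is installed; the fingerprint is the primary key fingerprint reported above):

$ # look the signer up in the official Debian Developer keyring
$ gpg --no-default-keyring --keyring /usr/share/keyrings/debian-keyring.gpg \
      --list-keys 64254695FFF0AA4466CC19E67B96E8162A8CF5D1

If gpg lists the key, it is part of the keyring Debian uses to authenticate its Developers.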

Verifying authenticity of upstream sources by comparing checksums In case the upstream in question does not publish release signatures, the second-best way to verify the authenticity of the sources used in Debian is to download the sources directly from upstream and check that the sha256 checksums match. This should be done using the debian/watch file inside the Debian packaging, which defines where the upstream source is downloaded from. Continuing the example above, we can unpack the latest Debian packaging tarball and then run uscan to download:
$ tar xvf xz-utils_5.8.1-2.debian.tar.xz
...
debian/rules
debian/source/format
debian/source.lintian-overrides
debian/symbols
debian/tests/control
debian/tests/testsuite
debian/upstream/signing-key.asc
debian/watch
...
$ uscan --download-current-version --destdir /tmp
Newest version of xz-utils on remote site is 5.8.1, specified download version is 5.8.1
gpgv: Signature made Thu Apr 3 11:38:23 2025 UTC
gpgv: using RSA key 3690C240CE51B4670D30AD1C38EE757D69184620
gpgv: Good signature from "Lasse Collin <lasse.collin@tukaani.org>"
Successfully symlinked /tmp/xz-5.8.1.tar.xz to /tmp/xz-utils_5.8.1.orig.tar.xz.
The original files downloaded from upstream are now in /tmp along with the files renamed to follow Debian conventions. Using everything downloaded so far the sha256 checksums can be compared across the files and also to what the .dsc file advertised:
$ ls -1 /tmp/
xz-5.8.1.tar.xz
xz-5.8.1.tar.xz.sig
xz-utils_5.8.1.orig.tar.xz
xz-utils_5.8.1.orig.tar.xz.asc
$ sha256sum xz-utils_5.8.1.orig.tar.xz /tmp/xz-5.8.1.tar.xz
0b54f79df85912504de0b14aec7971e3f964491af1812d83447005807513cd9e xz-utils_5.8.1.orig.tar.xz
0b54f79df85912504de0b14aec7971e3f964491af1812d83447005807513cd9e /tmp/xz-5.8.1.tar.xz
$ grep -A 3 Sha256 xz-utils_5.8.1-2.dsc
Checksums-Sha256:
0b54f79df85912504de0b14aec7971e3f964491af1812d83447005807513cd9e 1461872 xz-utils_5.8.1.orig.tar.xz
4138f4ceca1aa7fd2085fb15a23f6d495d27bca6d3c49c429a8520ea622c27ae 833 xz-utils_5.8.1.orig.tar.xz.asc
3ed458da17e4023ec45b2c398480ed4fe6a7bfc1d108675ec837b5ca9a4b5ccb 24648 xz-utils_5.8.1-2.debian.tar.xz
In the example above the checksum 0b54f79df85... is the same across the files, so it is a match.
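If the devscripts package is installed, the signature and checksum checks can also be run in a single step with dscverify, which verifies the .dsc signature against the Debian keyrings and then validates the checksums of every file the .dsc references:
$ dscverify xz-utils_5.8.1-2.dsc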

Repackaged upstream sources can't be verified as easily Note that uscan may in rare cases repackage some upstream sources, for example to exclude files that don't adhere to Debian's copyright and licensing requirements. Those files and paths would be listed under the Files-Excluded section in the debian/copyright file. There are also other situations where the file that represents the upstream sources in Debian isn't bit-by-bit the same as what upstream published. If checksums don't match, an experienced Debian Developer should review all package settings (e.g. debian/source/options) to see if there was a valid and intentional reason for divergence.
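A quick way to check whether a package repackages its upstream sources is to look for these markers in the unpacked packaging; a small sketch:
$ grep -A5 '^Files-Excluded' debian/copyright
$ cat debian/source/options 2>/dev/null
If neither of these exists and the version carries no +ds or +dfsg suffix, the upstream tarball is normally expected to match upstream bit-by-bit.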

Reviewing changes between two source packages using diffoscope Diffoscope is an incredibly capable and handy tool to compare arbitrary files. For example, to view a report in HTML format of the differences between two XZ releases, run:
diffoscope --html-dir xz-utils-5.6.4_vs_5.8.0 xz-utils_5.6.4.orig.tar.xz xz-utils_5.8.0.orig.tar.xz
browse xz-utils-5.6.4_vs_5.8.0/index.html
Inspecting diffoscope output of differences between two XZ Utils releases
If the changes are extensive and you want to use an LLM to help spot potential security issues, generate the report of both the upstream and Debian packaging differences in Markdown with:
diffoscope --markdown diffoscope-debian.md xz-utils_5.6.4-1.debian.tar.xz xz-utils_5.8.1-2.debian.tar.xz
diffoscope --markdown diffoscope.md xz-utils_5.6.4.orig.tar.xz xz-utils_5.8.0.orig.tar.xz
The Markdown files created above can then be passed to your favorite LLM, along with a prompt such as:
Based on the attached diffoscope output for a new Debian package version compared with the previous one, list all suspicious changes that might have introduced a backdoor, followed by other potential security issues. If there are none, list a short summary of changes as the conclusion.

Reviewing Debian source packages in version control As of today only 93% of all Debian source packages are tracked in git on Debian's GitLab instance at salsa.debian.org. Some key packages such as Coreutils and Bash are not using version control at all, as their maintainers apparently don't see value in using git for Debian packaging, and the Debian Policy does not require it. Thus, the only reliable and consistent way to audit changes in Debian packages is to compare the full versions from the archive as shown above. However, for packages that are hosted on Salsa, one can view the git history to gain additional insight into what exactly changed, when and why. For packages that are using version control, their location can be found in the Vcs-Git header in the debian/control file. For xz-utils the location is salsa.debian.org/debian/xz-utils. Note that the Debian Policy does not state anything about how Salsa should be used, or what git repository layout or development practices to follow. In practice most packages follow the DEP-14 proposal, and use git-buildpackage as the tool for managing changes and pushing and pulling them between upstream and salsa.debian.org. To get the XZ Utils source, run:
$ gbp clone https://salsa.debian.org/debian/xz-utils.git
gbp:info: Cloning from 'https://salsa.debian.org/debian/xz-utils.git'
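The repository location can also be looked up without cloning anything, straight from the archive metadata; a sketch, assuming a deb-src entry is configured in your apt sources, with output along these lines:
$ apt-cache showsrc xz-utils | grep -i '^Vcs-'
Vcs-Browser: https://salsa.debian.org/debian/xz-utils
Vcs-Git: https://salsa.debian.org/debian/xz-utils.git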
At the time of writing this post the git history shows:
$ git log --graph --oneline
* bb787585 (HEAD -> debian/unstable, origin/debian/unstable, origin/HEAD) Prepare 5.8.1-2
* 4b769547 d: Remove the symlinks from -dev package.
* a39f3428 Correct the nocheck build profile
* 1b806b8d Import Debian changes 5.8.1-1.1
* b1cad34b Prepare 5.8.1-1
* a8646015 Import 5.8.1
* 2808ec2d Update upstream source from tag 'upstream/5.8.1'
|\
| * fa1e8796 (origin/upstream/v5.8, upstream/v5.8) New upstream version 5.8.1
| * a522a226 Bump version and soname for 5.8.1
| * 1c462c2a Add NEWS for 5.8.1
| * 513cabcf Tests: Call lzma_code() in smaller chunks in fuzz_common.h
| * 48440e24 Tests: Add a fuzzing target for the multithreaded .xz decoder
| * 0c80045a liblzma: mt dec: Fix lack of parallelization in single-shot decoding
| * 81880488 liblzma: mt dec: Don't modify thr->in_size in the worker thread
| * d5a2ffe4 liblzma: mt dec: Don't free the input buffer too early (CVE-2025-31115)
| * c0c83596 liblzma: mt dec: Simplify by removing the THR_STOP state
| * 831b55b9 liblzma: mt dec: Fix a comment
| * b9d168ee liblzma: Add assertions to lzma_bufcpy()
| * c8e0a489 DOS: Update Makefile to fix the build
| * 307c02ed sysdefs.h: Avoid <stdalign.h> even with C11 compilers
| * 7ce38b31 Update THANKS
| * 688e51bd Translations: Update the Croatian translation
* | a6b54dde Prepare 5.8.0-1.
* | 77d9470f Add 5.8 symbols.
* | 9268eb66 Import 5.8.0
* | 6f85ef4f Update upstream source from tag 'upstream/5.8.0'
|\ \
| * | afba662b New upstream version 5.8.0
| |/
| * 173fb5c6 doc/SHA256SUMS: Add 5.8.0
| * db9258e8 Bump version and soname for 5.8.0
| * bfb752a3 Add NEWS for 5.8.0
| * 6ccbb904 Translations: Run "make -C po update-po"
| * 891a5f05 Translations: Run po4a/update-po
| * 4f52e738 Translations: Partially fix overtranslation in Serbian man pages
| * ff5d9447 liblzma: Count the extra bytes in LZMA/LZMA2 decoder memory usage
| * 943b012d liblzma: Use SSE2 intrinsics instead of memcpy() in dict_repeat()
This shows the changes on the debian/unstable branch, on the intermediate upstream import branch, and on the actual upstream development branch. See my Debian source packages in git explainer for details of what these branches are used for. To only view changes on the Debian branch, run git log --graph --oneline --first-parent or git log --graph --oneline -- debian. The Debian branch should only have changes inside the debian/ subdirectory, which is easy to check with:
$ git diff --stat upstream/v5.8
debian/README.source | 16 +++
debian/autogen.sh | 32 +++++
debian/changelog | 949 ++++++++++++++++++++++++++
...
debian/upstream/signing-key.asc | 52 +++++++++
debian/watch | 4 +
debian/xz-utils.README.Debian | 47 ++++++++
debian/xz-utils.docs | 6 +
debian/xz-utils.install | 28 +++++
debian/xz-utils.postinst | 19 +++
debian/xz-utils.prerm | 10 ++
debian/xzdec.docs | 6 +
debian/xzdec.install | 4 +
33 files changed, 2014 insertions(+)
All the files outside the debian/ directory originate from upstream, and for example running git blame on them should show only upstream commits:
$ git blame CMakeLists.txt
22af94128 (Lasse Collin 2024-02-12 17:09:10 +0200 1) # SPDX-License-Identifier: 0BSD
22af94128 (Lasse Collin 2024-02-12 17:09:10 +0200 2)
7e3493d40 (Lasse Collin 2020-02-24 23:38:16 +0200 3) ###############
7e3493d40 (Lasse Collin 2020-02-24 23:38:16 +0200 4) #
426bdc709 (Lasse Collin 2024-02-17 21:45:07 +0200 5) # CMake support for building XZ Utils
If the upstream in question signs commits or tags, they can be verified with e.g.:
$ git verify-tag v5.6.2
gpg: Signature made Wed 29 May 2024 09:39:42 AM PDT
gpg: using RSA key 3690C240CE51B4670D30AD1C38EE757D69184620
gpg: issuer "lasse.collin@tukaani.org"
gpg: Good signature from "Lasse Collin <lasse.collin@tukaani.org>" [expired]
gpg: Note: This key has expired!
The main benefit of reviewing changes in git is the ability to see detailed information about each individual change, instead of just staring at a massive list of changes without any explanations. In this example, to view all the upstream commits since the previous import to Debian, one would view the commit range from afba662b New upstream version 5.8.0 to fa1e8796 New upstream version 5.8.1 with git log --reverse -p afba662b...fa1e8796. However, a far superior way to review changes is to browse this range using a visual git history viewer, such as gitk. Either way, looking at one code change at a time and reading the git commit message makes the review much easier.
Browsing git history in gitk --all
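For example, both range views mentioned above can be started directly from the packaging repository (gitk accepts the same revision ranges as git log):
$ git log --reverse -p afba662b...fa1e8796
$ gitk afba662b...fa1e8796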

Comparing Debian source packages to git contents As stated in the beginning of the previous section, and worth repeating, there is no guarantee that the contents of the Debian packaging git repository match what was actually uploaded to Debian. While the tag2upload project in Debian is getting more and more popular, Debian is still far from having any system to enforce that the git repository be in sync with the Debian archive contents. To detect such differences we can run diff across the Debian source packages downloaded with debsnap earlier (path source-xz-utils/xz-utils_5.8.1-2.debian) and the git repository cloned in the previous section (path xz-utils):
$ diff -u source-xz-utils/xz-utils_5.8.1-2.debian/ xz-utils/debian/
diff -u source-xz-utils/xz-utils_5.8.1-2.debian/changelog xz-utils/debian/changelog
--- debsnap/source-xz-utils/xz-utils_5.8.1-2.debian/changelog 2025-10-03 09:32:16.000000000 -0700
+++ xz-utils/debian/changelog 2025-10-12 12:18:04.623054758 -0700
@@ -5,7 +5,7 @@
 * Remove the symlinks from -dev, pointing to the lib package.
 (Closes: #1109354)

- -- Sebastian Andrzej Siewior <sebastian@breakpoint.cc> Fri, 03 Oct 2025 18:32:16 +0200
+ -- Sebastian Andrzej Siewior <sebastian@breakpoint.cc> Fri, 03 Oct 2025 18:36:59 +0200
In the case above diff revealed that the timestamp in the changelog of the version uploaded to Debian differs from what was committed to git. This is not malicious, just a mistake by the maintainer, who probably didn't run gbp tag immediately after the upload but instead some dch command, and thus ended up with a different timestamp in git compared to what was actually uploaded to Debian.

Creating synthetic Debian packaging git repositories If no Debian packaging git repository exists, or if it is lagging behind what was uploaded to Debian's archive, one can use git-buildpackage's import-dscs feature to create synthetic git commits based on the files downloaded by debsnap, ensuring the git contents fully match what was uploaded to the archive. To import a single version there is gbp import-dsc (no s at the end), of which an example invocation would be:
$ gbp import-dsc --verbose ../source-xz-utils/xz-utils_5.8.1-2.dsc
Version '5.8.1-2' imported under '/home/otto/debian/xz-utils-2025-09-29'
Example commit history from a repository with commits added with gbp import-dsc:
$ git log --graph --oneline
* 86aed07b (HEAD -> debian/unstable, tag: debian/5.8.1-2, origin/debian/unstable) Import Debian changes 5.8.1-2
* f111d93b (tag: debian/5.8.1-1.1) Import Debian changes 5.8.1-1.1
* 1106e19b (tag: debian/5.8.1-1) Import Debian changes 5.8.1-1
|\
| * 08edbe38 (tag: upstream/5.8.1, origin/upstream/v5.8, upstream/v5.8) Import Upstream version 5.8.1
| |\
| | * a522a226 (tag: v5.8.1) Bump version and soname for 5.8.1
| | * 1c462c2a Add NEWS for 5.8.1
| | * 513cabcf Tests: Call lzma_code() in smaller chunks in fuzz_common.h
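To reconstruct the history of all archived versions in one go instead of importing them one by one, git-buildpackage can also drive debsnap itself; a sketch, assuming the --debsnap mode of gbp import-dscs:
$ gbp import-dscs --debsnap xz-utils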
An online example repository with only a few missing uploads added using gbp import-dsc can be viewed at salsa.debian.org/otto/xz-utils-2025-09-29/-/network/debian%2Funstable. An example repository that was fully crafted using gbp import-dscs can be viewed at salsa.debian.org/otto/xz-utils-gbp-import-dscs-debsnap-generated/-/network/debian%2Flatest. There is also dgit, which in a similar way creates a synthetic git history to allow viewing the Debian archive contents via git tools. However, its focus is on producing new package versions, so fetching a package with dgit that has not had its history recorded in dgit earlier will only show the latest version:
$ dgit clone xz-utils
canonical suite name for unstable is sid
starting new git history
last upload to archive: NO git hash
downloading http://ftp.debian.org/debian//pool/main/x/xz-utils/xz-utils_5.8.1.orig.tar.xz...
downloading http://ftp.debian.org/debian//pool/main/x/xz-utils/xz-utils_5.8.1.orig.tar.xz.asc...
downloading http://ftp.debian.org/debian//pool/main/x/xz-utils/xz-utils_5.8.1-2.debian.tar.xz...
dpkg-source: info: extracting xz-utils in unpacked
dpkg-source: info: unpacking xz-utils_5.8.1.orig.tar.xz
dpkg-source: info: unpacking xz-utils_5.8.1-2.debian.tar.xz
synthesised git commit from .dsc 5.8.1-2
HEAD is now at f9bcaf7 xz-utils (5.8.1-2) unstable; urgency=medium
dgit ok: ready for work in xz-utils
$ git log --graph --oneline
* f9bcaf7 xz-utils (5.8.1-2) unstable; urgency=medium 9 days ago (HEAD -> dgit/sid, dgit/dgit/sid)
|\
| * 11d3a62 Import xz-utils_5.8.1-2.debian.tar.xz 9 days ago
|/
* 15dcd95 Import xz-utils_5.8.1.orig.tar.xz 6 months ago
Unlike git-buildpackage managed git repositories, the dgit managed repositories cannot incorporate the upstream git history and are thus less useful for auditing the full software supply-chain in git.

Comparing upstream source packages to git contents Equally important to the note in the beginning of the previous section, one must also keep in mind that the upstream release source packages, often called release tarballs, are not guaranteed to have the exact same contents as the upstream git repository. Projects might strip out test data or extra development files from their release tarballs to avoid shipping unnecessary files to users, or projects might add documentation files or versioning information into the tarball that isn't stored in git. While they are a small minority, there are also upstreams that don't use git at all, so the plain files in a release tarball are still the lowest common denominator for all open source software projects, and exporting and importing source code needs to interface with them. In the case of XZ, the release tarball has additional version info and also a sizeable amount of pregenerated build configuration files. Detecting and comparing differences between git contents and tarballs can of course be done manually by running diff across an unpacked tarball and a checked-out git repository. If using git-buildpackage, the difference between the git contents and the tarball contents can be made visible directly in the import commit. In this XZ example, consider this git history:
* b1cad34b Prepare 5.8.1-1
* a8646015 Import 5.8.1
* 2808ec2d Update upstream source from tag 'upstream/5.8.1'
|\
| * fa1e8796 (debian/upstream/v5.8, upstream/v5.8) New upstream version 5.8.1
| * a522a226 (tag: v5.8.1) Bump version and soname for 5.8.1
| * 1c462c2a Add NEWS for 5.8.1
The commit a522a226 was the upstream release commit, which upstream also tagged v5.8.1. The merge commit 2808ec2d applied the new upstream import branch contents on the Debian branch. Between these is the special commit fa1e8796 New upstream version 5.8.1, tagged upstream/v5.8. This commit and tag exist only in the Debian packaging repository, and they show what contents were imported into Debian. The commit is generated automatically by git-buildpackage when running gbp import-orig --uscan for Debian packages with the correct settings in debian/gbp.conf. By viewing this commit one can see exactly how the upstream release tarball differs from the upstream git contents (if at all). In the case of XZ, the difference is substantial, and it is shown below in full as it is very interesting:
$ git show --stat fa1e8796
commit fa1e8796dabd91a0f667b9e90f9841825225413a (debian/upstream/v5.8, upstream/v5.8)
Author: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
Date: Thu Apr 3 22:58:39 2025 +0200
New upstream version 5.8.1
.codespellrc | 30 -
.gitattributes | 8 -
.github/workflows/ci.yml | 163 -
.github/workflows/freebsd.yml | 32 -
.github/workflows/netbsd.yml | 32 -
.github/workflows/openbsd.yml | 35 -
.github/workflows/solaris.yml | 32 -
.github/workflows/windows-ci.yml | 124 -
.gitignore | 113 -
ABOUT-NLS | 1 +
ChangeLog | 17392 +++++++++++++++++++++
Makefile.in | 1097 +++++++
aclocal.m4 | 1353 ++++++++
build-aux/ci_build.bash | 286 --
build-aux/compile | 351 ++
build-aux/config.guess | 1815 ++++++++++
build-aux/config.rpath | 751 +++++
build-aux/config.sub | 2354 +++++++++++++
build-aux/depcomp | 792 +++++
build-aux/install-sh | 541 +++
build-aux/ltmain.sh | 11524 ++++++++++++++++++++++
build-aux/missing | 236 ++
build-aux/test-driver | 160 +
config.h.in | 634 ++++
configure | 26434 ++++++++++++++++++++++
debug/Makefile.in | 756 +++++
doc/SHA256SUMS | 236 --
doc/man/txt/lzmainfo.txt | 36 +
doc/man/txt/xz.txt | 1708 ++++++++++
doc/man/txt/xzdec.txt | 76 +
doc/man/txt/xzdiff.txt | 39 +
doc/man/txt/xzgrep.txt | 70 +
doc/man/txt/xzless.txt | 36 +
doc/man/txt/xzmore.txt | 31 +
lib/Makefile.in | 623 ++++
m4/.gitignore | 40 -
m4/build-to-host.m4 | 274 ++
m4/gettext.m4 | 392 +++
m4/host-cpu-c-abi.m4 | 529 +++
m4/iconv.m4 | 324 ++
m4/intlmacosx.m4 | 71 +
m4/lib-ld.m4 | 170 +
m4/lib-link.m4 | 815 +++++
m4/lib-prefix.m4 | 334 ++
m4/libtool.m4 | 8488 +++++++++++++++++++++
m4/ltoptions.m4 | 467 +++
m4/ltsugar.m4 | 124 +
m4/ltversion.m4 | 24 +
m4/lt~obsolete.m4 | 99 +
m4/nls.m4 | 33 +
m4/po.m4 | 456 +++
m4/progtest.m4 | 92 +
po/.gitignore | 31 -
po/Makefile.in.in | 517 +++
po/Rules-quot | 66 +
po/boldquot.sed | 21 +
po/ca.gmo | Bin 0 -> 15587 bytes
po/cs.gmo | Bin 0 -> 7983 bytes
po/da.gmo | Bin 0 -> 9040 bytes
po/de.gmo | Bin 0 -> 29882 bytes
po/en@boldquot.header | 35 +
po/en@quot.header | 32 +
po/eo.gmo | Bin 0 -> 15060 bytes
po/es.gmo | Bin 0 -> 29228 bytes
po/fi.gmo | Bin 0 -> 28225 bytes
po/fr.gmo | Bin 0 -> 10232 bytes
To easily inspect exactly what changed in the release tarball compared to the git release tag contents, the best tool for the job is Meld, invoked via git difftool --dir-diff fa1e8796^..fa1e8796.
Meld invoked by git difftool --dir-diff fa1e8796^..fa1e8796 to show differences between git release tag and release tarball contents
To compare changes across the new and old upstream tarballs, one would instead compare the commits afba662b New upstream version 5.8.0 and fa1e8796 New upstream version 5.8.1 by running git difftool --dir-diff afba662b..fa1e8796.
Meld invoked by git difftool --dir-diff afba662b..fa1e8796 to show differences between two upstream release tarball contents
With all the above tips you can now go and try to audit your own favorite package in Debian and see if it is identical with upstream, and if not, how it differs.

Should the XZ backdoor have been detected using these tools? The famous XZ Utils backdoor (CVE-2024-3094) consisted of two parts: the actual backdoor inside two binary blobs masqueraded as test files (tests/files/bad-3-corrupt_lzma2.xz, tests/files/good-large_compressed.lzma), and a small modification in the build scripts (m4/build-to-host.m4) to extract the backdoor and plant it into the built binary. The build script was not tracked in version control, but generated with GNU Autotools at release time and only shipped as an additional file in the release tarball. The entire reason for me to write this post was to ponder whether a diligent engineer using git-buildpackage best practices could have reasonably spotted this while importing the new upstream release into Debian. The short answer is no. The malicious actor here clearly anticipated all the typical ways anyone might inspect both the git commits and the release tarball contents, and masqueraded the changes very well and over a long timespan. First of all, XZ has, for legitimate reasons, several carefully crafted .xz files as test data to help catch regressions in the decompression code path. The test files are shipped in the release so users can run the test suite and validate that the binary is built correctly and xz works properly. Debian famously runs massive amounts of testing in its CI and autopkgtest system across tens of thousands of packages to uphold high quality despite frequent upgrades of the build toolchain and while supporting more CPU architectures than any other distro. Test data is useful and should stay. When git-buildpackage is used correctly, the upstream commits are visible in the Debian packaging for easy review, but the commit cf44e4b that introduced the test files does not deviate enough from regular sloppy coding practices to really stand out. It is unfortunately very common for git commits to lack a message body explaining why the change was done, to not be properly atomic with test code and test data together in the same commit, and to be pushed directly to mainline without a code review (the commit was not part of any PR in this case). Only another upstream developer could have spotted that this change was not on par with what the project expects, that the test code was never added, only test data, and thus that this commit was not just sloppy but potentially malicious. Secondly, the fact that a new Autotools file (m4/build-to-host.m4) appeared in XZ Utils 5.6.0 is not suspicious. This is perfectly normal for Autotools. In fact, starting from version 5.8.1, XZ Utils ships an m4/build-to-host.m4 file that it actually uses. Spotting that there is anything fishy is practically impossible by simply reading the code, as Autotools files are full of custom m4 syntax interwoven with shell script, and there are plenty of backticks that spawn subshells and evals that execute variable contents further, which is just normal for Autotools. Russ Cox's XZ post explains how exactly the Autotools code fetched the actual backdoor from the test files and injected it into the build.
Inspecting the m4/build-to-host.m4 changes in Meld launched via git difftool
There is only one tiny thing that maybe a very experienced Autotools user could potentially have noticed: the serial 30 in the version header is way too high. In theory one could also have noticed that this Autotools file deviates from what other packages in Debian ship under the same filename, such as the serial 3, serial 5a or 5b versions. That would however require an insane amount of extra checking work, and is not something we should plan to start doing. A much simpler solution would be to strongly recommend that all open source projects stop using Autotools, so that we eventually get rid of it entirely.

Not detectable with reasonable effort While planting backdoors is evil, it is hard not to feel some respect for the level of skill and dedication of the people behind this. I've been involved in a bunch of security breach investigations during my IT career, and never have I seen anything this well executed. If it hadn't slowed down SSH by ~500 milliseconds and been discovered because of that, it would most likely have stayed undetected for months or years. Hiding backdoors in closed source software is relatively trivial, but hiding backdoors in plain sight in a popular open source project requires an unusual amount of expertise and creativity, as shown above.

Is the software supply-chain in Debian easy to audit? While maintaining a Debian package source using git-buildpackage can make the package history a lot easier to inspect, most packages have incomplete configurations in their debian/gbp.conf, and thus their package development histories are not always correctly constructed, uniform or easy to compare. The Debian Policy does not mandate git usage at all, and many important packages do not use git. Additionally, the Debian Policy allows non-maintainers to upload new versions to Debian without committing anything to git, even for packages where the original maintainer wanted to use git. Uploads that bypass git unfortunately happen surprisingly often. Because of this situation, I am afraid that we could have multiple similar backdoors lurking that simply haven't been detected yet. More audits, which hopefully also get published openly, would be welcome! More people auditing the contents of the Debian archives would probably also help surface what tools and policies Debian might be missing to make the work easier, and thus help improve the security of Debian's users and trust in Debian.

Is Debian currently missing some software that could help detect similar things? To my knowledge there is currently no system in place as part of Debian's QA or security infrastructure to verify that the upstream source packages in Debian are actually from upstream. I've come across a lot of packages where the debian/watch or other configs are incorrect, and even cases where maintainers have manually created upstream tarballs because it was easier than configuring automation to work. It is obvious that for those packages the source tarball now in Debian is not at all the same as upstream's. I am not aware of any malicious cases though (if I were, I would of course report them). I am also aware of packages in the Debian repository that are misconfigured to be of type 1.0 (native), mixing the upstream files and debian/ contents and having patches applied, while they actually should be configured as 3.0 (quilt) and not hide what the true upstream sources are. Debian should extend its QA tools to scan for such things. If I find a sponsor, I might build it myself as my next major contribution to Debian. In addition to better tooling for finding mismatches in the source code, Debian could also have better tooling for tracking which source files ended up in which built binaries, but solutions like Fraunhofer-AISEC's supply-graph or Sony's ESSTRA are not practical yet. Julien Malka's post about NixOS discusses the role of reproducible builds, which may help in some cases across all distros.

Or, is Debian missing some policies or practices to mitigate this? Perhaps more importantly than more security scanning, the Debian Developer community should switch its general mindset from anyone is free to do anything to valuing shared workflows. The ability to audit anything is severely hampered by the fact that there are so many ways to do the same thing: distinguishing a normal deviation from a malicious one is too hard when normal can be almost anything. Also, as there is no documented and recommended default workflow, both those who are old and new to Debian packaging might never learn any one optimal workflow, and end up doing many steps in the packaging process in a way that kind of works but is actually wrong or unnecessary, causing process deviations that look malicious but turn out to just be the result of not fully understanding what the right way to do something would have been. In the long run, once individual developers' workflows are more aligned, doing code reviews will become a lot easier and smoother, as the excess noise of workflow differences diminishes and reviews feel much more productive to all participants. Debian fostering a culture of code reviews would allow us to slowly move from the current practice of mainly solo packaging work towards true collaboration forming around those code reviews. I have been promoting increased use of Merge Requests in Debian for some time already, for example by proposing DEP-18: Encourage Continuous Integration and Merge Request based Collaboration for Debian packages. If you are involved in Debian development, please give a thumbs up in dep-team/deps!21 if you want me to continue promoting it.

Can we trust open source software? Yes, and I would argue that we can only trust open source software. There is no way to audit closed source software, and anyone using e.g. Windows or macOS just has to trust the vendor's word when they say they have no intentional or accidental backdoors in their software. Or, when the news gets out that the systems of a closed source vendor were compromised, as with CrowdStrike some weeks ago, we can't audit anything, and time after time we simply have to take their word when they say they have properly cleaned up their code base. In theory, a vendor could give some kind of contractual or financial guarantee to its customers that there are no preventable security issues, but in practice that never happens. I am not aware of a single case where e.g. Microsoft or Oracle paid damages to their customers after a security flaw was found in their software. In theory you could also pay a vendor more to have them focus more effort on security, but since there is no way to verify what they did, or to get compensation when they didn't, any increased fees are likely just pocketed as increased profit. Open source is clearly better overall. You can, if you are an individual with the time and skills, audit every step in the supply-chain, or you could as an organization invest in open source security improvements and actually verify what changes were made and how security improved. If your organisation is using Debian (or derivatives, such as Ubuntu) and you are interested in sponsoring my work to improve Debian, please reach out.

13 October 2025

Freexian Collaborators: Monthly report about Debian Long Term Support, September 2025 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian's Debian LTS offering.

Debian LTS contributors In September, 20 contributors were paid to work on Debian LTS; their reports are available:
  • Abhijith PA did 10.0h (out of 10.0h assigned and 4.0h from previous period), thus carrying over 4.0h to the next month.
  • Andreas Henriksson did 1.0h (out of 0.0h assigned and 20.0h from previous period), thus carrying over 19.0h to the next month.
  • Bastien Roucariès did 20.0h (out of 20.0h assigned).
  • Ben Hutchings did 20.0h (out of 21.0h assigned), thus carrying over 1.0h to the next month.
  • Carlos Henrique Lima Melara did 10.0h (out of 12.0h assigned), thus carrying over 2.0h to the next month.
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Daniel Leidert did 21.0h (out of 21.0h assigned).
  • Emilio Pozuelo Monfort did 39.75h (out of 40.0h assigned), thus carrying over 0.25h to the next month.
  • Guilhem Moulin did 15.0h (out of 15.0h assigned).
  • Jochen Sprickerhof did 12.0h (out of 9.25h assigned and 11.75h from previous period), thus carrying over 9.0h to the next month.
  • Lee Garrett did 13.5h (out of 21.0h assigned), thus carrying over 7.5h to the next month.
  • Lucas Kanashiro did 8.0h (out of 20.0h assigned), thus carrying over 12.0h to the next month.
  • Markus Koschany did 15.0h (out of 3.25h assigned and 17.75h from previous period), thus carrying over 6.0h to the next month.
  • Paride Legovini did 6.0h (out of 8.0h assigned), thus carrying over 2.0h to the next month.
  • Roberto C. Sánchez did 7.25h (out of 7.75h assigned and 13.25h from previous period), thus carrying over 13.75h to the next month.
  • Santiago Ruano Rincón did 13.25h (out of 13.5h assigned and 1.5h from previous period), thus carrying over 1.75h to the next month.
  • Sylvain Beucler did 17.0h (out of 7.75h assigned and 13.25h from previous period), thus carrying over 4.0h to the next month.
  • Thorsten Alteholz did 21.0h (out of 21.0h assigned).
  • Tobias Frost did 5.0h (out of 0.0h assigned and 8.0h from previous period), thus carrying over 3.0h to the next month.
  • Utkarsh Gupta did 16.5h (out of 14.25h assigned and 6.75h from previous period), thus carrying over 4.5h to the next month.

Evolution of the situation In September, we released 38 DLAs.
  • Notable security updates:
    • modsecurity-apache, prepared by Adrian Bunk, fixes a cross-site scripting vulnerability
    • cups, prepared by Thorsten Alteholz, fixes authentication bypass and denial of service vulnerabilities
    • jetty9, prepared by Adrian Bunk, fixes the MadeYouReset vulnerability (a recent, well-known denial of service vulnerability)
    • python-django, prepared by Chris Lamb, fixes a SQL injection vulnerability
    • firefox-esr and thunderbird, prepared by Emilio Pozuelo Monfort, were updated from the 128.x ESR series to the 140.x ESR series, fixing a number of vulnerabilities as well
  • Notable non-security updates:
    • wireless-regdb, prepared by Ben Hutchings, updates information reflecting changes to radio regulations in many countries
There was one package update contributed by a Debian Developer outside of the LTS Team: an update of node-tar-fs, prepared by Xavier Guimard (a member of the Node packaging team). Finally, LTS Team members also contributed updates of the following packages:
  • libxslt (to stable and oldstable), prepared by Guilhem Moulin, to address a regression introduced in a previous security update
  • libphp-adodb (to stable and oldstable), prepared by Abhijith PA
  • cups (to stable and oldstable), prepared by Thorsten Alteholz
  • u-boot (to oldstable), prepared by Daniel Leidert and Jochen Sprickerhof
  • libcommons-lang3-java (to stable and oldstable), prepared by Daniel Leidert
  • python-internetarchive (to oldstable), prepared by Daniel Leidert
One other notable contribution by a member of the LTS Team is that Sylvain Beucler proposed a fix upstream for CVE-2025-2760 in gimp2. Upstream no longer supports gimp2, but it is still present in Debian LTS, and so proposing this fix upstream is of benefit to other distros which may still be supporting the older gimp2 packages.

Thanks to our sponsors Sponsors that joined recently are in bold.

1 October 2025

Ben Hutchings: FOSS activity in September 2025

Last month I attended and spoke at Kangrejos, for which I will post a separate report later. Besides that, here's the usual categorised list of work:

Birger Schacht: Status update, September 2025

Regarding Debian packaging this was a rather quiet month. I uploaded version 1.24.0-1 of foot and version 2.8.0-1 of git-quick-stats. I took the opportunity and started migrating my packages to the new version 5 watch file format, which I think is much more readable than the previous format. I also uploaded version 0.1.1-1 of libscfg to NEW. libscfg is a C implementation of the scfg configuration file format and a dependency of recent versions of kanshi. kanshi is a tool similar to autorandr which allows you to define output profiles; kanshi then switches to the correct output profile on hotplug events. Once libscfg is in unstable I can finally update kanshi to the latest version. A lot of time this month went into finalizing a redesign of the output rendering of carl. carl is a small rust program I wrote that provides a calendar view similar to cal, but it comes with colors and ical file integration. That means that you can not only display a simple calendar, but also colorize/highlight dates based on various attributes or based on events on that day. In the initial versions of carl the output rendering was simply hardcoded into the app.
Screenshot of carl
This was a bit cumbersome to maintain and not configurable for users. I am using templating languages on a daily basis, so I decided I would reimplement the output generation of carl to use templates. I chose the minijinja Rust library, which is based on the syntax and behavior of the Jinja2 template engine for Python. There are others out there, like tera, but minijinja seems to be more active in development currently. I worked on this implementation on and off for the last year and finally had the time to finish it up and write some additional tests for the outputs. It is easier to maintain templates than Rust code that uses write!() to format the output. I also implemented a configuration option for users to override the templates. In addition to the output refactoring I also fixed a couple of bugs and finally released v0.4.0 of carl. In my dayjob I released version 0.53 of apis-core-rdf, which contains the place lookup field I implemented in August. A couple of weeks later we released version 0.54, which comes with a middleware that passes messages from the Django messages framework on to HTMX via a response header, to trigger message popups. This implementation is based on the blog post Using the Django messages framework with HTMX. Version 0.55 was the last release in September. It contained preparations for refactoring the import logic as well as a couple of UX improvements.

30 September 2025

Russell Coker: Links September 2025

Werdahias wrote an informative blog post about Dark Mode for QT programs on non-QT environments (mostly GNOME based); we need more blog posts about this sort of thing [1]. Astral Codex Ten has an interesting blog post about the rise of Christianity, trying to work out why it took over so quickly [2]. Frances Haugen's whistleblower statement about Facebook is worth reading; Facebook seems to be one of the most evil companies in the world [3]. Interesting blog post by Philip Bennett about trying to repair a 28 player Galaxian game from 1990 [4]. Bruce Schneier and Nathan E. Sanders wrote an insightful article about AI in Government [5]. Krebs has an interesting analysis of Conservatives whinging about alleged discrimination due to their use of spam lists [6]. Nick Cheesman wrote an insightful article on the failures of Meritocracy with ANU as a case study [7]. I am mystified as to why ABC categorised it under Religion. David Brin wrote an interesting short SciFi story about dealing with blackmail [8]. Charles Stross has an interesting take on AI economics etc [9]. Cory Doctorow wrote an interesting blog post about the impending economic crash because of all the money tied up in AI investments [10].

Russ Allbery: Review: Deep Black

Review: Deep Black, by Miles Cameron
Series: Arcana Imperii #2
Publisher: Gollancz
Copyright: 2024
ISBN: 1-3996-1506-8
Format: Kindle
Pages: 509
Deep Black is a far-future science fiction novel and the direct sequel to Artifact Space. You do not want to start here. I regretted not reading the novels closer together and had to refresh my memory of what happened in the first book. The shorter fiction in Beyond the Fringe takes place between the two series novels and leads into some of the events in this book, although reading it is optional. Artifact Space left Marca Nbaro at the farthest point of the voyage of the Greatship Athens, an unexpected heroine and now well-integrated into the crew. On a merchant ship, however, there's always more work to be done after a heroic performance. Deep Black opens with that work: repairs from the events of the first book, the never-ending litany of tasks required to keep the ship running smoothly, and of course the trade with aliens that drew them so far out into the Deep Black. We knew early in the first book that this wouldn't be the simple, if long, trading voyage that most of the crew of the Athens was expecting, but now they have to worry about an unsettling second group of aliens on top of a potential major war between human factions. They don't yet have the cargo they came for, they have to reconstruct their trading post, and they're a very long way from home. Marca also knows, at this point in the story, that this voyage had additional goals from the start. She will slowly gain a more complete picture of those goals during this novel. Artifact Space was built around one of the most satisfying plots in military science fiction (at least to me): a protagonist who benefits immensely from the leveling effect and institutional inclusiveness of the military slowly discovering that, when working at its best, the military can be a true meritocracy. (The merchant marine of the Athens is not military, precisely, since it's modeled on the trading ships of Venice, but it's close enough for the purposes of this plot.) That's not a plot that lasts into a sequel, though, so Cameron had to find a new spine for the second half of the story. He chose first contact (of a sort) and space battle. The space battle parts are fine. I read a ton of children's World War II military fiction when I was a boy, and I always preferred the naval battles to the land battles. This part of Deep Black reminded me of those naval battles, particularly a book whose title escapes me about the Arctic convoys to the Soviet Union. I'm more interested in character than military adventure these days, but every once in a while I enjoy reading about a good space battle. This was not an exemplary specimen of the genre, but it delivered on all the required elements. The first contact part was more original, in part because Cameron chose an interesting medium ground between total incomprehensibility and universal translators. He stuck with the frustrations of communication for considerably longer than most SF authors are willing to write, and it worked for me. This is the first book I've read in a while where superficial alien fluency with the mere words of a human language masks continuing profound mutual incomprehension. The communication difficulties are neither malicious nor a setup for catastrophic misunderstanding, but an intrinsic part of learning about a truly alien species. I liked this, even though it makes for slower and more frustrating progress. It felt more believable than a lot of first contact, and it forced the characters to take risks and act on hunches and then live with the consequences. 
One of the other things that Cameron does well is maintain the steady rhythm of life on a working ship as a background anchor to the story. I've read a lot of science fiction that shows the day-to-day routine only until something more interesting and plot-focused starts happening and then seems to forget about it entirely. Not here. Marca goes through intense and adrenaline-filled moments requiring risk and fast reactions, and then has to handle promotion write-ups, routine watches, and studying for advancement. Cameron knows that real battles involve long periods of stressful waiting and incorporates them into the book without making them too boring, which requires a lot of writing skill. I prefer the emotional magic of finding a place where one belongs, so I was not as taken with Deep Black as I was with Artifact Space, but that's the inevitable result of plot progression and not really a problem with this book. Marca is absurdly central to the story in ways that have a whiff of "chosen one" dynamics, but if one can suspend one's disbelief about that, the rest of the book is solid. This is, fundamentally, a book about large space battles, so save it for when you're in the mood for that sort of story, but it was a satisfying continuation of the series. I will definitely keep reading. Recommended if you enjoyed Artifact Space. If you didn't, Deep Black isn't going to change your mind. Followed by Whalesong, which is not yet released (and is currently in some sort of limbo for pre-orders in the US, which I hope will clear up). Rating: 7 out of 10

22 September 2025

Vincent Bernat: Akvorado release 2.0

Akvorado 2.0 was released today! Akvorado collects network flows with IPFIX and sFlow. It enriches flows and stores them in a ClickHouse database. Users can browse the data through a web console. This release introduces an important architectural change and other smaller improvements. Let's dive in!
$ git diff --shortstat v1.11.5
 493 files changed, 25015 insertions(+), 21135 deletions(-)

New outlet service The major change in Akvorado 2.0 is splitting the inlet service into two parts: the inlet and the outlet. Previously, the inlet handled all flow processing: receiving, decoding, and enrichment. Flows were then sent to Kafka for storage in ClickHouse:
Akvorado flow processing before the change: flows are received and processed by the inlet, sent to Kafka and stored in ClickHouse
Akvorado flow processing before the introduction of the outlet service
Network flows reach the inlet service using UDP, an unreliable protocol. The inlet must process them fast enough to avoid losing packets. To handle a high number of flows, the inlet spawns several sets of workers to receive flows, fetch metadata, and assemble enriched flows for Kafka. Many configuration options existed for scaling, which increased complexity for users. The code needed to avoid blocking at any cost, making the processing pipeline complex and sometimes unreliable, particularly the BMP receiver.1 Adding new features became difficult without making the problem worse.2 In Akvorado 2.0, the inlet receives flows and pushes them to Kafka without decoding them. The new outlet service handles the remaining tasks:
Akvorado flow processing after the change: flows are received by the inlet, sent to Kafka, processed by the outlet and inserted in ClickHouse
Akvorado flow processing after the introduction of the outlet service
This change goes beyond a simple split:3 the outlet now reads flows from Kafka and pushes them to ClickHouse, two tasks that Akvorado did not handle before. Flows are heavily batched to increase efficiency and reduce the load on ClickHouse using ch-go, a low-level Go client for ClickHouse. When batches are too small, asynchronous inserts are used (e20645). The number of outlet workers scales dynamically (e5a625) based on the target batch size and latency (50,000 flows and 5 seconds by default). This new architecture also allows us to simplify and optimize the code. The outlet fetches metadata synchronously (e20645). The BMP component becomes simpler by removing cooperative multitasking (3b9486). Reusing the same RawFlow object to decode protobuf-encoded flows from Kafka reduces pressure on the garbage collector (8b580f). The effect on Akvorado's overall performance was somewhat uncertain, but a user reported 35% lower CPU usage after migrating from the previous version, plus resolution of the long-standing BMP component issue.

Other changes This new version includes many miscellaneous changes, such as completion for source and destination ports (f92d2e), and automatic restart of the orchestrator service (0f72ff) when its configuration changes, to avoid a common pitfall for newcomers. Let's focus on some key areas for this release: observability, documentation, CI, Docker, Go, and JavaScript.

Observability Akvorado exposes metrics to provide visibility into the processing pipeline and help troubleshoot issues. These are available through Prometheus HTTP metrics endpoints, such as /api/v0/inlet/metrics. With the introduction of the outlet, many metrics moved. Some were also renamed (4c0b15) to match Prometheus best practices. Kafka consumer lag was added as a new metric (e3a778). If you do not have your own observability stack, the Docker Compose setup shipped with Akvorado provides one. You can enable it by activating the profiles introduced for this purpose (529a8f). The prometheus profile ships Prometheus to store metrics and Alloy to collect them (2b3c46, f81299, and 8eb7cd). Redis and Kafka metrics are collected through the exporter bundled with Alloy (560113). Other metrics are exposed using Prometheus metrics endpoints and are automatically fetched by Alloy with the help of some Docker labels, similar to what is done to configure Traefik. cAdvisor was also added (83d855) to provide some container-related metrics. The loki profile ships Loki to store logs (45c684). While Alloy can collect and ship logs to Loki, its parsing abilities are limited: I could not find a way to preserve all metadata associated with structured logs produced by many applications, including Akvorado. Vector replaces Alloy (95e201) and features a domain-specific language, VRL, to transform logs. Annoyingly, Vector currently cannot retrieve Docker logs from before it was started. Finally, the grafana profile ships Grafana, but the shipped dashboards are broken. This is planned for a future version.
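To check by hand what a service currently exports, the Prometheus endpoints mentioned above can be queried directly; a quick sketch, assuming the default Docker Compose setup listening on port 8080:
$ curl -s http://127.0.0.1:8080/api/v0/inlet/metrics | grep '^akvorado_inlet_flow' | head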

Documentation The Docker Compose setup provided by Akvorado makes it easy to get the web interface up and running quickly. However, Akvorado requires a few mandatory steps to be functional. It ships with comprehensive documentation, including a chapter about troubleshooting problems. I hoped this documentation would reduce the support burden. It is difficult to know if it works. Happy users rarely report their success, while some users open discussions asking for help without reading much of the documentation. In this release, the documentation was significantly improved.
$ git diff --shortstat v1.11.5 -- console/data/docs
 10 files changed, 1873 insertions(+), 1203 deletions(-)
The documentation was updated (fc1028) to match Akvorado's new architecture. The troubleshooting section was rewritten (17a272). Instructions on how to improve ClickHouse performance when upgrading from versions earlier than 1.10.0 were added (5f1e9a). An LLM proofread the entire content (06e3f3). Developer-focused documentation was also improved (548bbb, e41bae, and 871fc5). From a usability perspective, table of contents sections are now collapsible (c142e5). Admonitions help draw user attention to important points (8ac894).
Admonition in Akvorado documentation to ask a user not to open an issue or start a discussion before reading the documentation
Example of use of admonitions in Akvorado's documentation

Continuous integration This release includes efforts to speed up continuous integration on GitHub. Coverage and race tests run in parallel (6af216 and fa9e48). The Docker image builds during the tests but gets tagged only after they succeed (8b0dce).
GitHub workflow for CI with many jobs, some of them running in parallel, some not
GitHub workflow to test and build Akvorado
End-to-end tests (883e19) ensure the shipped Docker Compose setup works as expected. Hurl runs tests on various HTTP endpoints, particularly to verify metrics (42679b and 169fa9). For example:
## Test inlet has received NetFlow flows
GET http://127.0.0.1:8080/prometheus/api/v1/query
[Query]
query: sum(akvorado_inlet_flow_input_udp_packets_total{job="akvorado-inlet",listener=":2055"})
HTTP 200
[Captures]
inlet_receivedflows: jsonpath "$.data.result[0].value[1]" toInt
[Asserts]
variable "inlet_receivedflows" > 10
## Test inlet has sent them to Kafka
GET http://127.0.0.1:8080/prometheus/api/v1/query
[Query]
query: sum(akvorado_inlet_kafka_sent_messages_total{job="akvorado-inlet"})
HTTP 200
[Captures]
inlet_sentflows: jsonpath "$.data.result[0].value[1]" toInt
[Asserts]
variable "inlet_sentflows" >= {{inlet_receivedflows}}

Docker Akvorado ships with a comprehensive Docker Compose setup to help users get started quickly. It ensures a consistent deployment, eliminating many configuration-related issues. It also serves as living documentation of the complete architecture. This release brings some small enhancements around Docker: Previously, many Docker images were pulled from the Bitnami Containers library. However, VMWare acquired Bitnami in 2019 and Broadcom acquired VMWare in 2023. As a result, Bitnami images were deprecated in less than a month. This was not really a surprise4. Previous versions of Akvorado had already started moving away from them. In this release, the Apache project's Kafka image replaces the Bitnami one (1eb382). Thanks to the switch to KRaft mode, Zookeeper is no longer needed (0a2ea1, 8a49ca, and f65d20). Akvorado's Docker images were previously compiled with Nix. However, building AArch64 images on x86-64 is slow because it relies on QEMU userland emulation. The updated Dockerfile uses multi-stage and multi-platform builds: one stage builds the JavaScript part on the host platform, one stage builds the Go part cross-compiled on the host platform, and the final stage assembles the image on top of a slim distroless image (268e95 and d526ca).
# This is a simplified version
FROM --platform=$BUILDPLATFORM node:20-alpine AS build-js
RUN apk add --no-cache make
WORKDIR /build
COPY console/frontend console/frontend
COPY Makefile .
RUN make console/data/frontend
FROM --platform=$BUILDPLATFORM golang:alpine AS build-go
RUN apk add --no-cache make curl zip
WORKDIR /build
COPY . .
COPY --from=build-js /build/console/data/frontend console/data/frontend
RUN go mod download
RUN make all-indep
ARG TARGETOS TARGETARCH TARGETVARIANT VERSION
RUN make
FROM gcr.io/distroless/static:latest
COPY --from=build-go /build/bin/akvorado /usr/local/bin/akvorado
ENTRYPOINT [ "/usr/local/bin/akvorado" ]
When building for multiple platforms with --platform linux/amd64,linux/arm64,linux/arm/v7, the build steps before the ARG TARGETOS line execute only once for all platforms. This significantly speeds up the build. Akvorado now ships Docker images for these platforms: linux/amd64, linux/amd64/v3, linux/arm64, and linux/arm/v7. When requesting ghcr.io/akvorado/akvorado, Docker selects the best image for the current CPU. On x86-64, there are two choices. If your CPU is recent enough, Docker downloads linux/amd64/v3. This version contains additional optimizations and should run faster than the linux/amd64 version. It would be interesting to ship an image for linux/arm64/v8.2, but Docker does not support the same mechanism for AArch64 yet (792808).
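For reference, a multi-platform build of such a Dockerfile can be started with Docker Buildx; a sketch where the tag is illustrative, not the project's actual CI invocation:
$ docker buildx build \
    --platform linux/amd64,linux/arm64,linux/arm/v7 \
    --tag ghcr.io/akvorado/akvorado:dev .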

Go This release includes many changes related to Go but not visible to the users.

Toolchain In the past, Akvorado supported the two latest Go versions, preventing immediate use of the latest enhancements. The goal was to allow users of stable distributions to use the Go versions shipped with their distribution to compile Akvorado. However, this became frustrating when interesting features, like go tool, were released. Akvorado 2.0 requires Go 1.25 (77306d) but can be compiled with older toolchains by automatically downloading a newer one (94fb1c).5 Users can still override GOTOOLCHAIN to revert this decision. The recommended toolchain is updated weekly through CI to ensure we get the latest minor release (5b11ec). This change also simplifies updates to newer versions: only go.mod needs updating. Thanks to this change, Akvorado now uses wg.Go() (77306d) and I have started converting some unit tests to the new testing/synctest package (bd787e, 7016d8, and 159085).
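Users who prefer the toolchain provided by their distribution can opt out of the automatic download; the build then simply fails if the local Go is too old:
$ GOTOOLCHAIN=local go build ./...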

Testing When testing equality, I use a helper function Diff() to display the differences when it fails:
got := input.Keys()
expected := []int{1, 2, 3}
if diff := helpers.Diff(got, expected); diff != "" {
    t.Fatalf("Keys() (-got, +want):\n%s", diff)
}
This function uses kylelemons/godebug. This package is no longer maintained and has some shortcomings: for example, by default, it does not compare struct private fields, which may cause unexpectedly successful tests. I replaced it with google/go-cmp, which is stricter and has better output (e2f1df).

Another package for Kafka Another change is the switch from Sarama to franz-go to interact with Kafka (756e4a and 2d26c5). The main motivation for this change is to get a better concurrency model. Sarama heavily relies on channels and it is difficult to understand the lifecycle of an object handed to this package. franz-go uses a more modern approach with callbacks6 that is both more performant and easier to understand. It also ships with a package to spawn fake Kafka broker clusters, which is more convenient than the mocking functions provided by Sarama.

Improved routing table for BMP To store its routing table, the BMP component used kentik/patricia, an implementation of a patricia tree focused on reducing garbage collection pressure. gaissmai/bart is a more recent alternative using an adaptation of Donald Knuth's ART algorithm that promises better performance and delivers it: 90% faster lookups and 27% faster insertions (92ee2e and fdb65c). Unlike kentik/patricia, gaissmai/bart does not help efficiently store values attached to each prefix. I adapted the same approach as kentik/patricia to store route lists for each prefix: store a 32-bit index for each prefix, and use it to build a 64-bit index for looking up routes in a map. This leverages Go's efficient map structure. gaissmai/bart also supports a lockless routing table version, but adopting it is not simple because we would need to extend this to the map storing the routes and to the interning mechanism. I also attempted to use Go's new unique package to replace the intern package included in Akvorado, but performance was worse.7

Miscellaneous Previous versions of Akvorado were using a custom Protobuf encoder for performance and flexibility. With the introduction of the outlet service, Akvorado only needs a simple static schema, so this code was removed. However, it is possible to enhance performance with planetscale/vtprotobuf (e49a74 and 8b580f). Moreover, the dependency on protoc, a C++ program, was somewhat annoying. Therefore, Akvorado now uses buf, written in Go, to convert the Protobuf schema into Go code (f4c879). Another small optimization reduced the size of the Akvorado binary by 10 MB: the static assets embedded in Akvorado are compressed in a ZIP file. These include the ASN database, as well as the SVG images for the documentation. A small layer of code makes this change transparent (b1d638 and e69b91).

JavaScript Recently, two large supply-chain attacks hit the JavaScript ecosystem: one affecting the popular chalk and debug packages and another impacting @ctrl/tinycolor. These attacks also exist in other ecosystems, but JavaScript is a prime target due to heavy use of small third-party dependencies. The previous version of Akvorado relied on 653 dependencies. npm-run-all was removed (3424e8, 132 dependencies). patch-package was removed (625805 and e85ff0, 69 dependencies) by moving missing TypeScript definitions to env.d.ts. eslint was replaced with oxlint, a linter written in Rust (97fd8c, 125 dependencies, including the plugins). I switched from npm to Pnpm, an alternative package manager (fce383). Pnpm does not run install scripts by default8 and prevents installing packages that were published too recently, which helps avoid freshly compromised releases. It is also significantly faster.9 Node.js does not ship Pnpm, but it ships Corepack, which allows us to use Pnpm without installing it. Pnpm can also list the licenses used by each dependency, removing the need for license-compliance (a35ca8, 42 dependencies). For additional speed improvements, beyond switching to Pnpm and Oxlint, Vite was replaced with its faster Rolldown version (463827). After these changes, Akvorado only pulls 225 dependencies.

Next steps I would like to land three features in the next version of Akvorado:
  • Add Grafana dashboards to complete the observability stack. See issue #1906 for details.
  • Integrate OVH's Grafana plugin by providing a stable API for such integrations. Akvorado's web console would still be useful for browsing results, but if you want to build and share dashboards, you should switch to Grafana. See issue #1895.
  • Move some work currently done in ClickHouse (custom dictionaries, GeoIP and IP enrichment) back into the outlet service. This should give more flexibility for adding features like the one requested in issue #1030. See issue #2006.

I started working on splitting the inlet into two parts more than one year ago. I found more motivation in recent months, partly thanks to Claude Code, which I used as a rubber duck. Almost none of the produced code was kept:10 it is like an intern who does not learn.

  1. Many attempts were made to make the BMP component both performant and non-blocking. See for example PR #254, PR #255, and PR #278. Despite these efforts, this component remained problematic for most users. See issue #1461 as an example.
  2. Some features have been pushed to ClickHouse to avoid the processing cost in the inlet. See for example PR #1059.
  3. This is the biggest commit:
    $ git show --shortstat ac68c5970e2c | tail -1
    231 files changed, 6474 insertions(+), 3877 deletions(-)
    
  4. Broadcom is known for its user-hostile moves. Look at what happened with VMware.
  5. As a Debian developer, I dislike these mechanisms that circumvent the distribution package manager. The final straw came when Go 1.25 spent one month in the Debian NEW queue, an arbitrary mechanism I don't like at all.
  6. In the early years of Go, channels were heavily promoted. Sarama was designed during this period. A few years later, a more nuanced approach emerged. See notably Go channels are bad and you should feel bad.
  7. This should be investigated further, but my theory is that the intern package uses 32-bit integers, while unique uses 64-bit pointers. See commit 74e5ac.
  8. This is also possible with npm. See commit dab2f7.
  9. An even faster alternative is Bun, but it is less available.
  10. The exceptions are part of the code for the admonition blocks, the code for collapsing the table of content, and part of the documentation.

21 September 2025

Joey Hess: cheap DIY solar fence design

A year ago I installed a 4 kilowatt solar fence. I'm revisiting it this Sun Day, to share the design, now that I have proved it out.
The solar fence and some other ground and pole mount solar panels, seen through leaves.
Solar fencing manufacturers have some good simple designs, but it's hard to buy for a small installation. They mostly sell to utility-scale solar. And those are installed by driving metal beams into the ground, which requires heavy machinery. Since I have experience with Ironridge rails for roof mount solar, I decided to adapt that system for a vertical mount. Which is something it was not designed for. I combined the Ironridge hardware with regular parts from the hardware store. The cost of mounting solar panels nowadays is often higher than the cost of the panels. I hoped to match the cost, and I nearly did. The solar panels cost $100 each, and the fence cost $110 per solar panel. This fence was significantly cheaper than conventional ground mount arrays that I considered as alternatives, and made better use of a difficult hillside location. I used 7 foot long Ironridge XR-10 rails, which fit 2 solar panels per rail. (Longer rails would need a center post anyway, and the 7 foot long rails have cheaper shipping, since they do not need to be shipped freight.) For the fence posts, I used regular 4x4" treated posts. 12 foot long, set in 3 foot deep post holes, with 3x 50 lb bags of concrete per hole and 6 inches of gravel on the bottom.
detail of how the rails are mounted to the posts, and the panels to the rails
To connect the Ironridge rails to the fence posts, I used the Ironridge LFT-03-M1 slotted L-foot bracket. Screwed into the post with a 5/8 x 3 inch hot-dipped galvanized lag screw. Since a treated post can react badly with an aluminum bracket, there needs to be some flashing between the post and bracket. I used Shurtape PW-100 tape for that. I see no sign of corrosion after 1 year. The rest of the Ironridge system is a T-bolt that connects the rail to the L-foot (part BHW-SQ-02-A1), and Ironridge solar panel fasteners (UFO-CL-01-A1 and UFO-STP-40MM-M1). Also XR-10 end caps and wire clips. Since the Ironridge hardware is not designed to hold a solar panel at a 90 degree angle, I was concerned that the panels might slide downward over time. To help prevent that, I added some additional support brackets under the bottom of the panels. So far, that does not seem to have been a problem though. I installed Aptos 370 watt solar panels on the fence. They are bifacial, and while the posts block the back partially, there is still bifacial gain on cloudy days. I left enough space under the solar panels to be able to run a push mower under them.
Me standing in front of the solar fence at end of construction
I put pairs of posts next to one another, so each 7 foot segment of fence had its own 2 posts. This is the least elegant part of this design, but fitting 2 brackets next to one another on a single post isn't feasible. I bolted the pairs of posts together with some spacers. A side benefit of doing it this way is that treated lumber can warp as it dries, and this prevented much twisting of the posts. Using separate posts for each segment also means that the fence can traverse a hill easily. And it does not need to be perfectly straight. In fact, my fence has a 30 degree bend in the middle. This means it has both south facing and south-west facing panels, so can catch the light for longer during the day. After building the fence, I noticed there was a slight bit of sway at the top, since 9 feet of wooden post is not entirely rigid. My worry was that a gusty wind could rattle the solar panels. While I did not actually observe that happening, I added some diagonal back bracing for peace of mind.
view of rear upper corner of solar fence, showing back bracing connection
Inspecting the fence today, I find no problems after the first year. I hope it will last 30 years, with the lifespan of the treated lumber being the likely determining factor. As part of my larger (and still ongoing) ground mount solar install, the solar fence has consistently provided great power. The vertical orientation works well at latitude 36. It also turned out that the back of the fence was useful to hang conduit, wiring and solar equipment, and so it turned into the electrical backbone of my whole solar field. But that's another story...

solar fence parts list
quantity cost per unit description
10 $27.89 7 foot Ironridge XR-10 rail
12 $20.18 12 foot treated 4x4
30 $4.86 Ironridge UFO-CL-01-A1
20 $0.87 Ironridge UFO-STP-40MM-M1
1 $12.62 Ironridge XR-10 end caps (20 pack)
20 $2.63 Ironridge LFT-03-M1
20 $1.69 Ironridge BHW-SQ-02-A1
22 $2.65 5/8 x 3 inch hot-dipped galvanized lag screw
10 $0.50 6" gravel per post
30 $6.91 50 lb bags of Quikrete
1 $15.00 Shurtape PW-100 Corrosion Protection Pipe Wrap Tape
N/A $30 other bolts and hardware (approximate)
$1100 total (Does not include cost of panels, wiring, or electrical hardware.)

19 September 2025

Dirk Eddelbuettel: RcppArmadillo 15.0.2-2 on CRAN: Transition to Armadillo 15

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language and is widely used by (currently) 1261 other packages on CRAN, downloaded 41.4 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 647 times according to Google Scholar. This version updates the 15.0.2-1 release from last week. Following fairly extensive email discussions with CRAN, we are now accelerating the transition to Armadillo 15. When C++14 or newer is used (which after all is the default since R 4.1.0 released May 2021, see WRE Section 1.2.4), or when opted into, the newer Armadillo is selected. If on the other hand either C++11 is still forced, or the legacy version is explicitly selected (which currently one package at CRAN does), then Armadillo 14.6.3 is selected. Most packages will not see a difference and automatically switch to the newer Armadillo. However, some packages will see one or two types of warning. First, if C++11 is still actively selected via, for example, CXX_STD, then CRAN will nudge a change to a newer compilation standard (as they have been doing for some time already). Preferably the change should be to simply remove the constraint and let R pick the standard based on its version and compiler availability. These days that gives us C++17 in most cases; see WRE Section 1.2.4 for details. (Some packages may need C++14 or C++17 or C++20 explicitly and can also do so.) Second, some packages may see a deprecation warning. Up until Armadillo 14.6.3, the package suppressed these, and you can still get that effect by opting into that legacy version by setting -DARMA_USE_LEGACY. (However, this route will be sunset eventually too.) But one really should update the code to the non-deprecated version. In a large number of cases this simply means switching from using arma::is_finite() (typically called on a scalar double) to calling std::isfinite(). But there are some other cases, and we will help as needed. If you maintain a package showing deprecation warnings, and are lost here and cannot work out the conversion to current coding styles, please open an issue at the RcppArmadillo repository (i.e. here) or in your own repository and tag me. I will also reach out to the maintainers of a smaller set of packages with more than one reverse dependency. A few small changes have been made to internal packaging and documentation, a small synchronization with upstream for two commits since the 15.0.2 release, as well as a link to the ldlasb2 repository and its demonstration regarding some ill-stated benchmarks done elsewhere. The detailed changes since the last CRAN release follow.

Changes in RcppArmadillo version 15.0.2-2 (2025-09-18)
  • Minor update to skeleton Makevars, Makevars.win
  • Update README.md to mention ldlasb2 repository
  • Minor documentation update (#487)
  • Synchronized with Armadillo upstream (#488)
  • Refine Armadillo version selection in coordination with CRAN maintainers to support transition towards Armadillo 15.0.*

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

14 September 2025

Otto Kekäläinen: Zero-configuration TLS and password management best practices in MariaDB 11.8

Locking down database access is probably the single most important thing for a system administrator or software developer to prevent their application from leaking its data. As MariaDB 11.8 is the first long-term supported version with a few new key security features, let's recap what the most important things are every DBA should know about MariaDB in 2025. Back in the old days, MySQL administrators had a habit of running the clumsy mysql_secure_installation script, but it has long been obsolete. A modern MariaDB database server is already secure by default and locked down out of the box, and no such extra scripts are needed. On the contrary, the database administrator is expected to open up access to MariaDB according to the specific needs of each server. Therefore, it is important that the DBA can understand and correctly configure three things:
  1. Creating separate application-specific users with granular permissions that allow only necessary access and no more.
  2. Distributing and storing passwords and credentials securely.
  3. Ensuring all remote connections are properly encrypted.
For holistic security, one should also consider proper auditing, logging, backups, regular security updates and more, but in this post we will focus only on the above aspects related to securing database access.

How encrypting database connections with TLS differs from web server HTTP(S) Even though MariaDB (and other databases) use the same SSL/TLS protocol for encrypting remote connections as web servers and HTTPS, the way it is implemented is significantly different, and the different security assumptions are important for a database administrator to grasp. Firstly, most HTTP requests to a web server are unauthenticated, meaning the web server serves public web pages and does not require users to log in. Traditionally, when a user logged in over an HTTP connection, the username and password were transmitted in plaintext in an HTTP POST request. Modern TLS, which was previously called SSL, does not change how HTTP works but simply encapsulates it. When using HTTPS, a web browser and a web server will start an encrypted TLS connection as the very first thing, and only once it is established do they send HTTP requests and responses inside it. There are no passwords or other shared secrets needed to form the TLS connection. Instead, the web server relies on a trusted third party, a Certificate Authority (CA), to vet that the TLS certificate offered by the web server can be trusted by the web browser. For a database server like MariaDB, the situation is quite different. All users need to first authenticate and log in to the server before being allowed to run any SQL and get any data out of the server. The database server and client programs have built-in authentication methods, and passwords are not, and have never been, sent in plaintext. Over the years, MySQL and its successor, MariaDB, have had multiple password authentication methods: the original SHA-1-based hashing, later the double-SHA-1-based mysql_native_password, followed by sha256_password and caching_sha256_password in MySQL and ed25519 in MariaDB. The MariaDB.org blog post by Sergei Golubchik recaps the history of these well. Even though most modern MariaDB installations should be using TLS to encrypt all remote connections in 2025, having the authentication method be as secure as possible still matters, because authentication is done before the TLS connection is fully established. To further harden the authentication against man-in-the-middle attacks, a new password authentication method, PARSEC, was introduced in MariaDB 11.8. It builds upon the previous ed25519 public-key-based verification (similar to how modern SSH works) and combines it with PBKDF2 key derivation using hash functions (SHA-512, SHA-256) and a high iteration count. At first it may seem like a disadvantage to not wrap all connections in a TLS tunnel like HTTPS does, but having the authentication done in a MitM-resistant way regardless of the connection encryption status actually allows a clever extra capability that is now available in MariaDB: as the database server and client already have a shared secret that is being used by the server to authenticate the user, it can also be used by the client to validate the server's TLS certificate, and no third parties like CAs or root certificates are needed. MariaDB 11.8 was the first LTS version to ship with this capability for zero-configuration TLS. Note that the zero-configuration TLS also works with older password authentication methods and does not require users to have PARSEC enabled. As PARSEC is not yet the default authentication method in MariaDB, it is recommended to enable it in installations that use zero-configuration TLS encryption to maximize the security of the TLS certificate validation.

Why the root user in MariaDB has no password and how it makes the database more secure Relying on passwords for security is problematic, as there is always a risk that they could leak, and a malicious user could access the system using the leaked password. It is unfortunately far too common for database passwords to be stored in plaintext in configuration files that are accidentally committed into version control and published on GitHub and similar platforms. Every application or administrative password that exists should be tracked to ensure only people who need it know it, and rotated at regular intervals to ensure old employees etc. won't be able to use old passwords. This password management is complex and error-prone. Replacing passwords with other authentication methods is always advisable when possible. On a database server, whoever installed the database by running e.g. apt install mariadb-server, and configured it with e.g. nano /etc/mysql/mariadb.cnf, already has full root access to the operating system, and asking them for a password to access the MariaDB database shell is moot, since they could circumvent any checks by directly accessing the files on the system anyway. Therefore, since version 10.4, MariaDB stopped requiring the root user to enter a password when connecting locally, and instead checks using socket authentication whether the user is the operating-system root user or equivalent (e.g. running sudo). This is an elegant way to get rid of a password that was actually unnecessary to begin with. As there is no root password anymore, the risk of an external user accessing the database as root with a leaked password is fully eliminated. Note that socket authentication only works for local connections on the same server. If you want to access a MariaDB server remotely as the root user, you would need to configure a password for it first. This is not generally recommended, as explained in the next section.

Create separate database users for normal use and keep root for administrative use only Out of the box a MariaDB installation is already secure by default, and only the local root user can connect to it. This account is intended for administrative use only, and for regular daily use you should create separate database users with access limited to the databases they need and the permissions required. The most typical commands needed to create a new database for an app and a user the app can use to connect to the database would be the following:
CREATE DATABASE app_db;
CREATE USER 'app_user'@'%' IDENTIFIED BY 'your_secure_password';
GRANT ALL PRIVILEGES ON app_db.* TO 'app_user'@'%';
FLUSH PRIVILEGES;
Alternatively, if you want to use the parsec authentication method, run this to create the user:
CREATE OR REPLACE USER 'app_user'@'%'
 IDENTIFIED VIA parsec
 USING PASSWORD('your_secure_password');
Note that the plugin auth_parsec is not enabled by default. If you see the error message ERROR 1524 (HY000): Plugin 'parsec' is not loaded, fix this by running INSTALL SONAME 'auth_parsec';. In the CREATE USER statements, the @'%' means that the user is allowed to connect from any host. This needs to be defined, as MariaDB always checks permissions based on both the username and the remote IP address or hostname of the user, combined with the authentication method. Note that it is possible to have multiple user@remote combinations, and they can have different authentication methods. A user could, for example, be allowed to log in locally using socket authentication and over the network using a password. If you are running a custom application and you know exactly what permissions are sufficient for the database users, replace ALL PRIVILEGES with a subset of the privileges listed in the MariaDB documentation. For new permissions to take effect, restart the database or run FLUSH PRIVILEGES.

Allow MariaDB to accept remote connections and enforce TLS Using the above 'app_user'@'%' is not enough on its own to allow remote connections to MariaDB. The MariaDB server also needs to be configured to listen on a network interface to accept remote connections. As MariaDB is secure by default, it only accepts connections from localhost until the administrator updates its configuration. On a typical Debian/Ubuntu system, the recommended way is to drop a new custom config in e.g. /etc/mysql/mariadb.conf.d/99-server-customizations.cnf, with the contents:
[mariadbd]
# Listen for connections from anywhere
bind-address = 0.0.0.0
# Only allow TLS encrypted connections
require-secure-transport = on
For settings to take effect, restart the server with systemctl restart mariadb. After this, the server will accept connections on any network interface. If the system is using a firewall, port 3306 would additionally need to be allow-listed. To confirm that the settings took effect, run e.g. mariadb -e "SHOW VARIABLES LIKE 'bind_address';", which should now show 0.0.0.0. When allowing remote connections, it is important to always define require-secure-transport = on to enforce that only TLS-encrypted connections are allowed. If the server is running MariaDB 11.8 and the clients are also MariaDB 11.8 or newer, no additional configuration is needed thanks to MariaDB automatically providing TLS certificates and appropriate certificate validation in recent versions. On older long-term-supported versions of the MariaDB server, one would have had to manually create the certificates, configure the ssl_key, ssl_cert and ssl_ca values on the server, and distribute the certificate to the clients as well, which was cumbersome, so it is good that this is no longer required. In MariaDB 11.8 the only additional related config that might still be worth setting is tls_version = TLSv1.3 to ensure only the latest TLS protocol version is used. Finally, test connections to ensure they work and to confirm that TLS is used by running e.g.:
mariadb --user=app_user --password=your_secure_password \
 --host=192.168.1.66 -e '\s'
The result should show something along the lines of:
--------------
mariadb from 11.8.3-MariaDB, client 15.2 for debian-linux-gnu (x86_64)
...
Current user: app_user@192.168.1.66
SSL: Cipher in use is TLS_AES_256_GCM_SHA384, cert is OK
...
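Note that the zero-configuration certificate validation is a feature of the MariaDB client library; third-party drivers must be configured for TLS explicitly. As an illustration, a minimal Go sketch using go-sql-driver/mysql with the credentials from the example above might look like this (tls=true verifies the server certificate against the system trust store, so MariaDB's auto-generated self-signed certificate would instead need a custom tls.Config registered via mysql.RegisterTLSConfig):
package main

import (
    "database/sql"
    "log"

    _ "github.com/go-sql-driver/mysql"
)

func main() {
    // tls=true requires an encrypted connection and verifies the server
    // certificate against the system roots. For a self-signed server
    // certificate, register a custom tls.Config with
    // mysql.RegisterTLSConfig and reference it by name in the DSN instead.
    dsn := "app_user:your_secure_password@tcp(192.168.1.66:3306)/app_db?tls=true"
    db, err := sql.Open("mysql", dsn)
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()
    if err := db.Ping(); err != nil {
        log.Fatal(err)
    }
    log.Println("connected with TLS enforced")
}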
If running a Debian/Ubuntu system, see the bundled README with zcat /usr/share/doc/mariadb-server/README.Debian.gz to read more configuration tips.

Should TLS encryption be used also on internal networks? If a database server and app are running on the same private network, the chances that the connection gets eavesdropped on or man-in-the-middle attacked by a malicious user are low. However, the risk is not zero, and if an attack happens, it can be difficult to detect or prove that it didn't happen. The benefit of using end-to-end encryption is that both the database server and the client can validate the certificates and keys used, log it, and later have the logs audited to prove that connections were indeed encrypted and show how they were encrypted. If all the computers on an internal network already have centralized user account management and centralized log collection that includes all database sessions, reusing existing SSH connections, SOCKS proxies, dedicated HTTPS tunnels, point-to-point VPNs, or similar solutions might also be a practical option. Note that the zero-configuration TLS only works with password validation methods. This means that systems configured to use PAM or Kerberos/GSSAPI can't use it, but again those systems are typically part of a centrally configured network anyway and are likely to have certificate authorities and key distribution or network encryption facilities already set up. In a typical software app stack, however, the simplest solution is often the best, and I recommend DBAs use the end-to-end TLS encryption in MariaDB 11.8 in most cases. Hopefully with these tips you can enjoy having your MariaDB deployments both simpler and more secure than before!

12 September 2025

Christoph Berg: The Cost of TDE and Checksums in PGEE

It's been a while since the last performance check of Transparent Data Encryption (TDE) in Cybertec's PGEE distribution - that was in 2016. Of course, the question is still interesting, so I did some benchmarks. Since the difference is really small between running without any extras, with data checksums turned on, and with both encryption and checksums turned on, we need to pick a configuration that will stress-test these features the most. So in the spirit of making PostgreSQL deliberately run slow, I went with only 1MB of shared_buffers and a pgbench workload of scale factor 50. The 770MB database will easily fit into RAM. However, having such a small buffer cache setting will cause a lot of cache misses, with pages re-read from the OS disk cache, checksums verified, and pages decrypted again. To further increase the effect, I ran pgbench --skip-some-updates so the smaller, in-cache-anyway pgbench tables are not touched. Overall, this yields a pretty consistent buffer cache hit rate of only 82.8%. Here are the PGEE 17.6 tps (transactions per second) numbers averaged over a few 1-minute 3-client pgbench runs for different combinations of data checksums (on/off) and TDE (off, or on with the various supported key bit lengths):
         no checksums          data checksums
no TDE   2455.6  100.00 %      2449.7  99.76 %
128 bits 2440.9   99.40 %      2443.3  99.50 %
192 bits 2439.6   99.35 %      2446.1  99.61 %
256 bits 2450.3   99.78 %      2443.1  99.49 %
There is a lot of noise in the individual runtimes before averaging, so the numbers must be viewed with some care (192-bit TDE is certainly not faster with checksums than without), but if we dare to interpret these tiny differences, we can conclude the following: even in this deliberately cache-hostile setup, the combined overhead of data checksums and TDE stays below 1%, and any workload with a better shared_buffers cache hit rate would see an even lower penalty from enabling checksums and TDE. The post The Cost of TDE and Checksums in PGEE appeared first on CYBERTEC PostgreSQL Services & Support.
