
20 April 2025

Russ Allbery: Review: Up the Down Staircase

Review: Up the Down Staircase, by Bel Kaufman
Publisher: Vintage Books
Copyright: 1964, 1991, 2019
Printing: 2019
ISBN: 0-525-56566-3
Format: Kindle
Pages: 360
Up the Down Staircase is a novel (in an unconventional format, which I'll describe in a moment) about the experiences of a new teacher in a fictional New York City high school. It was a massive best-seller in the 1960s and inspired a 1967 movie, but it seems to have dropped out of public discussion. I read it from the library sometime in the late 1980s or early 1990s and have thought about it periodically ever since. It was Bel Kaufman's first novel.

Sylvia Barrett is a new graduate with a master's degree in English, where she specialized in Chaucer. As Up the Down Staircase opens, it is her first day as an English teacher at Calvin Coolidge High School. As she says in a letter to a college friend:
What I really had in mind was to do a little teaching. "And gladly wolde he lerne, and gladly teche" like Chaucer's Clerke of Oxenford. I had come eager to share all I know and feel; to imbue the young with a love for their language and literature; to instruct and to inspire. What happened in real life (when I had asked why they were taking English, a boy said: "To help us in real life") was something else again, and even if I could describe it, you would think I am exaggerating.
She instead encounters chaos and bureaucracy, broken windows and mindless regulations, a librarian who is so protective of her books that she doesn't let any students touch them, a school guidance counselor who thinks she's Freud, and a principal whose sole interaction with the school is to occasionally float through on a cushion of cliches, dispensing utterly useless wisdom only to vanish again.
I want to take this opportunity to extend a warm welcome to all faculty and staff, and the sincere hope that you have returned from a healthful and fruitful summer vacation with renewed vim and vigor, ready to gird your loins and tackle the many important and vital tasks that lie ahead undaunted. Thank you for your help and cooperation in the past and future. Maxwell E. Clarke
Principal
In practice, the school is run by James J. McHare, Clarke's administrative assistant, who signs his messages JJ McH, Adm. Asst. and whom Sylvia immediately starts calling Admiral Ass. McHare is a micro-managing control freak who spends the book desperately attempting to impose order on school procedures, the teachers, and the students, with very little success. The title of the book comes from one of his detention slips:
Please admit bearer to class Detained by me for going Up the Down staircase and subsequent insolence. JJ McH
The conceit of this book is that, except for the first and last chapters, it consists only of memos, letters, notes, circulars, and other paper detritus, often said to come from Sylvia's wastepaper basket. Sylvia serves as the first-person narrator through her long letters to her college friend, and through shorter but more frequent exchanges via intraschool memo with Beatrice Schachter, another English teacher at the same school, but much of the book lies outside her narration. The reader has to piece together what's happening from the discarded paper of a dysfunctional institution.

Amid the bureaucratic and personal communications, there are frequent chapters with notes from the students, usually from the suggestion box that Sylvia establishes early in the book. These start as chaotic glimpses of often-misspelled wariness or open hostility, but over the course of Up the Down Staircase, some of the students become characters with fragmentary but still visible story arcs. This remains confusing throughout the novel (there are too many students to keep them entirely straight, and several of them use pseudonyms for the suggestion box), but it's the sort of confusion that feels like an intentional authorial choice. It mirrors the difficulty a teacher has in piecing together and remembering the stories of individual students in overstuffed classrooms, even if (like Sylvia and unlike several of her colleagues) the teacher is trying to pay attention.

At the start, Up the Down Staircase reads as mostly-disconnected humor. There is a strong "kids say the darnedest things" vibe, which didn't entirely work for me, but the send-up of chaotic bureaucracy is both more sophisticated and more entertaining. It has the "laugh so that you don't cry" absurdity of a system with insufficient resources, entirely absent management, and colleagues who have let their quirks take over their personalities.
Sylvia alternates between incredulity and stubbornness, and I think this book is at its best when it shows the small acts of practical defiance that one uses to carve out space and coherence from mismanaged bureaucracy.

But this book is not just a collection of humorous anecdotes about teaching high school. Sylvia is sincere in her desire to teach, which crystallizes around, but is not limited to, a quixotic attempt to reach one delinquent that everyone else in the school has written off. She slowly finds her footing, she has a few breakthroughs in reaching her students, and the book slowly turns into an earnest portrayal of an attempt to make the system work despite its obvious unfitness for purpose. This part of the book is hard to review. Parts of it worked brilliantly; I could feel myself both adjusting my expectations alongside Sylvia to something less idealistic and also celebrating the rare breakthrough with her. Parts of it were weirdly uncomfortable in ways that I'm not sure I enjoyed. That includes Sylvia's climactic conversation with the boy she's been trying to reach, which was weirdly charged and ambiguous in a way that felt like the author's reach exceeding their grasp.

One thing that didn't help my enjoyment is Sylvia's relationship with Paul Barringer, another of the English teachers and a frustrated novelist and poet. Everyone who works at the school has found their own way to cope with the stress and chaos, and many of the ways that seem humorous turn out to have a deeper logic and even heroism. Paul's, however, is to retreat into indifference and alcohol. He is a believable character who works with Kaufman's themes, but he's also entirely unlikable. I never understood why Sylvia tolerated that creepy asshole, let alone kept having lunch with him. It is clear from the plot of the book that Kaufman at least partially understands Paul's deficiencies, but that did not help me enjoy reading about him.
This is a great example of a book that tried to do something unusual and risky and didn't entirely pull it off. I like books that take a risk, and sometimes Up the Down Staircase is very funny or suddenly insightful in a way that I'm not sure Kaufman could have reached with a more traditional novel. It takes a hard look at what it means to try to make a system work when it's clearly broken and you can't change it, and the way all of the characters arrive at different answers that are much deeper than their initial impressions was subtle and effective. It's the sort of book that sticks in your head, as shown by the fact that I bought it on a whim to re-read some 35 years after I first read it. But it's not consistently great. Some parts of it drag, the characters are frustratingly hard to keep track of, and the emotional climax points are odd and unsatisfying, at least to me. I'm not sure whether to recommend it or not, but it's certainly unusual. I'm glad I read it again, but I probably won't re-read it for another 35 years, at least.

If you are considering getting this book, be aware that it has a lot of drawings and several hand-written letters. The publisher of the edition I read did a reasonably good job formatting this for an ebook, but some of the pages, particularly the hand-written letters, were extremely hard to read on a Kindle. Consider paper, or at least reading on a tablet or computer screen, if you don't want to have to puzzle over low-resolution images.

The 1991 trade paperback had a new introduction by the author, reproduced in the edition I read as an afterword (which is a better choice than an introduction). It is a long and fascinating essay from Kaufman about her experience with the reaction to this book, culminating in a passionate plea for supporting public schools and public school teachers. Kaufman's personal account adds a lot of depth to the story; I highly recommend it.
Content note: Self-harm, plus several scenes that are closely adjacent to student-teacher relationships. Kaufman deals frankly with the problems of mostly-poor high school kids, including sexuality, so be warned that this is not the humorous romp it might appear to be at first glance. A couple of the scenes made me uncomfortable; there isn't anything explicit, but the emotional overtones can be pretty disturbing. Rating: 7 out of 10

17 April 2025

Simon Josefsson: Verified Reproducible Tarballs

Remember the XZ Utils backdoor? One factor that enabled the attack was poor auditing of the release tarballs for differences compared to the Git version-controlled source code. This proved to be a useful place to distribute malicious data. The differences between release tarballs and upstream Git sources are typically vendored and generated files. Lots of them. Auditing all source tarballs in a distribution for similar issues is hard and boring work for humans. Wouldn't it be better if that human auditing time could be spent auditing the actual source code stored in upstream version control instead? That's where auditing time would help the most. Are there better ways to address the concern about differences between version control sources and tarball artifacts? Let's consider some approaches: While I like the properties of the first solution, and have made an effort to support that approach, I don't think normal source tarballs are going away any time soon. I am concerned that it may not even be a desirable complete solution to this problem. We may need tarballs with pre-generated content in them for various reasons that aren't entirely clear to us today. So let's consider the second approach. It could help while waiting for more experience with the first approach, to see if there are any fundamental problems with it. How do you know that the XZ release tarballs were actually derived from their version control sources? The same for Gzip? Coreutils? Tar? Sed? Bash? GCC? We don't know this! I am not aware of any automated or collaborative effort to perform this independent confirmation, nor am I aware of anyone attempting to do this on a regular basis. We would want to be able to do this in the year 2042 too. I think the best way to reach that is to do the verification continuously in a pipeline, fixing bugs as time passes. The current state of the art seems to be that people audit the differences manually and hope to find something.
I suspect many package maintainers ignore the problem, take the release source tarballs, and trust upstream about this. We can do better. I have launched a project to set up a GitLab pipeline that invokes per-release scripts to rebuild that release artifact from git sources. Currently it only contains recipes for projects that I released myself: releases which were done in a controlled way, with considerable care to make reproducing the tarballs possible. The project homepage is here: https://gitlab.com/debdistutils/verify-reproducible-releases The project is able to reproduce the release tarballs for Libtasn1 v4.20.0, InetUtils v2.6, Libidn2 v2.3.8, Libidn v1.43, and GNU SASL v2.2.2. You can see this in a recent successful pipeline. All of those releases were prepared using Guix, and I'm hoping the Guix time-machine will make it possible to keep re-generating these tarballs for many years to come. I spent some time trying to reproduce the current XZ release tarball for version 5.8.1. That would have been a nice example, wouldn't it? First I had to somehow mimic upstream's build environment. The XZ release tarball contains GNU Libtool files that are identified with version 2.5.4.1-baa1-dirty. I initially assumed this was due to the maintainer having installed libtool from git locally (after making some modifications) and made the XZ release using it. Later I learned that it may actually be coming from ArchLinux, which ships with this particular libtool version. It seems weird for a distribution to use a libtool built from a non-release tag, and furthermore to apply patches to it, but things are what they are. I made some effort to set up an ArchLinux build environment, however the now-current Gettext version in ArchLinux seems to be more recent than the one that was used to prepare the XZ release. I don't know enough about ArchLinux to set up an environment corresponding to an earlier version of ArchLinux, which would be required to finish this.
I gave up; maybe the XZ release wasn't prepared on ArchLinux after all. Actually, XZ became a good example for this writeup anyway: while you would think this should be trivial, the fact is that it isn't! (There is another aspect here: fingerprinting the versions used to prepare release tarballs allows you to infer what kind of OS maintainers are using to make releases, which is interesting on its own.) I made some small attempts to reproduce the tarball for GNU Shepherd version 1.0.4 too, but I still haven't managed to complete it. Do you want a supply-chain challenge for the Easter weekend? Pick some well-known software and try to re-create the official release tarballs from the corresponding Git checkout. Is anyone able to reproduce anything these days? Bonus points for wrapping it up as a merge request to my project. Happy Supply-Chain Security Hacking!
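The heart of any such per-release recipe is producing the archive deterministically, so that two independent builds can be compared byte-for-byte. As a rough illustration of the idea only (this is not one of the project's actual scripts, and the file names are made up), GNU tar and gzip can be pinned down like this:

```shell
# Sketch: build a tarball twice from the same source tree and confirm the
# results are bit-for-bit identical. Paths and names are illustrative.
set -e
mkdir -p demo-src
echo 'hello' > demo-src/file.txt

make_tarball() {
  # Pin down the usual sources of tarball nondeterminism: file order,
  # timestamps, and ownership. gzip -n omits the embedded timestamp.
  tar --sort=name --mtime='2025-01-01 00:00:00 UTC' \
      --owner=0 --group=0 --numeric-owner \
      -cf - demo-src | gzip -n > "$1"
}

make_tarball release-a.tar.gz
sleep 1   # a later build time must not change the result
make_tarball release-b.tar.gz

sha256sum release-a.tar.gz release-b.tar.gz
```

If the checksums match, anyone with the same tool versions can re-derive the official tarball from the Git tag and confirm nothing extra was smuggled in, which is exactly the check the XZ backdoor evaded.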

Arturo Borrero González: My experience in the Debian LTS and ELTS projects

Last year, I decided to start participating in the Debian LTS and ELTS projects. It was a great opportunity to engage in something new within the Debian community. I had been following these projects for many years, observing their evolution and how they gained traction both within the ecosystem and across the industry. I was curious to explore how contributors were working internally, especially how they managed security patching and remediation for older software. I've always felt this was a particularly challenging area, and I was fortunate to experience it firsthand. As of April 2025, the Debian LTS project was primarily focused on providing security maintenance for Debian 11 Bullseye. Meanwhile, the Debian ELTS project was targeting Debian 8 Jessie, Debian 9 Stretch, and Debian 10 Buster. During my time with the projects, I worked on a variety of packages and CVEs. Some of the most notable ones include: There are several technical highlights I'd like to share, things I learned or had to apply while participating: In March 2025, I decided to scale back my involvement in the projects due to some changes in my personal life. Still, this experience has been one of the highlights of my career, and I would definitely recommend it to others. I'm very grateful for the warm welcome I received from the LTS/ELTS community, and I don't rule out the possibility of rejoining the LTS/ELTS efforts in the future. The Debian LTS/ELTS projects are currently coordinated by folks at Freexian. Many thanks to Freexian and the sponsors for providing this opportunity!

16 April 2025

Otto Kekäläinen: Going Full-Time as an Open Source Developer

After careful consideration, I've decided to embark on a new chapter in my professional journey. I've left my position at AWS to dedicate at least the next six months to developing open source software and strengthening digital ecosystems. My focus will be on contributing to Linux distributions (primarily Debian) and other critical infrastructure components that our modern society depends on, but which may not receive adequate attention or resources.

The Evolution of Open Source

Open source won. Over the 25+ years I've been involved in the open source movement, I've witnessed its remarkable evolution. Today, Linux powers billions of devices, from tiny embedded systems and Android smartphones to massive cloud datacenters and even space stations. Examine any modern large-scale digital system, and you'll discover it's built upon thousands of open source projects. I feel the priority for the open source movement should no longer be increasing adoption, but rather solving how to best maintain the vast ecosystem of software. This requires building robust institutions and processes to secure proper resourcing and to ensure the collaborative development process remains efficient and leads to ever-increasing software quality.

What is Special About Debian?

Debian, established in 1993 by Ian Murdock, stands as one of these institutions that has demonstrated exceptional resilience. There is no single authority, but instead a complex web of various stakeholders, each with their own goals and sources of funding. Every idea needs to be championed at length to a wide audience and implemented through a process of organic evolution. Thanks to this approach, Debian has been consistently delivering production-quality, universally useful software for over three decades. Having been a Debian Developer for more than ten years, I'm well-positioned to contribute meaningfully to this community. If your organization relies on Debian or its derivatives such as Ubuntu, and you're interested in funding cyber infrastructure maintenance by sponsoring Debian work, please don't hesitate to reach out. This could include package maintenance and version currency, improving automated upgrade testing, general quality assurance, and supply chain security enhancements.
The best way to reach me is by e-mail: otto at debian.org. You can also book a 15-minute chat with me for a quick introduction.

Grow or Die

My four-year tenure as a Software Development Manager at Amazon Web Services was very interesting. I'm grateful for my time at AWS and proud of my team's accomplishments, particularly for creating an open source contribution process that took Amazon from zero to the largest external contributor to the MariaDB open source database. During this time, I got to experience and witness a plethora of interesting things. I will surely share some of my key learnings in future blog posts. Unfortunately, the rate of progress in this mammoth 1.5-million-employee organization was slowing down, and I didn't feel I learned much new in recent years. This realization, combined with the opportunity cost of not spending enough time on new cutting-edge technology, motivated me to take this leap. Being a full-time open source developer may not be the most financially lucrative idea, but I think it is an excellent way to force myself to truly assess what is important on a global scale and what areas I want to contribute to. Working fully on open source presents a fascinating duality: you're not bound by any external resource or schedule limitations, and the progress you make is directly proportional to how much energy you decide to invest. Yet you also depend on collaboration with people you might never meet and who are not financially incentivized to collaborate. This will undoubtedly expose me to all kinds of challenges. But what could be better for fostering holistic personal growth? I know that deep down in my DNA, I am not made to stay cozy or to do easy things. I need momentum. OK, let's get going!

11 April 2025

Reproducible Builds: Reproducible Builds in March 2025

Welcome to the third report in 2025 from the Reproducible Builds project. Our monthly reports outline what we've been up to over the past month and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. As usual, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. Table of contents:
  1. Debian bookworm live images now fully reproducible from their binary packages
  2. How NixOS and reproducible builds could have detected the xz backdoor
  3. LWN: Fedora change aims for 99% package reproducibility
  4. Python adopts PEP standard for specifying package dependencies
  5. OSS Rebuild real-time validation and tooling improvements
  6. SimpleX Chat server components now reproducible
  7. Three new scholarly papers
  8. Distribution roundup
  9. An overview of Supply Chain Attacks on Linux distributions
  10. diffoscope & strip-nondeterminism
  11. Website updates
  12. Reproducibility testing framework
  13. Upstream patches

Debian bookworm live images now fully reproducible from their binary packages

Roland Clobus announced on our mailing list this month that all the major desktop variants (i.e. GNOME, KDE, etc.) can be reproducibly created for Debian bullseye, bookworm and trixie from their (pre-compiled) binary packages. Building reproducible Debian live images does not require building from reproducible source code, but this is still a remarkable achievement. A large proportion of the binary packages that comprise these live images can be (and were) built reproducibly, but live image generation works at a higher level. (By contrast, full or end-to-end reproducibility of a bootable OS image will, in time, require both the compile-the-packages and the build-the-bootable-image stages to be reproducible.) Nevertheless, Roland's announcement generated significant congratulations as well as some discussion regarding the finer points of the terms employed: a full outline of the replies can be found here. The news was also picked up by Linux Weekly News (LWN) as well as by Hacker News.

How NixOS and reproducible builds could have detected the xz backdoor

Julien Malka (aka luj) published an in-depth blog post this month with the highly stimulating title "How NixOS and reproducible builds could have detected the xz backdoor for the benefit of all". Starting with a dive into the relevant technical details of the XZ Utils backdoor, Julien's article goes on to describe how we might avoid the xz catastrophe in the future by building software from trusted sources and building trust into untrusted release tarballs by way of comparing sources and leveraging bitwise reproducibility, i.e. applying the practices of Reproducible Builds. The article generated significant discussion on Hacker News as well as on Linux Weekly News (LWN).

LWN: Fedora change aims for 99% package reproducibility

Linux Weekly News (LWN) contributor Joe Brockmeier has published a detailed round-up on how "Fedora change aims for 99% package reproducibility". The article opens by mentioning that although Debian "has been working toward reproducible builds for more than a decade", the Fedora project has now:
progressed far enough that the project is now considering a change proposal for the Fedora 43 development cycle, expected to be released in October, with a goal of making 99% of Fedora's package builds reproducible. So far, reaction to the proposal seems favorable and focused primarily on how to achieve the goal with minimal pain for packagers rather than whether to attempt it.
The Change Proposal itself is worth reading:
Over the last few releases, we [Fedora] changed our build infrastructure to make package builds reproducible. This is enough to reach 90%. The remaining issues need to be fixed in individual packages. After this Change, package builds are expected to be reproducible. Bugs will be filed against packages when an irreproducibility is detected. The goal is to have no fewer than 99% of package builds reproducible.
Further discussion can be found on the Fedora mailing list as well as on Fedora s Discourse instance.

Python adopts PEP standard for specifying package dependencies

Python developer Brett Cannon reported on Fosstodon that PEP 751 was recently accepted. This design document has the purpose of describing "a file format to record Python dependencies for installation reproducibility". As the abstract of the proposal puts it:
This PEP proposes a new file format for specifying dependencies to enable reproducible installation in a Python environment. The format is designed to be human-readable and machine-generated. Installers consuming the file should be able to calculate what to install without the need for dependency resolution at install-time.
The PEP, which itself supersedes PEP 665, mentions that there are "at least five well-known solutions to this problem in the community".

OSS Rebuild real-time validation and tooling improvements

OSS Rebuild aims to automate rebuilding upstream language packages (e.g. from the PyPI, crates.io and npm registries) and to publish signed attestations and build definitions for public use. OSS Rebuild is now attempting rebuilds as packages are published, shortening the time to validate rebuilds and publish attestations. Aman Sharma contributed classifiers and fixes for common sources of non-determinism in JAR packages. Improvements were also made to some of the core tools in the project:
  • timewarp for simulating the registry responses from sometime in the past.
  • proxy for transparent interception and logging of network activity.
  • and stabilize, yet another nondeterminism fixer.

SimpleX Chat server components now reproducible

SimpleX Chat is a privacy-oriented decentralised messaging platform that eliminates user identifiers and metadata, offers end-to-end encryption and has a unique approach to decentralised identity. Starting from version 6.3, SimpleX has implemented reproducible builds for its server components. This advancement allows anyone to verify that the binaries distributed by SimpleX match the source code, improving transparency and trustworthiness.
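The verification step that reproducible builds enable is deliberately boring: rebuild from the tagged source and compare checksums. A minimal sketch of that final comparison (the file names here are hypothetical stand-ins; SimpleX documents its own exact rebuild procedure upstream):

```shell
set -e
# Stand-ins for the binary a project distributes and the one you rebuilt
# yourself from the tagged source; with reproducible builds these should
# be byte-identical.
printf 'server binary contents' > distributed-binary
printf 'server binary contents' > locally-rebuilt-binary

official=$(sha256sum distributed-binary | awk '{print $1}')
rebuilt=$(sha256sum locally-rebuilt-binary | awk '{print $1}')

if [ "$official" = "$rebuilt" ]; then
  echo "MATCH: distributed binary corresponds to the audited source"
else
  echo "MISMATCH: do not trust the distributed binary"
fi
```

Any third party can run this check independently, which is what turns "trust the project's build server" into "trust the published source code".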

Three new scholarly papers

Aman Sharma of the KTH Royal Institute of Technology in Stockholm, Sweden published a paper on Build and Runtime Integrity for Java (PDF). The paper's abstract notes that "Software Supply Chain attacks are increasingly threatening the security of software systems" and goes on to compare build- and run-time integrity:
Build-time integrity ensures that the software artifact creation process, from source code to compiled binaries, remains untampered. Runtime integrity, on the other hand, guarantees that the executing application loads and runs only trusted code, preventing dynamic injection of malicious components.
Aman s paper explores solutions to safeguard Java applications and proposes some novel techniques to detect malicious code injection. A full PDF of the paper is available.
In addition, Hamed Okhravi and Nathan Burow of Massachusetts Institute of Technology (MIT) Lincoln Laboratory along with Fred B. Schneider of Cornell University published a paper in the most recent edition of IEEE Security & Privacy on Software Bill of Materials as a Proactive Defense:
The recently mandated software bill of materials (SBOM) is intended to help mitigate software supply-chain risk. We discuss extensions that would enable an SBOM to serve as a basis for making trust assessments, thus also serving as a proactive defense.
A full PDF of the paper is available.
Lastly, congratulations to Giacomo Benedetti of the University of Genoa for publishing their PhD thesis. Titled Improving Transparency, Trust, and Automation in the Software Supply Chain, Giacomo s thesis:
addresses three critical aspects of the software supply chain to enhance security: transparency, trust, and automation. First, it investigates transparency as a mechanism to empower developers with accurate and complete insights into the software components integrated into their applications. To this end, the thesis introduces SUNSET and PIP-SBOM, leveraging modeling and SBOMs (Software Bill of Materials) as foundational tools for transparency and security. Second, it examines software trust, focusing on the effectiveness of reproducible builds in major ecosystems and proposing solutions to bolster their adoption. Finally, it emphasizes the role of automation in modern software management, particularly in ensuring user safety and application reliability. This includes developing a tool for automated security testing of GitHub Actions and analyzing the permission models of prominent platforms like GitHub, GitLab, and BitBucket.

Distribution roundup

In Debian this month:
The IzzyOnDroid Android APK repository reached another milestone in March, crossing the 40% coverage mark: specifically, more than 42% of the apps in the repository are now reproducible. Thanks to funding by NLnet/Mobifree, the project was also able to put more time into their tooling. For instance, developers can now easily run their own verification builder "in less than 5 minutes". This currently supports Debian-based systems, but support for RPM-based systems is incoming. Future work is in the pipeline, including documentation, guidelines and helpers for debugging.
Fedora developer Zbigniew Jędrzejewski-Szmek announced a work-in-progress script called fedora-repro-build which attempts to reproduce an existing package within a Koji build environment. Although the project's README file lists a number of fields that will always or almost always vary (and there is a non-zero list of other known issues), this is an excellent first step towards full Fedora reproducibility (see above for more information).
Lastly, in openSUSE news, Bernhard M. Wiedemann posted another monthly update for his work there.

An overview of Supply Chain Attacks on Linux distributions

Fenrisk, a cybersecurity risk-management company, has published a lengthy overview of Supply Chain Attacks on Linux distributions. Authored by Maxime Rinaudo, the article asks:
[What] would it take to compromise an entire Linux distribution directly through their public infrastructure? Is it possible to perform such a compromise as simple security researchers with no available resources but time?

diffoscope & strip-nondeterminism

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 290, 291, 292 and 293 to Debian:
  • Bug fixes:
    • file(1) version 5.46 now returns "XHTML document" for .xhtml files such as those found nested within our .epub tests. [ ]
    • Also consider .aar files as APK files, at least for the sake of diffoscope. [ ]
    • Require the new, upcoming, version of file(1) and update our quine-related testcase. [ ]
  • Codebase improvements:
    • Ensure all calls to our_check_output in the ELF comparator have the potential CalledProcessError exception caught. [ ][ ]
    • Correct an import masking issue. [ ]
    • Add a missing subprocess import. [ ]
    • Reformat openssl.py. [ ]
    • Update copyright years. [ ][ ][ ]
In addition, Ivan Trubach contributed a change to ignore the st_size metadata entry for directories as it is essentially arbitrary and introduces unnecessary or even spurious changes. [ ]

Website updates

Once again, there were a number of improvements made to our website this month, including:

Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In March, a number of changes were made by Holger Levsen, including:
  • reproduce.debian.net-related:
    • Add links to two related bugs about buildinfos.debian.net. [ ]
    • Add an extra sync to the database backup. [ ]
    • Overhaul description of what the service is about. [ ][ ][ ][ ][ ][ ]
    • Improve the documentation to indicate the need to fix synchronisation pipes. [ ][ ]
    • Improve the statistics page by breaking down output by architecture. [ ]
    • Add a copyright statement. [ ]
    • Add a space after the package name so one can search for specific packages more easily. [ ]
    • Add a script to work around/implement a missing feature of debrebuild. [ ]
  • Misc:
    • Run debian-repro-status at the end of the chroot-install tests. [ ][ ]
    • Document that we have unused diskspace at Ionos. [ ]
In addition:
  • James Addison made a number of changes to the reproduce.debian.net homepage. [ ][ ].
  • Jochen Sprickerhof updated the statistics generation to catch No space left on device issues. [ ]
  • Mattia Rizzolo added a better command to stop the builders [ ] and fixed the reStructuredText syntax in the README.infrastructure file. [ ]
And finally, node maintenance was performed by Holger Levsen [ ][ ][ ] and Mattia Rizzolo [ ][ ].

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:
Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:

Bits from Debian: Bits from the DPL

Dear Debian community, this is bits from DPL for March (sorry for the delay, I was waiting for some additional input). Conferences In March, I attended two conferences, each with a distinct motivation. I joined FOSSASIA to address the imbalance in geographical developer representation. Encouraging more developers from Asia to contribute to Free Software is an important goal for me, and FOSSASIA provided a valuable opportunity to work towards this. I also attended Chemnitzer Linux-Tage, a conference I have been part of for over 20 years. To me, it remains a key gathering for the German Free Software community, a place where contributors meet, collaborate, and exchange ideas. I have a remark about submitting an event proposal to both FOSDEM and FOSSASIA: Cross distribution experience exchange
As Debian Project Leader, I have often reflected on how other Free Software distributions address challenges we all face. I am interested in discussing how we can learn from each other to improve our work and better serve our users. Recognizing my limited understanding of other distributions, I aim to bridge this gap through open knowledge exchange. My hope is to foster a constructive dialogue that benefits the broader Free Software ecosystem. Representatives of other distributions are encouraged to participate in this BoF whether as contributors or official co-speakers. My intention is not to drive the discussion from a Debian-centric perspective but to ensure that all distributions have an equal voice in the conversation.
This event proposal was part of my commitment from my 2024 DPL platform, specifically under the section "Reaching Out to Learn". Had it been accepted, I would have also attended FOSDEM. However, both FOSDEM and FOSSASIA rejected the proposal. In hindsight, reaching out to other distribution contributors beforehand might have improved its chances. I may take this approach in the future if a similar opportunity arises. That said, rejecting an interdistribution discussion without any feedback is, in my view, a missed opportunity for collaboration. FOSSASIA Summit The 14th FOSSASIA Summit took place in Bangkok. As a leading open-source technology conference in Asia, it brings together developers, startups, and tech enthusiasts to collaborate on projects in AI, cloud computing, IoT, and more. With a strong focus on open innovation, the event features hands-on workshops, keynote speeches, and community-driven discussions, emphasizing open-source software, hardware, and digital freedom. It fosters a diverse, inclusive environment and highlights Asia's growing role in the global FOSS ecosystem. I presented a talk on Debian as a Global Project and led a packaging workshop. Additionally, to further support attendees interested in packaging, I hosted an extra self-organized workshop at a hacker café, initiated by participants eager to deepen their skills. There was another Debian-related talk given by Ananthu titled "The Herculean Task of OS Maintenance - The Debian Way!" To further my goal of increasing diversity within Debian, particularly by encouraging more non-male contributors, I actively engaged with attendees, seeking opportunities to involve new people in the project. Whether through discussions, mentoring, or hands-on sessions, I aimed to make Debian more approachable for those who might not yet see themselves as contributors.
I was fortunate to have the support of Debian enthusiasts from India and China, who ran the Debian booth and helped create a welcoming environment for these conversations. Strengthening diversity in Free Software is a collective effort, and I hope these interactions will inspire more people to get involved. Chemnitzer Linuxtage The Chemnitzer Linux-Tage (CLT) is one of Germany's largest and longest-running community-driven Linux and open-source conferences, held annually in Chemnitz since 2000. It has been my favorite conference in Germany, and I have tried to attend every year. Focusing on Free Software, Linux, and digital sovereignty, CLT offers a mix of expert talks, workshops, and exhibitions, attracting hobbyists, professionals, and businesses alike. With a strong grassroots ethos, it emphasizes hands-on learning, privacy, and open-source advocacy while fostering a welcoming environment for both newcomers and experienced Linux users. Despite my appreciation for the diverse and high-quality talks at CLT, my main focus was on connecting with people who share the goal of attracting more newcomers to Debian. Engaging with both longtime contributors and potential new participants remains one of the most valuable aspects of the event for me. I was fortunate to be joined by Debian enthusiasts staffing the Debian booth, where I found myself among both experienced booth volunteers who have attended many previous CLT events and young newcomers. This was particularly reassuring, as I certainly can't answer every detailed question at the booth. I greatly appreciate the knowledgeable people who represent Debian at this event and help make it more accessible to visitors. As a small point of comparison, while FOSSASIA and CLT are fundamentally different events, the gender ratio stood out: FOSSASIA had a noticeably higher proportion of women compared to Chemnitz. This contrast highlighted the ongoing need to foster more diversity within Free Software communities in Europe.
At CLT, I gave a talk titled "Tausend Freiwillige, ein Ziel" (Thousand Volunteers, One Goal), which was video recorded. It took place in the grand auditorium and attracted a mix of long-term contributors and newcomers, making for an engaging and rewarding experience. Kind regards, Andreas.

9 April 2025

Dirk Eddelbuettel: AsioHeaders 1.28.2-1 on CRAN: New Upstream

A new release of the AsioHeaders package arrived at CRAN earlier today. Asio provides a cross-platform C++ library for network and low-level I/O programming. It is also included in Boost, but requires linking when used as part of Boost. This standalone version of Asio is a header-only C++ library which can be used without linking (just like our BH package with parts of Boost). This update brings a new upstream version which helps the three dependent packages using AsioHeaders to remain compliant at CRAN, and has been prepared by Charlie Gao. Otherwise I made some routine updates to packaging since the last release in late 2022. The short NEWS entry for AsioHeaders follows.

Changes in version 1.28.2-1 (2025-04-08)
  • Standard maintenance to CI and other packaging aspects
  • Upgraded to Asio 1.28.2 (Charlie Gao in #11 fixing #10)

Thanks to my CRANberries, there is a diffstat report for this release. Comments and suggestions about AsioHeaders are welcome via the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

Freexian Collaborators: Debian Contributions: Preparations for Trixie, Updated debvm, DebConf 25 registration website updates and more! (by Anupa Ann Joseph)

Debian Contributions: 2025-03 Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Preparing for Trixie, by Raphaël Hertzog As we are approaching the trixie freeze, it is customary for Debian developers to review their packages and clean them up in preparation for the next stable release. That's precisely what Raphaël did with publican, a package that had not seen any change since the last Debian release and that partially stopped working along the way due to a major Perl upgrade. While upstream's activity is close to zero, hope is not yet entirely gone as the git repository moved to a new location a couple of months ago and contained the required fix. Raphaël also developed another fix to avoid an annoying warning that was seen at runtime. Raphaël also ensured that the last upstream version of zim was uploaded to Debian unstable, and developed a fix for gnome-shell-extension-hamster to make it work with GNOME 48 and thus ensure that the package does not get removed from trixie.

Abseil and re2 transition in Debian, by Stefano Rivera One of the last transitions to happen for trixie was an update to abseil, bringing it up to 202407. This library is a dependency for one of Freexian's customers, as well as blocking newer versions of re2, a package maintained by Stefano. The transition had been stalled for several months while some issues with reverse dependencies were investigated and dealt with. It took a final push to make the transition happen, including fixing a few newly discovered problems downstream. The abseil package's autopkgtests were (trivially) broken by newer cmake versions, and some tests started failing on PPC64 (a known issue upstream).

debvm uploaded, by Helmut Grohne debvm is a command line tool for quickly creating a Debian-based virtual machine for testing purposes. Over time, it accumulated quite a few minor issues as well as CI failures. The most notorious one was an ARM32 failure present since August. It was diagnosed down to a glibc bug by Tj and Chris Hofstaedtler and little has happened since then. To keep debvm working somewhat, it now contains a workaround for this situation. Few changes are expected to be noticeable, but related tools such as apt, file, linux, passwd, and qemu required quite a few adaptations all over the place. Much of the necessary debugging was contributed by others.

DebConf 25 Registration website, by Stefano Rivera and Santiago Ruano Rincón DebConf 25, the annual Debian developer conference, is now open for registration. Other than preparing the conference website, getting there always requires some last-minute changes to the software behind the registration interface, and this year was no exception. Every year, the conference is a little different to previous years, and has some different details that need to be captured from attendees. And every year we make minor incremental improvements to fix long-standing problems. New concepts this year included: brunch, the closing talks on the departure day, venue security clearance, partial contributions towards food and accommodation bursaries, and attendee-selected bursary budgets.

Miscellaneous contributions
  • Helmut uploaded guess-concurrency incorporating feedback from others.
  • Helmut reacted to rebootstrap CI results and adapted it to cope with changes in unstable.
  • Helmut researched real world /usr-move fallout though little was actually attributable. He also NMUed systemd unsuccessfully.
  • Helmut sent 12 cross build patches.
  • Helmut looked into undeclared file conflicts in Debian more systematically and filed quite some bugs.
  • Helmut attended the cross/bootstrap sprint in Würzburg. A report of the event is pending.
  • Lucas worked on the CFP and tracks definition for DebConf 25.
  • Lucas worked on some bits involving Rails 7 transition.
  • Carles investigated why the piuparts job on salsa-ci/pipeline was passing but failing on piuparts.debian.org for the simplemonitor package. Created an issue and MR with a suggested fix, under discussion.
  • Carles improved the documentation of salsa-ci/pipeline: added documentation for different variables.
  • Carles made debian-history package reproducible (with help from Chris Lamb).
  • Carles updated simplemonitor package (new upstream version), prepared a new qdacco version (fixed bugs in qdacco, packaged with the upgrade from Qt 5 to Qt 6).
  • Carles reviewed and submitted translations to Catalan for adduser, apt, shadow, apt-listchanges.
  • Carles reviewed and created merge requests for translations to Catalan of 38 packages (using po-debconf-manager tooling). Created 40 bug reports for some merge requests that haven't been actioned for some time.
  • Colin Watson fixed 59 RC bugs (including 26 packages broken by the long-overdue removal of dh-python's dependency on python3-setuptools), and upgraded 38 packages (mostly Python-related) to new upstream versions.
  • Colin worked with Pranav P to track down and fix a dnspython autopkgtest regression on s390x caused by an endianness bug in pylsqpack.
  • Colin fixed a time-based test failure in python-dateutil that would have triggered in 2027, and contributed the fix upstream.
  • Colin fixed debconf to automatically use the noninteractive frontend if stdin is not a terminal.
  • Stefano bisected and fixed a pypy translation regression on Debian stable and older on 32-bit ARM.
  • Emilio coordinated and helped finish various transitions in light of the transition freeze.
  • Thorsten Alteholz uploaded cups-filters to fix an FTBFS with a new upstream version of qpdf.
  • With the aim of enhancing the support for packages related to Software Bills of Materials (SBOMs) in recent industrial standards, Santiago worked on finishing the packaging of, and uploaded, the CycloneDX Python library. There is ongoing work on the SPDX Python tools, but they require (build-)dependencies currently not shipped in Debian, such as owlrl and pyshacl.
  • Anupa worked with the Publicity team to announce the Debian 12.10 point release.
  • Anupa with the support of Santiago prepared an announcement and announced the opening of CfP and Registrations for DebConf 25.
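Among the items above, Colin's debconf change (falling back to the noninteractive frontend when stdin is not a terminal) follows a pattern that is easy to sketch. debconf itself is written in Perl; the following is a hypothetical Python illustration of the pattern, not debconf's actual code, and choose_frontend is an invented name:

```python
import sys

def choose_frontend(requested, stdin_is_tty):
    """Fall back to the 'noninteractive' frontend when there is no
    terminal available to ask configuration questions on."""
    return requested if stdin_is_tty else "noninteractive"

# In a real program the terminal check would be sys.stdin.isatty():
print(choose_frontend("dialog", sys.stdin.isatty()))
```

The benefit is that scripted or piped runs no longer hang waiting for answers that no human is there to give.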

5 April 2025

Russell Coker: HP z840

Many PCs with DDR4 RAM have started going cheap on ebay recently. I don't know how much of that is due to Windows 11 hardware requirements and how much is people replacing DDR4 systems with DDR5 systems. I recently bought a z840 system on ebay; it's much like the z640 that I recently made my workstation [1] but is designed strictly as a 2-CPU system. The z640 can run with 2 CPUs if you have a special expansion board for a second CPU, which is very expensive on eBay and which doesn't appear to have good airflow potential for cooling. The z840 also has a slightly larger case which supports more DIMM sockets and allows better cooling. The z640 and z840 take the same CPUs if you use the E5-2xxx series of CPU that is designed for running in 2-CPU mode. The z840 runs DDR4 RAM at 2400 as opposed to 2133 for the z640, for reasons that are not explained. The z840 has more PCIe slots, including 4*16x slots that support bifurcation. The z840 that I have has the HP Z-Cooler [2] installed. The coolers are mounted on a 45 degree angle (the model depicted at the right top of the first page of that PDF) and the system has a CPU shroud with fans that mount exactly on top of the CPU heatsinks and duct the hot air out without going over other parts. The technology of the z840 cooling is very impressive. When running two E5-2699A CPUs, which are listed as 145W typical TDP, with all 44 cores in use the system is very quiet. It's noticeably louder than the z640 but is definitely fine to have at your desk. In a typical office you probably wouldn't hear it when it's running full bore. If I was to have one desktop PC or server in my home, the z840 would definitely be the machine I choose for that. I decided to make the z840 a build server to share the resource with friends and to use for group coding projects. I often have friends visit with laptops to work on FOSS stuff and a 44-core build server is very useful for that.

The system is by far the fastest system I've ever owned even though I don't have fast storage for it yet. But 256G of RAM allows enough caching that storage speed doesn't matter too much. Here is building the SE Linux refpolicy package on the z640 with an E5-2696 v3 CPU and the z840 with two E5-2699A v4 CPUs:
257.10user 47.18system 1:40.21elapsed 303%CPU (0avgtext+0avgdata 416408maxresident)k
66904inputs+1519912outputs (74major+8154395minor)pagefaults 0swaps
222.15user 24.17system 1:13.80elapsed 333%CPU (0avgtext+0avgdata 416192maxresident)k
5416inputs+0outputs (64major+8030451minor)pagefaults 0swaps
Here is building Warzone2100 on the z640 and the z840:
6887.71user 178.72system 16:15.09elapsed 724%CPU (0avgtext+0avgdata 1682160maxresident)k
1555480inputs+8918768outputs (114major+27133734minor)pagefaults 0swaps
6055.96user 77.05system 8:00.20elapsed 1277%CPU (0avgtext+0avgdata 1682100maxresident)k
117640inputs+0outputs (46major+11460968minor)pagefaults 0swaps
It seems that the refpolicy package can't use many more than 18 cores, as it is only 37% faster when building with 44 cores available. Building Warzone is slightly more than twice as fast, so it can really use all the available cores. According to Passmark the E5-2699A v4 is 22% faster than the E5-2696 v3. I highly recommend buying a z840 if you see one at a good price.
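The speedup figures above can be sanity-checked directly from the elapsed fields of the time(1) output; a quick sketch:

```python
def elapsed_seconds(field):
    """Parse a time(1) elapsed field like '1:40.21' (minutes:seconds)."""
    minutes, seconds = field.split(":")
    return int(minutes) * 60 + float(seconds)

# Elapsed times quoted above: z640 first, then z840.
refpolicy_speedup = elapsed_seconds("1:40.21") / elapsed_seconds("1:13.80")
warzone_speedup = elapsed_seconds("16:15.09") / elapsed_seconds("8:00.20")

print(f"refpolicy: {refpolicy_speedup:.2f}x")  # ~1.36x
print(f"warzone:   {warzone_speedup:.2f}x")    # ~2.03x
```

The refpolicy build comes out at about 1.36x (in line with the roughly 37% figure above), while the Warzone build is just over 2x, matching the observation that only the latter scales to all 44 cores.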

4 April 2025

Johannes Schauer Marin Rodrigues: To boldly build what no one has built before

Last week, we (Helmut, Jochen, Holger, Gioele and josch) met in Würzburg for a Debian crossbuilding & bootstrap sprint. We would like to thank Angestöpselt e. V. for generously providing us with their hacker space, which we were able to use exclusively during the four-day sprint. We'd further like to thank Debian for sponsoring the accommodation of Helmut and Jochen. The most important topics that we worked on together were: Our TODO items for after the sprint are: In addition to what was already listed above, people worked on the following tasks specifically: Thank you all for attending this sprint, for making it so productive and for the amazing atmosphere and enlightening discussions!

31 March 2025

Dirk Eddelbuettel: RProtoBuf 0.4.24 on CRAN: Minor Polish

A new maintenance release 0.4.24 of RProtoBuf arrived on CRAN today. RProtoBuf provides R with bindings for the Google Protocol Buffers ("ProtoBuf") data encoding and serialization library used and released by Google, and deployed very widely in numerous projects as a language- and operating-system-agnostic protocol. This release brings both an upstream API update affecting one function and an update to our use of the C API of R, also in one function. Nothing user-facing, and no surprises expected. The following section from the NEWS.Rd file has full details.

Changes in RProtoBuf version 0.4.24 (2025-03-31)
  • Add bindings to EnumValueDescriptor::name (Mike Kruskal in #108)
  • Replace EXTPTR_PTR with R_ExternalPtrAddr (Dirk)

Thanks to my CRANberries, there is a diff to the previous release. The RProtoBuf page has copies of the (older) package vignette, the quick overview vignette, and the pre-print of our JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Russell Coker: Links March 2025

Anarcat's review of Fish is interesting and shows some benefits I hadn't previously realised; I'll have to try it out [1]. Longnow has an insightful article about religion and magic mushrooms [2]. Brian Krebs wrote an informative article about DOGE and the many security problems that it has caused to the US government [3]. Techdirt has an insightful article about why they are forced to become a democracy blog after the attacks by Trump et al [4]. Antoine wrote an insightful blog post about the war for the Internet and how in many ways we are losing to fascists [5]. Interesting story about people working for free at Apple to develop a graphing calculator [6]. We need ways for FOSS people to associate to do such projects. Interesting YouTube video about a wiki for building a cheap road-legal car [7]. Interesting video about powering spacecraft with Plutonium-238 and how they are running out [8]. Interesting information about the search for MH370 [9]. I previously hadn't been convinced that it was hijacked but I am now. The EFF has an interesting article about the Rayhunter, a tool to detect cellular spying that can run with cheap hardware [10].
  • [1] https://anarc.at/blog/2025-02-28-fish/
  • [2] https://longnow.org/ideas/is-god-a-mushroom/
  • [3] https://tinyurl.com/27wbb5ec
  • [4] https://tinyurl.com/2cvo42ro
  • [5] https://anarc.at/blog/2025-03-21-losing-war-internet/
  • [6] https://www.pacifict.com/story/
  • [7] https://www.youtube.com/watch?v=x8jdx-lf2Dw
  • [8] https://www.youtube.com/watch?v=geIhl_VE0IA
  • [9] https://www.youtube.com/watch?v=HIuXEU4H-XE
  • [10] https://tinyurl.com/28psvpx7
Simon Josefsson: On Binary Distribution Rebuilds

I rebuilt (the top-50 popcon) Debian and Ubuntu packages, on amd64 and arm64, and compared the results a couple of months ago. Since then the Reproduce.Debian.net effort has been launched. Unlike my small experiment, that effort is a full-scale rebuild with more architectures. Their goal is to reproduce what is published in the Debian archive. One difference between these two approaches is the build inputs: the Reproduce.Debian.net effort uses the same build inputs which were used to build the published packages, while I'm using the latest versions of published packages for the rebuild. What does that difference imply? I believe reproduce.debian.net will be able to reproduce more of the packages in the archive. If you build a C program using one version of GCC you will get some binary output; and if you use a later GCC version you are likely to end up with a different binary output. This is a good thing: we want GCC to evolve and produce better output over time. However it means that, in order to reproduce the binaries we publish and use, we need to rebuild them using whatever build dependencies were used to prepare those binaries. The conclusion is that we need to use the old GCC to rebuild the program, and this appears to be the Reproduce.Debian.net approach. It would be a huge success if the Reproduce.Debian.net effort were to reach 100% reproducibility, and this seems to be within reach. However I argue that we need to go further than that. Being able to rebuild the packages reproducibly using older binary packages only raises the question: can we rebuild those older packages? I fear attempting to do so ultimately leads to a need to rebuild 20+ year old packages, with a non-negligible number of them being illegal to distribute or unable to build anymore due to bit-rot. We won't solve the Trusting Trust concern if our rebuild effort assumes some initial binary blob that we can no longer build from source code.
I've made an illustration of the effort I'm thinking of, to reach something that is stronger than reproducible rebuilds. I am calling this concept an Idempotent Rebuild, an old concept that I believe is the same as what John Gilmore described many years ago.
The illustration shows how the Debian main archive is used as input to rebuild another stage #0 archive. This stage #0 archive can be compared with diffoscope to the main archive, and all differences are things that would be nice to resolve. The packages in the stage #0 archive are used to prepare a new container image with build tools, and the stage #0 archive is used as input to rebuild another version of itself, called the stage #1 archive. The differences between stage #0 and stage #1 are also useful to analyse and resolve. This process can be repeated many times. I believe it would be a useful property if this process terminated at some point, where the stage #N archive was identical to the stage #N-1 archive. If this happened, I would label the output archive an Idempotent Rebuild of the distribution. How big is N today? The simplest assumption is that it is infinity. Any build timestamp embedded into binary packages will change on every iteration, which will cause the process to never terminate. Fixing embedded timestamps is something that the Reproduce.Debian.net effort will also run into, and will have to resolve. What other causes for differences could there be? It is easy to see that, generally, if some output is not deterministic, such as the sort order of assembler object code in binaries, then the output will be different. Trivial instances of this problem will be caught by the reproduce.debian.net effort as well. Could there be higher-order chains that lead to infinite N? It is easy to imagine the existence of these, but I don't know what they would look like in practice. An ideal would be if we could get down to N=1. Is that technically possible? Compare building GCC: it performs an initial stage 0 build using the system compiler to produce a stage 1 intermediate, which is used to build itself again to stage 2. Stages 1 and 2 are compared, and on success (identical binaries), the compilation succeeds. Here N=2.
But this is performed using some unknown system compiler that is normally different from the GCC version being built. When rebuilding a binary distribution, you start with the same source versions, so it seems N=1 could be possible. I'm unhappy to not be able to report any further technical progress now. The next step in this effort is to publish the stage #0 build artifacts in a repository, so they can be used to build stage #1. I already showed that stage #0 was around ~30% reproducible compared to the official binaries, but I didn't save the artifacts in a reusable repository. Since the official binaries were not built using the latest versions, it is to be expected that the reproducibility number is low. But what happens at stage #1? The percentage should go up: we are now comparing the rebuilds with an earlier rebuild, using the same build inputs. I'm eager to see this materialize, and hope to eventually make progress on this. However, to build stage #1 I believe I need to rebuild a much larger number of packages in stage #0; it could be roughly similar to the build-essentials-depends package set. I believe the ultimate end goal of Idempotent Rebuilds is to be able to re-bootstrap a binary distribution like Debian from some other bootstrappable environment like Guix. In parallel to working on achieving the 100% Idempotent Rebuild of Debian, we can set up a Guix environment that builds Debian packages using Guix binaries. These builds ought to eventually converge to the same Debian binary packages, or there is something deeply problematic happening. This approach to re-bootstrapping a binary distribution like Debian seems simpler than rebuilding all binaries going back to the beginning of time for that distribution. What do you think? PS.
I fear that Debian main may have already gone into a state where it is not able to rebuild itself at all anymore: the presence and assumption of non-free firmware and non-Debian signed binaries may have already corrupted the ability for Debian main to rebuild itself. To be able to complete the idempotent and bootstrapped rebuild of Debian, this needs to be worked out.
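The stage #0/#1/… process described above is a fixed-point iteration: keep rebuilding the archive with itself as build input until two successive stages are bit-identical. A toy sketch, where the archive is modelled as a dict of package contents and rebuild is a hypothetical stand-in for a full distribution rebuild:

```python
import hashlib

def digest(archive):
    """Stable fingerprint of an archive (package name -> binary contents)."""
    h = hashlib.sha256()
    for name in sorted(archive):
        h.update(name.encode())
        h.update(archive[name])
    return h.hexdigest()

def idempotent_rebuild(archive, rebuild, max_stages=10):
    """Rebuild the archive using itself as input until stage #N is
    identical to stage #N-1. Returns (final_archive, N)."""
    for stage in range(1, max_stages + 1):
        next_archive = rebuild(archive)
        if digest(next_archive) == digest(archive):
            return next_archive, stage  # fixed point: stage #N == stage #N-1
        archive = next_archive
    raise RuntimeError("no fixed point reached; N may be effectively infinite")

# Toy "rebuild" whose output depends only on package names, so a second
# pass reproduces the first exactly and the process converges.
def toy_rebuild(archive):
    return {name: b"binary-of-" + name.encode() for name in archive}

final, n = idempotent_rebuild({"hello": b"v1", "gcc": b"v1"}, toy_rebuild)
print(f"converged after stage {n}")  # stage 1 changes things, stage 2 matches
```

A build timestamp embedded by rebuild would make every pass differ, so digest never stabilises and the loop exhausts max_stages: exactly the non-termination concern raised above.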

29 March 2025

Petter Reinholdtsen: Theora 1.2.0 released

Following the 1.2.0beta1 release two weeks ago, a final 1.2.0 release of theora was wrapped up today. This new release is tagged in the Xiph gitlab theora instance and you can fetch it from the Theora home page as soon as someone with access finds time to update the web pages. In the meantime (automatically removed after 14 days) the release tarball is also available as a git build artifact from the CI build of the release tag. The list of changes since the 1.2.0beta1 release, from the CHANGES file in the tarball, looks like this:
libtheora 1.2.0 (2025 March 29)
  • Bumped minor SONAME versions as the oc_comment_unpack() implementation changed.
  • Added example wrapper script encoder_example_ffmpeg (#1601 #2336).
  • Improved comment handling on platforms where malloc(0) returns NULL (#2304).
  • Added pragma in example code to quiet clang operator precedence warnings.
  • Adjusted encoder_example help text.
  • Adjusted README, CHANGES, pkg-config and spec files to better reflect the current release (#2331 #2328).
  • Corrected English typos in source and build system.
  • Switched http links to https in docs and comments where relevant. Did not touch RFC drafts.
As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

28 March 2025

Ian Jackson: Rust is indeed woke

Rust, and resistance to it in some parts of the Linux community, has been in my feed recently. One undercurrent seems to be the notion that Rust is woke (and should therefore be rejected as part of culture wars). I'm going to argue that Rust, the language, is woke. So the opponents are right, in that sense. Of course, as ever, dissing something for being woke is nasty and fascist-adjacent. Community The obvious way that Rust may seem woke is that it has the trappings, and many of the attitudes and outcomes, of a modern, nice, FLOSS community. Rust certainly does better than toxic environments like the Linux kernel, or Debian. This is reflected in a higher proportion of contributors from various kinds of minoritised groups. But Rust is not outstanding in this respect. It certainly has its problems. Many other projects do as well or better. And this is well-trodden ground. I have something more interesting to say: Technological values - particularly, compared to C/C++ Rust is woke technology that embodies a woke understanding of what it means to be a programming language. Ostensible values Let's start with Rust's strapline:
A language empowering everyone to build reliable and efficient software.
Surprisingly, this motto is not mere marketing puff. For Rustaceans, it is a key goal which strongly influences day-to-day decisions (big and small). Empowering everyone is a key aspect of this, which aligns with my own personal values. In the Rust community, we care about empowerment. We are trying to help liberate our users. And we want to empower everyone because everyone is entitled to technological autonomy. (For a programming language, empowering individuals means empowering their communities, of course.) This is all very airy-fairy, but it has concrete consequences: Attitude to the programmer's mistakes In Rust we consider it a key part of our job to help the programmer avoid mistakes; to limit the consequences of mistakes; and to guide programmers in useful directions. If you write a bug in your Rust program, Rust doesn't blame you. Rust asks "how could the compiler have spotted that bug?". This is in sharp contrast to C (and C++). C nowadays is an insanely hostile programming environment. A C compiler relentlessly scours your program for any place where you may have violated C's almost incomprehensible rules, so that it can compile your apparently-correct program into a buggy executable. And then the bug is considered your fault. These aren't just attitudes implicitly embodied in the software. They are concrete opinions expressed by compiler authors, and also by language proponents. In other words: Rust sees programmers writing bugs as a systemic problem, which must be addressed by improvements to the environment and the system. The toxic parts of the C and C++ community see bugs as moral failings by individual programmers. Sound familiar? The ideology of the hardcore programmer Programming has long suffered from the myth of the "rockstar". Silicon Valley techbro culture loves this notion. In reality, though, modern information systems are far too complicated for a single person. Developing systems is a team sport.
    Nontechnical, and technical-adjacent, skills are vital: clear but friendly communication; obtaining and incorporating the insights of every member of your team; willingness to be challenged. Community building. Collaboration. Governance. The hardcore C community embraces the rockstar myth: they imagine that a few super-programmers (or super-reviewers) are able to spot bugs, just by being so brilliant. Of course this doesn't actually work at all, as we can see from the atrocious bugfest that is the Linux kernel. These rockstars want us to believe that there is a steep hierarchy in programming; that they are at the top of this hierarchy; and that being nice isn't important. Sound familiar?

    Memory safety as a power struggle Much of the modern crisis of software reliability arises from memory-unsafe programming languages, mostly C and C++. Addressing this is a big job, requiring many changes. This threatens powerful interests; notably, corporations who want to keep shipping junk. (See also, conniptions over the EU Product Liability Directive.) The harms of this serious problem mostly fall on society at large, but the convenience of carrying on as before benefits existing powerful interests. Sound familiar?

    Memory safety via Rust as a power struggle Addressing this problem via Rust is a direct threat to the power of established C programmers such as gatekeepers in the Linux kernel. Supplanting C means they will have to learn new things, and jostle for status against better Rustaceans, or be replaced. More broadly, Rust shows that it is practical to write fast, reliable software, and that this does not need (mythical) "rockstars". So established C programmer experts are existing vested interests, whose power is undermined by (this approach to) tackling this serious problem. Sound familiar?

    Notes

    This is not a RIIR manifesto I'm not saying we should rewrite all the world's C in Rust. We should not try to do that.
    Rust is often a good choice for new code, or when a rewrite or substantial overhaul is needed anyway. But we're going to need other techniques to deal with all of our existing C. CHERI is a very promising approach. Sandboxing, emulation and automatic translation are other possibilities. The problem is a big one and we need a toolkit, not a magic bullet. But as for Linux: it is a scandal that substantial new drivers and subsystems are still being written in C. We could have been using Rust for new code throughout Linux years ago, and avoided very many bugs. Those bugs are doing real harm. This is not OK.

    Disclosure I first learned C from K&R I in 1989. I spent the first three decades of my life as a working programmer writing lots and lots of C. I've written C++ too. I used to consider myself an expert C programmer, but nowadays my C is a bit rusty and out of date. Why is my C rusty? Because I found Rust, and immediately liked and adopted it (despite its many faults). I like Rust because I care that the software I write actually works: I care that my code doesn't do harm in the world.

    On the meaning of woke The original meaning of "woke" is something much more specific, to do with racism. For the avoidance of doubt, I don't think Rust is particularly antiracist. I'm using "woke" (like Rust's opponents are) in the much broader, and now much more prevalent, culture-wars sense.

    Pithy conclusion If you're a senior developer who knows only C/C++, doesn't want their authority challenged, and doesn't want to have to learn how to write better software, you should hate Rust. Also you should be fired.
    Edited 2025-03-28 17:10 UTC to fix minor problems and add a new note about the meaning of the word "woke".



    Dirk Eddelbuettel: RcppArmadillo 14.4.1-1 on CRAN: Small Upstream Fix

    armadillo image Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language and is widely used by (currently) 1236 other packages on CRAN, downloaded 39 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 620 times according to Google Scholar. This release brings a small upstream bug fix to the two FFTW3-interfacing functions, something not likely to hit many CRAN packages. The changes since the last and fairly recent CRAN release are summarised below.

    Changes in RcppArmadillo version 14.4.1-1 (2025-03-27)
    • Upgraded to Armadillo release 14.4.1 (Filtered Espresso)
      • Fix for fft() and ifft() when using FFTW3 in multi-threaded contexts (such as OpenMP)
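    For readers unfamiliar with the pair being patched: fft() and ifft() are the forward and inverse discrete Fourier transforms, and ifft(fft(x)) should reproduce x up to rounding. A minimal pure-Python sketch of that round-trip property (an illustration only; this is not Armadillo's C++ API nor the FFTW3 code path the fix touches):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform, O(n^2); for illustration only."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(X):
    """Inverse DFT, scaled by 1/n so that idft(dft(x)) recovers x."""
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

signal = [1.0, 2.0, 3.0, 4.0]
roundtrip = idft(dft(signal))
assert all(abs(a - b) < 1e-9 for a, b in zip(signal, roundtrip))
```

    Real implementations such as FFTW3 compute the same transform in O(n log n), and the upstream fix concerns making those calls safe when issued concurrently from multiple threads.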

    Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

    This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

    John Goerzen: Why You Should (Still) Use Signal As Much As Possible

    As I write this in March 2025, there is a lot of confusion about Signal messenger due to the recent news of people using Signal in government, and subsequent leaks. The short version is: there was no problem with Signal here. People were using it because they understood it to be secure, not the other way around. Both the government and the Electronic Frontier Foundation recommend people use Signal. This is an unusual alliance, and in the case of the government, was prompted because it understood other countries had a persistent attack against American telephone companies and SMS traffic. So let's dive in. I'll cover some basics of what security is, what happened in this situation, and why Signal is a good idea. This post isn't for programmers that work with cryptography every day. Rather, I hope it can make some of these concepts accessible to everyone else.

    What makes communications secure? When most people are talking about secure communications, they mean some combination of these properties:
    1. Privacy - nobody except the intended recipient can decode a message.
    2. Authentication - guarantees that the person you are chatting with really is the intended recipient.
    3. Ephemerality - preventing a record of the communication from being stored. That is, making it more like a conversation around the table than a written email.
    4. Anonymity - keeping your set of contacts to yourself and even obfuscating the fact that communications are occurring.
    If you think about it, most people care the most about the first two. In fact, authentication is a key part of privacy. There is an attack known as "man in the middle" in which somebody pretends to be the intended recipient. The interceptor reads the messages, and then passes them on to the real intended recipient. So we can't really have privacy without authentication. I'll have more to say about these later. For now, let's discuss attack scenarios.
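    The link between authentication and privacy can be made concrete with a toy sketch. The example below uses a simple shared-key message authentication code (HMAC) so the recipient can detect a tampered message; this is emphatically not the Signal protocol (which uses public-key authentication and the double ratchet), just an illustration of the authentication property, and the key and messages are invented:

```python
import hmac
import hashlib

# Hypothetical shared secret between two parties; real protocols negotiate
# keys rather than hard-coding them.
KEY = b"shared-secret"

def send(message: bytes):
    """Attach an authentication tag so tampering in transit is detectable."""
    tag = hmac.new(KEY, message, hashlib.sha256).digest()
    return message, tag

def receive(message: bytes, tag: bytes) -> bool:
    """Verify the tag; constant-time comparison avoids timing leaks."""
    expected = hmac.new(KEY, message, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

msg, tag = send(b"meet at noon")
assert receive(msg, tag)                       # authentic message verifies
assert not receive(b"meet at midnight", tag)   # altered message is rejected
```

    An interceptor who modifies the message cannot produce a valid tag without the key, which is the basic idea behind detecting a man in the middle.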

    What compromises security? There are a number of ways that security can be compromised. Let's think through some of them:

    Communications infrastructure snooping Let's say you used no encryption at all, and connected to public WiFi in a coffee shop to send your message. Who all could potentially see it?
    • The owner of the coffee shop's WiFi
    • The coffee shop's Internet provider
    • The recipient's Internet provider
    • Any Internet providers along the network between the sender and the recipient
    • Any government or institution that can compel any of the above to hand over copies of the traffic
    • Any hackers that compromise any of the above systems
    Back in the early days of the Internet, most traffic had no encryption. People were careful about putting their credit cards into webpages and emails because they knew it was easy to intercept them. We have been on a decades-long evolution towards more pervasive encryption, which is a good thing. Text messages (SMS) follow a similar path to the above scenario, and are unencrypted. We know that all of the above are ways people's texts can be compromised; for instance, governments can issue search warrants to obtain copies of texts, and China is believed to have a persistent hack into western telcos. SMS fails all four of our attributes of secure communication above (privacy, authentication, ephemerality, and anonymity). Also, think about what information is collected from SMS and by whom. Texts you send could be retained in your phone, the recipient's phone, your phone company, their phone company, and so forth. They might also live in cloud backups of your devices. You only have control over your own phone's retention. So defenses against this involve things like:
    • Strong end-to-end encryption, so that no intermediate party (not even the people that make the app) can snoop on it
    • Using strong authentication of your peers
    • Taking steps to prevent even app developers from being able to see your contact list or communication history
    You may see some other apps saying they use strong encryption or use the Signal protocol. But while they may do that for some or all of your message content, they may still upload your contact list, history, location, etc. to a central location where it is still vulnerable to these kinds of attacks. When you think about anonymity, think about it like this: if you send a letter to a friend every week, every postal carrier that transports it (even if they never open it or attempt to peek inside) will be able to read the envelope and know that you communicate on a certain schedule with that friend. The same can be said of SMS, email, or most encrypted chat operators. Signal's design prevents it from retaining even this information, though nation-states or ISPs might still be able to notice patterns (every time you send something via Signal, your contact receives something from Signal a few milliseconds later). It is very difficult to provide perfect anonymity from well-funded adversaries, even if you can provide very good privacy.

    Device compromise Let's say you use an app with strong end-to-end encryption. This takes away some of the easiest ways someone could get to your messages. But it doesn't take away all of them. What if somebody stole your phone? Perhaps the phone has a password, but if an attacker pulled out the storage unit, could they access your messages without a password? Or maybe they somehow trick or compel you into revealing your password. Now what? An even simpler attack doesn't require them to steal your device at all. All they need is a few minutes with it to steal your SIM card. Now they can receive any texts sent to your number, whether from your bank or your friend. Yikes, right? Signal stores your data in an encrypted form on your device. It can protect it in various ways. One of the most important protections is ephemerality: it can automatically delete your old texts. A text that is securely erased can never fall into the wrong hands if the device is compromised later. An actively-compromised phone, though, could still give up secrets. For instance, what if a malicious keyboard app sent every keypress to an adversary? Signal is only as secure as the phone it runs on, but still, it protects against a wide variety of attacks.

    Untrustworthy communication partner Perhaps you are sending sensitive information to a contact, but that person doesn't want to keep it in confidence. There is very little you can do about that technologically; with pretty much any tool out there, nothing stops them from taking a picture of your messages and handing the picture off.

    Environmental compromise Perhaps your device is secure, but a hidden camera still captures what's on your screen. You can take some steps against things like this, of course.

    Human error Sometimes humans make mistakes. For instance, the reason a reporter got copies of messages recently was because a participant in a group chat accidentally added him (presumably that participant meant to add someone else and just selected the wrong name). Phishing attacks can trick people into revealing passwords or other sensitive data. Humans are, quite often, the weakest link in the chain.

    Protecting yourself So how can you protect yourself against these attacks? Let's consider:
    • Use a secure app like Signal that uses strong end-to-end encryption, where even the provider can't access your messages
    • Keep your software and phone up-to-date
    • Be careful about phishing attacks and who you add to chat rooms
    • Be aware of your surroundings; don't send sensitive messages where people might be looking over your shoulder with their eyes or cameras
    There are other methods besides Signal. For instance, you could install GnuPG (GPG) on a laptop that has no WiFi card or any other way to connect it to the Internet. You could always type your messages on that laptop, encrypt them, copy the encrypted text to a floppy disk (or USB device), take that USB drive to your Internet computer, and send the encrypted message by email or something. It would be exceptionally difficult to break the privacy of messages in that case (though anonymity would be mostly lost). Even if someone got the password to your secure laptop, it wouldn't do them any good unless they physically broke into your house or something. In some ways, it is probably safer than Signal. (For more on this, see my article "How gapped is your air?") But that approach is hard to use. Many people aren't familiar with GnuPG. You don't have the convenience of sending a quick text message from anywhere. Security that is hard to use most often simply isn't used. That is, you and your friends will probably just revert back to using insecure SMS instead of this GnuPG approach because SMS is so much easier. Signal strikes a unique balance of providing very good security while also being practical, easy, and useful. For most people, it is the most secure option available. Signal is also open source; you don't have to trust that it is as secure as it says, because you can inspect it for yourself. Also, while it's not federated, I previously addressed that.

    Government use If you are a government, particularly one that is highly consequential to the world, you can imagine that you are a huge target. Other nations are likely spending billions of dollars to compromise your communications. Signal itself might be secure, but if some other government can add spyware to your phones, or conduct a successful phishing attack, you can still have your communications compromised. I have no direct knowledge, but I think it is generally understood that the US government maintains communications networks that are entirely separate from the Internet and can only be accessed from secure physical locations and secure rooms. These can be even more secure than the average person using Signal because they can protect against things like environmental compromise, human error, and so forth. The scandal in March of 2025 happened because government employees were using Signal rather than official government tools for sensitive information, had taken advantage of Signal's ephemerality (laws require records to be kept), and through apparent human error had directly shared this information with a reporter. Presumably a reporter would have lacked access to the restricted communications networks in the first place, so that wouldn't have been possible. This doesn't mean that Signal is bad. It just means that somebody that can spend billions of dollars on security can be more secure than you. Signal is still a great tool for people, and in many cases defeats even those that can spend lots of dollars trying to defeat it. And remember: to use those restricted networks, you have to go to specific rooms in specific buildings. They are still not as convenient as what you carry around in your pocket.

    Conclusion Signal is practical security. Do you want phone companies reading your messages? How about Facebook or X? Have those companies demonstrated that they are completely trustworthy throughout their entire history? I say no. So, go install Signal. It's the best, most practical tool we have.
    This post is also available on my website, where it may be periodically updated.

    Freexian Collaborators: Monthly report about Debian Long Term Support, February 2025 (by Roberto C. Sánchez)

    Like each month, have a look at the work funded by Freexian's Debian LTS offering.

    Debian LTS contributors In February, 18 contributors have been paid to work on Debian LTS, their reports are available:
    • Abhijith PA did 10.0h (out of 8.0h assigned and 6.0h from previous period), thus carrying over 4.0h to the next month.
    • Adrian Bunk did 12.0h (out of 0.0h assigned and 63.5h from previous period), thus carrying over 51.5h to the next month.
    • Andrej Shadura did 10.0h (out of 6.0h assigned and 4.0h from previous period).
    • Bastien Roucariès did 20.0h (out of 20.0h assigned).
    • Ben Hutchings did 12.0h (out of 8.0h assigned and 16.0h from previous period), thus carrying over 12.0h to the next month.
    • Chris Lamb did 18.0h (out of 18.0h assigned).
    • Daniel Leidert did 23.0h (out of 20.0h assigned and 6.0h from previous period), thus carrying over 3.0h to the next month.
    • Emilio Pozuelo Monfort did 53.0h (out of 53.0h assigned and 0.75h from previous period), thus carrying over 0.75h to the next month.
    • Guilhem Moulin did 11.0h (out of 3.25h assigned and 16.75h from previous period), thus carrying over 9.0h to the next month.
    • Jochen Sprickerhof did 27.0h (out of 30.0h assigned), thus carrying over 3.0h to the next month.
    • Lee Garrett did 11.75h (out of 9.5h assigned and 44.25h from previous period), thus carrying over 42.0h to the next month.
    • Markus Koschany did 40.0h (out of 40.0h assigned).
    • Roberto C. Sánchez did 7.0h (out of 14.75h assigned and 9.25h from previous period), thus carrying over 17.0h to the next month.
    • Santiago Ruano Rincón did 19.75h (out of 21.75h assigned and 3.25h from previous period), thus carrying over 5.25h to the next month.
    • Sean Whitton did 6.0h (out of 6.0h assigned).
    • Sylvain Beucler did 52.5h (out of 14.75h assigned and 39.0h from previous period), thus carrying over 1.25h to the next month.
    • Thorsten Alteholz did 11.0h (out of 11.0h assigned).
    • Tobias Frost did 17.0h (out of 17.0h assigned).
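    The carry-over arithmetic in each entry above follows one pattern: hours carried to the next month equal hours assigned plus hours carried in from the previous period, minus hours actually worked. A quick sketch checking two of the entries:

```python
def carry_over(assigned: float, carried_in: float, worked: float) -> float:
    """Hours carried to the next month = assigned + carried in - worked."""
    return assigned + carried_in - worked

# Abhijith PA: 10.0h worked, out of 8.0h assigned + 6.0h carried in -> 4.0h
assert carry_over(8.0, 6.0, 10.0) == 4.0
# Adrian Bunk: 12.0h worked, out of 0.0h assigned + 63.5h carried in -> 51.5h
assert carry_over(0.0, 63.5, 12.0) == 51.5
```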

    Evolution of the situation In February, we have released 38 DLAs.
    • Notable security updates:
      • pam-u2f, prepared by Patrick Winnertz, fixed an authentication bypass vulnerability
      • openjdk-17, prepared by Emilio Pozuelo Monfort, fixed an authorization bypass/information disclosure vulnerability
      • firefox-esr, prepared by Emilio Pozuelo Monfort, fixed several vulnerabilities
      • thunderbird, prepared by Emilio Pozuelo Monfort, fixed several vulnerabilities
      • postgresql-13, prepared by Christoph Berg, fixed an SQL injection vulnerability
      • freerdp2, prepared by Tobias Frost, fixed several vulnerabilities
      • openssh, prepared by Colin Watson, fixed a machine-in-the-middle vulnerability
    LTS contributors Emilio Pozuelo Monfort and Santiago Ruano Rincón coordinated the administrative aspects of LTS updates of postgresql-13 and pam-u2f, which were prepared by the respective maintainers, to whom we are most grateful. As has become the custom of the LTS team, work is under way on a number of package updates targeting Debian 12 (codename "bookworm") with fixes for a variety of vulnerabilities. In February, Guilhem Moulin prepared an upload of sssd, while several other updates are still in progress. Bastien Roucariès prepared an upload of krb5 for unstable as well. Given the importance of the Debian Security Tracker to the work of the LTS Team, we regularly contribute improvements to it. LTS contributor Emilio Pozuelo Monfort reviewed and merged a change to improve performance, and then dealt with unexpected issues that arose as a result. He also made improvements in the processing of CVEs which are not applicable to Debian. Looking to the future (the release of Debian 13, codename "trixie", and beyond), LTS contributor Santiago Ruano Rincón has initiated a conversation among the broader community involved in the development of Debian. The purpose of the discussion is to explore ways to improve the long term supportability of packages in Debian, specifically by focusing effort on ensuring that each Debian release contains the best supported upstream version of packages with a history of security issues.

    Thanks to our sponsors Sponsors that joined recently are in bold.

    25 March 2025

    Dirk Eddelbuettel: RQuantLib 0.4.25 on CRAN: Fix Bashism in Configure

    A new minor release 0.4.25 of RQuantLib arrived on CRAN this morning, and has just now been uploaded to Debian too. QuantLib is a rather comprehensive free/open-source library for quantitative finance. RQuantLib connects (some parts of) it to the R environment and language, and has been part of CRAN for nearly twenty-two years (!!) as it was one of the first packages I uploaded to CRAN. This release of RQuantLib is tickled by a request to remove bashisms in shell scripts, or, as in my case here, configure.ac, where I used the non-portable form of string comparison. That has of course been there for umpteen years and not bitten anyone, as the default shell for most is in fact bash, but the change has the right idea. And it is of course now mandatory, affecting quite a few packages as I tooted yesterday. It also contains an improvement to the macOS 14 build kindly contributed by Jeroen.

    Changes in RQuantLib version 0.4.25 (2025-03-24)
    • Support macOS 14 with a new compiler flag (Jeroen in #190)
    • Correct two bashisms in configure.ac
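    For context, the bashism in question is the double-equals string comparison: bash's test builtin accepts "==", but POSIX sh specifies only "=", so configure scripts using "==" can fail under strict /bin/sh implementations such as dash. A sketch of the shape of such a fix (the variable name here is illustrative, not taken from RQuantLib's actual configure.ac):

```sh
# Non-portable (bashism): "==" is accepted by bash's test builtin only,
# and can fail with "unexpected operator" under dash or other POSIX shells.
test "$with_feature" == "yes"

# Portable POSIX form: single "=" for string equality.
test "$with_feature" = "yes"
```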

    One more note, though: This may however be the last release I make with Windows support. CRAN now also checks for forbidden symbols (such as assert or (s)printf) in static libraries, and this release tickled one such warning from the Windows side (which only uses static libraries). I have no desire to get involved in also maintaining QuantLib (no R here) for Windows, and may simply turn the package back to OS_type: unix to avoid the hassle. To avoid that, it would be fabulous if someone relying on RQuantLib on Windows could step up and lend a hand looking after that library build. Courtesy of my CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo. If you like this or other open-source work I do, you can now sponsor me at GitHub.

    This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

    Otto Kekäläinen: Debian Salsa CI in Google Summer of Code 2025

    Are you a student aspiring to participate in the Google Summer of Code 2025? Would you like to improve the continuous integration pipeline used at salsa.debian.org, the Debian GitLab instance, to help improve the quality of tens of thousands of software packages in Debian? This summer 2025, I and Emmanuel Arias will be participating as mentors in the GSoC program. We are available to mentor students who propose and develop improvements to the Salsa CI pipeline, as we are members of the Debian team that maintains it. A post by Santiago Ruano Rincón in the GitLab blog explains what Salsa CI is and its short history since inception in 2018. At the time of the article in fall 2023 there were 9000+ source packages in Debian using Salsa CI. Now in 2025 there are over 27,000 source packages in Debian using it, and since summer 2024 some Ubuntu developers have started using it for enhanced quality assurance of packaging changes before uploading new package revisions to Ubuntu. Personally, I have been using Salsa CI since its inception, and contributing as a team member since 2019. See my blog post about GitLab CI for MariaDB in Debian for a description of an advanced and extensive use case. Helping Salsa CI is a great way to make a global impact, as it will help avoid regressions and improve the quality of Debian packages. The benefits reach far beyond just Debian, as it will also help hundreds of Debian derivatives, such as Ubuntu, Linux Mint, Tails, Purism PureOS, Pop!_OS, Zorin OS, Raspberry Pi OS, a large portion of Docker containers, and even the Windows Subsystem for Linux.

    Improving Salsa CI: more features, robustness, speed While Salsa CI, with contributions from 71 people, is already quite mature and capable, there are many ideas floating around about how it could be further extended. For example, Salsa CI issue #147 describes various static analyzers and linters that may be generally useful. Issue #411 proposes using libfaketime to run autopkgtest on arbitrary future dates to test for failures caused by date assumptions, such as the Y2038 issue. There are also ideas about making Salsa CI more robust and code easier to reuse by refactoring some of the yaml scripts into independent scripts in #230, which could make it easier to run Salsa CI locally as suggested in #169. There are also ideas about improving Salsa CI's own CI to avoid regressions from pipeline changes in #318. The CI system is also better when it's faster, and some speed improvement ideas have been noted in #412. Improvements don't have to be limited to changes in the pipeline itself. A useful project would also be to update more Debian packages to use Salsa CI, and ensure they adopt it in an optimal way as noted in #416. It would also be nice to have a dashboard with statistics about all public Salsa CI pipeline runs as suggested in #413. These and more ideas can be found in the issue list by filtering for the tags Newcomer, Nice-To-Have or Accepting MRs. A Google Summer of Code proposal does not have to be limited to these existing ideas. Participants are also welcome to propose completely novel ideas!

    Good time to also learn Debian packaging Anyone working with a Debian team should also take the opportunity to learn Debian packaging, and contribute to the packaging or maintenance of 1-2 packages in parallel to improving the Salsa CI. All Salsa CI team members are also Debian Developers who can mentor and sponsor uploads to Debian. Maintaining a few packages is a great way to eat your own cooking and experience Salsa CI from the user perspective, and is likely to make you better at Salsa CI development.

    Apply now! The contributor applications opened yesterday on March 24, so to participate, act now! If you are an eligible student and want to attend, head over to summerofcode.withgoogle.com to learn more. There are over a thousand participating organizations, with Debian, GitLab and MariaDB being some examples. Within these organizations there may be multiple subteams and projects to choose from. The full list of participating Debian projects can be found in the Debian wiki. If you are interested in GSoC for Salsa CI specifically, feel free to
    1. Reach out to me and Emmanuel by email at otto@ and eamanu@ (debian.org).
    2. Sign up at salsa.debian.org for an account (note it takes a few days due to manual vetting and approval process)
    3. Read the project README, STRUCTURE and CONTRIBUTING to get a developer's overview
    4. Participate in issue discussions at https://salsa.debian.org/salsa-ci-team/pipeline/-/issues/
    Note that you don't have to wait for GSoC to officially start to contribute. In fact, it may be useful to start immediately by submitting a Merge Request to do some small contribution, just to learn the process and to get more familiar with how everything works, and the team maintaining Salsa CI. Looking forward to seeing new contributors!
