My Debian contributions this month were all
sponsored by
Freexian. I had a bit less time than usual, because Freexian collaborators
gathered in Marseille this month for our yearly sprint, doing some planning
for next year.
You can also support my work directly via
Liberapay or GitHub
Sponsors.
OpenSSH
I began preparing for the second stage of the GSS-API key exchange package
split (some
details have changed since that message). It seems that we'll need to wait
until Ubuntu 26.04 LTS has been released, but that's close enough that it's
worth making sure we're ready. This month I just did some packaging
cleanups that would otherwise have been annoying to copy, such as removing
support for direct upgrades from pre-bookworm. I'm considering some other
package rearrangements to make the split easier to manage, but haven't made
any decisions here yet.
This also led me to start on a long-overdue bug triage pass, mainly
consisting of applying usertags to lots of our open bugs to sort them by
which program they apply to, and also closing a few that have been fixed,
since some bugs will eventually need to be reassigned to GSS-API packages
and it would be helpful to make them easier to find. At the time of
writing, about 30% of the bug list remains to be categorized this way.
Python packaging
I upgraded these packages to new upstream versions:
Welcome to the report for November 2025 from the Reproducible Builds project!
These monthly reports outline what we've been up to over the past month, highlighting items of news from elsewhere in the increasingly-important area of software supply-chain security. As always, if you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.
In this report:
On Friday 8th November, Chris Lamb gave a talk called 10 years of Reproducible Builds at SeaGL in Seattle, WA.
Founded in 2013, SeaGL is a free, grassroots technical summit dedicated to spreading awareness and knowledge about free source software, hardware and culture. Chris's talk:
[…] introduces the concept of reproducible builds, its technical underpinnings and its potentially transformative impact on software security and transparency. It is aimed at developers, security professionals and policy-makers who are concerned with enhancing trust and accountability in our software. It also provides a history of the Reproducible Builds project, which is approximately ten years old. How are we getting on? What have we got left to do? Aren't all the builds reproducible now?
In Debian this month, Jochen Sprickerhof created a merge request to replace the use of reprotest in Debian's Salsa Continuous Integration (CI) pipeline with debrebuild. Jochen cites the advantages as being threefold: firstly, only one extra build is needed; secondly, it uses the same sbuild and ccache tooling as the normal build; and thirdly, it works for any Debian release. The merge request was merged by Emmanuel Arias and is now active.
kpcyrd posted to our mailing list announcing the initial release of repro-threshold, which implements an APT transport that defines a threshold of at least X of my N trusted rebuilders need to confirm they reproduced the binary before installing Debian packages. Configuration can be done through a config file, or through a curses-like user interface.
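The threshold policy can be sketched in a few lines. This is hypothetical illustration code, not repro-threshold's actual implementation (which is delivered as an APT transport); the function and data shapes here are invented:

```python
# Sketch of an "at least X of my N trusted rebuilders must confirm"
# acceptance policy, loosely modelled on the repro-threshold idea.

def accept(package_hash, attestations, trusted, threshold):
    """attestations maps rebuilder name -> hash that rebuilder reproduced."""
    confirming = {
        name for name, h in attestations.items()
        if name in trusted and h == package_hash
    }
    return len(confirming) >= threshold

trusted = {"rebuilder-a", "rebuilder-b", "rebuilder-c"}
attestations = {
    "rebuilder-a": "abc123",
    "rebuilder-b": "abc123",
    "rebuilder-c": "def456",  # disagrees: possibly an unreproducible build
    "mallory": "abc123",      # not in the trusted set, so ignored
}

print(accept("abc123", attestations, trusted, threshold=2))  # True
print(accept("abc123", attestations, trusted, threshold=3))  # False
```

The interesting property is that a single compromised archive or rebuilder cannot push a binary past the policy on its own: the hash must be independently reproduced by enough trusted parties.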
Holger then merged two commits by Jochen Sprickerhof in order to address a fakeroot-related reproducibility issue in the debian-installer, and Jörg Jaspert deployed a patch by Ivo De Decker for a bug originally filed by Holger in February 2025 related to some Debian packages not being archived on snapshot.debian.org.
Elsewhere, Roland Clobus performed some analysis on the live Debian trixie images, which he determined were not reproducible. However, in a follow-up post, Roland happily reports that the issues have been handled. In addition, 145 reviews of Debian packages were added, 12 were updated and 15 were removed this month, adding to our knowledge about identified issues.
Lastly, Jochen Sprickerhof filed a bug announcing their intention to binary-NMU a very large number of R programming language packages after a reproducibility-related toolchain bug was fixed.
Bernhard M. Wiedemann posted another openSUSE monthly update for their work there.
Julien Malka and Arnout Engelen launched the new hash collection
server for NixOS. Aside from improved reporting to help focus reproducible builds
efforts within NixOS, it collects build hashes as individually-signed attestations
from independent builders, laying the groundwork for further tooling.
Version 307 was uploaded to Debian unstable (as well as version 309). These changes included further attempts to automatically deploy to PyPI by liaising with the PyPI developers/maintainers (via this experimental feature).
In addition, reprotest versions 0.7.31 and 0.7.32 were uploaded to Debian unstable by Holger Levsen, who also made the following changes:
- Removed the debian/watch file, as Lintian now flags this as an error for native Debian packages.
- Bumped Standards-Version to 4.7.2, with no changes needed.
- Dropped the Rules-Requires-Root header as it is no longer required.
Once again, there were a number of improvements made to our website this month, including:
Updated the SOURCE_DATE_EPOCH page to fix the Lisp example syntax.
A CONFIG_MODULE_HASHES patchset was posted for the Linux kernel, which aims to enable reproducible kernel packages for Linux distributions.
Via our mailing list, Martin Monperrus let us know about their recently-published page on the Software Supply Chain Security of Web3. The abstract of their paper is as follows:
Web3 applications, built on blockchain technology, manage billions of dollars in digital assets through decentralized applications (dApps) and smart contracts. These systems rely on complex software supply chains that introduce significant security vulnerabilities. This paper examines the software supply chain security challenges unique to the Web3 ecosystem, where traditional Web2 software supply chain problems intersect with the immutable and high-stakes nature of blockchain technology. We analyze the threat landscape and propose mitigation strategies to strengthen the security posture of Web3 systems.
Their paper lists reproducible builds as one of the mitigating strategies. A PDF of the full text is available to download.
SARndbox (race), clamav (rust toolchain), contrast/identity/loupe/mousai (need glib-macros update), cosmic (cosmic* HashMap), dealers-choice (nocheck), falcon (python-falcon date), FreeDoko (date), gnutls (FTBFS-CPU), gods-deluxe (jar mtimes), Kinect (date), libplasma6 (qmlcachegen race), llvm (rocm-omp date), rnp (FTBFS-2041), rocsolver (FTBFS-j1), switcheroo (FTBFS-j1), vdrift (date), ibus (parallelism), qmlcachegen (with Ulf Hermann).
python-gffutils, python-biom-format, python-requests-cache, python-tld, smart-open, vanguards, pycifrw, golang-github-apptainer-container-library-client, python-ofxhome, python-lupa, mu-editor, python-spdx-tools, python-django-waffle, biosquid, dateparser, parsinsert, rdf2rml, python-et-xmlfile, deblur, ytcc, pgpainless, trillian, pywavelets, jsonpath-ng, presto, python-pyutil, python-os-apply-config, pydata-sphinx-theme, python-ciso8601, python-pymummer, qcat, tkgate, ruby-gnuplot, python-nixio, python-altair, python-graphene, python-phabricator, python-slimmer, python-kafka, python-sshsig, python-babelgladeextractor, python-genson, flawfinder, crasm, insilicoseq, pychopper, pycparser, whipper, vt, pyxnat, golang-github-kshedden-statmodel, nim-hts, golang-github-emicklei-dot, golang-gonum-v1-plot, beangulp, virulencefinder, ansible-lint, entropybroker, namecheap, spopt, pyasn, python-pyvcf, python-pysaml2.
#reproducible-builds on irc.oftc.net.
rb-general@lists.reproducible-builds.org
Another short status update of what happened on my side last month. Hand-holding the
release machinery for Phosh 0.51.0, but
there's more:
See below for details on the above and more:
phosh
- DebugControl interface (MR)
- org.freedesktop.FileManager1 in the demo (MR, MR, MR)
- make run invocation (MR)
- None for parent in adw_dialog_choose (MR)

| Series: | The Echo Archives #2 |
| Publisher: | Orbit |
| Copyright: | August 2025 |
| ISBN: | 0-316-30404-2 |
| Format: | Kindle |
| Pages: | 355 |
I am a huge fan of Git, as I have witnessed how it has made software development so much more productive compared to the pre-2010s era. I wish all Debian source code were in Git to reap the full benefits.
Git is not perfect, as it requires significant effort to learn properly, and the ecosystem is complex with even more things to learn ranging from cryptographic signatures and commit hooks to Git-assisted code review best practices, forge websites and CI systems.
Sure, there is still room to optimize its use, but Git certainly has proven itself and is now the industry standard. Thus, some readers might be surprised to learn that Debian development in 2025 is not actually based on Git. In Debian, the version control is done by the Debian archive itself. Each commit is a new upload to the archive, and the commit message is the debian/changelog entry. The commit log is available at snapshots.debian.org.
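The archive-as-version-control model can be made concrete with a small sketch that treats a debian/changelog entry as a commit record. This is a deliberately simplified parser (real tooling uses python-debian's `debian.changelog` module); the first line follows the standard `package (version) distribution; urgency=...` shape:

```python
import re

# Simplified parser treating one debian/changelog entry as a "commit":
# the version acts as the revision id, the bullet items as the commit
# message, and the trailer line carries author and date.

ENTRY_RE = re.compile(
    r"^(?P<source>\S+) \((?P<version>[^)]+)\) (?P<dist>\S+); urgency=(?P<urgency>\S+)"
)

def parse_entry(text):
    lines = text.strip().splitlines()
    m = ENTRY_RE.match(lines[0])
    if not m:
        raise ValueError("not a changelog entry")
    changes = [l.strip() for l in lines[1:] if l.strip().startswith("*")]
    return {
        "version": m["version"],
        "distribution": m["dist"],
        "changes": changes,
    }

entry = """\
hello (2.10-4) unstable; urgency=medium

  * Fix crash on empty input.
  * Update Standards-Version.

 -- Jane Doe <jane@example.org>  Mon, 01 Dec 2025 10:00:00 +0000
"""

print(parse_entry(entry)["version"])  # 2.10-4
```

In the archive model, uploading this entry to unstable is the "push": the version string is the only revision identifier the archive itself validates.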
In practice, most Debian Developers (people who have the credentials to upload to the Debian archive) do use Git and host their packaging source code on salsa.debian.org, the GitLab instance of Debian. This is, however, based on each DD's personal preferences. The Debian project does not have any policy requiring that packages be hosted on salsa.debian.org or be in version control at all.
(the Debian unstable area is equivalent to the main development branch). Each source package can declare a Vcs-Git field that advertises which version control repository the maintainer uses. However, newcomers to Debian are surprised to notice that not all packages are hosted on salsa.debian.org but at various random places with their own account and code submission systems, and there is nothing enforcing, or even warning, if the code there is out of sync with what was uploaded to Debian. Any Debian Developer can at any time upload a new package with whatever changes, bypassing the Git repository, even when the package advertises a Git repository. All PGP-signed commits, Git tags and other information in the Git repository are currently just extras, as the Debian archive does not enforce or validate anything about them.
This also makes contributing to multiple packages in parallel hard. One can't just go on salsa.debian.org, fork a bunch of repositories and submit Merge Requests. Currently, the only reliable way is to download source packages from Debian unstable, develop patches on top of them, and send the final version as a plain patch file by email to the Debian bug tracker. To my knowledge, no system exists to facilitate working with the patches in the bug tracker, such as rebasing patches 6 months later to detect if they or equivalent changes were applied or if sending refreshed versions is needed.
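A minimal sketch of the missing tooling described above, assuming a hypothetical helper that checks whether a patch's added lines already appear in a refreshed source tree (real rebasing would rely on something like `git apply --3way` or quilt with fuzz handling):

```python
# Sketch: detect whether a patch's added lines already appear in the
# current source, a rough "was this or an equivalent change applied?"
# check of the kind the post says is missing. Hypothetical helper, not
# an existing tool.

def looks_applied(patch_added_lines, current_source):
    """Return True if every line the patch adds is already present."""
    current = {line.strip() for line in current_source.splitlines()}
    return all(line.strip() in current for line in patch_added_lines)

# Lines a 6-month-old patch would add:
patch_added = ["timeout = 30", "retries = 5"]

newer_source = """\
# refreshed upstream file
timeout = 30
retries = 5
"""

print(looks_applied(patch_added, newer_source))  # True
print(looks_applied(["timeout = 60"], newer_source))  # False
```

A real tool would of course also need to handle context drift and removed lines; this only illustrates the "detect if already applied" half of the problem.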
To newcomers in Debian, it is even more surprising that there are packages that are on salsa.debian.org but have the Merge Requests feature disabled. This is often because the maintainer does not want to receive notification emails about new Merge Requests, but rather just emails from bugs.debian.org. This may sound arrogant, but keep in mind that these developers put in the effort to set up their Mutt/Emacs workflow for the existing Debian process, and extending it to work with GitLab notifications is not trivial. There are also purists who want to do everything via the command-line (without having to open a browser, run JavaScript and maintain a live Internet connection), and tools like glab are not convenient enough for the full workflow.
The recommended first step in contributing to a package is to use the built-in Fork feature on Salsa. This serves two purposes. Primarily, it allows any contributor to publish their Git branches and submit them as Merge Requests. Additionally, the mere existence of a list of forks enables contributors to discover each other, and in rare cases when the original package is not accepting improvements, collaboration could arise among the contributors and potentially lead to permanent forks in the general meaning. Forking is a fundamental part of the dynamics of open source that helps drive quality and agreement. The ability to fork ultimately serves as the last line of defense of users' rights. Git supports this by making both temporary and permanent forks easy to create and maintain.

Further, it states:
Debian packaging work should be reasonably transparent and public to allow contributors to participate. A maintainer should push their pending changes to Salsa at regular intervals, so that a potential contributor can discover if a particular change has already been made or a bug has been fixed in version control, and thus avoid duplicate work. Debian maintainers should make reasonable efforts to publish planned changes as Merge Requests on Salsa, and solicit feedback and reviews. While pushing changes directly to the main Git branch is the fastest workflow, second only to uploading all changes directly to Debian repositories, it is not an inclusive way to develop software. Even packages that are maintained by a single maintainer should at least occasionally publish Merge Requests to allow new contributors to step up and participate.

I think these are key aspects leading to transparency and true open source collaboration. Even though this talks about Salsa, which is based on GitLab, the concepts are universal and will work on other forges too, like Forgejo or GitHub. The point is that sharing work-in-progress on a real-time platform, with CI and other supporting features, empowers and motivates people to iterate on code collaboratively. As an example of an anti-pattern, Oracle MySQL publishes the source code for all their releases and is license-compliant, but as they don't publish their Git commits in real-time, it does not feel like a real open source project. Non-Oracle employees are not motivated to participate as second-class developers who are kept in the dark. Debian should embrace Git and sharing work in real-time, embodying a true open source spirit.
git-buildpackage support so it can integrate into salsa.debian.org without turning off Debian packaging security features. The git-buildpackage tool itself also needs various improvements, such as making contributing to multiple different packages with various levels of diligence in debian/gbp.conf maintenance less error-prone.
Eventually, if it starts looking like all Debian packages might get hosted on salsa.debian.org, I would also start building a review.debian.org website to facilitate code review aspects that are unique to Debian, such as tracking Merge Requests across GitLab projects in ways GitLab can t do, highlighting which submissions need review most urgently, feeding code reviews and approvals into the contributors.debian.org database for better attribution and so forth.
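One piece of such a review dashboard could be a simple urgency ranking across projects. This is purely illustrative; the scoring function and data shape are invented for this sketch, not an existing review.debian.org or GitLab API:

```python
# Hypothetical sketch of one review.debian.org idea from the post:
# rank open merge requests by review urgency, across many GitLab
# projects. The scoring is invented for illustration: older MRs are
# more urgent, and small diffs are quick wins for a reviewer.

def urgency(mr):
    # Age dominates; a large diff slightly lowers the score.
    return mr["age_days"] * 2 - mr["diff_lines"] / 100

open_mrs = [
    {"id": 1, "age_days": 90, "diff_lines": 2000},
    {"id": 2, "age_days": 5, "diff_lines": 10},
    {"id": 3, "age_days": 30, "diff_lines": 40},
]

for mr in sorted(open_mrs, key=urgency, reverse=True):
    print(mr["id"])
# prints 1, then 3, then 2
```

The real service would pull this data from the forge API and could also feed reviewer activity into contributors.debian.org, as the post suggests.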
Details on this vision will be in a later blog post, so subscribe to updates!
Armadillo is a powerful
and expressive C++ template library for linear algebra and scientific
computing. It aims towards a good balance between speed and ease of use,
has a syntax deliberately close to Matlab, and is useful for algorithm
development directly in C++, or quick conversion of research code into
production environments. RcppArmadillo
integrates this library with the R environment and language and is
widely used by (currently) 1286 other packages on CRAN, downloaded 42.6 million
times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint
/ vignette) by Conrad and myself has been cited 659 times according
to Google Scholar.
This version updates to the 15.2.2 upstream Armadillo release made two days
ago. It brings a few changes over the RcppArmadillo 15.2.0 release made
only to GitHub (and described in
this post), and of course even more changes relative to the last CRAN release described in
this earlier post. As described previously, and due to both the
upstream transition to C++14 coupled with the CRAN move away from C++11, the
package offers a transition by allowing packages to remain with the
older, pre-15.0.0 legacy Armadillo yet offering the
current version as the default. During the transition we did not make
any releases to CRAN, allowing both the upload cadence to settle back to
the desired cadence of about six in six months that the CRAN Policy asks for, and
for packages to adjust to any potential changes. Most affected packages
have done so (as can be seen in the GitHub issues #489 and
#491)
which is good to see. We appreciate all the work done by the respective
package maintainers. A number of packages are still under a (now
formally expired) deadline at CRAN and may get removed. Our offer to
help where we can still stands, so please get in touch if we can be of
assistance. As a reminder, the meta-issue #475
regroups all the resources for the transition.
With respect to changes in the package, we once more overhauled the
OpenMP detection and setup, following the approach taken by package
data.table but sticking with an autoconf-based
configure. The detailed changes since the last CRAN release
follow.
Courtesy of my CRANberries, there is a diffstat report relative to the previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc. should go to the rcpp-devel mailing list off the Rcpp R-Forge page.
Changes in RcppArmadillo version 15.2.2-1 (2025-11-21)
- Upgraded to Armadillo release 15.2.2 (Medium Roast Deluxe)
- Improved reproducibility of random number generation when using OpenMP
- Skip a unit test file under macOS as complex algebra seems to fail under newer macOS LAPACK setting
- Further OpenMP detection rework for macOS (Dirk in #497, #499)
- Define ARMA_CRIPPLED_LAPACK on Windows only if 'LEGACY' Armadillo selected
Changes in RcppArmadillo version 15.2.1-0 (2025-10-28) (GitHub Only)
- Upgraded to Armadillo release 15.2.1 (Medium Roast Deluxe)
- Faster handling of submatrices with one row
- Improve OpenMP detection (Dirk in #495 fixing #493)
Changes in RcppArmadillo version 15.2.0-0 (2025-10-20) (GitHub Only)
- Upgraded to Armadillo release 15.2.0 (Medium Roast Deluxe)
- Added rande() for generating matrices with elements from exponential distributions
- shift() has been deprecated in favour of circshift(), for consistency with Matlab/Octave
- Reworked detection of aliasing, leading to more efficient compiled code
- OpenMP detection in configure has been simplified
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.
$ faketime '2008-12-24 08:15:42' qemu-x86_64 ./test_static_clock_gettime
2008-12-24 08:15:42.725404654
$ file test_static_clock_gettime
test_static_clock_gettime: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), statically linked, ...
With this in place, Firebuild can finally wrap even those secretive statically linked tools. QEMU runs them, libc catches their syscalls, LD_PRELOAD injects libfirebuild.so, and from there the usual interposition magic happens. The result: previously uncachable build steps can now be traced, cached, and shortcut just like their dynamic friends.
There is one more problem, though: why would static binaries deep in the build be run by QEMU in the first place? Firebuild also intercepts the exec() calls, and now it rewrites them on the fly whenever the executed binary is statically linked!
$ firebuild -d comm bash -c ./test_static
...
FIREBUILD: fd 9.1: ( ExecedProcess 161077.1, running, "bash -c ./test_static", fds=[0: FileFD ofd= FileO
FD #0 type=FD_PIPE_IN r cloexec=false , 1: FileFD ofd= FileOFD #3 type=FD_PIPE_OUT w Pipe #0 close_o
n_popen=false cloexec=false , 2: FileFD ofd= FileOFD #4 type=FD_PIPE_OUT w Pipe #1 close_on_popen=fal
se cloexec=false , 3: FileFD NULL /* times 2 */] )
"[FBBCOMM_TAG]": "exec",
"file": "test_static",
"// fd": null,
"// dirfd": null,
"arg": [
"./test_static"
],
"env": [
"SHELL=/bin/bash",
...
"FB_SOCKET=/tmp/firebuild.cpMn75/socket",
"_=./test_static"
],
"with_p": false,
"// path": null,
"utime_u": 0,
"stime_u": 1017
FIREBUILD: -> proc_ic_msg() (message_processor.cc:782) proc= ExecedProcess 161077.1, running, "bash -c
./test_static", fds=[0: FileFD ofd= FileOFD #0 type=FD_PIPE_IN r cloexec=false , 1: FileFD ofd= FileOF
D #3 type=FD_PIPE_OUT w Pipe #0 close_on_popen=false cloexec=false , 2: FileFD ofd= FileOFD #4 type=F
D_PIPE_OUT w Pipe #1 close_on_popen=false cloexec=false , 3: FileFD NULL /* times 2 */] , fd_conn=9.
1, tag=exec, ack_num=0
FIREBUILD: -> send_fbb() (utils.cc:292) conn=9.1, ack_num=0 fd_count=0
Sending message with ancillary fds []:
"[FBBCOMM_TAG]": "rewritten_args",
"arg": [
"/usr/bin/qemu-user-interposable",
"-libc-syscalls",
"./test_static"
],
"path": "/usr/bin/qemu-user-interposable"
...
FIREBUILD: -> accept_ic_conn() (firebuild.cc:139) listener=6
...
FIREBUILD: fd 9.2: ( Process NULL )
"[FBBCOMM_TAG]": "scproc_query",
"pid": 161077,
"ppid": 161073,
"cwd": "/home/rbalint/projects/firebuild/test",
"arg": [
"/usr/bin/qemu-user-interposable",
"-libc-syscalls",
"./test_static"
],
"env_var": [
"CCACHE_DISABLE=1",
...
"SHELL=/bin/bash",
"SHLVL=0",
"_=./test_static"
],
"umask": "0002",
"jobserver_fds": [],
"// jobserver_fifo": null,
"executable": "/usr/bin/qemu-user-interposable",
"// executed_path": null,
"// original_executed_path": null,
"libs": [
"/lib/x86_64-linux-gnu/libatomic.so.1",
"/lib/x86_64-linux-gnu/libc.so.6",
"/lib/x86_64-linux-gnu/libglib-2.0.so.0",
"/lib/x86_64-linux-gnu/libm.so.6",
"/lib/x86_64-linux-gnu/libpcre2-8.so.0",
"/lib64/ld-linux-x86-64.so.2"
],
"version": "0.8.5.1"
The QEMU patch is forwarded to qemu-devel. If it lands, anyone using QEMU user-mode emulation could benefit, not just Firebuild.
For Firebuild users, though, the impact is immediate. Toolchains that mix dynamic and static helpers? Cross-builds that pull in odd little statically linked utilities? Previously invisible steps in your builds? All now fair game for caching.
Firebuild 0.8.5 ships this new capability out of the box. Just update, make sure you're using a patched QEMU, and enjoy the feeling of watching even static binaries fall neatly into place in your cached build graph. Ubuntu users can already get the prebuilt patched QEMU packages from the Firebuild PPA.
Static binaries, welcome to the party!
Welcome to post 55 in the R4 series.
r2u brings CRAN packages for R to Ubuntu. We mentioned it in the
R4 series within the
last year in posts #54
about faster CI, #48
about the r2u keynote
at U Mons, #47
reviewing r2u at its
third birthday, #46
about adding arm64 support, and #44
about the r2u for mlops
talk.
Today brings news of an important (internal) update. Following both
the arm64 builds as well as the last bi-annual BioConductor package
update (and the extension of BioConductor coverage to arm64), more and
more of our build setup became automated at GitHub. This has now been
unified. We dispatch builds for amd64 packages for jammy (22.04) and
noble (24.04), as well as the arm64 binaries for noble, from the
central build repository and enjoy the highly parallel build of the up
to forty available GitHub Runners. In the process we also switched
fully to source builds.
In the past, we had relied on p3m.dev (formerly known as ppm and
rspm) using its binaries. These so-called naked binaries are what R
produces when called as R CMD INSTALL --build. They are
portable within the same build architecture and release, but do not carry
packaging information. Now, when a Debian or Ubuntu .deb binary is
built, the same step of R CMD INSTALL --build happens. So
our earlier insight was to skip the compilation step, use the p3m
binary, and then wrap the remainder of a complete package around it,
which includes the all-important dependency information for both the R
package relations (from hard Depends / Imports / LinkingTo or soft
Suggests declarations) as well as the shared library dependency
resolution we can do when building for a Linux
distribution.
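The metadata-wrapping step can be illustrated with a sketch of the R-to-Debian dependency mapping. This is simplified and hypothetical: it assumes the r-cran-&lt;name&gt; binary package naming convention, and the list of base-R packages here is deliberately abbreviated:

```python
# Sketch: turn hard R dependency declarations (Depends/Imports/LinkingTo
# from an R package's DESCRIPTION) into Debian package relations, the
# way an r2u-style wrapper would. Base-R packages ship with r-base-core
# and are skipped; the BASE_PKGS set below is intentionally incomplete.

BASE_PKGS = {"R", "stats", "utils", "methods", "graphics", "grDevices", "tools"}

def deb_depends(description_fields):
    """Map hard R dependencies to r-cran-* Debian package names."""
    hard = []
    for field in ("Depends", "Imports", "LinkingTo"):
        for item in description_fields.get(field, "").split(","):
            # Strip any version constraint like "(>= 1.0.0)".
            name = item.strip().split(" ")[0].split("(")[0].strip()
            if name and name not in BASE_PKGS:
                hard.append("r-cran-" + name.lower())
    return sorted(set(hard))

fields = {
    "Depends": "R (>= 3.5.0), methods",
    "Imports": "Rcpp (>= 1.0.0)",
    "LinkingTo": "Rcpp, RcppArmadillo",
}
print(deb_depends(fields))  # ['r-cran-rcpp', 'r-cran-rcpparmadillo']
```

The real packaging step additionally resolves shared-library dependencies (via dpkg-shlibdeps) when the package contains compiled code, which is exactly the extra value a distribution build adds over a naked binary.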
That served us well, and we remain really grateful for the p3m.dev
build service. But it also meant we were depending on the clock and
cadence of p3m.dev. That was not really a problem when it ran
reliably every day, and early too, weekends included, and showed a timestamp
of its last update. By now it is a bit more erratic: frequently late, it skips
weekends more regularly, and it long ago stopped showing when it was last
updated. A late-afternoon release reflecting the CRAN updates that ended one
and a half days earlier is still good, it's just not all that
current. Plus there was always the very opaque occurrence where maybe
one in 50 packages or so would not even be provided as a binary, so we
had to build it anyway; the fallback always existed, and was used for
both BioConductor (no binaries) and arm64 (no binaries at first, though this
has now changed). So now we just build packages the standard way, albeit as
GitHub Actions.
In doing so we can ignore p3m.dev, and rather follow the CRAN clock and cadence (as for
example CRANberries does),
and can update several times a day. For example, early this morning
(Central time) we ran an update for the then-new 28 source packages,
resulting in 28 jammy and 36 noble binary packages; right now in
mid-afternoon we are running another build for 37 source packages,
resulting in 37 jammy and 47 noble packages. (Packages without a
src/ directory and hence no compilation can be used across
amd64 and arm64; those that do have src/ are rebuilt for
arm64 hence the different sets of jammy and noble packages as only the
latter has arm64 now.) This gets us packages from this morning into r2u, which p3m.dev should
have by tomorrow afternoon or so.
And with that r2u
remains Fast. Easy. Reliable. Pick all three! and also a little more
predictable and current in its delivery. What s not to like?
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.
About 95% of my Debian contributions this month were
sponsored by Freexian.
You can also support my work directly via
Liberapay or GitHub
Sponsors.
OpenSSH
OpenSSH upstream released
10.1p1 this month, so I
upgraded to that. In the process, I reverted a Debian patch that changed IP
quality-of-service defaults, which made sense at the
time but has since been reworked upstream
anyway, so it makes sense to find out whether we still have similar
problems. So far I haven't heard anything bad in this area.
10.1p1 caused a regression in the ssh-agent-filter package s tests, which I
bisected and chased up with
upstream.
10.1p1 also had a few other user-visible regressions
(#1117574,
#1117594,
#1117638,
#1117720); I upgraded to
10.2p1 which fixed some of
these, and contributed some upstream debugging
help to clear up the
rest. While I was there, I also fixed "ssh-session-cleanup: fails due to
wrong $ssh_session_pattern" in our packaging.
Finally, I got all this into trixie-backports, which I intend to keep up to
date throughout the forky development cycle.
Python packaging
For some time, ansible-core has had occasional autopkgtest failures that
usually go away before anyone has a chance to look into them properly. I
ran into these via openssh recently and decided to track them down. It
turns out that they only happened when the libpython3.13-stdlib package
had different versions in testing and unstable, because an integration test
setup script made a change that would be reverted if that package was ever
upgraded in the testbed, and one of the integration tests accidentally
failed to disable system apt sources comprehensively enough while testing
the behaviour of the ansible.builtin.apt module. I fixed this in
Debian
and contributed the relevant part
upstream.
We've started working on enabling Python 3.14 as a supported version in
Debian. I fixed or helped to fix a number of packages for this:
nocheck build
profile, and I fixed several of
these (generally just a matter of adjusting build-dependencies):
/usr/bin/env: 'python': No such file or directory

kernel: amdgpu 0000:02:00.0: [drm] *ERROR* [CRTC:58:crtc-0] flip_done timed out

Then I got the following errors from kwin_wayland:
kwin_wayland_wrapper[19598]: kwin_wayland_drm: Pageflip timed out! This is a bug in the amdgpu kernel driver
kwin_wayland_wrapper[19598]: kwin_wayland_drm: Please report this at https://gitlab.freedesktop.org/drm/amd/-/issues
kwin_wayland_wrapper[19598]: kwin_wayland_drm: With the output of 'sudo dmesg' and 'journalctl --user-unit plasma-kwin_wayland --boot 0'

In another instance running Debian kernel 6.12.48+deb13 I got the kernel errors at the bottom of the post (not in the RSS feed). A Google result suggested putting the following on the kernel command line, which has the downside of increasing idle power use; but given that it's a low-power GPU (which I selected when I was using a system without a PCIe power cable) a bit of extra power use shouldn't matter much. It didn't seem to change anything, though.
amdgpu.runpm=0 amdgpu.dcdebugmask=0x10

I had tried out the Debian/Unstable kernel 6.16.12-2, which didn't work with my USB speakers and had problems with the HDMI sound through my monitor, but still had AMD GPU issues. This all seemed to start with the PCIe errors being reported on this system [1]. So I'm now wondering if the PCIe errors were from the GPU, not the socket/motherboard. The GPU in question is a Radeon RX560 4G which cost $246.75 back in about 2021 [2]. I could buy a new one of those on eBay for $149, or one of the faster AMD cards like the Radeon RX570 that are around the same price. I probably have a Radeon R7 260X in my collection of spare parts that would do the job too (2G of VRAM is more than sufficient for my desktop computing needs). Any suggestions on how I should proceed from here?
[419976.222647] amdgpu 0000:02:00.0: amdgpu: GPU fault detected: 146 0x0138482c
[419976.222659] amdgpu 0000:02:00.0: amdgpu: for process mpv pid 141328 thread vo pid 141346
[419976.222662] amdgpu 0000:02:00.0: amdgpu: VM_CONTEXT1_PROTECTION_FAULT_ADDR 0x00101427
[419976.222664] amdgpu 0000:02:00.0: amdgpu: VM_CONTEXT1_PROTECTION_FAULT_STATUS 0x0404802C
[419976.222666] amdgpu 0000:02:00.0: amdgpu: VM fault (0x2c, vmid 2, pasid 32810) at page 1053735, read from 'TC0' (0x54433000) (72)
[419986.245051] amdgpu 0000:02:00.0: amdgpu: Dumping IP State
[419986.245061] amdgpu 0000:02:00.0: amdgpu: Dumping IP State Completed
[419986.255152] amdgpu 0000:02:00.0: amdgpu: ring gfx timeout, signaled seq=11839646, emitted seq=11839648
[419986.255158] amdgpu 0000:02:00.0: amdgpu: Process information: process mpv pid 141328 thread vo pid 141346
[419986.255209] amdgpu 0000:02:00.0: amdgpu: GPU reset begin!
[419986.503030] amdgpu: cp is busy, skip halt cp
[419986.658198] amdgpu: rlc is busy, skip halt rlc
[419986.659270] amdgpu 0000:02:00.0: amdgpu: BACO reset
[419986.884672] amdgpu 0000:02:00.0: amdgpu: GPU reset succeeded, trying to resume
[419986.885398] [drm] PCIE GART of 256M enabled (table at 0x000000F402000000).
[419986.885413] [drm] VRAM is lost due to GPU reset!
[419987.021051] [drm] UVD and UVD ENC initialized successfully.
[419987.120999] [drm] VCE initialized successfully.
[419987.193302] amdgpu 0000:02:00.0: amdgpu: GPU reset(1) succeeded!
[419987.194117] [drm:amdgpu_cs_ioctl [amdgpu]] *ERROR* Failed to initialize parser -125!
[419997.509120] amdgpu 0000:02:00.0: amdgpu: Dumping IP State
[419997.509131] amdgpu 0000:02:00.0: amdgpu: Dumping IP State Completed
[419997.519145] amdgpu 0000:02:00.0: amdgpu: ring gfx timeout, signaled seq=11839650, emitted seq=11839652
[419997.519152] amdgpu 0000:02:00.0: amdgpu: Process information: process kwin_wayland pid 3577 thread kwin_wayla:cs0 pid 3615
[419997.519158] amdgpu 0000:02:00.0: amdgpu: GPU reset begin!
[419997.772966] amdgpu: cp is busy, skip halt cp
[419997.928138] amdgpu: rlc is busy, skip halt rlc
[419997.929165] amdgpu 0000:02:00.0: amdgpu: BACO reset
[419998.164705] amdgpu 0000:02:00.0: amdgpu: GPU reset succeeded, trying to resume
[419998.165412] [drm] PCIE GART of 256M enabled (table at 0x000000F402000000).
[419998.165427] [drm] VRAM is lost due to GPU reset!
[419998.311054] [drm] UVD and UVD ENC initialized successfully.
[419998.411006] [drm] VCE initialized successfully.
[419998.476272] amdgpu 0000:02:00.0: amdgpu: GPU reset(2) succeeded!
[419998.476363] [drm:amdgpu_cs_ioctl [amdgpu]] *ERROR* Failed to initialize parser -125!
[420008.773202] amdgpu 0000:02:00.0: amdgpu: Dumping IP State
[420008.773212] amdgpu 0000:02:00.0: amdgpu: Dumping IP State Completed
[420008.773240] amdgpu 0000:02:00.0: amdgpu: ring gfx timeout, but soft recovered
=== the above sequence of 3 repeated many times (narrator's voice: "but it did not recover") ===
[420130.933612] rfkill: input handler disabled
[420135.594195] rfkill: input handler enabled
[420145.734076] amdgpu 0000:02:00.0: amdgpu: Dumping IP State
[420145.734085] amdgpu 0000:02:00.0: amdgpu: Dumping IP State Completed
[420145.744099] amdgpu 0000:02:00.0: amdgpu: ring gfx timeout, signaled seq=11839790, emitted seq=11839792
[420145.744105] amdgpu 0000:02:00.0: amdgpu: Process information: process kwin_wayland pid 3577 thread kwin_wayla:cs0 pid 3615
[420145.744111] amdgpu 0000:02:00.0: amdgpu: GPU reset begin!

There were more kernel messages, but they were just repeats, and after a certain stage there probably isn't any more data worth getting.
Welcome to the October 2025 report from the Reproducible Builds project!
Welcome to the very latest report from the Reproducible Builds project. Our monthly reports outline what we've been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. As ever, if you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.
In this report:
We were thrilled to host the eighth edition of this exciting event, following the success of previous summits in various iconic locations around the world, including Venice, Marrakesh, Paris, Berlin, Hamburg and Athens. During this event, participants had the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. Our aim was to create an inclusive space that fosters collaboration, innovation and problem-solving.
The agenda of the three main days is available online; however, some working sessions may still lack notes at the time of publication.
One tangible outcome of the summit is that Johannes Starosta finished their rebuilderd tutorial, which is now available online; Johannes is actively seeking feedback.
On the issue tracker for the popular Signal messenger app, developer Greyson Parrelli reports that updates to the Google Play store have, in effect, broken reproducible builds:
The most recent issues have to do with changes to the APKs that are made by the Play Store. Specifically, they add some attributes to some .xml files around languages and resources, which is not unexpected because of how the whole bundle system works. This is trickier to resolve because, unlike current expected differences (like signing information), we can't just exclude a whole file from the comparison. We have to take a more nuanced look at the diff. I've been hesitant to do that because it'll complicate our currently-very-readable comparison script, but I don't think there's any other reasonable option here.
The full thread with additional context is available on GitHub.
malloc are used to determine some order of execution.
.buildinfo files).
xfsprogs 6.17.0, which specifically includes a commit implementing the functionality to populate a newly created XFS filesystem directly from an existing directory structure, which makes it easier to create populated filesystems "without having to mount them [and thus is] particularly useful for reproducible builds". Luca asked the list how they might contribute to the docs of the System images page.
Popular YouTuber @laurewired published a video this month with an engaging take on the Trusting Trust problem. Titled The Original Sin of Computing that no one can fix, the video touches on David A. Wheeler's Diverse Double-Compiling dissertation.
GNU developer Janneke Nieuwenhuizen followed up with an email (additionally sent to our mailing list) as well, underscoring that GNU Mes's current solution [to this issue] "uses ancient softwares in its bootstrap path, such as gcc-2.95.3 and glibc-2.2.5". (According to Colby Russell, the GNU Mes bootstrapping sequence is shown at 18m54s in the video.)
Holger Levsen gave a talk at this year's Transparency.dev summit in Gothenburg, Sweden, outlining the achievements of the Reproducible Builds project in the last 12 years, covering both upstream developments as well as some distribution-specific details. As mentioned on the talk's page, Holger's presentation concluded with "an outlook into the future and an invitation to collaborate to bring transparency logs into Reproducible Builds projects".
The slides of the talk are available, although a video has yet to be released. Nevertheless, as a result of the discussions at Transparency.dev there is a new page on the Debian wiki with the aim of describing a potential transparency log setup for Debian.
Andrew Ayer has set up a new service at sourcespotter.com that aims to monitor supply-chain security for Go releases. It consists of four separate trackers:
go command.
Julien Malka of the Institut Polytechnique de Paris published an exciting paper this month on How NixOS could have detected the XZ supply-chain attack for the benefit of all thanks to reproducible-builds. Julien outlines his paper as follows:
In March 2024, a sophisticated backdoor was discovered in xz, a core compression library in Linux distributions, covertly inserted over three years by a malicious maintainer, Jia Tan. The attack, which enabled remote code execution via ssh, was only uncovered by chance when Andres Freund investigated a minor performance issue. This incident highlights the vulnerability of the open-source supply chain and the effort attackers are willing to invest in gaining trust and access. In this article, I analyze the backdoor's mechanics and explore how bitwise build reproducibility could have helped detect it.

A PDF of the paper is available online.
Iyán Méndez Veiga and Esther Hänggi (of the Lucerne University of Applied Sciences and Arts and ETH Zurich) published a paper this month on the topic of Reproducible Builds for Quantum Computing. The abstract of their paper mentions the following:
Although quantum computing is a rapidly evolving field of research, it can already benefit from adopting reproducible builds. This paper aims to bridge the gap between the quantum computing and reproducible builds communities. We propose a generalization of the definition of reproducible builds in the quantum setting, motivated by two threat models: one targeting the confidentiality of end users' data during circuit preparation and submission to a quantum computer, and another compromising the integrity of quantum computation results. This work presents three examples that show how classical information can be hidden in transpiled quantum circuits, and two cases illustrating how even minimal modifications to these circuits can lead to incorrect quantum computation results.

A full PDF of their paper is available.
Congratulations to Georg Kofler, who submitted their Master's thesis at the Johannes Kepler University of Linz, Austria on the topic of Reproducible builds of E2EE-messengers for Android using Nix hermetic builds:
The thesis focuses on providing a reproducible build process for two open-source E2EE messaging applications: Signal and Wire. The motivation to ensure reproducibility and thereby the integrity of E2EE messaging applications stems from their central role as essential tools for modern digital privacy. These applications provide confidentiality for private and sensitive communications, and their compromise could undermine encryption mechanisms, potentially leaking sensitive data to third parties.

A full PDF of their thesis is available online.
Shawkot Hossain of Aalto University, Finland has also submitted their Master's thesis on The Role of SBOM in Modern Development, with a focus on the extant tooling:
Currently, there are numerous solutions and techniques available in the market to tackle supply chain security, and all claim to be the best solution. This thesis delves deeper by implementing those solutions and evaluates them for better understanding. Some of the tools that this thesis implemented are Syft, Trivy, Grype, FOSSA, dependency-check, and Gemnasium. Software dependencies are generated in a Software Bill of Materials (SBOM) format by using these open-source tools, and the corresponding results have been analyzed. Among these tools, Syft and Trivy outperform others as they provide relevant and accurate information on software dependencies.

A PDF of the thesis is also available.
Michael Plura published an interesting article on Heise.de on the topic of Trust is good, reproducibility is better:
In the wake of growing supply chain attacks, the FreeBSD developers are relying on a transparent build concept in the form of Zero-Trust Builds. The approach builds on the established Reproducible Builds, where binary files can be rebuilt bit-for-bit from the published source code. While reproducible builds primarily ensure verifiability, the zero-trust model goes a step further and removes trust from the build process itself. No single server, maintainer, or compiler can be considered more than potentially trustworthy.

The article mentions that this goal "has now been achieved with a slight delay and can be used in the current development branch for FreeBSD 15".
In Debian this month, 7 reviews of Debian packages were added, 5 were updated and 11 were removed, adding to our knowledge about identified issues.
For the Debian CI tests, Holger fixed #786644 and set nocheck in DEB_BUILD_OPTIONS for the second build.
python-can
rsbackup
mobilitydb
pyraf
ne
qt6-lottie, plasma6-print-manager, plasma6-nm (avoid race in qmlcachegen)
xfishtank (date, regression)
gstreamer-plugins-rs
gpg2 (FTBFS-2038)
rocclr (PID)
kf6-breeze-icons (parallelism)
opencloud-server (random tmp path)
python-awscrt (FTBFS-j1)
glib-macros/contrast/fractal/Fragments/identity/mousai/loupe/gstreamer-plugins-rs (rust HashMap)
deno (rust order)
Once again, there were a number of improvements made to our website this month including:
git archive to the Archive metadata page. [ ]
diffoscope version 307 was uploaded to Debian unstable by Chris Lamb, who made a number of changes including fixing compatibility with LLVM version 21 [ ] and an attempt to automatically deploy to PyPI by liaising with the PyPI developers/maintainers (with this experimental feature). [ ] In addition, Vagrant Cascadian updated diffoscope in GNU Guix to version 307.
#reproducible-builds on irc.oftc.net.
rb-general@lists.reproducible-builds.org
This was the first year I attended Kernel
Recipes, and I have nothing to say but how
much I enjoyed it and how grateful I am for the opportunity to talk more about
kworkflow to very experienced kernel developers. What
I like most about Kernel Recipes is its intimate format, with only one track
and many moments to get closer to experts and to people you usually only talk
to online during the whole year.
At the beginning of this year, I gave the talk Don't let your motivation go,
save time with kworkflow at
FOSDEM,
introducing kworkflow to a more diversified audience, with different levels of
involvement in Linux kernel development.
At this year s Kernel Recipes I presented
the second talk of the first day: Kworkflow - mix & match kernel recipes end-to-end.
The Kernel Recipes audience is a bit different from FOSDEM, with mostly
long-term kernel developers, so I decided to just go directly to the point. I
showed kworkflow being part of the daily life of a typical kernel developer
from the local setup to install a custom kernel in different target machines to
the point of sending and applying patches to/from the mailing list. In short, I
showed how to mix and match kernel workflow recipes end-to-end.
As I was a bit fast when showing some features during my presentation, in this
blog post I explain each slide from my speaker notes. You can see a summary of
this presentation in the Kernel Recipes Live Blog Day 1: morning.
Hi, I'm Melissa Wen from Igalia. As we've already started sharing kernel recipes,
and even more are coming in the next three days, in this presentation I'll talk
about kworkflow: a cookbook to mix & match kernel recipes end-to-end.
This is my first time attending Kernel Recipes, so lemme introduce myself
briefly.
And what's this cookbook called kworkflow?
Kworkflow is a tool created by Rodrigo Siqueira, my colleague at Igalia. It's a
single platform that combines software and tools to:
It's mostly developed by volunteers, kernel developers using their spare time. Its
features cover real use cases according to kernel developers' needs.
Basically, it's mixing and matching the daily life of a typical kernel developer
with kernel workflow recipes, with some secret sauces.
So, it's time to start the first recipe: A good GPU driver for my AMD laptop.
Before starting any recipe we need to check the necessary ingredients and
tools. So, let s check what you have at home.
With kworkflow, you can use:
kw device: to get information about the target machine, such as CPU model,
kernel version, distribution and GPU model.
kw remote: to set the address of this machine for remote access.
kw config: to select the tools, flags and preferences that kw will use to build
and deploy a custom kernel on a target machine. You can also define the
recipients of your patches when sending them with kw send-patch. I'll explain
more about each feature later in this presentation.
kw kernel-config manager (or just kw k): to fetch the kernel .config file
from a given machine, store multiple .config files, and list and retrieve them
according to your needs.
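The ingredient check above can be sketched as a short command walk-through. It is print-only (the helper just echoes each command) so it can be read or run anywhere; on a machine with kworkflow installed you would run the kw commands directly:

```shell
# Print-only walk-through of the "check your ingredients" step; swap `show`
# for direct execution on a machine with kworkflow installed.
show() { echo "+ $*"; }

show kw device   # target machine info: CPU model, kernel version, distro, GPU
show kw remote   # set the target machine address for remote (SSH) access
show kw config   # tools, flags and preferences for building and deploying
show kw k        # kernel-config manager: fetch/store/list .config files
```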
Now, with all ingredients and tools selected and well portioned, follow the
right steps to prepare your custom kernel!
First step: Mix ingredients with kw build or just kw b
kw b and its options wrap many routines of compiling a custom kernel.
Use kw b -i to check the name, kernel version and number of modules that will
be compiled, and kw b --menu to change kernel configurations. Then run kw b to
compile the custom kernel for a target machine.
Second step: Install it with kw deploy or just kw d
After compiling the custom kernel, we want to install it in the target machine.
Check the name of the custom kernel built, 6.17.0-rc6, and with kw's SSH
access the target machine to see that it's currently running the kernel from
the Debian distribution, 6.16.7+deb14-amd64.
As with build settings, you can also pre-configure some deployment settings,
such as compression type, path to device tree binaries, target machine (remote,
local, vm), whether you want to reboot the target machine just after deploying
your custom kernel, and whether you want to boot into the custom kernel when
restarting the system after deployment.
If you didn't pre-configure some options, you can still customize them as
command options; for example, kw d --reboot will reboot the system after
deployment even if I didn't set this in my preferences.
By just running kw d --reboot I have installed the kernel on a given target
machine and rebooted it, so when accessing the system again I can see it was
booted into my custom kernel.
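The mix-and-install steps can be condensed into a few commands. A print-only sketch (swap the echo helper for direct execution on a machine with kworkflow set up):

```shell
# Build-and-deploy flow as described above, print-only so it runs anywhere.
show() { echo "+ $*"; }

show kw b -i         # name, version and module count of the kernel to build
show kw b --menu     # tweak the kernel configuration
show kw b            # compile the custom kernel
show kw d --reboot   # install it on the target machine and reboot into it
```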
Third step: Time to taste with kw debug
kw debug wraps many tools for validating a kernel on a target machine. We can
log basic dmesg info, but also track events and use ftrace.
With kw debug --dmesg --history we can grab the full dmesg log from a remote
machine, and with the --follow option we can monitor dmesg output live. You can
also run a command with kw debug --dmesg --cmd="<my command>" and collect just
the dmesg output related to this specific execution period.
kw drm --gui-off drops the graphical interface and releases amdgpu so it can be
unloaded. So I ran kw debug --dmesg --cmd="modprobe -r amdgpu" to unload
the amdgpu driver, but it failed and I couldn't unload it.
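The debugging commands from this step, condensed into a print-only sketch (again, run the kw commands directly on a real setup):

```shell
# Tasting/debugging commands from this step, print-only so it runs anywhere.
show() { echo "+ $*"; }

show kw debug --dmesg --history                     # full dmesg log from the remote
show kw debug --dmesg --follow                      # monitor dmesg output live
show kw drm --gui-off                               # drop the GUI, free the amdgpu driver
show kw debug --dmesg '--cmd="modprobe -r amdgpu"'  # dmesg for just this command
```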
Oh no! That custom kernel isn't tasting good. Don't worry: as in many recipe
preparations, we can search the internet for suggestions on how to make it
tasteful, for alternative ingredients, and for other flavours according to your
taste.
With kw patch-hub you can search the lore kernel mailing lists for patches
that might fix your kernel issue. You can navigate the mailing lists, check
series, bookmark those you find relevant, and apply them to your local kernel
tree, creating a different branch for tasting... oops, for testing. In this
example, I'm opening the amd-gfx mailing list, where I can find contributions
related to the AMD GPU driver, bookmark and/or just apply the series to my work
tree, and with kw bd I can compile & install the custom kernel with this
possible bug fix in one shot.
As I changed my kw config to reboot after deployment, I just need to wait for
the system to boot and then try unloading the amdgpu driver again with kw debug
--dmesg --cmd="modprobe -r amdgpu". From the dmesg output retrieved by kw for
this command, the driver was unloaded: the problem is fixed by this series and
the kernel tastes good now.
If I'm satisfied with the solution, I can even use kw patch-hub to access the
bookmarked series and mark the checkbox that will reply to the patch thread
with a Reviewed-by tag for me.
As in all recipes, we need ingredients and tools, but with kworkflow you can
get everything set up as when changing sets in a TV show. We can use kw env
to switch to a different environment with all kw and kernel configuration set,
and with the latest compiled kernel cached.
I was preparing the first recipe for an x86 AMD laptop, and with kw env --use
RPI_64 I keep the same worktree but move to a different kernel workflow, now
for the Raspberry Pi 4 (64 bits). The previously compiled kernel
6.17.0-rc6-mainline+ is there with 1266 modules, not the 6.17.0-rc6 kernel with
285 modules that I just built & deployed. The kw build settings are also
different: now I'm targeting the arm64 architecture with a kernel
cross-compiled using the aarch64-linux-gnu- toolchain, and my kernel image is
called kernel8 now.
If you didn't plan for this recipe in advance, don't worry. You can create a
new environment with kw env --create RPI_64_V2 and run kw init --template
to start preparing your kernel recipe with the mirepoix ready.
I mean, with the basic ingredients already cut...
I mean, with the kw configuration set from a template.
And you can use kw remote to set the IP address of your target machine and
kw kernel-config-manager to fetch the .config file from it. Then just run
kw bd to compile and install an upstream kernel for the Raspberry Pi 4.
Let me show you how easy it is to build, install and test a custom kernel for
the Steam Deck with kworkflow. It's a live demo, but I also recorded it because
I know the risks I'm exposed to, and something can go very wrong just because
of reasons :)
As I started the demo in the kw environment for the Raspberry Pi 4, I first
moved to another environment previously used for the Steam Deck. In this
STEAMDECK environment, the mainline kernel was already compiled and cached, and
all settings for accessing the target machine and for compiling and installing
a custom kernel were retrieved automatically.
My live demo followed these steps:
kw env --use STEAMDECK, switch to a kworkflow environment for Steam
Deck kernel development.
kw b -i, shows that kw will compile and install a kernel with 285
modules named 6.17.0-rc6-mainline-for-deck.
kw config to show that, in this environment, the kw configuration switches
to the x86 architecture, without cross-compilation.
kw device to display information about the Steam Deck device, i.e. the
target machine. It also shows that remote access (user and IP) for this Steam
Deck was already configured when using the STEAMDECK environment, as expected.
git am, as usual, to apply a hot fix on top of the mainline kernel.
This hot fix makes audio play again on the Steam Deck.
kw b, build the kernel with the audio change. It will be fast because we are
only compiling the affected files, since everything was previously done and
cached: the compiled kernel, kw configuration and kernel configuration are
retrieved by just moving to the STEAMDECK environment.
kw d --force --reboot to deploy the new custom kernel to the target
machine. The --force option enables us to install the mainline kernel even
if mkinitcpio complains about missing support for downstream packages when
generating the initramfs. The --reboot option reboots the Steam Deck
automatically, just after the deployment completes.
kw send-patch: kw would automatically add subsystem maintainers, reviewers
and mailing lists for the affected files as recipients, and send the patch for
the upstream community's assessment. As I didn't want to create unnecessary
noise, I just did a dry run with kw send-patch -s --simulate to explain how it
looks.
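The whole live demo can be summarized as a short command sequence. A print-only sketch; the patch file name is hypothetical:

```shell
# Condensed Steam Deck demo; print-only so it runs anywhere. Swap `show`
# for real runs. The hot-fix patch file name below is hypothetical.
show() { echo "+ $*"; }

show kw env --use STEAMDECK        # switch to the Steam Deck environment
show kw b -i                       # inspect what will be built
show kw config                     # confirm x86, no cross-compilation
show kw device                     # target machine info and remote access
show git am audio-hotfix.patch     # apply the audio hot fix (hypothetical name)
show kw b                          # incremental build, mostly cached
show kw d --force --reboot         # deploy and reboot the Steam Deck
show kw send-patch -s --simulate   # dry run of sending the patch upstream
```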
:wq for today.
Welcome to post 54 in the R4 series.
The topic of continuous integration has been a recurrent theme here
at the R4 series. Post #32
introduces r-ci,
while post #41
brings r2u to r-ci, but does not show a
matrix deployment. Post #45
describes the updated r-ci setup that is now
the default and contains a macOS and Ubuntu matrix, where the latter
relies on r2u to keep
things "fast, easy, reliable". Last but not least, the more recent post #52
shares a trick for ensuring coverage reports.
Following #45,
use of r-ci at, for example, GitHub Actions has seen steady adoption and very
reliable performance. With the standard setup, a vanilla Ubuntu runner is
changed into one supported by r2u. This requires downloading and installing a
few Ubuntu packages, and has generally been fairly quick, on the order of
around forty seconds. Now, the general variability of run-times for identical
tasks in GitHub Actions is well documented by the results of the
setup described in post #39
which still runs weekly. It runs the identical SQL query against a
remote backend using two different package families. And lo and behold,
the intra-method variability on unchanged code or setup and
therefore due solely to system variability is about as large as the
inter-method variability. In short, GitHub Actions performance varies
randomly with significant variability. See the repo README.md for a
chart that updates weekly (and see #39
for background).
Of late, this variability became more noticeable during standard
GitHub Actions runs where it would regularly take more than two minutes
of setup time before actual continuous integration work was done. Some
caching seems to be in effect, so subsequent runs in the same
repo seem faster and often came back to one minute or less. For
lightweight and small packages, losing two minutes to setup when the
actual test time is a minute or less gets old fast.
Looking around, we noticed that container use can be combined with
matrix use. So we have now been deploying the following setup (not
always over all the matrix elements, though):
```yaml
jobs:
  ci:
    strategy:
      matrix:
        include:
          - {name: container, os: ubuntu-latest, container: rocker/r2u4ci}
          - {name: macos, os: macos-latest}
          - {name: ubuntu, os: ubuntu-latest}
    runs-on: ${{ matrix.os }}
    container: ${{ matrix.container }}
```

The matrix leaves container as NULL for the two other cases, so
container: ${{ matrix.container }} is ignored there. But when container is set,
as here for the ci-enhanced version of r2u (which adds a few binaries commonly
needed for CI such as git, curl and wget), then the CI job runs inside the
container, and thereby skips most of the setup time as the container is already
prepared.
This also required some small adjustments in the underlying shell
script doing the work. To not disrupt standard deployment, we placed
these into a release candidate / development version one can opt into
via a new variable dev_version.
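A hedged sketch of what opting in might look like; the variable name dev_version comes from the post, but the exact wiring (here, a plain environment variable read by the setup script) is an assumption:

```yaml
# Hypothetical: opt into the development version of the r-ci setup.
# Only the variable name is from the post; how it is consumed may differ.
env:
  dev_version: "true"
```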
Everything else remains the same and works as before, but faster, as
much less time is spent on setup. You can see the actual full yaml file
and actions in my repositories for rcpparmadillo and
rcppmlpack-examples.
Additional testing would be welcome, so feel free to deploy this in your
actions now. Otherwise I will likely carry this over and make it the
default in a few weeks' time. It will still work as before, but when
the added container: line is used it will run much faster
thanks to rocker/r2u4ci being already set up for CI.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.
| Series: | Kindom Trilogy #2 |
| Publisher: | Orbit |
| Copyright: | October 2024 |
| ISBN: | 0-316-46362-0 |
| Format: | Kindle |
| Pages: | 444 |
| Series: | White Space #1 |
| Publisher: | Saga Press |
| Copyright: | 2019 |
| ISBN: | 1-5344-0300-0 |
| Format: | Kindle |
| Pages: | 501 |
Eventually, I realized that I was wasting my time, and if I wanted to hide from humanity in a bottle, I was better off making it a titanium one with a warp drive and a couple of carefully selected companions.

Halmey does salvage: finding ships lost in white space and retrieving them. One of her partners is Connla, a pilot originally from a somewhat atavistic world called Spartacus. The other is their salvage tug.
The boat didn't have a name. He wasn't deemed significant enough to need a name by the authorities and registries that govern such things. He had a registration number 657-2929-04, Human/Terra and he had a class, salvage tug, but he didn't have a name. Officially. We called him Singer. If Singer had an opinion on the issue, he'd never registered it but he never complained. Singer was the shipmind as well as the ship, or at least he inhabited the ship's virtual spaces the same way we inhabited the physical ones, but my partner Connla and I didn't own him. You can't own a sentience in civilized space.

As Ancestral Night opens, the three of them are investigating a tip of a white space anomaly well off the beaten path. They thought it might be a lost ship that failed a transition. What they find instead is a dead Ativahika and a mysterious ship equipped with artificial gravity. The Ativahikas are a presumed sentient race of living ships that are on the most alien outskirts of the Synarche confederation. They don't communicate, at least so far as Halmey is aware. She also wasn't aware they died, but this one is thoroughly dead, next to an apparently abandoned ship of unknown origin with a piece of technology beyond the capabilities of the Synarche. The three salvagers get very little time to absorb this scene before they are attacked by pirates.

I have always liked Bear's science fiction better than her fantasy, and this is no exception. This was great stuff. Halmey is a talkative, opinionated infodumper, which is a great first-person protagonist to have in a fictional universe this rich with delightful corners. There are some Big Dumb Object vibes (one of my favorite parts of salvage stories), solid character work, a mysterious past that has some satisfying heft once it's revealed, and a whole lot more moral philosophy than I was expecting from the setup. All of it is woven together with experienced skill, unsurprising given Bear's long and prolific career.
And it's full of delightful world-building bits: Halmey's afthands (a surgical adaptation for zero gravity work) and grumpiness at the sheer amount of gravity she has to deal with over the course of this book, the Culture-style ship names, and a faster-than-light travel system that of course won't pass physics muster but provides a satisfying quantity of hooky bits for plot to attach to. The backbone of this book is an ancient artifact mystery crossed with a murder investigation. Who killed the Ativahika? Where did the gravity generator come from? Those are good questions with interesting answers. But the heart of the book is a philosophical conflict: What are the boundaries between identity and society? How much power should society have to reshape who we are? If you deny parts of yourself to fit in with society, is this necessarily a form of oppression? I wrote a couple of paragraphs of elaboration, and then deleted them; on further thought, I don't want to give any more details about what Bear is doing in this book. I will only say that I was not expecting this level of thoughtfulness about a notoriously complex and tricky philosophical topic in a full-throated adventure science fiction novel. I think some people may find the ending strange and disappointing. I loved it, and weeks after finishing this book I'm still thinking about it. Ancestral Night has some pacing problems. There is a long stretch in the middle of the book that felt repetitive and strained, where Bear holds the reader at a high level of alert and dread for long enough that I found it enervating. There are also a few political cheap shots where Bear picks the weakest form of an opposing argument instead of the strongest. (Some of the cheap shots are rather satisfying, though.) The dramatic arc of the book is... odd, in a way that I think was entirely intentional given how well it works with the thematic message, but which is also unsettling. You may not get the catharsis that you're expecting. 
But all of this serves a purpose, and I thought that purpose was interesting. Ancestral Night is one of those books that I liked more a week after I finished it than I did when I finished it.
Epiphanies are wonderful. I'm really grateful that our brains do so much processing outside the line of sight of our consciousnesses. Can you imagine how downright boring thinking would be if you had to go through all that stuff line by line?

Also, for once, I think Bear hit on exactly the right level of description rather than leaving me trying to piece together clues and hope I understood the plot. It helps that Halmey loves to explain things, so there are a lot of miniature infodumps, but I found them interesting and a satisfying throwback to an earlier style of science fiction that focused more on world-building than on interpersonal drama. There is drama, but most of it is internal, and I thought the balance was about right. This is solid, well-crafted work and a good addition to the genre. I am looking forward to the rest of the series. Followed by Machine, which shifts to a different protagonist. Rating: 8 out of 10
We at Fre(i)e Software GmbH now have a confirmed budget for working on Debian-based tablets with the special goal of using them for educational purposes (i.e. in schools).
Those Debian Edu tablets shall be powered by the Lomiri Operating Environment (that same operating environment that is powering Ubuntu Touch).
That said, we are hiring developers (full time, part time) [*] [**]:
| Publisher: | Penguin Books |
| Copyright: | 2023, 2025 |
| Printing: | 2025 |
| ISBN: | 979-8-217-06167-9 |
| Format: | Kindle |
| Pages: | 429 |
I had entered Iraq supporting the war on the grounds that we could at least produce a better society than Saddam Hussein's. It was one of the greatest mistakes in my life. We attempted to impose programmes made up by Washington think tanks, and reheated in air-conditioned palaces in Baghdad: a new taxation system modelled on Hong Kong; a system of ministers borrowed from Singapore; and free ports, modelled on Dubai. But we did it ultimately at the point of a gun, and our resources, our abstract jargon and optimistic platitudes could not conceal how much Iraqis resented us, how much we were failing, and how humiliating and degrading our work had become. Our mission was a grotesque satire of every liberal aspiration for peace, growth and democracy.

This quote comes from the beginning of this book and is a sentiment Stewart already expressed in The Prince of the Marshes, but he appears to have taken this so seriously that it becomes a theme of his political career. He not only realized how wrong he was on Iraq, he abandoned the entire neoliberal nation-building project without abandoning his belief in the moral obligation of international aid. And he, I think correctly, identified a key source of the error: an ignorant, condescending superiority that dismissed the importance of deep expertise.
Neither they, nor indeed any of the 12,000 peacekeepers and policemen who had been posted to South Sudan from sixty nations, had spent a single night in a rural house, or could complete a sentence in Dinka, Nuer, Azande or Bande. And the international development strategy written jointly between the donor nations resembled a fading mission statement found in a new space colony, whose occupants had all been killed in an alien attack.

Second, Stewart sincerely likes ordinary people. This shone through The Places in Between and recurs here in his descriptions of his constituents. He has a profound appreciation for individual people who have spent their life learning some trade or skill, expresses thoughtful and observant appreciation for aspects of local culture, and appears to deeply appreciate time spent around people from wildly different social classes and cultures than his own. Every successful politician can at least fake gregariousness, and perhaps that's all Stewart is doing, but there is something specific and attentive about his descriptions of other people, including long before he decided to enter politics, that makes me think it goes deeper than political savvy.

Third, Stewart has a visceral hatred of incompetence. I think this is the strongest through-line of his politics in this book: Jobs in government are serious, important work; they should be done competently and well; and if one is not capable of doing that, one should not be in government. Stewart himself strikes me as an insecure overachiever: fiercely ambitious, self-critical, a bit of a micromanager (I suspect he would be difficult to work for), but holding himself to high standards and appalled when others do not do the same. This book is scathing towards multiple politicians, particularly Boris Johnson whom Stewart clearly despises, but no one comes off worse than Liz Truss.
David Cameron, I was beginning to realise, had put in charge of environment, food and rural affairs a Secretary of State who openly rejected the idea of rural affairs and who had little interest in landscape, farmers or the environment. I was beginning to wonder whether he could have given her any role she was less suited to, apart perhaps from making her Foreign Secretary. Still, I could also sense why Cameron was mesmerised by her. Her genius lay in exaggerated simplicity. Governing might be about critical thinking; but the new style of politics, of which she was a leading exponent, was not. If critical thinking required humility, this politics demanded absolute confidence: in place of reality, it offered untethered hope; instead of accuracy, vagueness. While critical thinking required scepticism, open-mindedness and an instinct for complexity, the new politics demanded loyalty, partisanship and slogans: not truth and reason but power and manipulation. If Liz Truss worried about the consequences of any of this for the way that government would work, she didn't reveal it.

And finally, Stewart has a deeply-held belief in state capacity and capability. He and I may disagree on the appropriate size and role of the government in society, but no one would be more disgusted by an intentional project to cripple government in order to shrink it than Stewart. One of his most-repeated criticisms of the UK political system in this book is the way the cabinet is formed. All ministers and secretaries come from among members of Parliament, and therefore branches of government are led by people with no relevant expertise. This is made worse by constant cabinet reshuffles that invalidate whatever small amounts of knowledge a minister was able to gain in nine months or a year in post.
The center portion of this book records Stewart's time being shuffled from rural affairs to international development to Africa to prisons, with each move representing a complete reset of the political office and no transfer of knowledge whatsoever.
A month earlier, they had been anticipating every nuance of Minister Rogerson's diary, supporting him on shifts twenty-four hours a day, seven days a week. But it was already clear that there would be no pretence of a handover: no explanation of my predecessor's strategy, and uncompleted initiatives. The arrival of a new minister was Groundhog Day. Dan Rogerson was not a ghost haunting my office, he was an absence, whose former existence was suggested only by the black plastic comb.

After each reshuffle, Stewart writes of trying to absorb briefings, do research, and learn enough about his new responsibilities to have the hope of making good decisions, while growing increasingly frustrated with the system and the lack of interest by most of his colleagues in doing the same. He wants government programs to be successful and believes success requires expertise and careful management by the politicians, not only by the civil servants, a position that to me both feels obviously correct and entirely at odds with politics as currently practiced.

I found this a fascinating book to read during the accelerating collapse of neoliberalism in the US and, to judge by current polling results, the UK. I have a theory that the political press are so devoted to a simplistic left-right political axis based on seating arrangements during the French Revolution that they are missing a significant minority whose primary political motivation is contempt for arrogant incompetence. They could be convinced to vote for Sanders or Trump, for Polanski or Farage, but will never vote for Biden, Starmer, Romney, or Sunak. Such voters are incomprehensible to those who closely follow and debate policies because their hostile reaction to the center is not about policies. It's about lack of trust and a nebulous desire for justice. They've been promised technocratic competence and the invisible hand of market forces for most of their lives, and all of it looks like lies.
Everyday living is more precarious, more frustrating, more abusive and dehumanizing, and more anxious, despite (or because of) this wholehearted embrace of economic "freedom." They're sick of every complaint about the increasing difficulty of life being met with accusations about their ability and work ethic, and of being forced to endure another round of austerity by people who then catch a helicopter ride to a party on some billionaire's yacht. Some of this is inherent in the deep structural weaknesses in neoliberal ideology, but this is worse than an ideological failure. The degree to which neoliberalism started as a project of sincere political thinkers is arguable, but that is clearly not true today. The elite class in politics and business is now thoroughly captured by people whose primary skill is the marginal manipulation of complex systems for their own power and benefit. They are less libertarian ideologues than narcissistic mediocrities. We are governed by management consultants. They are firmly convinced their organizational expertise is universal, and consider the specific business of the company, or government department, irrelevant.

Given that context, I found Stewart's instinctive revulsion towards David Cameron quite revealing. Stewart, later in the book, tries to give Cameron some credit by citing several policy accomplishments and comparing him favorably to Boris Johnson (which, true, is a bar Cameron probably flops over). But I think Stewart's baffled astonishment at Cameron's vapidity says a great deal about how we have ended up where we are. This last quote is long, but I think it provides a good feel for Stewart's argument in this book.
But Cameron, who was rumoured to be sceptical about nation-building projects, only nodded, and then looking confidently up and down the table said, "Well, at least we all agree on one extremely straightforward and simple point, which is that our troops are doing very difficult and important work and we should all support them." It was an odd statement to make to civilians running humanitarian operations on the ground. I felt I should speak. "No, with respect, we do not agree with that. Insofar as we have focused on the troops, we have just been explaining that what the troops are doing is often futile, and in many cases making things worse." Two small red dots appeared on his cheeks. Then his face formed back into a smile. He thanked us, told us he was out of time, shook all our hands, and left the room. Later, I saw him repeat the same line in interviews: "the purpose of this visit is straightforward... it is to show support for what our troops are doing in Afghanistan". The line had been written, in London, I assumed, and tested on focus groups. But he wanted to convince himself it was also a position of principle. "David has decided," one of his aides explained, when I met him later, "that one cannot criticise a war when there are troops on the ground." "Why?" "Well... we have had that debate. But he feels it is a principle of British government." "But Churchill criticised the conduct of the Boer War; Pitt the war with America. Why can't he criticise wars?" "British soldiers are losing their lives in this war, and we can't suggest they have died in vain." "But more will die, if no one speaks up..." "It is a principle thing. And he has made his decision. For him and the party." "Does this apply to Iraq too?" "Yes. Again he understands what you are saying, but he voted to support the Iraq War, and troops are on the ground." "But surely he can say he's changed his mind?" The aide didn't answer, but instead concentrated on his food. 
"It is so difficult," he resumed, "to get any coverage of our trip." He paused again. "If David writes a column about Afghanistan, we will struggle to get it published." "But what would he say in an article anyway?" I asked. "We can talk about that later. But how do you get your articles on Afghanistan published?" I remembered how the US politicians and officials had shown their mastery of strategy and detail. I remembered the earnestness of Gordon Brown when I had briefed him on Iraq. Cameron seemed somehow less serious. I wrote as much in a column in the New York Times, saying that I was afraid the party of Churchill was becoming the party of Bertie Wooster.

I don't know Stewart's reputation in Britain, or in the constituency that he represented. I know he's been accused of being a self-aggrandizing publicity hound, and to some extent this is probably true. It's hard to find an ambitious politician who does not have that instinct. But whatever Stewart's flaws, he can, at least, defend his politics with more substance than a corporate motto. One gets the impression that he would respond favorably to demonstrated competence linked to a careful argument, even if he disagreed. Perhaps this is an illusion created by his writing, but even if so, it's a step in the right direction.

When people become angry enough at a failing status quo, any option that promises radical change and punishment for the current incompetents will sound appealing. The default collapse is towards demagogues who are skilled at expressing anger and disgust and are willing to promise simple cures because they are indifferent to honesty. Much of the political establishment in the US, and possibly (to the small degree that I can analyze it from an occasional news article) in the UK, can identify the peril of the demagogue, but they have no solution other than a return to "politics as usual," represented by the amoral mediocrity of a McKinsey consultant.
The rare politicians who seem to believe in something, who will argue for personal expertise and humility, who are disgusted by incompetence and have no patience for facile platitudes, are a breath of fresh air. There are a lot of policies on which Stewart and I would disagree, and perhaps some of his apparent humility is an affectation from the rhetorical world of the 1800s that he clearly wishes he were inhabiting, but he gives the strong impression of someone who would shoulder a responsibility and attempt to execute it with competence and attention to detail. He views government as a job, where coworkers should cooperate to achieve defined goals, rather than a reality TV show. The arc of this book, like the arc of current politics, is the victory of the reality TV show over the workplace, and the story of Stewart's run against Boris Johnson is hard reading because of it, but there's a portrayal here of a different attitude towards politics that I found deeply rewarding.

If you liked Stewart's previous work, or if you want an inside look at parliamentary politics, highly recommended. I will be thinking about this book for a long time.

Rating: 9 out of 10
Armadillo is a powerful
and expressive C++ template library for linear algebra and scientific
computing. It aims towards a good balance between speed and ease of use,
has a syntax deliberately close to Matlab, and is useful for algorithm
development directly in C++, or quick conversion of research code into
production environments. RcppArmadillo
integrates this library with the R environment and language and is
widely used by (currently) 1270 other packages on CRAN, downloaded 42 million times
(per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint
/ vignette) by Conrad and myself has been cited 650 times according
to Google Scholar.
This version updates to the 15.2.0 upstream release made today. It
brings a few changes over Armadillo 15.0 (see below for more). It
follows the most recent RcppArmadillo
15.0.2-2 release and the Armadillo
15 upstream transition with its dual focus on moving on from C++11
and deprecation of a number of API access points. As we had a few
releases last month to manage the transition, we will sit this upgrade
out and not upload to CRAN, in order to normalize our update cadence
towards the roughly six uploads in six months that the CRAN Policy asks
for. One can of course install as usual directly from
the GitHub
repository as well as from r-universe
which also offers binaries for all CRAN platforms.
The transition to Armadillo 15 appears to be going slowly but
steadily. We had well over 300 packages with either a need to relax the
C++11 setting and/or update away from now-deprecated API access points.
That number has been cut in half thanks to a lot of work by many
package maintainers, which is really appreciated! Of
course, a lot remains to be done. Issues #489 and
#491
contain the over sixty PRs and patches I prepared for all packages with
at least one reverse dependency. Most (but not all) have aided in CRAN
updates; some packages are still outstanding in terms of updates. As
before, meta-issue #475
regroups all the resources for the transition. If you, dear
reader, have a package that is affected and I could be of assistance
please do reach out.
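For maintainers wondering what the C++11 relaxation looks like in practice, a sketch of the common case follows. The file and variable names are the standard ones from Writing R Extensions, not anything specific to a particular affected package:

```make
## src/Makevars -- before: a line like the following forced C++11,
## which Armadillo 15 no longer supports, so it should be removed:
# CXX_STD = CXX11

## With no CXX_STD setting, R's default standard (C++17 on recent R
## releases) is used, which is sufficient for Armadillo 15.
PKG_CXXFLAGS = $(SHLIB_OPENMP_CXXFLAGS)
PKG_LIBS = $(SHLIB_OPENMP_CXXFLAGS) $(LAPACK_LIBS) $(BLAS_LIBS) $(FLIBS)
```

The key step is simply deleting the `CXX_STD = CXX11` line; the remaining variables shown are the usual RcppArmadillo skeleton defaults and need not change.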
The other change we made is to greatly simplify the detection and
setup of OpenMP. As before, we rely on configure to attempt
compilation of a minimal OpenMP-using program in order to pass the
success or failure onto Armadillo as a can-or-cannot-use-OpenMP
signal. In the year 2025, one of the leading consumer brands still
cannot ship an OS where this works out of the box, so we try to aid
there. For all other systems, R actually covers this pretty well and
has a reliable configuration variable that we rely upon, just as we
recommend for downstream users of the package. This setup should be
robust, but it is a change, so by all means, if you knowingly rely on
OpenMP please test and report back.
The detailed changes since the last CRAN release follow.
More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

Changes in RcppArmadillo version 15.2.0-0 (2025-10-20) (GitHub Only)
- Upgraded to Armadillo release 15.2.0 (Medium Roast Deluxe)
  - Added rande() for generating matrices with elements from exponential distributions
  - shift() has been deprecated in favour of circshift(), for consistency with Matlab/Octave
  - Reworked detection of aliasing, leading to more efficient compiled code
- OpenMP detection in configure has been simplified
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.