Scarlett Gately Moore: KDE Application snaps 25.04.2 released!

time_t fallout and strictness due to -Wformat-security) in cooperation with upstream. With luck, it will migrate in time for trixie.
overlayfs. This is a feature that was lost in transitioning from sbuild's schroot backend to its unshare backend. unschroot implements the schroot API just enough to be usable with sbuild and otherwise works a lot like the unshare backend. As a result, apt.postgresql.org now performs its builds contained in a user namespace.
rebootstrap failures, most of which were related to musl or gcc-15, and imported patches or workarounds to make those builds proceed.
sqop, fixing earlier PGP verification problems, thanks to Justus Winter and Neal Walfield explaining a lot of sequoia at MiniDebConf Hamburg.
zutils update for the /usr-move wrong again and had to send another update.
debvm's autopkgtests were flaky, and with lots of help from Paul Gevers and Michael Tokarev he tracked it down to a race condition in qemu. He updated debvm to trigger the problem less often and also fixed a wrong dependency using Luca Boccassi's patch.
sshd crashes to a root cause, and issued bookworm and bullseye updates for CVE-2025-32728.
sshd
crashes, Michel
Casabona helped me put together an environment where I could reproduce it,
which allowed me to track it down to a root
cause and fix it. (I
also found a misuse of
strlcpy
affecting at
least glibc-based systems in passing, though I think that was unrelated.)
I worked with Daniel Kahn Gillmor to fix a regression in ssh-agent
socket
handling.
I fixed a reproducibility bug depending on whether passwd
is installed on
the build system, which would have
affected security updates during the lifetime of trixie.
I backported openssh 1:10.0p1-5 to bookworm-backports.
I issued bookworm and
bullseye
updates for CVE-2025-32728.
groff
I backported a fix for incorrect output when formatting multiple documents
as PDF/PostScript at once.
debmirror
I added a simple
autopkgtest.
Python team
I upgraded these packages to new upstream versions:
[Colleagues] approached me to talk about a reproducibility issue they'd been having with some R code. They'd been running simulations that rely on generating samples from a multivariate normal distribution, and despite doing the prudent thing and using set.seed() to control the state of the random number generator (RNG), the results were not computationally reproducible. The same code, executed on different machines, would produce different random numbers. The numbers weren't just a little bit different in the way that we've all wearily learned to expect when you try to force computers to do mathematics. They were painfully, brutally, catastrophically, irreproducibly different. Somewhere, somehow, something broke.
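As a minimal sketch of the expectation being described (my illustration, not from the article, and assuming R's default RNG settings), fixing the seed should make repeated runs print identical draws; the surprise was that this did not hold across machines for the multivariate case:

$ Rscript -e 'set.seed(42); print(rnorm(3))'
[1]  1.3709584 -0.5646982  0.3631284
$ Rscript -e 'set.seed(42); print(rnorm(3))'
[1]  1.3709584 -0.5646982  0.3631284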
Thanks to David Wheeler for posting about this article on our mailing list.
present attestable builds, a new paradigm to provide strong source-to-binary correspondence in software artifacts. We tackle the challenge of opaque build pipelines that disconnect the trust between source code, which can be understood and audited, and the final binary artifact, which is difficult to inspect. Our system uses modern trusted execution environments (TEEs) and sandboxed build containers to provide strong guarantees that a given artifact was correctly built from a specific source code snapshot. As such it complements existing approaches like reproducible builds which typically require time-intensive modifications to existing build configurations and dependencies, and require independent parties to continuously build and verify artifacts. The authors compare attestable builds with reproducible builds by noting that an attestable build requires only minimal changes to an existing project, and offers nearly instantaneous verification of the correspondence between a given binary and the source code and build pipeline used to construct it, and proceed by determining that the overhead (42 seconds start-up latency and 14% increase in build duration) is small in comparison to the overall build time.
However, the popular scripting language ecosystems potentially face unique issues given the systematic difference in distributed artifacts. This Systemization of Knowledge (SoK) [paper] provides an overview of existing research, aiming to highlight future directions, as well as chances to transfer existing knowledge from compiled language ecosystems. To that end, we work out key aspects in current research, systematize identified challenges for software reproducibility, and map them between the ecosystems. Ultimately, the three authors find that the literature is sparse, focusing on few individual problems and ecosystems, and therefore identify space for more critical research.
debian-policy package in order to delve into an issue affecting Debian's support for cross-architecture compilation, multiple-architecture systems, reproducible builds' SOURCE_DATE_EPOCH environment variable and the ability to recompile already-uploaded packages to Debian with a new/updated toolchain (binNMUs). Ian identifies a specific case, in the libopts25-dev package, involving a manual page that had interesting downstream effects, potentially affecting backup systems. The bug generated a large number of replies, some of which have references to similar or overlapping issues, such as this one from 2016/2017.
There is now a Reproducibility Status link for each app on f-droid.org, listed on every app's page. Our verification server shows a status based on its build results, indicating whether our rebuilder reproduced the same APK file or not. The IzzyOnDroid repository has developed a more elaborate system of badges which displays one for each rebuilder. Additionally, there is a sketch of a five-level graph to represent some aspects about which processes were run.
Hans compares the approach with projects such as Arch Linux and Debian that provide developer-facing tools to give feedback about reproducible builds, but do not display information about reproducible builds in the user-facing interfaces like the package management GUIs.
295, 296 and 297 to Debian:
- Don't rely on the --walk argument being available, and only add that argument on newer versions after we test for that. [ ]
- lzma comparator from Will Hollywood. [ ][ ]
In addition, disorderfs was updated to version 0.6.0-1.
Lastly, Vagrant Cascadian updated diffoscope in GNU Guix to version 296 [ ][ ] and 297 [ ][ ], and disorderfs to version 0.6.0 [ ][ ].
- SOURCE_DATE_EPOCH example page. [ ]
- SOURCE_DATE_EPOCH snippet from Sebastian Davis, which did not handle non-integer values correctly. [ ]
- LICENSE file. [ ]
- SOURCE_DATE_EPOCH page. [ ]
- SOURCE_DATE_EPOCH page. [ ]

- The jenkins.debian.net server has been upgraded from AMD Opteron to Intel Haswell CPUs. Thanks to IONOS for hosting this server since 2012.
- The i386 architecture has been dropped from tests.reproducible-builds.org. This is because, with the upcoming release of Debian trixie, i386 is no longer supported as a regular architecture: there will be no official kernel and no Debian installer for i386 systems. As a result, a large number of nodes hosted by Infomaniak have been retooled from i386 to amd64.
- ionos17-amd64.debian.net, which is used for verifying packages for all.reproduce.debian.net (hosted by IONOS), has had its memory increased from 40 to 64GB, and the number of cores doubled to 32 as well. In addition, two nodes generously hosted by OSUOSL have had their memory doubled to 16GB.
- New riscv64 architecture boards have been added, so now we have seven such nodes, all with 16GB memory and 4 cores, that are verifying packages for riscv64.reproduce.debian.net. Many thanks to PLCT Lab, ISCAS for providing those.
- ppc64el architecture due to RAM size. [ ]
- nginx_request and nginx_status monitoring with the Munin monitoring system. [ ][ ]
- rebuilderd-cache-cleanup.service and run it daily via timer. [ ][ ][ ][ ][ ]
- $HOSTNAME variable in the rebuilderd logfiles. [ ]
- equivs package on all worker nodes. [ ][ ]
- sudo tool to fix up permission issues. [ ][ ]
- riscv64, FreeBSD, etc. [ ][ ][ ][ ]
- ntpsec-ntpdate (instead of ntpdate), as the former is available on Debian trixie and bookworm. [ ][ ]
- ControlPath for all nodes. [ ]
- munin user uses the same SSH config as the jenkins user. [ ]
- debrebuild line number. [ ]
- rebuilder-debian.sh script. [ ]
- rebuildctl to sync only arch-specific packages. [ ][ ]
cmake/musescore
netdiscover
autotrace, ck, cmake, crash, cvsps, gexif, gq, gtkam, ibus-table-others, krb5-appl, ktoblzcheck-data, leafnode, lib2geom, libexif-gtk, libyui, linkloop, meson, MozillaFirefox, ncurses, notify-sharp, pcsc-acr38, pcsc-asedriveiiie-serial, pcsc-asedriveiiie-usb, pcsc-asekey, pcsc-eco5000, pcsc-reflex60, perl-Crypt-RC, python-boto3, python-gevent, python-pytest-localserver, qt6-tools, seamonkey, seq24, smictrl, sobby, solfege, urfkill, uwsgi, wsmancli, xine-lib, xkeycaps, xquarto, yast-control-center, yast-ruby-bindings and yast
libmfx-gen, libmfx, liboqs
jabber-muc.
golang-github-lucas-clemente-quic-go.
--mtime and --clamp-mtime to bsdtar (see the sketch after this section).
python3: requested enabling an LTO-adjacent option that should improve build reproducibility.
freezegun: for a timezone issue causing unit tests to fail during testing.
tutanota: in an attempt to resolve a long-standing reproducibility issue.
0xFFFF: use SOURCE_DATE_EPOCH for the date in manual pages.

#reproducible-builds on irc.oftc.net.
rb-general@lists.reproducible-builds.org
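As a sketch of why those two flags matter for reproducibility (shown here with GNU tar, which already supports them; the bsdtar item above is about adding the same options there):

# clamp every file's mtime to SOURCE_DATE_EPOCH so the archive is bit-for-bit stable
$ tar --sort=name --mtime="@${SOURCE_DATE_EPOCH}" --clamp-mtime \
      --owner=0 --group=0 --numeric-owner -cf output.tar ./build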
renv
by Rcpp collaborator and pal Kevin. The expressed hope is
that by nailing down a (sub)set of packages, outcomes are constrained to
be unchanged. Hope springs eternal, clearly. (Personally, if need be, I
do the same with Docker containers and their respective
Dockerfile
.)
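A minimal sketch of that pinning workflow (my illustration, using renv's standard entry points rather than anything from the post):

$ Rscript -e 'renv::init()'       # create a project-local library plus renv.lock
$ Rscript -e 'renv::snapshot()'   # record the exact package versions in use
$ Rscript -e 'renv::restore()'    # reinstall those versions later, or on another machine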
On the other hand, rolling is a fundamentally different approach. One
(well known) example is Google building everything at @HEAD. The entire (ginormous)
code base is considered as a mono-repo which at any point in
time is expected to be buildable as is. All changes made are pre-tested
to be free of side effects to other parts. This sounds hard, and likely
is more involved than an alternative of a whatever works approach of
independent changes and just hoping for the best.
Another example is a rolling (Linux) distribution as for example Debian. Changes are first committed to
a staging place (Debian calls this the unstable distribution) and,
if no side effects are seen, propagated after a fixed number of days to
the rolling distribution (called testing). With this mechanism,
testing should always be installable too. And based on the rolling
distribution, at certain times (for Debian roughly every two years) a
release is made from testing into stable (following more elaborate
testing). The released stable version is then immutable (apart from
fixes for seriously grave bugs and of course security updates). So this
provides the connection between frequent and rolling updates, and
produces an immutable fixed set: a release.
This Debian approach has been influential for many other
projects, including CRAN, as can
be seen in aspects of its system providing a rolling set of curated
packages. Instead of a staging area for all packages, extensive tests
are made for candidate packages before adding an update. This aims to
ensure quality and consistency and has worked remarkably well. We argue
that it has clearly contributed to the success and renown of CRAN.
Now, when accessing CRAN
from R, we fundamentally have
two accessor functions. But seemingly only one is widely known
and used. In what we may call the Jeff model, everybody is happy to
deploy install.packages()
for initial
installations.
That sentiment is clearly expressed by
this bsky post:
One of my #rstats coding rituals is that every time I load a @vincentab.bsky.social package I go check for a new version because invariably it's been updated with 18 new major features.
And that is why we have two cultures. Because some of us, yours truly included, also use
update.packages()
at recurring (frequent!!) intervals:
daily or near-daily for me. The goodness and, dare I say, gift of
packages is not limited to those by my pal Vincent. CRAN updates all the time, and
updates are (generally) full of (usually excellent) changes, fixes, or
new features. So update frequently! Doing (many but small) updates
(frequently) is less invasive than (large, infrequent) waterfall-style
changes!
But the fear of change, or disruption, is clearly pervasive. One can
only speculate why. Is the experience of updating so painful on other
operating systems? Is it maybe a lack of exposure / tutorials on best
practices?
These Two Cultures coexist. When I delivered the talk in Mons, I
briefly asked for a show of hands among all the R users in the audience to see who
in fact does use update.packages()
regularly. And maybe a
handful of hands went up: surprisingly few!
Now back to the context of installing packages: Clearly only
installing has its uses. For continuous integration checks we generally
install into ephemeral temporary setups. Some debugging work may be with
one-off container or virtual machine setups. But all other uses may well
be under maintained setups. So consider calling
update.packages()
once in a while. Or even weekly or daily.
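For instance, a minimal sketch of that near-daily habit (my own illustration, not Dirk's setup) is a cron entry that refreshes the library without prompting:

# crontab entry: every morning at 07:00, update all packages in the default library
0 7 * * *  Rscript -e 'update.packages(ask = FALSE)'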
The rolling feature of CRAN is a real benefit, and it is
there for the taking and enrichment of your statistical computing
experience.
So to sum up, the real power is to use
install.packages()
to obtain fabulous new statistical
computing resources, ideally in an instant; and
update.packages() to keep these fabulous resources
current and free of (known) bugs.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.
- noinsttest profiles (MR): no need for these in nightly builds and helps with nocheck triggering errors
- key-pressed,released events (MR)
- ginsttest-runner (MR)
- .dir-locales.el and make tests easier to parse (MR)
- -Wswitch-default
(MR)Python
, Golang
and Cloud
teams, packaging dependencies and maintaining various tools. However, I soon felt the need to focus on packaging software I truly enjoyed, tools I was passionate about using and maintaining.
That's when I turned my attention to Kubernetes within Debian.
1.20.5
.
Due to this abandonment, critical bugs emerged and the package was removed from Debian's testing channel, as we can see in the package tracker.
kubectx
, kubernetes-split-yaml
and kubetail
into a dedicated namespace on Salsa, Debian's GitLab instance.
Many of these tools were stored across different teams (like the Go team), and consolidating them helped us organize development and focus our efforts.
kubectl, the Kubernetes CLI.
Files-Excluded in debian/copyright to cleanly drop unneeded files during source imports.
uscan, a standard Debian packaging tool that fetches upstream tarballs and prepares them accordingly. The Files-Excluded directive in our debian/copyright file instructed uscan to automatically remove unnecessary files during the repackaging process:
$ uscan
Newest version of kubernetes on remote site is 1.32.3, specified download version is 1.32.3
Successfully repacked ../v1.32.3 as ../kubernetes_1.32.3+ds.orig.tar.gz, deleting 30616 files from it.
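For illustration, the directive lives in the header paragraph of debian/copyright and lists glob patterns to strip during repacking; the paths below are hypothetical placeholders, not the actual exclusion list used for kubectl:

Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Files-Excluded: vendor/*
                third_party/*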
The result was a source tarball roughly 75% smaller:
$ du -h upstream-v1.32.3.tar.gz kubernetes_1.32.3+ds.orig.tar.gz
14M upstream-v1.32.3.tar.gz
3.2M kubernetes_1.32.3+ds.orig.tar.gz
By excluding over 30,000 files, we simplified the package, making it more maintainable. Each dependency could now be properly tracked, updated, and patched independently, resolving the security concerns that had plagued the previous packaging approach.
debtree
, visualizing all the Go modules and other dependencies required to build the kubectl
binary.
kubectl
. Each box is a Debian package, and the lines connecting them show how deeply intertwined the ecosystem is. What might look like a mess of blue spaghetti is actually a clear demonstration of the vast and interconnected upstream world that tools like kubectl rely on.
But more importantly, this graph is a testament to the effort that went into making kubectl build entirely using Debian-packaged dependencies only, no vendoring, no downloading from the internet, no proprietary blobs.
1.32.3+ds
of kubectl
to Debian unstable.
kubernetes/-/merge_requests/1
- Zsh, Fish, and Bash completions installed automatically
- Man pages and metadata for improved discoverability
- kind and docker for testing purposes

- kubectl (new binary): Already installed on 2,124 systems.
- golang-k8s-kubectl-dev: This is the Go development package (a library), useful for other packages and developers who want to interact with Kubernetes programmatically.
- kubernetes-client: The legacy package that kubectl is replacing. We expect this number to decrease in future releases as more systems transition to the new package.
Also worth mentioning: this number is not the real total number of installations, since users can choose not to participate in the popularity contest. So the actual adoption is likely higher than what popcon reflects.
kubectl version 1.32.3
, built from a clean, de-vendorized source. This version includes nearly all the latest upstream features, and will be the first time in years that Debian users can rely on an up-to-date, policy-compliant kubectl directly from the archive.
Compared with upstream, our Debian package even delivers more out of the box, including shell completions, which upstream still requires users to generate manually.
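For reference, this is the manual step upstream documents (a hedged illustration; the exact target path depends on the shell and distribution), which the Debian package now performs at install time:

# generate completions by hand, as upstream kubectl expects users to do
$ kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null
$ kubectl completion zsh    # and similarly for zsh and fish
$ kubectl completion fish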
In 2025, the Debian Kubernetes team will continue expanding our packaging efforts for the Kubernetes ecosystem.
Our roadmap includes:
kubeadm in Debian: users will then be able to bootstrap minimum viable clusters directly from the official repositories.
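As a rough sketch of what that would enable (assuming a future kubeadm package; the commands are standard upstream kubeadm usage, not something the Debian archive provides today):

$ sudo apt install kubeadm        # hypothetical package, per the roadmap above
$ sudo kubeadm init               # bootstrap a minimal control plane
$ sudo kubeadm join <control-plane-endpoint> --token <token>   # on each worker node, plus the discovery token hash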
+
/-
on the keyboard, mouse wheel,
gesture) works.
End-to-end, the major development for this release was done over
around two weeks, which is pretty short: I extensively used Claude
Sonnet and Grok to unblock myself. Not to write code per se - although
there is code written 1:1 by LLMs, most of the code is weirdly
wrong, and I have to either correct it or just use it as a starter and
rewrite most of it. But for discussing, unblocking, and learning about new
things, the current LLMs are very good.
And yet, sometimes even what they're good at fails hard. I asked for
ideas to simplify a piece of code, and it went nowhere, even if there
were significant rewrite possibilities. I spent the brain cycles on
it, reverse engineered my own code, then simplified. I'll have to
write a separate blog post on this.
In any case, this (zooming) was the last major feature I was
missing. There are image viewer libraries, but most of them are slow
compared to the bare-bones (well, now not so much anymore) viewer that
I use as my main viewer. From now on, it will be minor incremental
features, mostly around Exif management/handling, etc. Or, well,
internal cleanups: extend test coverage, remove use of JQuery in the
frontend, etc.; there are tons of things to do.
Fun fact: I managed to discover a Safari iOS bug. Or at least I think
it's a bug, so I reported it and am curious
what'll come out of it.
Finally, I still couldn't fix the GitHub Actions bug where git
describe doesn't see the just-pushed tag, sigh, so the demo site still
lists Corydalis v2024.12.0-133-g00edf63 as the version.
s390x
and mips64el
.
The ppc64el
architecture was added through the generous support of Oregon State University Open Source Laboratory (OSUOSL), and we can support the armel
architecture thanks to CodeThink.
We are all struggling with a massive shift that has happened in the past 10 or 20 years in the software industry. For decades, software reuse was only a lofty goal. Now it s very real. Modern programming environments such as Go, Node and Rust have made it trivial to reuse work by others, but our instincts about responsible behaviors have not yet adapted to this new reality. We all have more work to do.
- maven-lockfile (Lockfiles for Java and Maven)
- observer (Generating SBOMs for C/C++)
- dirty-waters (Transparency checks for software supply chains)

ext4 and an EFI system partition. They go on to list the various methods, and the thread generated at least fifteen replies.
repro-env
tool. As first reported in our July 2023 report, kpcyrd mentions that:
My initial interest in reproducible builds was how do I distribute pre-compiled binaries on GitHub without people raising security concerns about them. I've cycled back to this original problem about 5 years later and built a tool that is meant to address this. [ ]
[ ] Achieving reproducibility at scale remains difficult, especially in Java, due to a range of non-deterministic factors and caveats in the build process. In this work, we focus on reproducibility in Java-based software, archetypal of enterprise applications. We introduce a conceptual framework for reproducible builds, we analyze a large dataset from Reproducible Central and we develop a novel taxonomy of six root causes of unreproducibility. We study actionable mitigations: artifact and bytecode canonicalization using OSS-Rebuild and jNorm respectively. Finally, we present Chains-Rebuild, a tool that raises reproducibility success from 9.48% to 26.89% on 12,283 unreproducible artifacts. To sum up, our contributions are the first large-scale taxonomy of build unreproducibility causes in Java, a publicly available dataset of unreproducible builds, and Chains-Rebuild, a canonicalization tool for mitigating unreproducible builds in Java.
A full PDF of their article is available from arXiv.
equivs
. This is a tool to create trivial Debian packages that might Depend
on other packages. As Roland writes, building the [equivs] package has been reproducible for a while, [but] now the output of the [tool] has become reproducible as well.
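As a quick sketch of what equivs does (my illustration, not from Roland's post):

$ equivs-control dummy.ctl     # write a template control file; edit Package: and Depends: as needed
$ equivs-build dummy.ctl       # build a minimal .deb, e.g. dummy_1.0_all.deb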
rbuilder_setup
tool can now setup the entire framework within less than five minutes. The process is configurable, too, so everything from just the basics to verify builds up to a fully-fledged RB environment is also possible.
libpinyin
issue. Tessy James fixed an issue in arandr
and helped analyze one in libvlc
that led to a proposed upstream fix. Finally, 3pleX fixed an issue which was accepted in upstream kitty
, one in upstream maturin
, one in upstream python-sip
and one in the Nix packaging of python-libbytesize
. Sadly, the funding for this internship fell through, so NixOS were forced to abandon their search.
- --walk argument over the potentially dangerous alternative --scan when calling out to zipdetails(1). [ ]
- The >-based version tests used in conditional fixtures were broken. This was used to ensure that specific tests were only run when the version on the system was newer than a particular number. Thanks to Colin Watson for the report (Debian bug #1102658). [ ]
- test_versions testsuite as well, where we weren't actually testing the greater-than comparisons mentioned above, as it was masked by the tests for equality. [ ]
- debian/salsa-ci.yml instead of using .gitlab-ci.yml. [ ]
stabilize
tool to the Tools page. [ ][ ]
configure.ac (GNU Autotools) example for using SOURCE_DATE_EPOCH. [ ] Chris also updated the SOURCE_DATE_EPOCH snippet and moved the archive metadata to a more suitable location. [ ]
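For context, the widely documented pattern for honouring SOURCE_DATE_EPOCH in a build script looks roughly like this (a sketch of the idea, not the exact snippet referenced above):

# use SOURCE_DATE_EPOCH for embedded timestamps, falling back to the current time
SOURCE_DATE_EPOCH="${SOURCE_DATE_EPOCH:-$(date +%s)}"
BUILD_DATE=$(date -u -d "@${SOURCE_DATE_EPOCH}" +%Y-%m-%d 2>/dev/null \
    || date -u -r "${SOURCE_DATE_EPOCH}" +%Y-%m-%d)   # GNU date, then BSD date fallback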
- armel architecture. [ ][ ]
- codethink05. [ ][ ]
- ppc64el architecture. [ ][ ][ ]
- ppc64el nodes. [ ][ ]9[ ][ ]
- arm64 and armhf nodes. [ ][ ][ ][ ]
- rebuilderd-worker entry point. [ ][ ][ ]
- pkgsync script. [ ][ ][ ][ ][ ][ ][ ][ ]
- riscv64 architecture. [ ][ ]
- rebuilderd service. [ ][ ][ ][ ]
- risvc64 host names [ ] and requested access to all the rebuilderd nodes [ ].
- osuosl3 node to designate 4 workers for bigger builds. [ ]
- command-not-found.
- command-not-found.
- cc65.
- vcsh.
- schism.
- magic-wormhole-mailbox-server.
- openvpn3-client.
- courier.
- cross-toolchain-base.

#reproducible-builds on irc.oftc.net.
rb-general@lists.reproducible-builds.org
setuptools
in the archive to 78.1.0. This brings support
for more comprehensive license expressions
(PEP-639), that people are expected to
adopt soon upstream. While the reverse-autopkgtests all passed, it all came with
some unexpected complications, and turned into a mini-transition. The new
setuptools
broke shebangs for scripts
(pypa/setuptools#4952).
It also required a bump of wheel
to 0.46 and wheel
0.46 now has a dependency
outside the standard library (it de-vendored packaging
). This meant it was no
longer suitable to distribute a standalone wheel .whl
file to seed into new
virtualenvs, as virtualenv
does by default. The good news here is that
setuptools
doesn t need wheel
any more, it included its own
implementation of the bdist_wheel
command, in 70.1. But the world hadn't
adapted to take advantage of this, yet. Stefano scrambled to get all of these
issues resolved upstream and in Debian:
- pip: Don't check for wheel when invoked with --no-use-pep517 (pypa/pip#13330); automatically do --no-use-pep517 builds without wheel (pypa/pip#13358, rejected).
- virtualenv: Don't include wheel (pypa/virtualenv#2868) except on Python 3.8 (pypa/virtualenv#2876), as pip dropped Python 3.8 support in the same release that included #13330.
- python3.13: Update bundled setuptools in test.wheeldata (python/cpython#132415).
- python-cffi: No need to install wheel any more (python-cffi/cffi#165).
python3-wheel-whl is no longer needed in Debian unstable, and it should migrate to trixie.
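As a small illustration of the underlying change (my sketch, not from the report): since setuptools 70.1 the bdist_wheel command ships inside setuptools itself, so an environment no longer needs the separate wheel package just for that command to exist:

$ python3 -c "from setuptools.command.bdist_wheel import bdist_wheel; print('provided by setuptools')"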
libcrypt-dev
from build-essential
, by Helmut Grohne
The crypt
function was originally part of glibc
, but it got separated to
libxcrypt
. As a result, libc6-dev
now depends on libcrypt-dev
. This poses
a cycle during architecture cross bootstrap. As the number of packages actually
using crypt
is relatively small, Helmut
proposed removing
the dependency. He analyzed an archive rebuild kindly performed by Santiago Vila
(not affiliated with Freexian) and estimated the necessary changes. It looks
like we may complete this with modifications to less than 300 source packages in
the forky
cycle. Half of the bugs have been filed at this time. They are
tracked with libcrypt-*
usertags.
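As a rough illustration of what affected packages need (my sketch, not from the report): code calling crypt(3) must now build-depend on libcrypt-dev explicitly and link against libcrypt, along these lines:

# assumes libcrypt-dev is installed; crypt(3) now lives in libxcrypt rather than in libc itself
$ printf '#include <crypt.h>\nint main(void){return crypt("secret","$6$salt$")==0;}\n' > use-crypt.c
$ gcc use-crypt.c -o use-crypt -lcrypt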
wtmpdb last
now reports the correct tty for SSH connections.
debefivm-create
to generate EFI-bootable disk images compatible with other tools such as libvirt
or VirtualBox
. Much of the work was prototyped in earlier months. This
generalizes mmdebstrap-autopkgtest-build-qemu
.
unstable.
systemd would comply with Debian's policy.
Dumat continues to locate problems
here and there yielding discussion occasionally. He sent a patch for an upgrade
problem in zutils.
bsp-2025-04-brazil
usertag to mark those bugs that were
touched by us. You can see the full list of bugs here, although the
current list (as of 2025-05-11) is smaller than the one we had by
the end of April. I don't know what happened; maybe it's some glitch
with the BTS, or maybe someone removed the usertag by mistake.
Hi Joseph and all drivers of the "End of 10" campaign,

On behalf of the entire Debian project, I would like to say that we proudly join your great campaign. We stand with you in promoting Free Software, defending users' freedoms, and protecting our planet by avoiding unnecessary hardware waste. Thank you for leading this important initiative.

I have some goals I would like to share with you for my second term.

Ftpmaster delegation

This splits up into tasks that can be done before and after Trixie release.

Before Trixie:

1. Reducing Barriers to DFSG Compliance Checks

Back in 2002, Debian established a way to distribute cryptographic software in the main archive, whereas such software had previously been restricted to the non-US archive. One result of this arrangement which influences our workflow is that all packages uploaded to the NEW queue must remain on the server that hosts it. This requirement means that members of the ftpmaster team must log in to that specific machine, where they are limited to a restricted set of tools for reviewing uploaded code. This setup may act as a barrier to participation--particularly for contributors who might otherwise assist with reviewing packages for DFSG compliance. I believe it is time to reassess this limitation and work toward removing such hurdles.

In October last year, we had some initial contact with SPI's legal counsel, who noted that US regulations around cryptography have been relaxed somewhat in recent years (as of 2021). This suggests it may now be possible to revisit and potentially revise the conditions under which we manage cryptographic software in the NEW queue. I plan to investigate this further. If you have expertise in software or export control law and are interested in helping with this topic, please get in touch with me. The ultimate goal is to make it easier for more people to contribute to ensuring that code in the NEW queue complies with the DFSG.

2. Discussing Alternatives

My chances to reach out to other distributions remained limited. However, regarding the processing of new software, I learned that OpenSUSE uses a Git-based workflow that requires five "LGTM" approvals from a group of trusted developers. As far as I know, Fedora follows a similar approach.

Inspired by this, a recent community initiative--the Gateway to NEW project--enables peer review of new packages for DFSG compliance before they enter the NEW queue. This effort allows anyone to contribute by reviewing packages and flagging potential issues in advance via Git. I particularly appreciate that the DFSG review is coupled with CI, allowing for both license and technical evaluation.

While this process currently results in some duplication of work--since final reviews are still performed by the ftpmaster team--it offers a valuable opportunity to catch issues early and improve the overall quality of uploads. If the community sees long-term value in this approach, it could serve as a basis for evolving our workflows. Integrating it more closely into DAK could streamline the process, and we've recently seen that merge requests reflecting community suggestions can be accepted promptly.

For now, I would like to gather opinions about how such initiatives could best complement the current NEW processing, and whether greater consensus on trusted peer review could help reduce the burden on the team doing DFSG compliance checks. Submitting packages for review and automated testing before uploading can improve quality and encourage broader participation in safeguarding Debian's Free Software principles. My explicit thanks go out to the Gateway to NEW team for their valuable and forward-looking contribution to Debian.

3. Documenting Critical Workflows

Past ftpmaster trainees have told me that understanding the full set of ftpmaster workflows can be quite difficult. While there is some useful documentation (thanks in particular to Sean Whitton for his work on documenting NEW processing rules), many other important tasks carried out by the ftpmaster team remain undocumented or only partially so. Comprehensive and accessible documentation would greatly benefit current and future team members, especially those onboarding or assisting in specific workflows. It would also help ensure continuity and transparency in how critical parts of the archive are managed.

If such documentation already exists and I have simply overlooked it, I would be happy to be corrected. Otherwise, I believe this is an area where we need to improve significantly. Volunteers with a talent for writing technical documentation are warmly invited to contact me--I'd be happy to help establish connections with ftpmaster team members who are willing to share their knowledge so that it can be written down and preserved.

Once Trixie is released (hopefully before DebConf):

4. Split of the Ftpmaster Team into DFSG and Archive Teams

As discussed during the "Meet the ftpteam" BoF at DebConf24, I would like to propose a structural refinement of the current Ftpmaster team by introducing two different delegated teams:

Andreas Tille
Debian Project Leader