Akvorado 2.0 was released today! Akvorado collects network flows with
IPFIX and sFlow. It enriches flows and stores them in a ClickHouse database.
Users can browse the data through a web console. This release introduces an
important architectural change and other smaller improvements. Let's dive in!
New outlet service
The major change in Akvorado 2.0 is splitting the inlet service into two
parts: the inlet and the outlet. Previously, the inlet handled all flow
processing: receiving, decoding, and enrichment. Flows were then sent to Kafka
for storage in ClickHouse:
Akvorado flow processing before the introduction of the outlet service
Network flows reach the inlet service using UDP, an unreliable protocol. The
inlet must process them fast enough to avoid losing packets. To handle a
high number of flows, the inlet spawns several sets of workers to receive flows,
fetch metadata, and assemble enriched flows for Kafka. Many configuration
options existed for scaling, which increased complexity for users. The code
needed to avoid blocking at any cost, making the processing pipeline complex
and sometimes unreliable, particularly the BMP receiver.1 Adding new
features became difficult without making the problem worse.2
In Akvorado 2.0, the inlet receives flows and pushes them to Kafka without
decoding them. The new outlet service handles the remaining tasks:
Akvorado flow processing after the introduction of the outlet service
This change goes beyond a simple split:3 the outlet now reads flows from
Kafka and pushes them to ClickHouse, two tasks that Akvorado did not handle
before. Flows are heavily batched to increase efficiency and reduce the load
on ClickHouse using ch-go, a low-level Go client for ClickHouse. When
batches are too small, asynchronous inserts are used (e20645). The number of
outlet workers scales dynamically (e5a625) based on the target batch
size and latency (50,000 flows and 5 seconds by default).
This new architecture also allows us to simplify and optimize the code. The
outlet fetches metadata synchronously (e20645). The BMP component becomes
simpler by removing cooperative multitasking (3b9486). Reusing the same
RawFlow object to decode protobuf-encoded flows from Kafka reduces pressure on
the garbage collector (8b580f).
The effect on Akvorado's overall performance was somewhat uncertain, but a
user reported 35% lower CPU usage after migrating from the previous
version, plus resolution of the long-standing BMP component issue.
Other changes
This new version includes many miscellaneous changes, such as completion for
source and destination ports (f92d2e) and automatic restart of the
orchestrator service when the configuration changes (0f72ff), avoiding a
common pitfall for newcomers.
Let's focus on some key areas for this release: observability,
documentation, CI,
Docker, Go, and JavaScript.
Observability
Akvorado exposes metrics to provide visibility into the processing pipeline and
help troubleshoot issues. These are available through Prometheus HTTP metrics
endpoints, such as /api/v0/inlet/metrics. With the introduction
of the outlet, many metrics moved. Some were also renamed (4c0b15) to match
Prometheus best practices. Kafka consumer lag was added as a new metric
(e3a778).
If you do not have your own observability stack, the Docker Compose setup
shipped with Akvorado provides one. You can enable it by activating the profiles
introduced for this purpose (529a8f).
The prometheus profile ships Prometheus to store metrics and Alloy
to collect them (2b3c46, f81299, and 8eb7cd). Redis and Kafka
metrics are collected through the exporter bundled with Alloy (560113).
Other metrics are exposed using Prometheus metrics endpoints and are
automatically fetched by Alloy with the help of some Docker labels, similar to
what is done to configure Traefik. cAdvisor was also added (83d855) to
provide some container-related metrics.
The loki profile ships Loki to store logs (45c684). While Alloy
can collect and ship logs to Loki, its parsing abilities are limited: I could
not find a way to preserve all metadata associated with structured logs produced
by many applications, including Akvorado. Vector replaces Alloy (95e201)
and features a domain-specific language, VRL, to transform logs. Annoyingly,
Vector currently cannot retrieve Docker logs from before it was
started.
Finally, the grafana profile ships Grafana, but the bundled dashboards are
currently broken. Fixing them is planned for a future version.
Documentation
The Docker Compose setup provided by Akvorado makes it easy to get the web
interface up and running quickly. However, Akvorado requires a few mandatory
steps to be functional. It ships with comprehensive documentation, including
a chapter about troubleshooting problems. I hoped this documentation would
reduce the support burden. It is difficult to know if it works. Happy users
rarely report their success, while some users open discussions asking for help
without reading much of the documentation.
In this release, the documentation was significantly improved.
The documentation was updated (fc1028) to match Akvorado's new architecture.
The troubleshooting section was rewritten (17a272). Instructions on how to
improve ClickHouse performance when upgrading from versions earlier than 1.10.0
were added (5f1e9a). An LLM proofread the entire content (06e3f3).
Developer-focused documentation was also improved (548bbb, e41bae, and
871fc5).
From a usability perspective, table-of-contents sections are now collapsible
(c142e5). Admonitions help draw user attention to important points
(8ac894).
Example of use of admonitions in Akvorado's documentation
Continuous integration
This release includes efforts to speed up continuous integration on GitHub.
Coverage and race tests run in parallel (6af216 and fa9e48). The Docker
image builds during the tests but gets tagged only after they succeed
(8b0dce).
GitHub workflow to test and build Akvorado
End-to-end tests (883e19) ensure the shipped Docker Compose setup works as
expected. Hurl runs tests on various HTTP endpoints, particularly to verify
metrics (42679b and 169fa9). For example:
## Test inlet has received NetFlow flows
GET http://127.0.0.1:8080/prometheus/api/v1/query
[Query]
query: sum(akvorado_inlet_flow_input_udp_packets_total{job="akvorado-inlet",listener=":2055"})
HTTP 200
[Captures]
inlet_receivedflows: jsonpath "$.data.result[0].value[1]" toInt
[Asserts]
variable "inlet_receivedflows" > 10

## Test inlet has sent them to Kafka
GET http://127.0.0.1:8080/prometheus/api/v1/query
[Query]
query: sum(akvorado_inlet_kafka_sent_messages_total{job="akvorado-inlet"})
HTTP 200
[Captures]
inlet_sentflows: jsonpath "$.data.result[0].value[1]" toInt
[Asserts]
variable "inlet_sentflows" >= inlet_receivedflows
Docker
Akvorado ships with a comprehensive Docker Compose setup to help users get
started quickly. It ensures a consistent deployment, eliminating many
configuration-related issues. It also serves as a living documentation of the
complete architecture.
This release brings some small enhancements around Docker:
Previously, many Docker images were pulled from the Bitnami Containers
library. However, VMware acquired Bitnami in 2019 and Broadcom acquired
VMware in 2023. As a result, Bitnami images were deprecated in less than a
month. This was not really a surprise.4 Previous versions of Akvorado
had already started moving away from them. In this release, the Apache project's
Kafka image replaces the Bitnami one (1eb382). Thanks to the switch to KRaft
mode, Zookeeper is no longer needed (0a2ea1, 8a49ca, and f65d20).
Akvorado's Docker images were previously compiled with Nix. However, building
AArch64 images on x86-64 is slow because it relies on QEMU userland emulation.
The updated Dockerfile uses multi-stage and multi-platform builds: one
stage builds the JavaScript part on the host platform, one stage builds the Go
part cross-compiled on the host platform, and the final stage assembles the
image on top of a slim distroless image (268e95 and d526ca).
# This is a simplified version
FROM --platform=$BUILDPLATFORM node:20-alpine AS build-js
RUN apk add --no-cache make
WORKDIR /build
COPY console/frontend console/frontend
COPY Makefile .
RUN make console/data/frontend

FROM --platform=$BUILDPLATFORM golang:alpine AS build-go
RUN apk add --no-cache make curl zip
WORKDIR /build
COPY . .
COPY --from=build-js /build/console/data/frontend console/data/frontend
RUN go mod download
RUN make all-indep
ARG TARGETOS TARGETARCH TARGETVARIANT VERSION
RUN make

FROM gcr.io/distroless/static:latest
COPY --from=build-go /build/bin/akvorado /usr/local/bin/akvorado
ENTRYPOINT ["/usr/local/bin/akvorado"]
When building for multiple platforms with --platform
linux/amd64,linux/arm64,linux/arm/v7, the build steps up to the ARG line
execute only once for all platforms. This significantly speeds up the
build.
Akvorado now ships Docker images for these platforms: linux/amd64,
linux/amd64/v3, linux/arm64, and linux/arm/v7. When requesting
ghcr.io/akvorado/akvorado, Docker selects the best image for the current CPU.
On x86-64, there are two choices. If your CPU is recent enough, Docker
downloads linux/amd64/v3. This version contains additional optimizations and
should run faster than the linux/amd64 version. It would be interesting to
ship an image for linux/arm64/v8.2, but Docker does not support the same
mechanism for AArch64 yet (792808).
Go
This release includes many Go-related changes that are not visible to users.
Toolchain
In the past, Akvorado supported the two latest Go versions, preventing immediate
use of the latest enhancements. The goal was to allow users of stable
distributions to use Go versions shipped with their distribution to compile
Akvorado. However, this became frustrating when interesting features, like go
tool, were released. Akvorado 2.0 requires Go 1.25 (77306d) but can be
compiled with older toolchains by automatically downloading a newer one
(94fb1c).5 Users can still override GOTOOLCHAIN to revert this
decision. The recommended toolchain updates weekly through CI to ensure we get
the latest minor release (5b11ec). This change also simplifies updates to
newer versions: only go.mod needs updating.
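Under stated assumptions (the module path and exact patch version below are illustrative), the relevant part of go.mod could look like this; with the default GOTOOLCHAIN=auto, an older installed go command downloads the requested toolchain automatically, while GOTOOLCHAIN=local keeps the system toolchain.

```
// go.mod (illustrative excerpt)
module github.com/akvorado/akvorado

go 1.25.0
```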
Thanks to this change, Akvorado now uses wg.Go() (77306d) and I have
started converting some unit tests to the new test/synctest package
(bd787e, 7016d8, and 159085).
Testing
When testing equality, I use a helper function Diff() to display the
differences when a test fails.
This function uses kylelemons/godebug. This package is
no longer maintained and has some shortcomings: for example, by default, it does
not compare struct private fields, which may cause unexpectedly successful
tests. I replaced it with google/go-cmp, which is stricter
and has better output (e2f1df).
Another package for Kafka
Another change is the switch from Sarama to franz-go to interact with
Kafka (756e4a and 2d26c5). The main motivation for this change is to
get a better concurrency model. Sarama heavily relies on channels and it is
difficult to understand the lifecycle of an object handed to this package.
franz-go uses a more modern approach with callbacks6 that is both more
performant and easier to understand. It also ships with a package to spawn fake
Kafka broker clusters, which is more convenient than the mocking functions
provided by Sarama.
Improved routing table for BMP
To store its routing table, the BMP component used
kentik/patricia, an implementation of a patricia tree
focused on reducing garbage collection pressure.
gaissmai/bart is a more recent alternative using an
adaptation of Donald Knuth's ART algorithm that promises better
performance and delivers it: 90% faster lookups and 27% faster
insertions (92ee2e and fdb65c).
Unlike kentik/patricia, gaissmai/bart does not help efficiently store values
attached to each prefix. I adapted the same approach as kentik/patricia to
store route lists for each prefix: store a 32-bit index for each prefix, and use
it to build a 64-bit index for looking up routes in a map. This leverages Go's
efficient map structure.
gaissmai/bart also supports a lockless version of the routing table, but
adopting it is not simple: the lockless approach would need to extend to the
map storing the routes and to the interning mechanism. I also attempted to use
Go's new unique package to replace the intern package included in Akvorado,
but performance was worse.7
Miscellaneous
Previous versions of Akvorado were using a custom Protobuf encoder for
performance and flexibility. With the introduction of the outlet service,
Akvorado only needs a simple static schema, so this code was removed. However,
it is possible to enhance performance with
planetscale/vtprotobuf (e49a74 and 8b580f).
Moreover, the dependency on protoc, a C++ program, was somewhat annoying.
Therefore, Akvorado now uses buf, written in Go, to convert a Protobuf
schema into Go code (f4c879).
Another small optimization to reduce the size of the Akvorado binary by
10 MB was to compress the static assets embedded in Akvorado in a ZIP file. It
includes the ASN database, as well as the SVG images for the documentation. A
small layer of code makes this change transparent (b1d638 and e69b91).
JavaScript
Recently, two large supply-chain attacks hit the JavaScript ecosystem: one
affecting the popular packages chalk and debug and another
impacting the popular package @ctrl/tinycolor. These attacks also
exist in other ecosystems, but JavaScript is a prime target due to heavy use of
small third-party dependencies. The previous version of Akvorado relied on 653
dependencies.
npm-run-all was removed (3424e8, 132 dependencies). patch-package was
removed (625805 and e85ff0, 69 dependencies) by moving missing
TypeScript definitions to env.d.ts. eslint was replaced with oxlint, a
linter written in Rust (97fd8c, 125 dependencies, including the plugins).
I switched from npm to Pnpm, an alternative package manager (fce383).
Pnpm does not run install scripts by default8 and prevents installing
packages that are too recent. It is also significantly faster.9 Node.js
does not ship Pnpm but it ships Corepack, which allows us to use Pnpm
without installing it. Pnpm can also list licenses used by each dependency,
removing the need for license-compliance (a35ca8, 42 dependencies).
For additional speed improvements, beyond switching to Pnpm and Oxlint, Vite
was replaced with its faster Rolldown version (463827).
After these changes, Akvorado only pulls 225 dependencies.
Next steps
I would like to land three features in the next version of Akvorado:
Add Grafana dashboards to complete the observability stack. See issue
#1906 for details.
Integrate OVH's Grafana plugin by providing a stable API for such
integrations. Akvorado's web console would still be useful for browsing
results, but if you want to build and share dashboards, you should switch to
Grafana. See issue #1895.
Move some work currently done in ClickHouse (custom dictionaries, GeoIP and IP
enrichment) back into the outlet service. This should give more flexibility
for adding features like the one requested in issue #1030. See issue #2006.
I started working on splitting the inlet into two parts more than one year ago.
I found more motivation in recent months, partly thanks to Claude Code,
which I used as a rubber duck. Almost none of the produced code was
kept:10 it is like an intern who does not learn.
Many attempts were made to make the BMP component both performant and
not blocking. See for example PR #254, PR #255, and PR #278.
Despite these efforts, this component remained problematic for most users.
See issue #1461 as an example.
Some features have been pushed to ClickHouse to avoid the
processing cost in the inlet. See for example PR #1059.
Broadcom is known for its user-hostile moves. Look at what happened
with VMWare.
As a Debian developer, I dislike these mechanisms that circumvent
the distribution package manager. The final straw came when Go 1.25 spent one month in the Debian NEW queue, an arbitrary mechanism I
don't like at all.
In the early years of Go, channels were heavily promoted. Sarama
was designed during this period. A few years later, a more nuanced approach
emerged. See notably Go channels are bad and you should feel bad.
This should be investigated further, but my theory is that the
intern package uses 32-bit integers, while unique uses 64-bit pointers.
See commit 74e5ac.
This is also possible with npm. See commit dab2f7.
An even faster alternative is Bun, but it is less available.
The exceptions are part of the code for the admonition blocks,
the code for collapsing the table of contents, and part of the documentation.
The solar fence and some other ground and pole mount solar panels, seen through leaves.
Solar fencing manufacturers have some good, simple designs, but it's hard
to buy them for a small installation: they mostly sell to utility-scale
solar, and those arrays are installed by driving metal beams into the ground,
which requires heavy machinery.
Since I have experience with Ironridge rails for roof mount solar, I
decided to adapt that system for a vertical mount, something it was not
designed for. I combined the Ironridge hardware with regular parts from the
hardware store.
The cost of mounting solar panels nowadays is often higher than the cost of
the panels. I hoped to match the cost, and I nearly did. The solar panels cost
$100 each, and the fence cost $110 per solar panel. This fence was
significantly cheaper than conventional ground mount arrays that I
considered as alternatives, and made better use of a difficult hillside
location.
I used 7 foot long Ironridge XR-10 rails, which fit 2 solar panels per rail.
(Longer rails would need a center post anyway, and the 7 foot long rails
have cheaper shipping, since they do not need to be shipped freight.)
For the fence posts, I used regular 4x4" treated posts. 12 foot long, set
in 3 foot deep post holes, with 3x 50 lb bags of concrete per hole and 6
inches of gravel on the bottom.
detail of how the rails are mounted to the posts, and the panels to the rails
To connect the Ironridge rails to the fence posts, I used the Ironridge
LFT-03-M1 slotted L-foot bracket. Screwed into the post with a 5/8 x 3
inch hot-dipped galvanized lag screw. Since a treated post can react badly
with an aluminum bracket, there needs to be some flashing between the post
and bracket. I used Shurtape PW-100 tape for that. I see no sign of
corrosion after 1 year.
The rest of the Ironridge system is a T-bolt that connects the rail to the
L-foot (part BHW-SQ-02-A1), and Ironridge solar panel fasteners
(UFO-CL-01-A1 and UFO-STP-40MM-M1). Also XR-10 end caps and wire clips.
Since the Ironridge hardware is not designed to hold a solar panel at a 90
degree angle, I was concerned that the panels might slide downward over
time. To help prevent that, I added some additional support brackets under
the bottom of the panels. So far, that does not seem to have been a problem
though.
I installed Aptos 370 watt solar panels on the fence. They are bifacial,
and while the posts block the back partially, there is still bifacial
gain on cloudy days. I left enough space under the solar panels to be able
to run a push mower under them.
Me standing in front of the solar fence at end of construction
I put pairs of posts next to one-another, so each 7 foot segment of fence
had its own 2 posts. This is the least elegant part of this design, but
fitting 2 brackets next to one-another on a single post isn't feasible.
I bolted the pairs of posts together with some spacers. A side benefit of
doing it this way is that treated lumber can warp as it dries, and this
prevented much twisting of the posts.
Using separate posts for each segment also means that the fence can
traverse a hill easily. And it does not need to be perfectly straight. In
fact, my fence has a 30 degree bend in the middle. This means it has both
south facing and south-west facing panels, so can catch the light for
longer during the day.
After building the fence, I noticed there was a slight bit of sway at the
top, since 9 feet of wooden post is not entirely rigid. My worry was that a
gusty wind could rattle the solar panels. While I did not actually observe
that happening, I added some diagonal back bracing for peace of mind.
view of rear upper corner of solar fence, showing back bracing connection
Inspecting the fence today, I find no problems after the first year. I hope
it will last 30 years, with the lifespan of the treated lumber
being the likely determining factor.
As part of my larger (and still ongoing) ground mount solar install, the
solar fence has consistently provided great power. The vertical orientation
works well at latitude 36. It also turned out that the back of the fence was
useful to hang conduit and wiring and solar equipment, and so it turned into
the electrical backbone of my whole solar field. But that's another story..
solar fence parts list
DebConf26 is already in the air in Argentina. Organizing DebConf26 gives us the
opportunity to talk about Debian in our country again. This is not the first
time that Debian has come here: Argentina previously hosted DebConf 8 in Mar
del Plata.
In August, Nattie Mayer-Hutchings and Stefano Rivera from DebConf Committee
visited the venue where the next DebConf will take place. They came to Argentina
in order to see what it is like to travel from Buenos Aires to Santa Fe (the
venue of the next DebConf). In addition, they were able to observe the layout
and size of the classrooms and halls, as well as the infrastructure available at
the venue, which will be useful for the Video Team.
But before going to Santa Fe, on August 27th, we organized a meetup in
Buenos Aires at GCoop, where we hosted some talks:
¿Qué es Debian? (What is Debian?) - Pablo Gonzalez (sultanovich) / Emmanuel Arias
On August 28th, we had the opportunity to get to know the venue. We walked around
the city and, obviously, sampled some of the beers from Santa Fe.
On August 29th we met with representatives of the University and local government
who were all very supportive. We are very grateful to them for opening
their doors to DebConf.
In the afternoon we met some of the local free software community at an event we
held in ATE Santa Fe. The event included several talks:
¿Qué es Debian? (What is Debian?) - Pablo (sultanovich) / Emmanuel Arias
Ciberrestauradores: Gestores de basura electrónica (e-waste managers) - Programa RAEES Acutis
Debian and DebConf (Stefano Rivera/Nattie Mayer-Hutchings)
Thanks to Debian Argentina, and all the people who will make DebConf26
possible.
Thanks to Nattie Mayer-Hutchings and Stefano Rivera for reviewing an earlier
version of this article.
Preparing for setup.py install deprecation, by Colin Watson
setuptools upstream will be removing the setup.py install command
on 31 October. While this may not trickle down immediately into Debian, it does
mean that in the near future nearly all Python packages will have to use
pybuild-plugin-pyproject (though they don't necessarily have to use
pyproject.toml; this is just a question of how the packaging runs the build
system). Some of the Python team talked about this a bit at DebConf, and Colin
volunteered to write up some notes
on cases where this isn't straightforward. This page will likely grow as the
team works on this problem.
Salsa CI, by Santiago Ruano Rincón
Santiago fixed some pending issues in the MR that moves the pipeline to sbuild+unshare,
and after several months, Santiago was able to mark the MR as ready. Part of the
recent fixes include handling external repositories,
honoring the RELEASE autodetection from d/changelog
(thanks to Ahmed Siam for spotting the main cause of the issue), and fixing a
regression about the apt resolver for *-backports releases.
Santiago is currently waiting for a final review and approval from other members
of the Salsa CI team before merging it. Thanks to all the folks who
have helped testing the changes or provided feedback so far. If you want to test
the current MR, you need to include the following pipeline definition in your
project's CI config file:
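The announcement does not include the exact pipeline definition, which depends on the MR's branch. As a hedged sketch (the branch name below is a placeholder, not taken from the announcement), it would follow GitLab CI's usual remote-include form:

```yaml
# .gitlab-ci.yml -- illustrative only; replace <mr-branch> with the MR's branch
include:
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/<mr-branch>/recipes/debian.yml
```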
As a reminder, this MR will make the Salsa CI pipeline build packages in a way
closer to how they are built by the official Debian builders. It will also save
some resources, since the default pipeline will have one stage fewer (the
provisioning stage), and will make it possible for more projects to be built on
salsa.debian.org (including large projects and
those from the OCaml ecosystem), etc. See the different issues being fixed in
the MR description.
Debian 13 trixie release, by Emilio Pozuelo Monfort
On August 9th, Debian 13 trixie was released, building on two years worth of
updates and bug fixes from hundreds of developers. Emilio helped coordinate the
release, communicating with several teams involved in the process.
DebConf 26 Site Visit, by Stefano Rivera
Stefano visited Santa Fe, Argentina, the site for DebConf 26
next year. The aim of the visit was to help build a local team and see the
conference venue first-hand. Stefano and Nattie represented the DebConf
Committee. The local team organized Debian meetups in Buenos Aires and Santa Fe,
where Stefano presented a talk
on Debian and DebConf. Venues were scouted
and the team met with the university management and local authorities.
Miscellaneous contributions
Raphaël updated tracker.debian.org after the
trixie release to add the new forky release in the set of monitored
distributions.
He also reviewed and deployed the work of Scott Talbert
showing open merge requests from salsa in the action needed panel.
Raphaël reviewed some DEP-3 changes
to modernize the embedded examples in light of the broad git adoption.
Raphaël configured new workflows
on debusine.debian.net to upload to trixie and
trixie-security, and officially announced the service
on debian-devel-announce, inviting Debian developers to try the service for
their next upload to unstable.
Carles created a merge request
for django-compressor upstream to fix an error when concurrent node processing
happened. This will allow removing a workaround
added in openstack-dashboard and avoid the same bug in other projects that use
django-compressor.
Carles prepared a system to detect packages whose Recommends refer to packages
that do not exist in unstable. He processed 16% of the reports (either
reporting them or ignoring them due to mis-detected or temporary problems) and
will continue next month.
Carles familiarized himself with the freedict-wikdict package and gave feedback
on it. He planned contributions with the maintainer to improve the package.
Helmut responded to queries related to /usr-move.
Helmut adapted crossqa.d.n to the release of trixie.
Helmut diagnosed sufficient failures in rebootstrap
to make it work with gcc-15.
Faidon discovered that the Multi-Arch hinter would emit confusing hints about
:any annotations. Helmut identified the root cause to be the handling of
virtual packages and fixed it.
Colin upgraded about 70 Python packages to new upstream versions, which is
around 10% of the backlog; this included a complicated Pydantic upgrade in
collaboration with the Rust team.
Colin fixed
a bug in debbugs that caused incoming emails to bugs.debian.org with certain
header contents to go missing.
Thorsten uploaded sane-airscan, which was already in experimental, to unstable.
Thorsten created a script to automate the upload of new upstream versions of
foomatic-db. The database contains information about printers and regularly gets
an update. Now it is possible to keep the package more up to date in Debian.
Stefano prepared updates to almost all of his packages that had new versions
waiting to upload to unstable. (beautifulsoup4, hatch-vcs, mkdocs-macros-plugin,
pypy3, python-authlib, python-cffi, python-mitogen, python-pip, python-pipx,
python-progress, python-truststore, python-virtualenv, re2, snowball, soupsieve).
Stefano uploaded two new python3.13 point releases to unstable.
Stefano updated distro-info-data in stable releases, to document the trixie
release and expected EoL dates.
Stefano did some debian.social sysadmin work (keeping up quotas with growing
databases and filesystems).
Stefano supported the Debian treasurers in processing some of the DebConf 25
reimbursements.
Lucas uploaded ruby3.4 to experimental. It was already approved by FTP masters.
Lucas uploaded ruby-defaults to experimental to add support for ruby3.4. It
will allow us to start triggering test rebuilds and catch any FTBFS with ruby3.4.
Lucas did some administrative work for Google Summer of Code (GSoC) and
replied to some queries from mentors and students.
Anupa helped to organize release parties for Debian 13 and Debian Day events.
Anupa did the live coverage for the Debian 13 release and prepared the Bits
post for the release announcement and 32nd Debian Day as part of the Debian
Publicity team.
Anupa attended a Debian Day event
organized by FOSS club SSET as a speaker.
Tobias Frost
did 4.0h (out of 0.0h assigned and 12.0h from previous period), thus carrying over 8.0h to the next month.
Utkarsh Gupta
did 16.0h (out of 22.75h assigned), thus carrying over 6.75h to the next month.
Evolution of the situation
In August, we released 27 DLAs.
The month of August marked the release of Debian 13 (codename "trixie"). This is worth noting because it brought with it the return of the customary fast development pace of Debian unstable, which included several contributions from LTS Team members. More on that below.
Of the many security updates which were published (and a few non-security updates as well), some notable ones are highlighted here.
Notable security updates:
gnutls28, prepared by Adrian Bunk, fixes several potential denial of service vulnerabilities
apache2, prepared by Bastien Roucariès, fixes several vulnerabilities including a potential denial of service and SSL/TLS-related access control
mbedtls (original update, regression update) prepared by Andrej Shadura, fixes several potential denial of service and information disclosure vulnerabilities
openjdk-17, prepared by Emilio Pozuelo Monfort, fixes several vulnerabilities which could result in denial of service, information disclosure or weakened TLS connections
Notable non-security updates:
distro-info-data, prepared by Stefano Rivera, adds information concerning future Debian and Ubuntu releases
ca-certificates-java, prepared by Bastien Roucariès, fixes some bugs which could disrupt future updates
The LTS Team continues to welcome the collaboration of maintainers from across the Debian community. The contributions of maintainers from outside the LTS Team include: postgresql-13 (Christoph Berg), sope (Jordi Mallach), thunderbird (Carsten Schoenert), and iperf3 (Roberto Lumbreras).
Finally, LTS Team members also contributed updates of the following packages:
redis (to stable), prepared by Chris Lamb
firebird3.0 (to oldstable and stable), prepared by Adrian Bunk
node-tmp (to oldstable, stable, and unstable), prepared by Adrian Bunk
openjpeg2 (to oldstable, stable, and unstable), prepared by Adrian Bunk
apache2 (to oldstable), prepared by Bastien Roucariès
unbound (to oldstable), prepared by Guilhem Moulin
luajit (to oldstable), prepared by Guilhem Moulin
golang-github-gin-contrib-cors (to oldstable and stable), prepared by Thorsten Alteholz
libcoap3 (to stable), prepared by Thorsten Alteholz
libcommons-lang-java and libcommons-lang3-java (both to unstable), prepared by Daniel Leidert
python-flask-cors (to oldstable), prepared by Daniel Leidert
The LTS Team would especially like to thank our many longtime friends and sponsors for their support and collaboration.
Thanks to our sponsors
Sponsors that joined recently are in bold.
Debian LTS
This was my one-hundred-and-thirty-fourth month doing work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:
[DLA 4272-1] aide security update fixing two CVEs where a local attacker can
take advantage of these flaws to hide the addition or removal of a file from
the report, tamper with the log output, or cause aide to crash during report
printing or database listing.
[DLA 4284-1] udisks2 security update to fix one CVE related to a possible local privilege escalation.
[DLA 4285-1] golang-github-gin-contrib-cors security update to fix one CVE related to circumvention of restrictions.
[#1112054] trixie-pu of golang-github-gin-contrib-cors, prepared and uploaded
[#1112335] trixie-pu of libcoap3, prepared and uploaded
[#1112053] bookworm-pu of golang-github-gin-contrib-cors, prepared and uploaded
I also continued my work on suricata and could backport all patches. Now I have to do some tests with the package. I also started to work on an openafs regression and attended the monthly LTS/ELTS meeting.
Debian ELTS
This month was the eighty-fifth ELTS month. During my allocated time I uploaded or worked on:
[ELA-1499-1] aide security update to fix one embargoed CVE in Stretch, related to a crash. The other CVE mentioned above was not affecting Stretch.
[ELA-1508-1] udisks2 security update to fix one embargoed CVE in Stretch and Buster.
I could also mark the CVEs of libcoap as not-affected.
I also attended the monthly LTS/ELTS meeting. As with LTS, suricata has now been requested for Stretch as well, so I did not finish my work on it yet.
Debian Printing
This month I uploaded a new upstream version or a bugfix version of:
In my fight against outdated RFPs, I closed 31 of them in August.
FTP master
Yeah, Trixie has been released; the tired bones need to be awakened again :-). This month I accepted 203 and rejected 18 packages. The overall number of packages that got accepted was 243.
Welcome to the latest report from the Reproducible Builds project for August 2025. These monthly reports outline what we've been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. If you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.
In this report:
Reproducible Builds Summit 2025
Please join us at the upcoming Reproducible Builds Summit, set to take place from October 28th to 30th 2025 in Vienna, Austria!
We are thrilled to host the eighth edition of this exciting event, following the success of previous summits in various iconic locations around the world, including Venice, Marrakesh, Paris, Berlin, Hamburg and Athens. Our summits are a unique gathering that brings together attendees from diverse projects, united by a shared vision of advancing the Reproducible Builds effort.
During this enriching event, participants will have the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. Our aim is to create an inclusive space that fosters collaboration, innovation and problem-solving.
If you're interested in joining us this year, please make sure to read the event page, which has more details about the event and location. Registration is open until 20th September 2025, and we are very much looking forward to seeing many readers of these reports there!
Reproducible Builds and live-bootstrap at WHY2025
WHY2025 (What Hackers Yearn) is a nonprofit outdoors hacker camp that takes place in Geestmerambacht in the Netherlands (approximately 40km north of Amsterdam). The event is organised for and by volunteers from the worldwide hacker community, and "knowledge sharing, technological advancement, experimentation, connecting with your hacker peers, forging friendships and hacking are at the core of this event".
At this year's event, Frans Faase gave a talk on live-bootstrap, an attempt to provide "a reproducible, automatic, complete end-to-end bootstrap from a minimal number of binary seeds to a supported fully functioning operating system".
Frans' talk is available to watch on video, and his slides are available as well.
DALEQ: Explainable Equivalence for Java Bytecode
Jens Dietrich of the Victoria University of Wellington, New Zealand, and Behnaz Hassanshahi of Oracle Labs, Australia, published an article this month entitled DALEQ: Explainable Equivalence for Java Bytecode, which explores the options and difficulties when Java binaries are not identical despite being built from the same sources, and what avenues are available for proving equivalence despite the lack of bitwise correlation:
[Java] binaries are often not bitwise identical; however, in most cases, the differences can be attributed to variations in the build environment, and the binaries can still be considered equivalent. Establishing such equivalence, however, is a labor-intensive and error-prone process.
Jens and Behnaz therefore propose a tool called DALEQ, which:
disassembles Java byte code into a relational database, and can normalise this database by applying Datalog rules. Those databases can then be used to infer equivalence between two classes. Notably, equivalence statements are accompanied with Datalog proofs recording the normalisation process. We demonstrate the impact of DALEQ in an industrial context through a large-scale evaluation involving 2,714 pairs of jars, comprising 265,690 class pairs. In this evaluation, DALEQ is compared to two existing bytecode transformation tools. Our findings reveal a significant reduction in the manual effort required to assess non-bitwise equivalent artifacts, which would otherwise demand intensive human inspection. Furthermore, the results show that DALEQ outperforms existing tools by identifying more artifacts rebuilt from the same code as equivalent, even when no behavioral differences are present.
Reproducibility regression identifies issue with AppArmor security policies
Tails developer intrigeri has tracked and followed a reproducibility regression in the generation of AppArmor policy caches, and has identified an issue with the 4.1.0 version of AppArmor.
Although initially tracked on the Tails issue tracker, intrigeri filed an issue on the upstream bug tracker. AppArmor developer John Johansen replied, confirming that they can reproduce the issue and went to work on a draft patch. Through this, John revealed that it was caused by an actual underlying security bug in AppArmor that is to say, it resulted in permissions not (always) matching what the policy intends and, crucially, not merely a cache reproducibility issue.
Work on the fix is ongoing at time of writing.
Rust toolchain fixes
Rust Clippy is a linting tool for the Rust programming language. It provides a collection of lints (rules) designed to identify common mistakes, stylistic issues, potential performance problems and unidiomatic code patterns in Rust projects. This month, Sosthène Guédon filed a new issue on GitHub requesting a new check that would lint against non-deterministic operations in proc-macros, such as iterating over a HashMap.
Dropping support for the armhf architecture. Since July 2015, Vagrant Cascadian has been hosting a zoo of approximately 35 armhf systems which were used for building Debian packages for that architecture.
Holger Levsen also uploaded strip-nondeterminism, our program that improves reproducibility by stripping out non-deterministic information such as timestamps or other elements introduced during packaging. This new version, 1.14.2-1, adds some metadata to aid the deputy tool. ( #1111947)
Lastly, Bernhard M. Wiedemann posted another openSUSE monthly update for their work there.
diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 303, 304 and 305 to Debian:
Improvements:
Use sed(1) backreferences when generating debian/tests/control to avoid duplicating ourselves. []
Move from a mono-utils dependency to versioned mono-devel mono-utils dependency, taking care to maintain the [!riscv64] architecture restriction. []
Use sed over awk to avoid mangling dependency lines containing = (equals) symbols such as version restrictions. []
Bug fixes:
Fix a test after the upload of systemd-ukify version 258~rc3. []
Ensure that Java class files are named .class on the filesystem before passing them to javap(1). []
Do not run jsondiff on files over 100KiB as the algorithm runs in O(n^2) time. []
Don t check for PyPDF version 3 specifically; check for >= 3. []
Misc:
Update copyright years. [][]
In addition, Martin Joerg fixed an issue with the HTML presenter to avoid a crash when the page limit is None [] and Zbigniew Jędrzejewski-Szmek fixed compatibility with RPM 6 []. Lastly, John Sirois fixed a missing requests dependency in the trydiffoscope tool. []
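The 100 KiB cut-off mentioned in the jsondiff fix above is easy to picture. Here is a minimal sketch of such a size gate (hypothetical names, not diffoscope's actual internals):

```python
import os
import tempfile

MAX_JSONDIFF_SIZE = 100 * 1024  # 100 KiB: beyond this, an O(n^2) diff gets too slow

def small_enough(path, limit=MAX_JSONDIFF_SIZE):
    # Gate the expensive comparator on input size; larger files would
    # fall back to a cheaper byte-level comparison instead.
    return os.path.getsize(path) <= limit

# Tiny demonstration with a throwaway 1 KiB file:
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 1024)
print(small_enough(f.name))  # → True
```

The point of guarding on size rather than content is that the check is O(1): a single stat() call decides whether the quadratic algorithm is worth attempting at all.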
Website updates
Once again, there were a number of improvements made to our website this month including:
Chris Lamb:
Write and publish a news entry for the upcoming summit. []
Add some assets used at FOSSY, such as the badges and the paper handouts. []
Reproducibility testing framework
The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In August, however, a number of changes were made by Holger Levsen, including:
Ignore that the megacli RAID controller requires packages from Debian bookworm. []
In addition,
James Addison migrated away from the deprecated toplevel deb822 Python module in favour of debian.deb822 in the bin/reproducible_scheduler.py script [] and removed a note on reproduce.debian.net after the release of Debian trixie [].
Jochen Sprickerhof made a huge number of improvements to the reproduce.debian.net statistics calculation [][][][][][] as well as to the reproduce.debian.net service more generally [][][][][][][][].
Mattia Rizzolo performed a lot of work migrating scripts to SQLAlchemy version 2.0 [][][][][][] in addition to making some changes to the way openSUSE reproducibility tests are handled internally. []
Lastly, Roland Clobus updated the Debian Live packages after the release of Debian trixie. [][]
Upstream patches
The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:
Finally, if you are interested in contributing to the Reproducible Builds project, please visit the Contribute page on our website. You can also get in touch with us via:
About 95% of my Debian contributions this month were
sponsored by Freexian.
You can also support my work directly via
Liberapay or GitHub
Sponsors.
Python team
forky is
open!
As a result I'm starting to think about the upcoming Python
3.14. At some point we'll doubtless do
a full test rebuild, but in advance of that I concluded that one of the most
useful things I could do would be to work on our very long list of packages
with new upstream
versions.
Of course there's no real chance of this ever becoming empty since upstream
maintainers aren't going to stop work for that long, but there are a lot of
packages there where we're quite a long way out of date, and many of those
include fixes that we'll need for 3.14, either directly or by fixing
interactions with new versions of other packages that in turn will need to
be fixed. We can backport changes when we need to, but more often than not
the most efficient way to do things is just to keep up to date.
So, I upgraded these packages to new upstream versions (deep breath):
That's only about 10% of the backlog, but of course others are working on
this too. If we can keep this up for a while then it should help.
I packaged pytest-run-parallel,
pytest-unmagic (still in NEW), and
python-forbiddenfruit (still in NEW),
all needed as new dependencies of various other packages.
setuptools upstream will be removing the setup.py install
command on 31
October. While this may not trickle down immediately into Debian, it does
mean that in the near future nearly all Python packages will have to use
pybuild-plugin-pyproject (note that this does not mean that they
necessarily have to use pyproject.toml; this is just a question of how the
packaging runs the build system). We talked about this a bit at DebConf,
and I said that I'd noticed a number of packages where this isn't
straightforward and promised to write up some notes. I wrote the
Python/PybuildPluginPyproject
wiki page for this; I expect to add more bits and pieces to it as I find them.
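As a rough sketch of what the switch usually involves (package names as in current Debian practice; the exact list varies per package and is covered on the wiki page), enabling the plugin is mostly a matter of build-dependencies in debian/control:

```
Build-Depends: debhelper-compat (= 13),
               dh-sequence-python3,
               pybuild-plugin-pyproject,
               python3-all,
               python3-setuptools
```

With pybuild-plugin-pyproject in Build-Depends, pybuild drives the build through the PEP 517 interface instead of invoking setup.py install directly; the upstream project itself may still use setup.py rather than pyproject.toml.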
On that note, I converted several packages to pybuild-plugin-pyproject:
I reviewed Debian defaults: nftables as banaction and systemd as
backend,
but it looked as though nothing actually needed to be changed so we closed
this with no action.
Rust team
Upgrading Pydantic was complicated, and required a rust-pyo3 transition
(which Jelmer Vernooij started and Peter Michael Green has mostly been
driving, thankfully), packaging rust-malloc-size-of (including an upstream
portability fix), and
upgrading several packages to new upstream versions:
Another short status update of what happened on my side last month. Released
Phosh 0.49.0
and added some more QoL improvements to Phosh Mobile stack
(e.g. around Cell broadcasts). Also pulled my SHIFT6mq out of the
drawer (where it had been sitting far too long) and got it to
show a picture after a small driver fix. Thanks to the work the
sdm845-mainlining folks are doing that was all that was needed. If I
can get touch to work better that would be another nice device for
demoing Phosh.
See below for details on the above and more:
phosh
Due to the freeze I did not do that many uploads in the last few months, so
there were various new releases I packaged once Trixie was released. Regarding
the release of Debian 13, Trixie, I wrote a small summary of the changes in my
packages.
I uploaded an unreleased version of cage to experimental, to prepare for the
transition to wlroots-0.19. Both sway and labwc already had packages in
experimental that depended on the new wlroots version. When the transition
happened, I uploaded the cage version to unstable, as well as labwc 0.9.1
and sway 1.11.
I updated
foot to 1.23.1
waybar to 0.14.0
swaylock to 1.8.3
git-quick-stats to 2.7.0
swayimg to 4.5
usbguard to 1.1.4
fcft to 3.3.2
fnott to 1.8.0
wdisplays to 1.1.3
wev to 1.1.0
wlopm to 1.0.0
wmenu to 0.2.0
libsfdo to 0.1.4
I uploaded most of the packages using git-debpush; some of them could not
be uploaded this way due to upstream using git submodules (this is
#1107219). I also
created #1112040 (git-debpush: should also say which tag it
created) and
#1111504 (git-debpush: pristine-tar check warns about pristine-tar data that's
not present), which
is already fixed.
I uploaded wayback 0.2 to NEW, where it is waiting for review (ITP).
In my dayjob I added extended the place lookup form of apis-core-rdf to allow
searching places and selecting them on a map using leaflet and the nominatim
API. Another issue I worked on was about highlighting those inputs of our
generic list filter that are used to filter the results. I released a couple
of bugfix releases for the v0.50 release, then v0.51 and two bugfix releases
and then v0.52 and another couple of bugfix releases. v0.53 will land in a
couple of days. I also released v0.6.2 of apis-highlighter-ng, which is sort
of a plugin for apis-core-rdf, that allows to highlight parts of a text and
link them to whatever Django object (in our case relations).
Here's my monthly, but brief, update about the activities I've done in the F/L/OSS world.
Debian
Debian 13 was released! Woot!
Whilst I didn't get a chance to do much, here are still a few things that I worked on:
Helped Anshul with Golang 1.25 packaging and upload.
Assisted Anshul in fixing Golang bugs in the stable release via a -pu.
Mentoring for newcomers.
Moderation of -project mailing list.
Ubuntu
I joined Canonical to work on Ubuntu full-time back in February 2021.
Whilst I can't give a full, detailed list of things I did, here's a quick TL;DR:
Motivation
On the 8th of August 2025 (a day before the Debian Trixie release), I was upgrading my personal laptop from Debian Bookworm to Trixie. It was a major update. However, the update didn't go smoothly, and I ran into some errors. From the Debian support IRC channel, I got to know that it would be best if I removed the texlive packages.
However, it was not so easy to just remove texlive with a simple apt remove command. I had to remove the texlive binaries from /usr/bin. Then I ran into other errors. Hours after I started the upgrade, I realized I preferred having my system as it was before, as I had to travel to Noida the next day. Needless to say, I wanted to go to sleep rather than fix my broken system. If only I had had a way to go back to my system's state before the upgrade, it would have saved me a lot of trouble. I ended up installing Trixie from scratch.
It turns out that there was a way to recover the pre-upgrade state: using Timeshift to roll the system back to a snapshot from the past (in our example, the state before the upgrade process started). However, this needs the Btrfs filesystem with appropriate subvolumes, which the Debian installer does not provide in its guided partitioning menu.
I set this up a few weeks after the above-mentioned incident. Let me demonstrate how it works.
Check the screenshot above. It shows a list of snapshots made by Timeshift. Some of them were made by me manually; others were made by Timeshift automatically on schedule - I have set up hourly and weekly snapshots.
In the above-mentioned major update, I could have just taken a snapshot using Timeshift before performing the upgrade and rolled back to it when I found that I could not spend more time on fixing my installation errors. Then I could have simply performed the upgrade later.
Installation
In this tutorial, I will cover how I installed Debian with Btrfs and disk encryption, along with creating subvolumes @ for root and @home for /home so that I can use Timeshift to create snapshots. These snapshots are kept on the same disk where Debian is installed, and the use-case is to roll back to a working system in case I mess up something or to recover an accidentally deleted file.
I went through countless tutorials on the Internet, but I didn't find a single one covering both disk encryption and the above-mentioned subvolumes (on Debian). Debian doesn't create the desired subvolumes by default, therefore the process requires some manual steps, which beginners may not be comfortable performing. Beginners can try distros such as Fedora and Linux Mint, as their installers include Btrfs with the required subvolumes.
Furthermore, it is pertinent to note that I used Debian Trixie's DVD ISO on a real laptop (not a virtual machine) for my installation. Trixie is the codename for the current stable version of Debian. I then took the screenshots in a virtual machine by repeating the process; a couple of screenshots are from the installation I did on the real laptop.
Let's start the tutorial by booting up the Debian installer.
The above screenshot shows the first screen of the installer. Since we want to choose Expert Install, we select Advanced Options here.
Let's select the Expert Install option in the above screenshot. This is because we want to create subvolumes after the installer is done with the partitioning, and only then proceed to installing the base system. Non-expert install modes proceed directly to installing the system right after creating partitions, without pausing for us to create the subvolumes.
After selecting the Expert Install option, you will get the screen above. I will skip to partitioning from here and leave out the intermediate steps such as choosing language, region, connecting to Wi-Fi, etc. For your reference, I did create the root user.
Let's jump right to the partitioning step. Select the Partition disks option from the menu as shown above.
Choose Manual.
Select your disk where you would like to install Debian.
Select Yes when asked about creating a new partition table.
I chose the msdos option as I am not using UEFI. If you are using UEFI, then you need to choose the gpt option. Also, your steps will (slightly) differ from mine if you are using UEFI. In that case, you can watch this video by the YouTube channel EF Linux, in which he creates an EFI partition. As he doesn't cover disk encryption, you can continue reading this post after following the steps corresponding to EFI.
Select the free space option as shown above.
Choose Create a new partition.
I chose the partition size to be 1 GB.
Choose Primary.
Choose Beginning.
Now, I got to this screen.
I changed mount point to /boot and turned on the bootable flag and then selected Done setting up the partition.
Now select free space.
Choose the Create a new partition option.
I made the partition size equal to the remaining space on my disk. I do not intend to create a swap partition, so I do not need more space.
Select Primary.
Select the Use as option to change its value.
Select physical volume for encryption.
Select Done setting up the partition.
Now select Configure encrypted volumes.
Select Yes.
Select Finish.
Selecting Yes will take a lot of time, as it erases the existing data. So if you have hours to spare for this step (in case your SSD is, say, 1 TB), then I recommend selecting Yes. Otherwise, you could select No and compromise on the quality of the encryption.
After this, you will be asked to enter a passphrase for disk encryption and confirm it. Please do so. I forgot to take the screenshot for that step.
Now select that encrypted volume as shown in the screenshot above.
Here we will change a couple of options which will be shown in the next screenshot.
In the Use as menu, select btrfs journaling file system.
Now, click on the mount point option.
Change it to / - the root file system.
Select Done setting up the partition.
This is a preview of the partitioning after performing the above-mentioned steps.
If everything is okay, proceed with the Finish partitioning and write changes to disk option.
The installer is reminding us to create a swap partition. I proceeded without it as I planned to add swap after the installation.
If everything looks fine, choose Yes to write the changes to disk.
Now we are done with partitioning and we are shown the screen in the screenshot above. If we had not selected the Expert Install option, the installer would have proceeded to install the base system without asking us.
However, we want to create subvolumes before proceeding to install the base system. This is the reason we chose Expert Install.
Now press Ctrl + F2.
You will see the screen as in the above screenshot. It says "Please press Enter to activate this console", so let's press Enter.
After pressing Enter, we see the above screen.
The screenshot above shows the steps I performed in the console. I followed the already-mentioned video by EF Linux for this part and adapted it to my situation (he doesn't encrypt the disk in his tutorial).
First we run df -h to have a look at how our disk is mounted. In my case, it showed that /dev/mapper/sda2_crypt and /dev/sda1 were mounted on /target and /target/boot respectively.
Let's unmount them, innermost first. For that, we run:
# umount /target/boot
# umount /target
Next, let's mount our root filesystem on /mnt.
# mount /dev/mapper/sda2_crypt /mnt
Let's go into the /mnt directory.
# cd /mnt
Upon listing the contents of this directory, we get:
/mnt # ls
@rootfs
The Debian installer has created a subvolume @rootfs automatically. However, Timeshift needs the subvolumes to be named @ and @home. Therefore, let's rename the @rootfs subvolume to @.
/mnt # mv @rootfs @
Listing the contents of the directory again, we get:
/mnt # ls
@
We have only one subvolume right now. Therefore, let's go ahead and create another subvolume, @home, with the standard btrfs-progs command:
/mnt # btrfs subvolume create @home
If we perform ls now, we will see there are two subvolumes:
/mnt # ls
@ @home
Now let's mount /dev/mapper/sda2_crypt on /target, selecting the @ subvolume:
/mnt # mount -o noatime,space_cache=v2,compress=zstd,ssd,discard=async,subvol=@ /dev/mapper/sda2_crypt /target/
Now we need to create a directory for /home.
/mnt # mkdir /target/home/
Now we mount the /home directory with the subvol=@home option.
/mnt # mount -o noatime,space_cache=v2,compress=zstd,ssd,discard=async,subvol=@home /dev/mapper/sda2_crypt /target/home/
Now mount /dev/sda1 to /target/boot.
/mnt # mount /dev/sda1 /target/boot/
Now we need to add these options to the fstab file, which is located at /target/etc/fstab. Unfortunately, vim is not installed in this console; the only available editor is Nano.
nano /target/etc/fstab
Edit your fstab file to look similar to the one in the screenshot above. I am pasting the fstab file contents below for easy reference.
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# systemd generates mount units based on this file, see systemd.mount(5).
# Please run 'systemctl daemon-reload' after making changes here.
#
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/mapper/sda2_crypt / btrfs noatime,compress=zstd,ssd,discard=async,space_cache=v2,subvol=@ 0 0
/dev/mapper/sda2_crypt /home btrfs noatime,compress=zstd,ssd,discard=async,space_cache=v2,subvol=@home 0 0
# /boot was on /dev/sda1 during installation
UUID=12842b16-d3b3-44b4-878a-beb1e6362fbc /boot ext4 defaults 0 2
/dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0
Please double-check the fstab file before saving it. In Nano, press Ctrl+O followed by Enter to save the file, then press Ctrl+X to quit. Now preview the fstab file by running
cat /target/etc/fstab
and verify that the entries are correct; otherwise you will boot into an unusable, broken system after the installation is complete.
Next, press Ctrl + Alt + F1 to go back to the installer.
Proceed to Install the base system.
Screenshot of Debian installer installing the base system.
I chose the default option here - linux-image-amd64.
After this, the installer will ask you a few more questions. For desktop environment, I chose KDE Plasma. You can choose the desktop environment as per your liking. I will not cover the rest of the installation process and assume that you were able to install from here.
Post installation
Let's jump to our freshly installed Debian system. Since I created a root user, I added the user ravi to the sudoers file (/etc/sudoers) so that ravi can run commands with sudo. Follow this if you would like to do the same.
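For reference, granting sudo access boils down to one sudoers user specification (using the username ravi from this install; edit the file with visudo rather than directly, so syntax errors are caught before they lock you out):

```
ravi    ALL=(ALL:ALL) ALL
```

This allows ravi to run any command as any user and group via sudo; alternatively, adding the user to the sudo group achieves the same on a default Debian install.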
Now we set up zram as swap. First, install zram-tools.
sudo apt install zram-tools
Now edit the file /etc/default/zramswap and make sure the following lines are uncommented:
ALGO=lz4
PERCENT=50
Now, run
sudo systemctl restart zramswap
If you run lsblk now, you should see the below-mentioned entry in the output:
zram0 253:0 0 7.8G 0 disk [SWAP]
This shows us that zram has been activated as swap.
Now we install timeshift, which can be done by running
sudo apt install timeshift
After the installation is complete, run Timeshift and schedule snapshots as you please. We are done now. Hope the tutorial was helpful.
See you in the next post and let me know if you have any suggestions and questions on this tutorial.
Posted on August 28, 2025
Tags: madeof:atoms, craft:sewing, FreeSoftWear
A bit more than a year ago, I had been thinking about making myself a
cartridge pleated skirt. For a number of reasons, one of which is the
historybounding potential, I've been thinking pre-crinoline, so
somewhere around the 1840s, and that's a completely new era for me,
which means: new underwear.
Also, the 1840s are pre-sewing machine, and I was already in a position
where I had more chances to handsew than to machine sew, so I decided to
embrace the slowness and sew 100% by hand, not even using the machine
for straight seams.
If I remember correctly, I started with the corded petticoat, looking
around the internet for instructions, and then designing my own based on
the practicality of using modern wide fabric from my stash (and
specifically some DITTE from costumers' favourite source of dirt-cheap
cotton, IKEA).
Around the same time I had also acquired a sashiko kit, and I used the
Japanese technique for sewing running stitches pushing the needle with a
thimble that covers the base of the middle finger, and I can confirm
that for this kind of thing it's great!
I've since worn the petticoat a few times for casual / historyBounding /
folkwearBounding reasons during the summer, and I can confirm it's
comfortable to use; I guess that during the winter it could be nice to
add a flannel layer below it.
Then I proceeded with the base layers: I had been browsing through
The workwoman's guide and that provided plenty of examples, and I
selected the basic ankle-length drawers from page 53 and the alternative
shift on page 47.
As for fabric, I had (and still have) a significant lack of underwear
linen in my stash, but I had plenty of cotton voile that I had not used
in a while: not very historically accurate for plain underwear, but
quite suitable for a wearable mockup.
Working with an 1830s source had an interesting aspect: other than the
usual, mildly annoying, imperial units, it also made heavy use of a few
obsolete units, especially nails, which qalc, my usual calculator and
converter, doesn't support.
Not a big deal, because GNU units came to the rescue: that one
knows a lot of obscure and niche units, and it's quite easy to add those
that are missing.1
Working on this project also made me freshly aware of something I had
already noticed: converting instructions for machine sewing garments
into instructions for hand sewing them is usually straightforward, but
the reverse is not always true.
Starting from machine stitching, you can usually convert straight
stitches into backstitches (or running backstitches), zigzag and
overlocking into overcasting and get good results. In some cases you may
want to use specialist hand stitches that don't really have a machine
equivalent, such as buttonhole stitches instead of simply overcasting
the buttonhole, but that's it.
Starting from hand stitching, instead, there are a number of techniques
that could be converted to machine stitching, but involve a lot of
visible topstitching that wasn't there in the original instructions, or
at times are almost impossible to do by machine, if they involve
whipstitching together finished panels on seams that are subject to
strong tension.
Anyway, halfway through working with the petticoat I cut both the
petticoat and the drawers at the same time, for efficiency in fabric
use, and then started sewing the drawers.
The book only provided measurements for one size (moderate), and my
fabric was a bit too narrow to make them that size (not that I have any
idea what hip circumference a person of moderate size was supposed to
have), so the result is just wide enough to be comfortably worn, but I
think that when I make another pair I'll try to make them a bit
wider. On the other hand they are a bit too long, but I think that I'll
fix it by adding a tuck or two. Not a big deal, anyway.
The shift gave me a bit more issues: I used the recommended gusset size,
and ended up with a shift that was way too wide at the top, so I had to
take a box pleat in the center front and back, which changed the look
and wear of the garment. I have adjusted the instructions to make
gussets wider, and in the future I'll make another shift following
those.
Even with the pleat, the narrow shoulder straps are set quite far to the
sides and tend to droop, and I suspect this is inherent to the way this
garment is made. The fact that there are buttonholes on the shoulder
straps to attach them to the corset straps and prevent the issue is
probably a hint that this behaviour was expected.
I've also updated the instructions so that the shoulder straps are a
bit wider, to look more like the ones in the drawing from the book.
Making a corset suitable for the time period is something that I will
probably do, though not in the immediate future; meanwhile, even just
wearing the shift under a later midbust corset with no shoulder straps helps.
I'm also not sure what the point of the bosom gores is, as they don't
really give more room to the bust, where it's needed, but to the high
bust, where it's counterproductive. I also couldn't find images of
original examples made from this pattern to see whether they were actually
used, so in my next make I may just skip them.
On the other hand, I'm really happy with how cute the short sleeves
look, and if I ever make the other cut of shift from the same
book, the one with the front flaps, I'll definitely use these pleated sleeves
rather than the straight ones that were also used at the time.
As usual, all of the patterns have been published on my website under a
Free license:
Posted on August 17, 2025
Tags: madeof:bits
TL;DR: if you're using rrdtool on a 32-bit architecture like armhf, make
an XML dump of your RRD files just before upgrading to Debian Trixie.
I am an old person at heart, so the sensor data from my home monitoring
system1 doesn't go to one of those newfangled javascript-heavy
data visualization platforms, but into good old RRD files, using rrdtool
to generate various graphs.
This happens on the home server, which is an armhf single board
computer2, hosting a few containers3.
So, yesterday I started upgrading one of the containers to Trixie, and
luckily I started from the one with the RRD, because when I rebooted
into the fresh system and checked the relevant service I found it
stopped on ERROR: '<file>' is too small (should be <size> bytes).
Some searxing later, I've4 found that this was caused by the 64-bit time_t
transition, which changed the format of the files, and that (somewhat
unexpectedly) there was no way to fix it on the machine itself.
What needed to be done instead was to export the data to an XML dump
before the upgrade, and then import it back afterwards.
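For reference, the dump/restore round-trip looks roughly like this (the file names here are just examples):

```shell
# On the old (32-bit time_t) system, before the upgrade:
# serialize the RRD to architecture-independent XML.
rrdtool dump sensors.rrd > sensors.xml

# After the upgrade: keep the old binary file around just in case,
# then rebuild a fresh RRD from the XML dump.
mv sensors.rrd sensors.rrd.bak
rrdtool restore sensors.xml sensors.rrd
```

The XML dump is portable because it stores values as text rather than in the on-disk binary layout, which is what changed with the new time_t width.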
Easy enough, right? If you know about it in advance, that is, which is
why I'm blogging this: so that other people will know :)
Anyway, luckily I still had the other containers on bookworm, so I
copied the files over there, did the upgrade, and my home monitoring
system is happily running as before.
1. Of course one has a self-built home monitoring system, right?
On August 16, 1993, Ian Murdock announced the Debian Project to the world.
Three decades (and a bit) later, Debian is still going strong, built by a
worldwide community of developers, contributors, and users who believe in a
free, universal operating system.
Over the years, Debian has powered servers, desktops, tiny embedded devices, and
huge supercomputers. We have gathered at DebConfs, squashed countless bugs,
shared late-night hacking sessions, and helped keep millions of systems secure.
Debian Day is a great excuse to get together, whether it is a local meetup, an
online event, a bug squashing party, a team sprint or just coffee with fellow
Debianites. Check out the Debian Day wiki to see if there is a celebration
near you or to add your own.
Here is to 32 years of collaboration, code, and community, and to all the
amazing people who make Debian what it is.
Happy Debian Day!
Debian LTS
This was my one-hundred-and-thirty-third month of work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:
[DLA 4255-1] audiofile security update fixing two CVEs related to an integer overflow and a memory leak.
[DLA 4256-1] libetpan security update to fix one CVE related to a null pointer dereference.
[DLA 4257-1] libcaca security update to fix two CVEs related to heap buffer overflows.
[DLA 4258-1] libfastjson security update to fix one CVE related to an out of bounds write.
[#1106867] kmail-account-wizard was marked as accepted
I also continued my work on suricata, which turned out to be more challenging than expected. This month I also did a week of FD duties and attended the monthly LTS/ELTS meeting.
Debian ELTS
This month was the eighty-fourth ELTS month. Unfortunately my allocated hours were far fewer than expected, so I couldn't do as much work as planned.
Most of the time I spent on FD tasks, and I also attended the monthly LTS/ELTS meeting. I further listened to the debusine talks during DebConf. On the one hand I would like to use debusine to prepare uploads for embargoed ELTS issues; on the other hand I would like to use debusine to run the version of lintian that is used in the different releases. At the moment some manual steps are involved here, and I tried to automate things. Of course, as for LTS, I also continued my work on suricata.
Debian Printing
This month I uploaded a new upstream version of:
Guess what, I also started to work on a new version of hplip and intend to upload it in August.
This work is generously funded by Freexian!
Debian Astro
This month I uploaded new upstream versions of:
I also uploaded the new package boinor. This is a fork of poliastro, which was retired by upstream and removed from Debian some months ago. I adopted it and renamed it at upstream's request. boinor is an abbreviation of BOdies IN ORbit, and I hope this software is still useful.
Debian Mobcom
Unfortunately I didn't find any time to work on this topic.
misc
In my fight against outdated RFPs, I closed 31 of them in July. Their number is down to 3447 (how dare you open new RFPs? :-)). Don't be afraid of them: they don't bite and are happy to be moved to a closed state.
FTP master
The peace will soon come to an end, so this month I accepted 87 and rejected 2 packages. The overall number of packages that got accepted was 100.