Search Results: "ag"

2 February 2026

Paul Tagliamonte: Paging all Radio Curious Hackers

After years of thinking and learning about how radios work, I figured it was high time to start sharing the things I've been learning more aggressively. I had a ton of fun at DistrictCon year 0, so it was a pretty natural place to pitch an RF-focused introductory talk. I was selected for Year 1, and was able to give my first ever RF-related talk about how to set off restaurant pagers (including one on stage!) by reading and writing IQ directly using a little bit of stdlib-only Python. This talk is based on the work I've written about previously (here, here and here), but the all-in-one form factor was something I hoped would help encourage folks out there to take a look under the hood of some of the gear around them. (In case the iframe above isn't working, a direct link to the YouTube video recording is here.) I've posted my slides from the talk at PARCH.pdf to hopefully give folks some time to flip through them directly. All in all, the session was great. It was truly humbling to see so many folks interested in hearing me talk about radios. I had a bit of an own-goal in picking a 20-minute form factor, so the talk is paced wrong (it feels like it went way too fast). Hopefully being able to see the slides and pause the video is helpful. We had a short ad-hoc session afterwards where I brought two sets of pagers and my power switch, but unfortunately we didn't have anyone who was able to trigger any of the devices on their own (due to a mix of time between sessions and computer set-up). Hopefully it was enough to get folks interested in trying this on their own!
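The talk's actual code isn't reproduced here, but to give a flavor of what "reading and writing IQ with stdlib-only Python" means, here is a minimal, illustrative sketch. The file name, tone parameters, and the interleaved little-endian float32 ("cf32") layout are my assumptions for the example, not details from the talk:

```python
# Minimal sketch: write a complex tone as raw IQ samples using only the
# Python standard library. Each sample is one complex value stored as two
# interleaved little-endian float32s (I then Q), the common "cf32" layout.
import cmath
import struct

def write_iq(path, freq=1000.0, sample_rate=48000.0, seconds=0.1):
    """Write a complex exponential at `freq` Hz to `path`; return sample count."""
    n = int(sample_rate * seconds)
    with open(path, "wb") as f:
        for i in range(n):
            # e^(j*2*pi*f*t) — a pure tone at `freq` Hz in the complex baseband
            s = cmath.exp(2j * cmath.pi * freq * i / sample_rate)
            f.write(struct.pack("<ff", s.real, s.imag))
    return n

write_iq("tone.cf32")
```

Reading samples back is the mirror image: `struct.unpack("<ff", chunk)` on each 8-byte chunk yields one I/Q pair.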

1 February 2026

Benjamin Mako Hill: What do people do when they edit Wikipedia through Tor?

Note: I have not published blog posts about my academic papers over the past few years. To ensure that my blog contains a more comprehensive record of my published papers and to surface these for folks who missed them, I will be periodically (re)publishing blog posts about some older published projects. This post is closely based on a previously published post by Kaylea Champion on the Community Data Science Blog.

Many individuals use Tor to reduce their visibility to widespread internet surveillance.
One popular approach to protecting our privacy online is to use the Tor network. Tor protects users from being identified by their IP address, which can be tied to a physical location. However, if you'd like to contribute to Wikipedia using Tor, you'll run into a problem. Although most IP addresses can edit without an account, Tor users are blocked from editing.
Tor users attempting to contribute to Wikipedia are shown a screen that informs them that they are not allowed to edit Wikipedia.
Other research by my team has shown that Wikipedia's attempt to block Tor is imperfect and that some people have been able to edit despite the ban. As part of this work, we built a dataset of more than 11,000 contributions to Wikipedia via Tor and used quantitative analysis to show that contributions from Tor were of about the same quality as contributions from other new editors and other contributors without accounts. Of course, given the unusual circumstances Tor-based contributors face, we wondered whether a deeper look at the content of their edits might reveal more about their motives and the kinds of contributions they seek to make. Kaylea Champion (then a student, now faculty at UW Bothell) led a qualitative investigation to explore these questions. Given the challenges of studying anonymity seekers, we designed a novel forensic qualitative approach inspired by techniques common in computer security and criminal investigation. We applied this new technique to a sample of 500 editing sessions and categorized each session based on what the editor seemed to be intending to do. Most of the contributions we found fell into two main categories, but we also found evidence of several other types of contributor intent.
An exploratory mapping of our themes in terms of the value a type of contribution represents to the Wikipedia community and the importance of anonymity in facilitating it. Anonymity-protecting tools play a critical role in facilitating contributions on the right side of the figure, while edits on the left are more likely to occur even when anonymity is impossible. Contributions toward the top reflect valuable forms of participation in Wikipedia, while edits at the bottom reflect damage.
In all, these themes led us to reflect on how the risks individuals face when contributing to online communities sometimes diverge from the risks the communities face in accepting their work. Expressing minoritized perspectives, maintaining community standards even when you may be targeted by the rulebreaker, highlighting injustice, or acting as a whistleblower can be very risky for an individual, and may not be possible without privacy protections. Of course, in platforms seeking to support the public good, such knowledge and accountability may be crucial.

This work was published as a paper at CSCW: Kaylea Champion, Nora McDonald, Stephanie Bankes, Joseph Zhang, Rachel Greenstadt, Andrea Forte, and Benjamin Mako Hill. 2019. A Forensic Qualitative Analysis of Contributions to Wikipedia from Anonymity Seeking Users. Proceedings of the ACM on Human-Computer Interaction. 3, CSCW, Article 53 (November 2019), 26 pages. https://doi.org/10.1145/3359155

This project was conducted by Kaylea Champion, Nora McDonald, Stephanie Bankes, Joseph Zhang, Rachel Greenstadt, Andrea Forte, and Benjamin Mako Hill. This work was supported by the National Science Foundation (awards CNS-1703736 and CNS-1703049) and included the work of two undergraduates supported through an NSF REU supplement.

Russ Allbery: Review: Paladin's Faith

Review: Paladin's Faith, by T. Kingfisher
Series: The Saint of Steel #4
Publisher: Red Wombat Studio
Copyright: 2023
ISBN: 1-61450-614-0
Format: Kindle
Pages: 515
Paladin's Faith is the fourth book in T. Kingfisher's loosely connected series of fantasy novels about the berserker former paladins of the Saint of Steel. You could read this as a standalone, but there are numerous (spoilery) references to the previous books in the series. Marguerite, who was central to the plot of the first book in the series, Paladin's Grace, is a spy with a problem. An internal power struggle in the Red Sail, the organization that she's been working for, has left her a target. She has a plan for how to break their power sufficiently that they will hopefully leave her alone, but to pull it off she's going to need help. As the story opens, she is working to acquire that help in a very Marguerite sort of way: breaking into the office of Bishop Beartongue of the Temple of the White Rat. The Red Sail, the powerful merchant organization Marguerite worked for, makes their money in the salt trade. Marguerite has learned that someone invented a cheap and reproducible way to extract salt from sea water, thus making the salt trade irrelevant. The Red Sail wants to ensure that invention never sees the light of day, and has forced the artificer into hiding. Marguerite doesn't know where they are, but she knows where she can find out: the Court of Smoke, where the artificer has a patron.
Having grown up in Anuket City, Marguerite was familiar with many clockwork creations, not to mention all the ways that they could go horribly wrong. (Ninety-nine times out of a hundred, it was an explosion. The hundredth time, it ran amok and stabbed innocent bystanders, and the artificer would be left standing there saying, "But I had to put blades on it, or how would it rake the leaves?" while the gutters filled up with blood.)
All Marguerite needs to put her plan into motion is some bodyguards so that she's not constantly distracted and anxious about being assassinated. Readers of this series will be unsurprised to learn that the bodyguards she asks Beartongue for are paladins, including a large broody male one with serious self-esteem problems. This is, like the other books in this series, a slow-burn romance with infuriating communication problems and a male protagonist who would do well to seek out a sack of hammers as a mentor. However, it has two things going for it that most books in this series do not: a long and complex plot to which the romance takes a back seat, and Marguerite, who is not particularly interested in playing along with the expected romance developments. There are also two main paladins in this story, not just one, and the other is one of the two female paladins of the Saint of Steel and rather more entertaining than Shane. I generally like court intrigue stories, which is what fills most of this book. Marguerite is an experienced operative, so the reader gets some solid competence porn, and the paladins are fish out of water but are also unexpectedly dangerous, which adds both comedy and satisfying table-turning. I thoroughly enjoyed the maneuvering and the culture clashes. Marguerite is very good at what she does, knows it, and is entirely uninterested in other people's opinions about that, which short-circuits a lot of Shane's most annoying behavior and keeps the story from devolving into mopey angst like some of the books in this series have done. The end of this book takes the plot in a different direction that adds significantly to the world-building, but also has a (thankfully short) depths of despair segment that I endured rather than enjoyed. I am not really in the mood for bleak hopelessness in my fiction at the moment, even if the reader is fairly sure it will be temporary. But apart from that, I thoroughly enjoyed this book from beginning to end. 
When we finally meet the artificer, they are an absolute delight in that way that Kingfisher is so good at. The whole story is infused with the sense of determined and competent people refusing to stop trying to fix problems. As usual, the romance was not for me and I think the book would have been better without it, but it's less central to the plot and therefore annoyed me less than any of the books in this series so far. My one major complaint is the lack of gnoles, but we get some new and intriguing world-building to make up for it, along with a setup for a fifth book that I am now extremely curious about. By this point in the series, you probably know if you like the general formula. Compared to the previous book, Paladin's Hope, I thought Paladin's Faith was much stronger and more interesting, but it's clearly of the same type. If, like me, you like the plots but not the romance, the plot here is more substantial. You will have to decide if that makes up for a romance in the typical T. Kingfisher configuration. Personally, I enjoyed this quite a bit, except for the short bleak part, and I'm back to eagerly awaiting the next book in the series. Rating: 8 out of 10

31 January 2026

Michael Prokop: apt, SHA-1 keys + 2026-02-01

You might have seen "Policy will reject signature within a year" warnings in apt(-get) update runs like this:
root@424812bd4556:/# apt update
Get:1 http://foo.example.org/debian demo InRelease [4229 B]
Hit:2 http://deb.debian.org/debian trixie InRelease
Hit:3 http://deb.debian.org/debian trixie-updates InRelease
Hit:4 http://deb.debian.org/debian-security trixie-security InRelease
Get:5 http://foo.example.org/debian demo/main amd64 Packages [1097 B]
Fetched 5326 B in 0s (43.2 kB/s)
All packages are up to date.
Warning: http://foo.example.org/debian/dists/demo/InRelease: Policy will reject signature within a year, see --audit for details
root@424812bd4556:/# apt --audit update
Hit:1 http://foo.example.org/debian demo InRelease
Hit:2 http://deb.debian.org/debian trixie InRelease
Hit:3 http://deb.debian.org/debian trixie-updates InRelease
Hit:4 http://deb.debian.org/debian-security trixie-security InRelease
All packages are up to date.    
Warning:  http://foo.example.org/debian/dists/demo/InRelease: Policy will reject signature within a year, see --audit for details
Audit:  http://foo.example.org/debian/dists/demo/InRelease: Sub-process /usr/bin/sqv returned an error code (1), error message is:
   Signing key on 54321ABCD6789ABCD0123ABCD124567ABCD89123 is not bound:
              No binding signature at time 2024-06-19T10:33:47Z
     because: Policy rejected non-revocation signature (PositiveCertification) requiring second pre-image resistance
     because: SHA1 is not considered secure since 2026-02-01T00:00:00Z
Audit: The sources.list(5) entry for 'http://foo.example.org/debian' should be upgraded to deb822 .sources
Audit: Missing Signed-By in the sources.list(5) entry for 'http://foo.example.org/debian'
Audit: Consider migrating all sources.list(5) entries to the deb822 .sources format
Audit: The deb822 .sources format supports both embedded as well as external OpenPGP keys
Audit: See apt-secure(8) for best practices in configuring repository signing.
Audit: Some sources can be modernized. Run 'apt modernize-sources' to do so.
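As an aside, the deb822 .sources format that the audit output recommends looks roughly like the following for the example repository above. The Signed-By keyring path is a hypothetical placeholder; see sources.list(5) and apt-secure(8), and note that apt modernize-sources can generate this from an existing one-line entry:

```
Types: deb
URIs: http://foo.example.org/debian
Suites: demo
Components: main
Signed-By: /etc/apt/keyrings/foo-example.gpg
```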
If you ignored this for the last year, I would like to tell you that 2026-02-01 is not that far away (hello from the past if you're reading this because you're already affected). Let's simulate the future:
root@424812bd4556:/# apt --update -y install faketime
[...]
root@424812bd4556:/# export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/faketime/libfaketime.so.1 FAKETIME="2026-08-29 23:42:11" 
root@424812bd4556:/# date
Sat Aug 29 23:42:11 UTC 2026
root@424812bd4556:/# apt update
Get:1 http://foo.example.org/debian demo InRelease [4229 B]
Hit:2 http://deb.debian.org/debian trixie InRelease                                 
Err:1 http://foo.example.org/debian demo InRelease
  Sub-process /usr/bin/sqv returned an error code (1), error message is: Signing key on 54321ABCD6789ABCD0123ABCD124567ABCD89123 is not bound:            No binding signature at time 2024-06-19T10:33:47Z   because: Policy rejected non-revocation signature (PositiveCertification) requiring second pre-image resistance   because: SHA1 is not considered secure since 2026-02-01T00:00:00Z
[...]
Warning: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. OpenPGP signature verification failed: http://foo.example.org/debian demo InRelease: Sub-process /usr/bin/sqv returned an error code (1), error message is: Signing key on 54321ABCD6789ABCD0123ABCD124567ABCD89123 is not bound:            No binding signature at time 2024-06-19T10:33:47Z   because: Policy rejected non-revocation signature (PositiveCertification) requiring second pre-image resistance   because: SHA1 is not considered secure since 2026-02-01T00:00:00Z
[...]
root@424812bd4556:/# echo $?
100
Now, the proper solution would have been to fix the signing key underneath (via e.g. sq cert lint --fix --cert-file $PRIVATE_KEY_FILE > $PRIVATE_KEY_FILE-fixed). If you don't have access to the corresponding private key (e.g. when using an upstream repository that has been ignoring this issue), you're out of luck for a proper fix. But there's a workaround for the apt situation (see also apt commit 0989275c2f7afb7a5f7698a096664a1035118ebf):
root@424812bd4556:/# cat /usr/share/apt/default-sequoia.config
# Default APT Sequoia configuration. To overwrite, consider copying this
# to /etc/crypto-policies/back-ends/apt-sequoia.config and modify the
# desired values.
[asymmetric_algorithms]
dsa2048 = 2024-02-01
dsa3072 = 2024-02-01
dsa4096 = 2024-02-01
brainpoolp256 = 2028-02-01
brainpoolp384 = 2028-02-01
brainpoolp512 = 2028-02-01
rsa2048  = 2030-02-01
[hash_algorithms]
sha1.second_preimage_resistance = 2026-02-01    # Extend the expiry for legacy repositories
sha224 = 2026-02-01
[packets]
signature.v3 = 2026-02-01   # Extend the expiry
Adjust this according to your needs:
root@424812bd4556:/# mkdir -p /etc/crypto-policies/back-ends/
root@424812bd4556:/# cp /usr/share/apt/default-sequoia.config /etc/crypto-policies/back-ends/apt-sequoia.config
root@424812bd4556:/# $EDITOR /etc/crypto-policies/back-ends/apt-sequoia.config
root@424812bd4556:/# cat /etc/crypto-policies/back-ends/apt-sequoia.config
# APT Sequoia override configuration
[asymmetric_algorithms]
dsa2048 = 2024-02-01
dsa3072 = 2024-02-01
dsa4096 = 2024-02-01
brainpoolp256 = 2028-02-01
brainpoolp384 = 2028-02-01
brainpoolp512 = 2028-02-01
rsa2048  = 2030-02-01
[hash_algorithms]
sha1.second_preimage_resistance = 2026-09-01    # Extend the expiry for legacy repositories
sha224 = 2026-09-01
[packets]
signature.v3 = 2026-02-01   # Extend the expiry
Then we're back in the original situation: a warning instead of an error:
root@424812bd4556:/# apt update
Hit:1 http://deb.debian.org/debian trixie InRelease
Get:2 http://foo.example.org/debian demo InRelease [4229 B]
Hit:3 http://deb.debian.org/debian trixie-updates InRelease
Hit:4 http://deb.debian.org/debian-security trixie-security InRelease
Warning: http://foo.example.org/debian/dists/demo/InRelease: Policy will reject signature within a year, see --audit for details
[..]
Please note that this is a workaround, and not a proper solution.

Russ Allbery: Review: Dragon Pearl

Review: Dragon Pearl, by Yoon Ha Lee
Series: Thousand Worlds #1
Publisher: Rick Riordan Presents
Copyright: 2019
ISBN: 1-368-01519-0
Format: Kindle
Pages: 315
Dragon Pearl is a middle-grade space fantasy based on Korean mythology and the first book of a series. Min is a fourteen-year-old girl living on the barely-terraformed world of Jinju with her extended family. Her older brother Jun passed the entrance exams for the Academy and left to join the Thousand Worlds Space Forces, and Min is counting the years until she can do the same. Those plans are thrown into turmoil when an official investigator appears at their door claiming that Jun deserted to search for the Dragon Pearl. A series of impulsive fourteen-year-old decisions lead to Min heading for a spaceport alone, determined to find her brother and prove his innocence. This would be a rather improbable quest for a young girl, but Min is a gumiho, one of the supernaturals who live in the Thousand Worlds alongside non-magical humans. Unlike the more respectable dragons, tigers, goblins, and shamans, gumiho are viewed with suspicion and distrust because their powers are useful for deception. They are natural shapeshifters who can copy the shapes of others, and their Charm ability lets them influence people's thoughts and create temporary illusions of objects such as ID cards. It will take all of Min's powers, and some rather lucky coincidences, to infiltrate the Space Forces and determine what happened to her brother. It's common for reviews of this book to open with a caution that this is a middle-grade adventure novel and you should not expect a story like Ninefox Gambit. I will be boring and repeat that caution. Dragon Pearl has a single first-person viewpoint and a very linear and straightforward plot. Adult readers are unlikely to be surprised by plot twists; the fun is the world-building and seeing how Min manages to work around plot obstacles. The world-building is enjoyable but not very rigorous. Min uses and abuses Charm with the creative intensity of a Dungeons & Dragons min-maxer. 
Each individual event makes sense given the implication that Min is unusually powerful, but I'm dubious about the surrounding society and lack of protections against Charm given what Min is able to do. Min does say that gumiho are rare and many people think they're extinct, which is a bit of a fig leaf, but you'll need to bring your urban fantasy suspension of disbelief skills to this one. I did like that the world-building conceit went more than skin deep and influenced every part of the world. There are ghosts who are critical to the plot. Terraforming is done through magic, hence the quest for the Dragon Pearl and the miserable state of Min's home planet due to its loss. Medical treatment involves the body's meridians, as does engineering: The starships have meridians similar to those of humans, and engineers partly merge with those meridians to adjust them. This is not the sort of book that tries to build rigorous scientific theories or explain them to the reader, and I'm not sure everything would hang together if you poked at it too hard, but Min isn't interested in doing that poking and the story doesn't try to justify itself. It's mostly a vibe, but it's a vibe that I enjoyed and that is rather different than other space fantasy I've read. The characters were okay but never quite clicked for me, in part because proper character exploration would have required Min take a detour from her quest to find her brother and that was not going to happen. The reader gets occasional glimpses of a military SF cadet story and a friendship on false premises story, but neither have time to breathe because Min drops any entanglement that gets in the way of her quest. She's almost amoral in a way that I found believable but not quite aligned with my reading mood. I also felt a bit wrong-footed by how her friendships developed; saying too much more would be a spoiler, but I was expecting more human connection than I got. 
I think my primary disappointment with this book was something I knew going in, not in any way its fault, and part of the reason why I'd put off reading it: This is pitched at young teenagers and didn't have quite enough plot and characterization complexity to satisfy me. It's a linear, somewhat episodic adventure story with some neat world-building, and it therefore glides over the spots where an adult novel would have added political and factional complexity. That is exactly as advertised, so it's up to you whether that's the book you're in the mood for. One warning: The text of this book opens with an introduction by Rick Riordan that is just fluff marketing and that spoils the first few chapters of the book. It is unmarked as such at the beginning and tricked me into thinking it was the start of the book proper, and then deeply annoyed me. If you do read this book, I recommend skipping the utterly pointless introduction and going straight to chapter one. Followed by Tiger Honor. Rating: 6 out of 10

30 January 2026

Utkarsh Gupta: FOSS Activities in January 2026

Here's my monthly, albeit brief, update about the activities I've done in the FOSS world.

Debian
Whilst I didn't get a chance to do much, here are still a few things that I worked on:
  • A few discussions with the new DFSG team, et al.
  • Assisted a few folks in getting their patches submitted via Salsa.
  • Reviewing pyenv MR for Ujjwal.
  • Mentoring for newcomers.
  • Moderation of -project mailing list.

Ubuntu
I joined Canonical to work on Ubuntu full-time back in February 2021. Whilst I can't give a full, detailed list of things I did, here's a quick TL;DR:
  • Successfully released Resolute Snapshot 3!
    • This one was also done without the ISO tracker and cdimage access.
    • We also worked very hard to build and promote all the images in due time.
  • Worked further on the whole artifact signing story for cdimage.
  • Assisted a bunch of folks with my Archive Admin and Release team hats:
    • Helped EOL Plucky.
    • Started helping with the upcoming 24.04.4 release.
  • With that, the mid-cycle sprints are around the corner, so I've been quite busy preparing for those.

Debian (E)LTS
This month I have worked 59 hours on Debian Long Term Support (LTS) and on its sister Extended LTS project and did the following things:

Released Security Updates

Work in Progress
  • knot-resolver: Affected by CVE-2023-26249, CVE-2023-46317, and CVE-2022-40188, leading to Denial of Service.
  • ruby-rack: There were multiple vulnerabilities reported in Rack, leading to DoS (memory exhaustion) and proxy bypass.
    • [ELTS]: After completing the work for LTS myself, Bastien picked it up for ELTS and reached out about an upstream regression, and we've been exchanging notes. Bastien has done most of the work backporting the patches but needs a review and help backporting CVE-2025-61771. I haven't made much progress since last month and will carry this over.
  • node-lodash: Affected by CVE-2025-13465, prototype pollution in the baseUnset function.
    • [stable]: The patches for trixie and bookworm are ready but haven't been uploaded yet, as I'd like the unstable upload to settle a bit before I proceed with the stable uploads.
    • [LTS]: The bullseye upload will follow once the stable uploads are in and ACK'd by the SRMs.
  • xrdp: Affected by CVE-2025-68670, leading to a stack-based buffer overflow.

Other Activities
  • [ELTS] Helped Bastien Roucaries debug a tomcat9 regression for buster.
    • I spent quite a lot of time trying to help Bastien (with Markus and Santiago involved via mail thread) by reproducing the regression that the user(s) reported.
    • I also helped suggest a path forward by vendoring everything, which I was then requested to also help perform.
    • Whilst doing that, I noticed a circular-dependency hellhole and suggested another path forward: backporting bnd and its dependencies as separate NEW packages.
    • Bastien liked the idea and is going to work on that, but preferred to revert the update to remedy the immediate regressions reported. I further helped him review his update. This conversation happened in the #debian-elts IRC channel.
  • [LTS] Assisted Ben Hutchings with his question about the next possible steps with a plausible libvirt regression caused by the Linux kernel update. This was a thread on debian-lts@ mailing list.
  • [LTS] Attended the monthly LTS meeting on IRC. Summary here.
  • [E/LTS] Monitored discussions on mailing lists, IRC, and all the documentation updates.

Until next time.
:wq for today.

29 January 2026

C.J. Collier: Part 3: Building the Keystone Dataproc Custom Images for Secure Boot & GPUs

In Part 1, we established a secure, proxy-only network. In Part 2, we explored the enhanced install_gpu_driver.sh initialization action. Now, in Part 3, we'll focus on using the LLC-Technologies-Collier/custom-images repository (branch proxy-exercise-2025-11) to build the actual custom Dataproc images embedded with NVIDIA drivers signed for Secure Boot, all within our proxied environment.

Why Custom Images?

To run NVIDIA GPUs on Shielded VMs with Secure Boot enabled, the NVIDIA kernel modules must be signed with a key trusted by the VM's EFI firmware. Since standard Dataproc images don't include these custom-signed modules, we need to build our own. This process also allows us to pre-install a full stack of GPU-accelerated software.

The custom-images Toolkit (examples/secure-boot)

The examples/secure-boot directory within the custom-images repository contains the necessary scripts and configurations, refined through significant development to handle proxy and Secure Boot challenges.

Key Components & Development Insights:
  • env.json: The central configuration
    file (as used in Part 1) for project, network, proxy, and bucket
    details. This became the single source of truth to avoid configuration
    drift.
  • create-key-pair.sh: Manages the Secure
    Boot signing keys (PK, KEK, DB) in Google Secret Manager, essential for
    the module signing.
  • build-and-run-podman.sh: Orchestrates
    the image build process in an isolated Podman container. This was
    introduced to standardize the build environment and encapsulate
    dependencies, simplifying what the user needs to install locally.
  • pre-init.sh: Sets up the build
    environment within the container and calls
    generate_custom_image.py. It crucially passes metadata
    derived from env.json (like proxy settings and Secure Boot
    key secret names) to the temporary build VM.
  • generate_custom_image.py: The core
    Python script that automates GCE VM creation, runs the customization
    script, and creates the final GCE image.
  • gce-proxy-setup.sh: This script from
    startup_script/ is vital. It's injected into the temporary
    build VM and runs first to configure the OS, package
    managers (apt, dnf), tools (curl, wget, GPG), Conda, and Java to use the
    proxy settings passed in the metadata. This ensures the entire build
    process is proxy-aware.
  • install_gpu_driver.sh: Used as the
    --customization-script within the build VM. As detailed in
    Part 2, this script handles the driver/CUDA/ML stack installation and
    signing, now able to function correctly due to the proxy setup by
    gce-proxy-setup.sh.
Layered Image Strategy: The pre-init.sh script employs a layered approach:
  1. secure-boot Image: Base image with
    Secure Boot certificates injected.
  2. tf Image: Based on
    secure-boot, this image runs the full
    install_gpu_driver.sh within the proxy-configured build VM
    to install NVIDIA drivers, CUDA, ML libraries (TensorFlow, PyTorch,
    RAPIDS), and sign the modules. This is the primary target image for our
    use case.
(Note: secure-proxy and proxy-tf layers were experiments, but the -tf image combined with runtime metadata emerged as the most effective solution for 2.2-debian12.)

Build Steps:
  1. Clone Repos & Configure
    env.json: Ensure you have the
    custom-images and cloud-dataproc repos and a
    complete env.json as described in Part 1.
  2. Run the Build:
    # Example: Build a 2.2-debian12 based image set
    # Run from the custom-images repository root
    bash examples/secure-boot/build-and-run-podman.sh 2.2-debian12
    This command will build the layered images, leveraging the proxy
    settings from env.json via the metadata injected into the
    build VM. Note the final image name produced (e.g.,
    dataproc-2-2-deb12-YYYYMMDD-HHMMSS-tf).
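For orientation, an env.json for such a build might look roughly like the sketch below. The field names here are illustrative assumptions only, chosen to match the project, network, proxy, and bucket details the post says env.json centralizes; consult the repository's examples/secure-boot documentation for the actual schema:

```json
{
  "PROJECT_ID": "my-gcp-project",
  "REGION": "us-central1",
  "SUBNET": "proxy-only-subnet",
  "HTTP_PROXY_URI": "http://proxy.internal.example:3128",
  "BUCKET": "my-dataproc-build-bucket"
}
```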

Conclusion of Part 3

Through an iterative process, we've developed a robust workflow within the custom-images repository to build Secure Boot-compatible GPU images in a proxy-only environment. The key was isolating the build in Podman, ensuring the build VM is fully proxy-aware using gce-proxy-setup.sh, and leveraging the enhanced install_gpu_driver.sh from Part 2.

In Part 4, we'll bring it all together, deploying a Dataproc cluster using this custom -tf image within the secure network, and verifying the end-to-end functionality.

28 January 2026

C.J. Collier: Part 2: Taming the Beast: Deep Dive into the Proxy-Aware GPU Initialization Action

In Part 1 of this series, we laid the network foundation for running secure Dataproc clusters. Now, let's zoom in on the core component responsible for installing and configuring NVIDIA GPU drivers and the associated ML stack in this restricted environment: the install_gpu_driver.sh script from the LLC-Technologies-Collier/initialization-actions repository (branch gpu-202601). This isn't just any installation script; it has been significantly enhanced to handle the nuances of Secure Boot and to operate seamlessly behind an HTTP/S proxy.

The Challenge: Installing GPU Drivers Without Direct Internet

Our goal was to create a Dataproc custom image with NVIDIA GPU drivers, sign the kernel modules for Secure Boot, and ensure the entire process works seamlessly when the build VM and the eventual cluster nodes have no direct internet access, relying solely on an HTTP/S proxy. This involved:
  1. Proxy-Aware Build: Ensuring all build steps within
    the custom image creation process (package downloads, driver downloads,
    GPG keys, etc.) correctly use the customer's proxy.
  2. Secure Boot Signing: Integrating kernel module
    signing using keys managed in GCP Secret Manager, especially when
    drivers are built from source.
  3. Conda Environment: Reliably and speedily installing
    a complex Conda environment with PyTorch, TensorFlow, Rapids, and other
    GPU-accelerated libraries through the proxy.
  4. Dataproc Integration: Making sure the custom image
    works correctly with Dataproc s own startup, agent processes, and
    cluster-specific configurations like YARN.
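
A proxy-aware build step (item 1 above) expects the standard proxy environment to be in place before any network call. The following is a minimal sketch of that environment; the proxy address is a hypothetical example (in the real setup it comes from instance metadata), and the NO_PROXY entries follow the Private Google Access approach described later in this post:

```shell
#!/usr/bin/env bash
# Hypothetical proxy endpoint; in practice this is read from instance
# metadata (http-proxy / https-proxy / proxy-uri).
PROXY_URI="http://10.0.0.2:3128"
export HTTP_PROXY="${PROXY_URI}" HTTPS_PROXY="${PROXY_URI}"
export http_proxy="${PROXY_URI}" https_proxy="${PROXY_URI}"
# Keep Google API traffic off the proxy so Private Google Access still works.
export NO_PROXY=".google.com,.googleapis.com"
export no_proxy="${NO_PROXY}"
```

Tools such as curl, apt, and gsutil pick these variables up automatically, which is why setting them before any network operation matters.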

The Development Journey: Key Enhancements in install_gpu_driver.sh

To address these challenges, the script incorporates several key
features:
  • Robust Proxy Handling (set_proxy
    function):
    • Challenge: Initial script versions had spotty proxy
      support. Many tools like apt, curl,
      gpg, and even gsutil failed in proxy-only
      environments.
    • Enhancements: The set_proxy function
      (also used in gce-proxy-setup.sh) was completely overhauled
      to parse various proxy metadata (http-proxy,
      https-proxy, proxy-uri,
      no-proxy). Critically, environment variables
      (HTTP_PROXY, HTTPS_PROXY,
      NO_PROXY) are now set before any network
      operations. NO_PROXY is carefully set to include
      .google.com and .googleapis.com to allow
      direct access to Google APIs via Private Google Access. System-wide
      trust stores (OS, Java, Conda) are updated with the proxy's CA
      certificate if provided via http-proxy-pem-uri.
      gcloud, apt, dnf, and
      dirmngr are also configured to use the proxy.
  • Reliable GPG Key Fetching (import_gpg_keys
    function):
    • Challenge: Importing GPG keys for repositories
      often failed as keyservers use non-HTTP ports (e.g., 11371) blocked by
      firewalls, and gpg --recv-keys is not proxy-friendly.
    • Solution: A new import_gpg_keys
      function now fetches keys over HTTPS using curl, which
      respects the environment s proxy settings. This replaced all direct
      gpg --recv-keys calls.
  • GCS Caching is King:
    • Challenge: Repeatedly downloading large files
      (drivers, CUDA, source code) through a proxy is slow and
      inefficient.
    • Solution: Implemented extensive GCS caching for
      NVIDIA drivers, CUDA runfiles, NVIDIA Open Kernel Module source
      tarballs, compiled kernel modules, and even packed Conda environments.
      Scripts now check a GCS bucket (dataproc-temp-bucket)
      before hitting the internet.
    • Impact: Dramatically speeds up subsequent runs and
      init action execution times on cluster nodes after the cache is
      warmed.
  • Conda Environment Stability & Speed:
    • Challenge: Large Conda environments are prone to
      solver conflicts and slow installation times.
    • Solution: Integrated Mamba for faster package
      solving. Refined package lists for better compatibility. Added logic to
      force-clean and rebuild the Conda environment cache on GCS and locally
      if inconsistencies are detected (e.g., driver installed but Conda env
      not fully set up).
  • Secure Boot & Kernel Module Signing:
    • Challenge: Custom-compiled kernel modules must be
      signed to load when Secure Boot is enabled.
    • Solution: The script integrates with GCP Secret
      Manager to fetch signing keys. The build_driver_from_github
      function now includes robust steps to compile, sign (using
      sign-file), install, and verify the signed modules.
  • Custom Image Workflow & Deferred Configuration:
    • Challenge: Cluster-specific settings (like YARN GPU
      configuration) should not be baked into the image.
    • Solution: The install_gpu_driver.sh
      script detects when it's run during image creation
      (--metadata invocation-type=custom-images). In this mode,
      it defers cluster-specific setups to a systemd service
      (dataproc-gpu-config.service) that runs on the first boot
      of a cluster instance. This ensures that YARN and Spark configurations
      are applied in the context of the running cluster, not at image build
      time.
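
The GCS caching described above boils down to a cache-or-fetch pattern. The following is an illustrative sketch, not the script's actual code; the function name, bucket path, and arguments are hypothetical:

```shell
# Illustrative cache-or-fetch helper (name and arguments are hypothetical).
# Check the GCS cache before going through the proxy, then warm the cache.
cached_fetch() {
  local gcs_path="$1" local_path="$2" url="$3"
  if gsutil -q stat "${gcs_path}" 2>/dev/null; then
    gsutil cp "${gcs_path}" "${local_path}"    # cache hit: copy from GCS
  else
    curl -fsSL -o "${local_path}" "${url}"     # cache miss: fetch via proxy
    gsutil cp "${local_path}" "${gcs_path}"    # warm the cache for next runs
  fi
}
```

On cluster nodes the same check runs during the init action, so only the first run against a cold cache pays the full download cost.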

Conclusion of Part 2

The install_gpu_driver.sh initialization action is more
than just an installer; it's a carefully crafted tool designed to handle
the complexities of secure, proxied environments. Its robust proxy
support, comprehensive GCS caching, refined Conda management, Secure
Boot signing capabilities, and awareness of the custom image build
lifecycle make it a critical enabler. In Part 3, we'll explore how the LLC-Technologies-Collier/custom-images
repository (branch proxy-exercise-2025-11) uses this
initialization action to build the complete, ready-to-deploy Secure Boot
GPU custom images.

C.J. Collier: Dataproc GPUs, Secure Boot, & Proxies

Part 1: Building a Secure Network Foundation for Dataproc with GPUs & SWP

Welcome to the first post in our series on running GPU-accelerated
Dataproc workloads in secure, enterprise-grade environments. Many
organizations need to operate within VPCs that have no direct internet
egress, instead routing all traffic through a Secure Web Proxy (SWP).
Additionally, security mandates often require the use of Shielded VMs
with Secure Boot enabled. This series will show you how to meet these
requirements for your Dataproc GPU clusters. In this post, we'll focus on laying the network foundation using
tools from the LLC-Technologies-Collier/cloud-dataproc
repository (branch proxy-sync-2026-01).

The Challenge: Network Isolation & Control

Before we can even think about custom images or GPU drivers, we need
a network environment that:
  1. Prevents direct internet access from Dataproc cluster nodes.
  2. Forces all egress traffic through a manageable and auditable
    SWP.
  3. Provides the necessary connectivity for Dataproc to function and for
    us to build images later.
  4. Supports Secure Boot for all VMs.

The Toolkit: LLC-Technologies-Collier/cloud-dataproc

To make setting up and tearing down these complex network
environments repeatable and consistent, we've developed a set of bash
scripts within the gcloud directory of the
cloud-dataproc repository. These scripts handle the
creation of VPCs, subnets, firewall rules, service accounts, and the
Secure Web Proxy itself.

Key Script: gcloud/bin/create-dpgce-private

This script is the cornerstone for creating the private, proxied
environment. It automates:
  • VPC and Subnet creation (for the cluster, SWP, and management).
  • Setup of Certificate Authority Service and Certificate Manager for
    SWP TLS interception.
  • Deployment of the SWP Gateway instance.
  • Configuration of a Gateway Security Policy to control egress.
  • Creation of necessary firewall rules.
  • Result: Cluster nodes in this VPC have NO default
    internet route and MUST use the SWP.
Configuration via env.json

We use a single env.json file to drive the
configuration. This file will also be used by the
custom-images scripts in Part 3. This env.json
should reside in your custom-images repository clone, and
you'll symlink it into the cloud-dataproc/gcloud
directory.

Running the Setup:
# Assuming you have cloud-dataproc and custom-images cloned side-by-side
# And your env.json is in the custom-images root
cd cloud-dataproc/gcloud
# Symlink to the env.json in custom-images
ln -sf ../../custom-images/env.json env.json
# Run the creation script, but don't create a cluster yet
bash bin/create-dpgce-private --no-create-cluster
cd ../../custom-images

Node Configuration: The Metadata Startup Script for Runtime

For the Dataproc cluster nodes to function correctly in this proxied
environment, they need to be configured to use the SWP on boot. We
achieve this using a GCE metadata startup script. The script startup_script/gce-proxy-setup.sh (from the
custom-images repository) is designed to be run on each
cluster node at boot. It reads metadata like http-proxy and
http-proxy-pem-uri (which our cluster creation scripts in
Part 4 will pass) to configure the OS environment, package managers, and
other tools to use the SWP. Upload this script to your GCS bucket:
# Run from the custom-images repository root
gsutil cp startup_script/gce-proxy-setup.sh gs://$(jq -r .BUCKET env.json)/custom-image-deps/
This script is essential for the runtime behavior of the
cluster nodes.

Conclusion of Part 1

With the cloud-dataproc scripts, we've laid the
groundwork by provisioning a secure VPC with controlled egress through
an SWP. We've also prepared the essential node-level proxy configuration
script (gce-proxy-setup.sh) in GCS, ready to be used by our
clusters. Stay tuned for Part 2, where we'll dive into the
install_gpu_driver.sh initialization action from the
LLC-Technologies-Collier/initialization-actions repository
(branch gpu-202601) and how it's been adapted to install
all GPU-related software through the proxy during the image build
process.

Sven Hoexter: Decrypt TLS Connection with wireshark and curl

With TLS 1.3, more parts of the handshake are encrypted (e.g. the certificate), but sometimes it's still helpful to look at the complete handshake. curl uses the somewhat standardized env variable SSLKEYLOGFILE for the key log file, which is also supported by Firefox and Chrome. wireshark hides the setting in the UI behind Edit -> Preferences -> Protocols -> TLS -> (Pre)-Master-Secret log filename, which is uncomfortable to reach. Looking up the config setting in the Advanced settings, one can learn that it's internally called tls.keylog_file. Thus we can set it up with:
sudo wireshark -o "tls.keylog_file:/home/sven/curl.keylog"
SSLKEYLOGFILE=/home/sven/curl.keylog curl -v https://www.cloudflare.com/cdn-cgi/trace
Depending on the setup, root might be unable to access the Wayland session; that can be worked around by letting sudo keep the relevant env variables:
$ cat /etc/sudoers.d/wayland 
Defaults   env_keep += "XDG_RUNTIME_DIR"
Defaults   env_keep += "WAYLAND_DISPLAY"
Or set up wireshark properly and use the wireshark group to be able to dump traffic. Might require a sudo dpkg-reconfigure wireshark-common. Regarding curl: in some situations it can be desirable to force a specific older TLS version for testing, which requires setting both a minimum and a maximum version. E.g. to force TLS 1.2 only:
curl -v --tlsv1.2 --tls-max 1.2 https://www.cloudflare.com/cdn-cgi/trace

27 January 2026

Sergio Cipriano: Query Debian changelogs by keyword with the FTP-Master API

In my post about tracking my Debian uploads, I used the ProjectB database directly to retrieve how many uploads I had so far. I was pleasantly surprised to receive a message from Joerg Jaspert, who introduced me to the Debian Archive Kit (dak) web API, also known as the FTP-Master API. Joerg suggested integrating the query I had written into the dak API, so that anyone could obtain the same results with a simple HTTP request, without needing to use the mirror host. I liked the idea and decided to work on it. The endpoint is already available and you can try it yourself by doing something like this:
$ curl https://api.ftp-master.debian.org/changelogs?search_term=almeida+cipriano
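If you want to script against the endpoint, a small wrapper can help. Note this is a sketch: the assumption that the response is a JSON array of matches is mine, so check the actual schema before relying on it:

```shell
# Sketch of a query helper for the FTP-Master changelog endpoint.
# Assumption: the endpoint returns a JSON array (verify the real schema).
count_changelog_hits() {
  local term="$1"
  curl -fsSL "https://api.ftp-master.debian.org/changelogs?search_term=${term}" \
    | python3 -c 'import json, sys; print(len(json.load(sys.stdin)))'
}
# Usage: count_changelog_hits "almeida+cipriano"
```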
The query provides a way to search through the changelogs of all Debian packages currently published. The source code is available at Salsa. I'm already using it to track my uploads; I made this page that updates every day. If you want to set up something similar, you can use my script and just change the search_term to the name you use in your changelog entries. I'm running it using a systemd timer. Here's what I've got:
# .config/systemd/user/track-uploads.service
[Unit]
Description=Track my uploads using the dak API
StopWhenUnneeded=yes
[Service]
Type=oneshot
WorkingDirectory=/home/cipriano/public_html/uploads
ExecStart=/usr/bin/python3 generate.py
# .config/systemd/user/track-uploads.timer
[Unit]
Description=Run track-uploads script daily
[Timer]
OnCalendar=daily
Persistent=true
[Install]
WantedBy=timers.target
After placing every file in the right place you just need to run:
$ systemctl --user daemon-reload
$ systemctl --user enable --now track-uploads.timer
$ systemctl --user start track-uploads.service # generates the html now
If you want to get a bit fancier, I'm also using an Ansible playbook for that. The source code is available on my GitLab repository. If you want to learn more about dak, there are web docs available. I'd like to thank Joerg once again for suggesting the idea and for reviewing and merging the change so quickly.

Elana Hashman: A beginner's guide to improving your digital security

In 2017, I led a series of workshops aimed at teaching beginners a better understanding of encryption, how the internet works, and their digital security. Nearly a decade later, there is still a great need to share reliable resources and guides on improving these skills. I have worked professionally in computer security one way or another for well over a decade, at many major technology companies and in many open source software projects. There are many inaccurate and unreliable resources out there on this subject, put together by well-meaning people without a background in security, which can lead to sharing misinformation, exaggeration and fearmongering. I hope that I can offer you a trusted, curated list of high impact things that you can do right now, using whichever vetted guide you prefer. In addition, I also include how long it should take, why you should do each task, and any limitations. This guide is aimed at improving your personal security, and does not apply to your work-owned devices. Always assume your company can monitor all of your messages and activities on work devices. What can I do to improve my security right away? I put together this list in order of effort, easiest tasks first. You should be able to complete many of the low effort tasks in a single hour. The medium to high effort tasks are very much worth doing, but may take you a few days or even weeks to complete them. Low effort (<15 minutes) Upgrade your software to the latest versions Why? I don't know anyone who hasn't complained about software updates breaking features, introducing bugs, and causing headaches. If it ain't broke, why upgrade, right? Well, alongside all of those annoying bugs and breaking changes, software updates also include security fixes, which will protect your device from being exploited by bad actors. Security issues can be found in software at any time, even software that's been available for many years and thought to be secure. 
You want to install these as soon as they are available. Recommendation: Turn on automatic upgrades and always keep your devices as up-to-date as possible. If you have some software you know will not work if you upgrade it, at least be sure to upgrade your laptop and phone operating system (iOS, Android, Windows, etc.) and web browser (Chrome, Safari, Firefox, etc.). Do not use devices that do not receive security support (e.g. old Android or iPhones). Guides: Limitations: This will prevent someone from exploiting known security issues on your devices, but it won't help if your device was already compromised. If this is a concern, doing a factory reset, upgrade, and turning on automatic upgrades may help. This also won't protect against all types of attacks, but it is a necessary foundation. Use Signal Why? Signal is a trusted, vetted, secure messaging application that allows you to send end-to-end encrypted messages and make video/phone calls. This means that only you and your intended recipient can decrypt the messages and someone cannot intercept and read your messages, in contrast to texting (SMS) and other insecure forms of messaging. Other applications advertise themselves as end-to-end encrypted, but Signal provides the strongest protections. Recommendation: I recommend installing the Signal app and using it! My mom loves that she can video call me on Wi-Fi on my Android phone. It also supports group chats. I use it as a secure alternative to texting (SMS) and other chat platforms. I also like Signal's "disappearing messages" feature which I enable by default because it automatically deletes messages after a certain period of time. This avoids your messages taking up too much storage. Guides: Limitations: Signal is only able to protect your messages in transit. If someone has access to your phone or the phone of the person you sent messages to, they will still be able to read them. 
As a rule of thumb, if you don't want someone to read something, don't write it down! Meet in person or make an encrypted phone call where you will not be overheard. If you are talking to someone you don't know, assume your messages are as public as posting on social media. Set passwords and turn on device encryption Why? Passwords ensure that someone else can't unlock your device without your consent or knowledge. They are also required to turn on device encryption, which protects your information on your device from being accessed when it is locked. Biometric (fingerprint or face ID) locking provides some privacy, but your fingerprint or face ID can be used against your wishes, whereas if you are the only person who knows your password, only you can use it. Recommendation: Always set passwords and have device encryption enabled in order to protect your personal privacy. It may be convenient to allow kids or family members access to an unlocked device, but anyone else can access it, too! Use strong passwords that cannot be guessed: avoid using names, birthdays, phone numbers, addresses, or other public information. Using a password manager will make creating and managing passwords even easier. Disable biometric unlock, or at least know how to disable it. Most devices will enable disk encryption by default, but you should double-check. Guides: Limitations: If your device is unlocked, the password and encryption will provide no protections; the device must be locked for this to protect your privacy. It is possible, though unlikely, for someone to gain remote access to your device (for example through malware or stalkerware), which would bypass these protections. Some forensic tools are also sophisticated enough to work with physical access to a device that is turned on and locked, but not a device that is turned off/freshly powered on and encrypted. If you lose your password or disk encryption key, you may lose access to your device. 
For this reason, Windows and Apple laptops can make a cloud backup of your disk encryption key. However, a cloud backup can potentially be disclosed to law enforcement. Install an ad blocker Why? Online ad networks are often exploited to spread malware to unsuspecting visitors. If you've ever visited a regular website and suddenly seen an urgent, flashing pop-up claiming your device was hacked, it is often due to a bad ad. Blocking ads provides an additional layer of protection against these kinds of attacks. Recommendation: I recommend everyone uses an ad blocker at all times. Not only are ads annoying and disruptive, but they can even result in your devices being compromised! Guides: Limitations: Sometimes the use of ad blockers can break functionality on websites, which can be annoying, but you can temporarily disable them to fix the problem. These may not be able to block all ads or all tracking, but they make browsing the web much more pleasant and lower risk! Some people might also be concerned that blocking ads might impact the revenue of their favourite websites or creators. In this case, I recommend either donating directly or sharing the site with a wider audience, but keep using the ad blocker for your safety. Enable HTTPS-Only Mode Why? The "S" in "HTTPS" stands for "secure". This feature, which can be enabled on your web browser, ensures that every time you visit a website, your connection is always end-to-end encrypted (just like when you use Signal!) This ensures that someone can't intercept what you search for, what pages on websites you visit, and any information you or the website share such as your banking details. Recommendation: I recommend enabling this for everyone, though with improvements in web browser security and adoption of HTTPS over the years, your devices will often do this by default! There is a small risk you will encounter some websites that do not support HTTPS, usually older sites. 
Guides: Limitations: HTTPS protects the information on your connection to a website. It does not hide or protect the fact that you visited that website, only the information you accessed. If the website is malicious, HTTPS does not provide any protection. In certain settings, like when you use a work-managed computer that was set up for you, it can still be possible for your IT Department to see what you are browsing, even over an HTTPS connection, because they have administrator access to your computer and the network. Medium to high effort (1+ hours) These tasks require more effort but are worth the investment. Set up a password manager Why? It is not possible for a person to remember a unique password for every single website and app that they use. I have, as of writing, 556 passwords stored in my password manager. Password managers do three important things very well:
  1. They generate secure passwords with ease. You don't need to worry about getting your digits and special characters just right; the app will do it for you, and generate long, secure passwords.
  2. They remember all your passwords for you, and you just need to remember one password to access all of them. The most common reason people's accounts get hacked online is because they used the same password across multiple websites, and one of the websites had all their passwords leaked. When you use a unique password on every website, it doesn't matter if your password gets leaked!
  3. They autofill passwords based on the website you're visiting. This is important because it helps prevent you from getting phished. If you're tricked into visiting an evil lookalike site, your password manager will refuse to fill the password.
Recommendation: These benefits are extremely important, and setting up a password manager is often one of the most impactful things you can do for your digital security. However, they take time to get used to, and migrating all of your passwords into the app (and immediately changing them!) can take a few minutes at a time... over weeks. I recommend you prioritize the most important sites, such as your email accounts, banking/financial sites, and cellphone provider. This process will feel like a lot of work, but you will get to enjoy the benefits of never having to remember new passwords and the autofill functionality for websites. My recommended password manager is 1Password, but it stores passwords in the cloud and costs money. There are some good free options as well if cost is a concern. You can also use web browser- or OS-based password managers, but I do not prefer these. Guides: Limitations: Many people are concerned about the risk of using a password manager causing all of their passwords to be compromised. For this reason, it's very important to use a vetted, reputable password manager that has passed audits, such as 1Password or Bitwarden. It is also extremely important to choose a strong password to unlock your password manager. 1Password makes this easier by generating a secret to strengthen your unlock password, but I recommend using a long, memorable password in any case. Another risk is that if you forget your password manager's password, you will lose access to all your passwords. This is why I recommend 1Password, which has you set up an Emergency Kit to recover access to your account. Set up two-factor authentication (2FA) for your accounts Why? If your password is compromised in a website leak or due to a phishing attack, two-factor authentication will require a second piece of information to log in and potentially thwart the intruder. This provides you with an extra layer of security on your accounts. 
Recommendation: You don't necessarily need to enable 2FA on every account, but prioritize enabling it on your most important accounts (email, banking, cellphone, etc.). There are typically a few different kinds: email-based (which is why your email account's security is so important), text message or SMS-based (which is why your cell phone account's security is so important), app-based, and hardware token-based. Email and text message 2FA are fine for most accounts. You may want to enable app- or hardware token-based 2FA for your most sensitive accounts. Guides: Limitations: The major limitation is that if you lose access to 2FA, you can be locked out of an account. This can happen if you're travelling abroad and can't access your usual cellphone number, if you break your phone and you don't have a backup of your authenticator app, or if you lose your hardware-based token. For this reason, many websites will provide you with "backup tokens"; you can print them out and store them in a secure location or use your password manager. I also recommend that if you use an app, you choose one that will allow you to make secure backups, such as Ente. You are also limited by the types of 2FA a website supports; many don't support app- or hardware token-based 2FA. Remove your information from data brokers Why? This is a problem that mostly affects people in the US. It surprises many people that information from their credit reports and other public records is scraped and available (for free or at a low cost) online through "data broker" websites. I have shocked friends who didn't believe this was an issue by searching for their full names and within 5 minutes being able to show them their birthday, home address, and phone number. This is a serious privacy problem! Recommendation: Opt out of any and all data broker websites to remove this information from the internet. This is especially important if you are at risk of being stalked or harassed. 
Guides: Limitations: It can take time for your information to be removed once you opt out, and unfortunately search engines may have cached your information for a while longer. This is also not a one-and-done process. New data brokers are constantly popping up and some may not properly honour your opt out, so you will need to check on a regular basis (perhaps once or twice a year) to make sure your data has been properly scrubbed. This also cannot prevent someone from directly searching public records to find your information, but that requires much more effort. "Recommended security measures" I think beginners should avoid We've covered a lot of tasks you should do, but I also think it's important to cover what not to do. I see many of these tools recommended to security beginners, and I think that's a mistake. For each tool, I will explain my reasoning around why I don't think you should use it, and the scenarios in which it might make sense to use. "Secure email" What is it? Many email providers, such as Proton Mail, advertise themselves as providing secure email. They are often recommended as a "more secure" alternative to typical email providers such as GMail. What's the problem? Email is fundamentally insecure by design. The email specification (RFC-3207) states that any publicly available email server MUST NOT require the use of end-to-end encryption in transit. Email providers can of course provide additional security by encrypting their copies of your email, and providing you access to your email by HTTPS, but the messages themselves can always be sent without encryption. Some platforms such as Proton Mail advertise end-to-end encrypted emails so long as you email another Proton user. This is not truly email, but their own internal encrypted messaging platform that follows the email format. What should I do instead? Use Signal to send encrypted messages. NEVER assume the contents of an email are secure. Who should use it? 
I don't believe there are any major advantages to using a service such as this one. Even if you pay for a more "secure" email provider, the majority of your emails will still be delivered to people who don't. Additionally, while I don't use or necessarily recommend their service, Google offers an Advanced Protection Program for people who may be targeted by state-level actors. PGP/GPG Encryption What is it? PGP ("Pretty Good Privacy") and GPG ("GNU Privacy Guard") are encryption and cryptographic signing software. They are often recommended to encrypt messages or email. What's the problem? GPG is decades old and its usability has always been terrible. It is extremely easy to accidentally send a message that you thought was encrypted without encryption! The problems with PGP/GPG have been extensively documented. What should I do instead? Use Signal to send encrypted messages. Again, NEVER use email for sensitive information. Who should use it? Software developers who contribute to projects where there is a requirement to use GPG should continue to use it until an adequate alternative is available. Everyone else should live their lives in PGP-free bliss. Installing a "secure" operating system (OS) on your phone What is it? There are a number of self-installed operating systems for Android phones, such as GrapheneOS, that advertise as being "more secure" than using the version of the Android operating system provided by your phone manufacturer. They often remove core Google APIs and services to allow you to "de-Google" your phone. What's the problem? These projects are relatively niche, and don't have nearly enough resourcing to be able to respond to the high levels of security pressure Android experiences (such as against the forensic tools I mentioned earlier). You may suddenly lose security support with no notice, as with CalyxOS. 
You need a high level of technical know-how and a lot of spare time to maintain your device with a custom operating system, which is not a reasonable expectation for the average person. By stripping all Google APIs such as Google Play Services, some useful apps can no longer function. And some law enforcement organizations have gone as far as accusing people who install GrapheneOS on Pixel phones of engaging in criminal activity. What should I do instead? For the best security on an Android device, use a phone manufactured by Google or Samsung (smaller manufacturers are more unreliable), or consider buying an iPhone. Make sure your device is receiving security updates and is up-to-date. Who should use it? These projects are great for tech enthusiasts who are interested in contributing to and developing them further. They can be used to give new life to old phones that are not receiving security or software updates. They are also great for people with an interest in free and open source software and digital autonomy. But these tools are not a good choice for a general audience, nor do they provide more practical security than using an up-to-date Google or Samsung Android phone. Virtual Private Network (VPN) Services What is it? A virtual private network or VPN service can provide you with a secure tunnel from your device to the location that the VPN operates. This means that if I am using my phone in Seattle connected to a VPN in Amsterdam, if I access a website, it appears to the website that my phone is located in Amsterdam. What's the problem? VPN services are frequently advertised as providing security or protection from nefarious bad actors, or helping protect your privacy. These benefits are often far overstated, and there are predatory VPN providers that can actually be harmful. It costs money and resources to provide a VPN, so free VPN services are especially suspect. 
When you use a VPN, the VPN provider knows the websites you are visiting in order to provide you with the service. Free VPN providers may sell this data in order to cover the cost of providing the service, leaving you with less security and privacy. The average person does not have the knowledge to be able to determine if a VPN service is trustworthy or not. VPNs also don't provide any additional encryption benefits if you are already using HTTPS. They may provide a small amount of privacy benefit if you are connected to an untrusted network with an attacker. What should I do instead? Always use HTTPS to access websites. Don't connect to untrusted internet providers; for example, use cellphone network data instead of a sketchy Wi-Fi access point. Your local neighbourhood coffee shop is probably fine. Who should use it? There are three main use cases for VPNs. The first is to bypass geographic restrictions. A VPN will cause all of your web traffic to appear to be coming from another location. If you live in an area that has local internet censorship policies, you can use a VPN to access the internet from a location that lacks such policies. The second is if you know your internet service provider is actively hostile or malicious. A trusted VPN will protect the visibility of all your traffic, including which websites you visit, from your internet service provider, and the only thing they will be able to see is that you are accessing a VPN. The third use case is to access a network that isn't connected to the public internet, such as a corporate intranet. I strongly discourage the use of VPNs for "general-purpose security." Tor What is it? Tor, "The Onion Router", is a free and open source software project that provides anonymous networking. Unlike with a VPN, where the VPN provider knows who you are and what websites you are requesting, Tor's architecture makes it extremely difficult to determine who sent a request. What's the problem? 
Tor is difficult to set up properly; similar to PGP-encrypted email, it is possible to accidentally not be connected to Tor and not know the difference. Its usability has improved over the years, but Tor is still not a good tool for beginners. Due to the way Tor works, it is also extremely slow: if you have used cable or fiber internet, get ready to go back to dialup speeds. Tor also doesn't provide perfect privacy, and without a strong understanding of its limitations it can be possible to deanonymize someone despite their using it. Additionally, many websites are able to detect connections from the Tor network and block them. What should I do instead? If you want to use Tor to bypass censorship, it is often better to use a trusted VPN provider, particularly if you need high bandwidth (e.g. for streaming). If you want to use Tor to access a website anonymously, Tor itself might not be enough to protect you. For example, if you need to provide an email address or personal information, you can decline to provide accurate information and use a masked email address. A friend of mine once used the alias "Nunya Biznes". Who should use it? Tor should only be used by people who are experienced users of security tools and understand its strengths and limitations. Tor is also best used within a purpose-built system, such as Tor Browser or Freedom of the Press Foundation's SecureDrop. I want to learn more! I hope you've found this guide to be a useful starting point. I always welcome folks reaching out to me with questions, though I might take a little bit of time to respond. You can always email me. If there's enough interest, I might cover the following topics in a future post: Stay safe out there!

26 January 2026

Otto Kekäläinen: Ubuntu Pro subscription - should you pay to use Linux?

Ubuntu Pro is a subscription offering for Ubuntu users who want to pay for the assurance of getting quick and high-quality security updates for Ubuntu. I tested it out to see how it works in practice, and to evaluate how well it works as a commercial open source service model for Linux. Anyone running Ubuntu can subscribe at ubuntu.com/pro/subscribe by selecting the setup type Desktops for the price of $25 per year (plus applicable taxes) for enterprise users. There is also a free version for personal use. Once you have an account, you can find your activation token at ubuntu.com/pro/dashboard and use it to activate Ubuntu Pro on your desktop or laptop Ubuntu machine by running sudo pro attach <token>:
$ sudo pro attach aabbcc112233aabbcc112233
Enabling default service esm-apps
Updating package lists
Ubuntu Pro: ESM Apps enabled
Enabling default service esm-infra
Updating package lists
Ubuntu Pro: ESM Infra enabled
Enabling default service livepatch
Installing canonical-livepatch snap
Canonical livepatch enabled.
Unable to determine current instance-id
This machine is now attached to 'Ubuntu Pro Desktop'
You can at any time confirm the Ubuntu Pro status by running:
$ sudo pro status --all
SERVICE ENTITLED STATUS DESCRIPTION
anbox-cloud yes disabled Scalable Android in the cloud
cc-eal yes n/a Common Criteria EAL2 Provisioning Packages
esm-apps yes enabled Expanded Security Maintenance for Applications
esm-infra yes enabled Expanded Security Maintenance for Infrastructure
fips yes n/a NIST-certified FIPS crypto packages
fips-preview yes n/a Preview of FIPS crypto packages undergoing certification with NIST
fips-updates yes disabled FIPS compliant crypto packages with stable security updates
landscape yes enabled Management and administration tool for Ubuntu
livepatch yes disabled Canonical Livepatch service
realtime-kernel yes disabled Ubuntu kernel with PREEMPT_RT patches integrated
  generic yes disabled Generic version of the RT kernel (default)
  intel-iotg yes n/a RT kernel optimized for Intel IOTG platform
  raspi yes n/a 24.04 Real-time kernel optimised for Raspberry Pi
ros yes n/a Security Updates for the Robot Operating System
ros-updates yes n/a All Updates for the Robot Operating System
usg yes disabled Security compliance and audit tools
Enable services with: pro enable <service>
Account: Otto Kekalainen
Subscription: Ubuntu Pro Desktop
Valid until: Thu Mar 3 08:08:38 2026 PDT
Technical support level: essential
For a regular desktop/laptop user the most relevant service is esm-apps, which delivers extended security updates for many applications typically used on desktop systems. Another relevant command to confirm the current subscription status is:
$ sudo pro security-status
2828 packages installed:
2143 packages from Ubuntu Main/Restricted repository
660 packages from Ubuntu Universe/Multiverse repository
13 packages from third parties
12 packages no longer available for download
To get more information about the packages, run
pro security-status --help
for a list of available options.
This machine is receiving security patching for Ubuntu Main/Restricted
repository until 2029.
This machine is attached to an Ubuntu Pro subscription.
Ubuntu Pro with 'esm-infra' enabled provides security updates for
Main/Restricted packages until 2034.
Ubuntu Pro with 'esm-apps' enabled provides security updates for
Universe/Multiverse packages until 2034. You have received 26 security
updates.
This confirms the scope of the security support. You can even run sudo pro security-status --esm-apps to get a detailed breakdown of the installed software packages in scope for Expanded Security Maintenance (ESM).
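If you want to script around the subscription state, for example in a fleet health check, the pro client can also emit machine-readable output (pro status --format json in recent ubuntu-advantage-tools versions). The sketch below branches on a simplified, made-up JSON sample rather than real pro output, since the exact JSON layout is an assumption here:

```shell
# Sketch: branching on Ubuntu Pro attachment state in a script.
# On a real machine you would capture the JSON with something like:
#   status_json=$(sudo pro status --format json)
# The sample below is a simplified stand-in, not verbatim CLI output.
status_json='{"attached": true, "services": [{"name": "esm-apps", "status": "enabled"}]}'

if grep -q '"attached": true' <<<"$status_json"; then
    attach_state="attached"
else
    attach_state="unattached"
fi
echo "Ubuntu Pro state: $attach_state"
```

A real script would want a proper JSON parser (e.g. jq) instead of grep, but the shape of the check is the same.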

Experiences from using Ubuntu Pro for over a year Personally, I have been using it on two laptop systems for well over a year now and everything seems to have worked well. I see apt is downloading software updates from https://esm.ubuntu.com/apps/ubuntu, but other than that there aren't any notable signs of Ubuntu Pro being in use. That is a good thing: after all, one is paying for assurance that everything works with minimal disruption, so a system that enables smooth sailing should stay in the background and not make too much noise about itself.

Using Landscape to manage multiple Ubuntu laptops Landscape portal reports showing security update status and resource utilization Landscape.canonical.com is a fleet management system that shows information like security update status and resource utilization for the computers you administer. Ubuntu Pro attached systems under one's account are not automatically visible in Landscape, but have to be enrolled in it. To enroll an Ubuntu Pro attached desktop/laptop in Landscape, first install the required package with sudo apt install landscape-client and then run sudo landscape-config --account-name <account name> to start the configuration wizard. You can find your account name in the Landscape portal. On the last wizard question "Request a new registration for this computer now? [y/N]" hit y to accept. If successful, the new computer will be visible on the Landscape portal page "Pending computers", from where you can click to accept it. Landscape portal page showing pending computer registration If I had a large fleet of computers, Landscape might come in useful. It is also obvious that Landscape is intended primarily for managing server systems. For example, the default alarm trigger on systems being offline, which is common for laptops and desktop computers, is an alert-worthy thing only on server systems. It is good to know that Landscape exists, but on desktop systems I would probably skip it and stick to the security updates offered by Ubuntu Pro without using Landscape.

Landscape is evolving The screenshots above are from the current Landscape portal which I have been using so far. Recently Canonical has also launched a new web portal, with a fresh look: New Landscape dashboard with fresh look This shows Canonical is actively investing in the service and it is likely going to sit at the center of their business model for years to come.

Other offerings by Canonical for individual users Canonical, the company behind the world's most popular desktop Linux distribution Ubuntu, has been offering various commercial support services for corporate customers since the company launched back in 2005, but there haven't been any offerings available to individual users since Ubuntu One, with file syncing, a music store and more, was wound down back in 2014. Canonical and the other major Linux companies, Red Hat and SUSE, have always been very enterprise-oriented, presumably because achieving economies of scale is much easier when maintaining standardized corporate environments than when dealing with the wide range of custom configurations that individual consumer customers might have. I remember some years ago Canonical offered desktop support under the Ubuntu Advantage product name, but the minimum subscription was for 5 desktop systems, which typically isn't an option for a regular home consumer. I am glad to see Ubuntu Pro is now available, and I honestly hope people using Ubuntu will opt into it. The more customers it has, the more it incentivizes Canonical to develop and maintain features that are important for desktop and home users.

Pay for Linux because you can, not because you have to Open source is a great software development model for rapid innovation and adoption, but I don't think the business models in the space are quite mature yet. Users who get long-term value should participate more in funding open source maintenance work. While some donation platforms like GitHub Sponsors, OpenCollective and the like have gained popularity in recent years, none of them seem to generate recurring revenue comparable in scale to how widely open source software is used now in 2026. I welcome more paid schemes, such as Ubuntu Pro, as I believe it is beneficial for the whole ecosystem. I also expect more service providers to enter this space and experiment with different open source business models and various forms of decentralized funding. Linux and open source are primarily free as in speech, but as a side effect license fees are hard to enforce and many use Linux without paying for it. The more people, corporations and even countries rely on it to stay sovereign in the information society, the more users should think about how they want to use Linux and who they want to pay to maintain it and other critical parts of the open source ecosystem.

24 January 2026

Gunnar Wolf: Finally some light for those who care about Debian on the Raspberry Pi

Finally, some light at the end of the tunnel! As I have said in this blog and elsewhere, after putting quite a bit of work into generating the Debian Raspberry Pi images between late 2018 and 2023, I had to recognize I don't have the time and energy to properly care for them. I even registered a GSoC project for it. I mentored Kurva Prashanth, who did good work on the vmdb2 scripts we use for the image generation, but in the end was unable to push them to be built in Debian infrastructure. Maybe a different approach was needed! While I adopted the images as they were conceived by Michael Stapelberg, sometimes it's easier to start from scratch and build a fresh approach. So, I'm not yet pointing at a stable, proven release, but at a good promise. And I hope I'm not being pushy by making this public: in the #debian-raspberrypi channel, waldi has shared the images he has created with the Debian Cloud Team's infrastructure. So, right now, the images built so far support Raspberry Pi families 4 and 5 (notably, not the 500 computer I have, due to a missing Device Tree, but I'll try to help figure that bit out; anyway, p400/500/500+ systems are not that usual). Work is underway to get the 3B+ to boot (some hackery is needed, as it only understands MBR partition schemes, so creating a hybrid image seems to be needed). Debian Cloud images for Raspberries Sadly, I don't think the effort will be extended to cover older, 32-bit-only systems (RPi 0, 1 and 2). Anyway, as this effort stabilizes, I will phase out my (stale!) work on raspi.debian.net, and will redirect it to point at the new images.

Comments Andrea Pappacoda tachi@d.o 2026-01-26 17:39:14 GMT+1 Are there any particular caveats compared to using the regular Raspberry Pi OS? Are they documented anywhere? Gunnar Wolf gwolf.blog@gwolf.org 2026-01-26 11:02:29 GMT-6 Well, the Raspberry Pi OS includes quite a bit of software that's not packaged in Debian for various reasons: some of it because it's non-free demo-ware, some of it because it's RPiOS-specific configuration, and some of it because I don't care; I like running Debian wherever possible. Andrea Pappacoda tachi@d.o 2026-01-26 18:20:24 GMT+1 Thanks for the reply! Yeah, sorry, I should've been more specific. I also just care about the Debian part. But: are there any hardware issues or unsupported stuff, like booting from an SSD (which I'm currently doing)? Gunnar Wolf gwolf.blog@gwolf.org 2026-01-26 12:16:29 GMT-6 That's beyond my knowledge. Although I can tell you that:
  • Raspberry Pi OS has hardware support as soon as their new boards hit the market. The ability to even boot a board can take over a year for the mainline Linux kernel (at least, it has, both in the cases of the 4 and the 5 families).
  • Also, sometimes some bits of hardware are not discovered by the Linux kernels even if the general family boots, because they are not declared in the right place of the Device Tree (i.e. the wireless network interface in the 02W is at a different address than in the 3B+, or the 500 does not fully boot while the 5B now does). Usually it is a matter of just declaring stuff in the right place, but it's not a skill many of us have.
  • Also, many RPi hats ship with their own Device Tree overlays, and they cannot always be loaded on top of mainline kernels.
Andrea Pappacoda tachi@d.o 2026-01-26 19:31:55 GMT+1
That's beyond my knowledge. Although I can tell you that: Raspberry Pi OS has hardware support as soon as their new boards hit the market. The ability to even boot a board can take over a year for the mainline Linux kernel (at least, it has, both in the cases of the 4 and the 5 families).
Yeah, unfortunately I'm aware of that. I've also been trying to boot OpenBSD on my rpi5 out of curiosity, but been blocked by my somewhat unusual setup involving an NVMe SSD as the boot drive :/
Also, sometimes some bits of hardware are not discovered by the Linux kernels even if the general family boots, because they are not declared in the right place of the Device Tree (i.e. the wireless network interface in the 02W is at a different address than in the 3B+, or the 500 does not fully boot while the 5B now does). Usually it is a matter of just declaring stuff in the right place, but it's not a skill many of us have.
At some point in my life I had started reading a bit about device trees and stuff, but got distracted by other things before I could develop any familiarity with it. So I don't have the skills either :)
Also, many RPi hats ship with their own Device Tree overlays, and they cannot always be loaded on top of mainline kernels.
I'm definitely not happy to hear this! Guess I'll have to try, and maybe report back once some page for these new builds materializes.

23 January 2026

Reproducible Builds (diffoscope): diffoscope 311 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 311. This version includes the following changes:
[ Chris Lamb ]
* Fix test compatibility with u-boot-tools 2026-01. Thanks, Jelle!
* Bump Standards-Version to 4.7.3.
* Drop implied "Priority: optional" from debian/control.
* Also drop implied "Rules-Requires-Root: no" entry in debian/control.
* Update copyright years.
You can find out more by visiting the project homepage.

19 January 2026

Dirk Eddelbuettel: RApiDatetime 0.0.11 on CRAN: Micro-Maintenance

A new (micro) maintenance release of our RApiDatetime package is now on CRAN, coming only a good week after the 0.0.10 release, which itself had a two-year gap to its predecessor release. RApiDatetime provides a number of entry points for C-level functions of the R API for Date and Datetime calculations. The functions asPOSIXlt and asPOSIXct convert between long and compact datetime representation, formatPOSIXlt and Rstrptime convert to and from character strings, and POSIXlt2D and D2POSIXlt convert between Date and POSIXlt datetime. Lastly, asDatePOSIXct converts to a date type. All these functions are rather useful, but were not previously exported by R for C-level use by other packages. Which this package aims to change. This release adds a single PROTECT (and UNPROTECT) around one variable, as the rchk container and service by Tomas now flagged this. Which is somewhat peculiar, as this is old code also borrowed from R itself, but there is no point arguing so I just added it. Details of the release follow based on the NEWS file.

Changes in RApiDatetime version 0.0.11 (2026-01-19)
  • Add PROTECT (and UNPROTECT) to appease rchk

Courtesy of my CRANberries, there is also a diffstat report for this release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Isoken Ibizugbe: Mid-Point Project Progress

Halfway There

Hurray!  I have officially reached the 6-week mark, the halfway point of my Outreachy internship. The time has flown by incredibly fast, yet it feels short because there is still so much exciting work to do.

I remember starting this journey feeling overwhelmed, trying to gain momentum. Today, I feel much more confident. I began with the apps_startstop task during the contribution period, writing manual test steps and creating preparation Perl scripts for the desktop environments. Since then, I've transitioned into full automation and taken a liking to reading openQA upstream documentation when I have issues or for reference.

In all of this, I've committed over 30 hours a week to the project. This dedicated time has allowed me to look in depth into the Debian ecosystem and automated quality assurance.

The Original Roadmap vs. Reality

Reviewing my 12-week goal, which included extending automated tests for live image testing, installer testing, and documentation, I am happy to report that I am right on track. My work on desktop apps tests has directly improved the quality of both the Live Images and the netinst (network installer) ISOs.

Accomplishments

I have successfully extended the apps_startstop tests for two Desktop Environments (DEs): Cinnamon and LXQt. These tests ensure that common and DE-specific apps launch and close correctly across different environments.

  • Merged Milestone: My Cinnamon tests have been officially merged into the upstream repository! [MR !84]
  • LXQt & Adaptability: I am in the final stages of the LXQt tests. Interestingly, I had to update these tests mid-way through because of a version update in the DE. This required me to update the needles (image references) to match the new UI, a great lesson in software maintenance.

Solving for Synergy

One of my favorite challenges was suggested by my mentor, Roland: synergizing the tests to reduce redundancy. I observed that some applications (like Firefox and LibreOffice) behave identically across different desktops. Instead of duplicating Perl scripts/code for every single DE, I used symbolic links. This allows the use of the same Perl script and possibly the same needles, making the test suite lighter and much easier to maintain.
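The symlink approach can be sketched outside openQA entirely. The directory layout and file names below are hypothetical, purely for illustration; the real test repository is organised differently:

```shell
# Sketch: expose one shared test script to two desktop environments
# via a relative symlink, so a fix lands in both DEs at once.
cd "$(mktemp -d)"
mkdir -p tests/cinnamon tests/lxqt
printf '# shared Firefox start/stop test\n' > tests/cinnamon/firefox.pm
ln -s ../cinnamon/firefox.pm tests/lxqt/firefox.pm

# Both paths now resolve to the same file content.
link_target=$(readlink tests/lxqt/firefox.pm)
echo "lxqt test resolves to: $link_target"
```

A relative link target (rather than an absolute path) keeps the link valid wherever the repository is checked out.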

The Contributor Guide

During the contribution phase, I noticed how rigid the documentation and coding style requirements are. While this ensures high standards and uniformity, it can be intimidating for newcomers and time-consuming for reviewers.

To help, I created a contributor guide [MR !97]. This guide addresses the project's writing style. My goal is to reduce the back-and-forth during reviews, making the process more efficient for everyone and helping new contributors.

Looking Forward

For the second half of the internship, I plan to:

  1. Assist others: Help new contributors extend apps start-stop tests to even more desktop environments.
  2. Explore new coverage: Move beyond start-stop tests into deeper functional testing.

This journey has been an amazing experience of learning and connecting with the wider open-source community, especially Debian Women and the Linux QA team.

I am deeply grateful to my mentors, Tassia Camoes Araujo, Roland Clobus, and Philip Hands, for their constant guidance and for believing in my ability to take on this project.

Here's to the next 6 weeks!

Hellen Chemtai: Internship Highlights at Outreachy: My Journey with Debian OpenQA

Highlights

Hello world! I am an intern at Outreachy working with the Debian openQA image testing team. The work consists of testing images with openQA. The internship has reached its midpoint, and here are some of the highlights I have had so far.

  1. The mentors: Roland Clobus, Tassia Camoes and Philip Hands are very good mentors. I like the constant communication and the help I get while working on the project. I enjoy working with this team.
  2. The community: The contributors, mentors and the greater SUSE openQA community are constantly in communication. I learn a lot from these meetings.
  3. The women network: The women of Debian meet and network. The meetings are interactive and we are encouraged to interact.
  4. The project: We are making progress one step at a time. Isoken Ibizugbe is my fellow intern working on start-stop tests. I am working on live installer tests.

Communication

I have learned a lot during my internship. I have always been on the silent path of life with little communication. I once told myself being a developer would hide me behind a computer to avoid socializing. Being in open source, especially this internship, has helped me with communication and networking. The teamwork in the project has helped me a lot.

  1. My mentors encourage communication. Giving project updates and stating when we get stuck.
  2. My mentors have scheduled weekly meetings to communicate about the project
  3. We are constantly invited to the SUSE meetings by mentors or by Sam Thursfield who is part of the team.
  4. Female contributors are encouraged to join Debian women monthly meetings for networking

Lessons so far

I have had challenges, solved problems, and learned new skills along the way:

  1. I have learned Perl, openQA configuration, and needle editing, and improved my Linux and Git skills
  2. I have learned how various images are installed, booted and run through live viewing of tests
  3. I have solved many test errors and learned to work with applications that are needed in the OS installations, e.g. Rufus
  4. I have learned how virtual machines work and how to solve errors related to them

So far so good. I am grateful to be a contributor towards the project and hope to continue learning.

Jonathan Dowland: FOSDEM 2026

I'm going to FOSDEM 2026! I'm presenting in the Containers dev room. My talk is Java Memory Management in Containers and it's scheduled as the first talk on the first day. I'm the warm-up act! The Java devroom has been a stalwart at FOSDEM since 2004 (sometimes in other forms), but sadly there's no Java devroom this year. There's a story about that, but it's not mine to tell. Please recommend to me any interesting talks! Here are a few that caught my eye: Debian/related: Containers: Research: Other:

Francesco Paolo Lovergine: A Terramaster NAS with Debian, take two.

After experimenting at home, the very first professional-grade NAS from Terramaster arrived at work too, with 12 HDD bays and possibly a pair of M.2 NVMe cards. In this case, I again installed a plain Debian distribution, but HDD monitoring required some configuration adjustments to run smartd properly. A decent approach to data safety is to run regularly scheduled short and long SMART tests on all disks to detect potential damage. Running such tests on all disks at once isn't ideal, so I set up a script to create a staggered configuration and test multiple groups of disks at different times. Note that it is mandatory to read the devices at each reboot because their names and order can change. Of course, the same principle (short/long tests at regular intervals along the week) should be applied for a simpler configuration, as in the case of my home NAS with a pair of RAID1 devices. What follows is a simple script to create a staggered smartd.conf at boot time:
#!/bin/bash
#
# Save this as /usr/local/bin/create-smartd-conf.sh
#
# Dynamically generate smartd.conf with staggered SMART test scheduling
# at boot time based on discovered ATA devices
# HERE IS A LIST OF DIRECTIVES FOR THIS CONFIGURATION FILE.
# PLEASE SEE THE smartd.conf MAN PAGE FOR DETAILS
#
#   -d TYPE Set the device type: ata, scsi[+TYPE], nvme[,NSID],
#           sat[,auto][,N][+TYPE], usbcypress[,X], usbjmicron[,p][,x][,N],
#           usbprolific, usbsunplus, sntasmedia, sntjmicron[,NSID], sntrealtek,
#           ... (platform specific)
#   -T TYPE Set the tolerance to one of: normal, permissive
#   -o VAL  Enable/disable automatic offline tests (on/off)
#   -S VAL  Enable/disable attribute autosave (on/off)
#   -n MODE No check if: never, sleep[,N][,q], standby[,N][,q], idle[,N][,q]
#   -H      Monitor SMART Health Status, report if failed
#   -s REG  Do Self-Test at time(s) given by regular expression REG
#   -l TYPE Monitor SMART log or self-test status:
#           error, selftest, xerror, offlinests[,ns], selfteststs[,ns]
#   -l scterc,R,W  Set SCT Error Recovery Control
#   -e      Change device setting: aam,[N off], apm,[N off], dsn,[on off],
#           lookahead,[on off], security-freeze, standby,[N off], wcache,[on off]
#   -f      Monitor 'Usage' Attributes, report failures
#   -m ADD  Send email warning to address ADD
#   -M TYPE Modify email warning behavior (see man page)
#   -p      Report changes in 'Prefailure' Attributes
#   -u      Report changes in 'Usage' Attributes
#   -t      Equivalent to -p and -u Directives
#   -r ID   Also report Raw values of Attribute ID with -p, -u or -t
#   -R ID   Track changes in Attribute ID Raw value with -p, -u or -t
#   -i ID   Ignore Attribute ID for -f Directive
#   -I ID   Ignore Attribute ID for -p, -u or -t Directive
#   -C ID[+] Monitor [increases of] Current Pending Sectors in Attribute ID
#   -U ID[+] Monitor [increases of] Offline Uncorrectable Sectors in Attribute ID
#   -W D,I,C Monitor Temperature D)ifference, I)nformal limit, C)ritical limit
#   -v N,ST Modifies labeling of Attribute N (see man page)
#   -P TYPE Drive-specific presets: use, ignore, show, showall
#   -a      Default: -H -f -t -l error -l selftest -l selfteststs -C 197 -U 198
#   -F TYPE Use firmware bug workaround:
#           none, nologdir, samsung, samsung2, samsung3, xerrorlba
#   -c i=N  Set interval between disk checks to N seconds
#    #      Comment: text after a hash sign is ignored
#    \      Line continuation character
# Attribute ID is a decimal integer 1 <= ID <= 255
# except for -C and -U, where ID = 0 turns them off.
set -euo pipefail
# Test schedule configuration
BASE_SCHEDULE="L/../../6"  # Long test on Saturdays
TEST_HOURS=(01 03 05 07)   # 4 time slots: 1am, 3am, 5am, 7am
DEVICES_PER_GROUP=3
main() {
    # Get array of device names (e.g., sda, sdb, sdc)
    mapfile -t devices < <(ls -l /dev/disk/by-id/ | grep ata | awk '{print $11}' | grep sd | cut -d/ -f3 | sort -u)
    if [[ ${#devices[@]} -eq 0 ]]; then
        exit 1
    fi
    # Start building config file
    cat << EOF
# smartd.conf - Auto-generated at boot
# Generated: $(date '+%Y-%m-%d %H:%M:%S')
#
# Staggered SMART test scheduling to avoid concurrent disk load
# Long tests run on Saturdays at different times per group
#
EOF
    # Process devices into groups
    local group=0
    local count_in_group=0
    for i in "${!devices[@]}"; do
        local dev="${devices[$i]}"
        local hour="${TEST_HOURS[$group]}"
        # Add group header at start of each group
        if [[ $count_in_group -eq 0 ]]; then
            echo ""
            echo "# Group $((group + 1)) - Tests at ${hour}:00 on Saturdays"
        fi
        # Add device entry: long test Saturday at $hour, short test daily 12h later
        #echo "/dev/${dev} -a -o on -S on -s (${BASE_SCHEDULE}/${hour}) -m root"
        echo "/dev/${dev} -a -o on -S on -s (L/../../6/${hour}) -s (S/../.././$(((10#$hour + 12) % 24))) -m root"
        # Move to next group when current group is full
        count_in_group=$((count_in_group + 1))
        if [[ $count_in_group -ge $DEVICES_PER_GROUP ]]; then
            count_in_group=0
            group=$(((group + 1) % ${#TEST_HOURS[@]}))
        fi
    done
}

main "$@"
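As a sanity check, the grouping arithmetic can be exercised in isolation; the device names here are invented for illustration:

```shell
# Standalone sketch of the staggering logic: rotate through TEST_HOURS,
# assigning DEVICES_PER_GROUP devices to each time slot before moving on.
TEST_HOURS=(01 03 05 07)
DEVICES_PER_GROUP=3
devices=(sda sdb sdc sdd sde sdf sdg)   # hypothetical device list

group=0
count=0
for dev in "${devices[@]}"; do
    echo "$dev -> long test Saturday at ${TEST_HOURS[$group]}:00"
    count=$((count + 1))
    if [[ $count -ge $DEVICES_PER_GROUP ]]; then
        count=0
        group=$(((group + 1) % ${#TEST_HOURS[@]}))
    fi
done
# With seven devices, sda-sdc land on 01:00, sdd-sdf on 03:00, sdg on 05:00.
```

With more devices than slots can hold, the modulo wraps back to the first slot, so no group ever runs more than DEVICES_PER_GROUP concurrent tests.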
To run such a script at boot, add a unit file to the systemd configuration.
sudo systemctl edit --force --full regenerate-smartd-conf.service
sudo systemctl enable regenerate-smartd-conf.service
Where the unit service is the following:
[Unit]
Description=Generate smartd.conf with staggered SMART test scheduling
# Wait for all local filesystems and udev device detection
After=local-fs.target systemd-udev-settle.service
Before=smartd.service
Wants=systemd-udev-settle.service
DefaultDependencies=no
[Service]
Type=oneshot
# Only generate the config file, don't touch smartd here
ExecStart=/bin/bash -c '/usr/local/bin/create-smartd-conf.sh > /etc/smartd.conf'
StandardOutput=journal
StandardError=journal
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target

Next.

Previous.