Search Results: "john"

10 July 2025

Tianon Gravi: Yubi Whati? (YubiKeys, ECDSA, and X.509)

Off-and-on over the last several weeks, I've been spending time trying to learn/understand YubiKeys better, especially from the perspective of ECDSA and signing. I had a good mental model for how "slots" work (canonically referenced by their hexadecimal names such as 9C), but found that it had a gap related to "objects"; while closing that gap, I was annoyed that the main reference table for it lives primarily in either a PDF or inside several implementations, so I figured I should create the reference I want to see in the world, and that it would also be useful to write down some of my understanding for my own (and maybe others') future reference.

So, to that end, I'm going to start with a bit of background information, with the heavy caveat that this only applies to "PIV" ("FIPS 201") usage of YubiKeys, and that I only actually care about ECDSA, although I've been reassured that it's the same for at least RSA (anything outside this is firmly Here Be Not Tianon; "gl hf dd"). (Incidentally, learning all this helped me actually appreciate the simplicity of cloud-based KMS solutions, which was an unexpected side effect.)

At a really high level, ECDSA is like many other (asymmetric) cryptographic solutions: you've got a public key and a private key. The private key can be used to "sign" data (tiny amounts of data, in fact; P-256 can only reasonably sign 256 bits of data, which is where cryptographic hashes like SHA256 come in as secure analogues for larger data in small bit sizes), and the public key can then be used to verify that the data was indeed signed by the private key, and only someone with the private key could've done so. There's some complex math and RNGs involved, but none of that's actually relevant to this post, so find that information elsewhere.

Unfortunately, this is where things go off the rails: PIV is X.509 ("x509") heavy, and there's no X.509 in the naïve view of my use case. In a YubiKey (or any other PIV-signing-supporting smart card? do they actually have competitors in this specific niche?), a given "slot" can hold one single private key. There are ~24 slots which can hold a private key and be used for signing, although "Slot 9c" is officially designated as the "Digital Signature" slot and is encouraged for signing purposes.

One of the biggest gotchas is that with pure-PIV (and older YubiKey firmware) the public key for a given slot is only available at the time the key is generated, and the whole point of the device in the first place is that the private key is never, ever available from it (all cryptographic operations happen inside the device), so if you don't save that public key when you first ask the device to generate a private key in a particular slot, the public key is lost forever (asterisk).
$ # generate a new ECDSA P-256 key in "slot 9c" ("Digital Signature")
$ # WARNING: THIS WILL GLEEFULLY WIPE SLOT 9C WITHOUT PROMPTING
$ yubico-piv-tool --slot 9c --algorithm ECCP256 --action generate
-----BEGIN PUBLIC KEY-----
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEtGoWRGyjjUlJFXpu8BL6Rnx8jjKR
5+Mzl2Vepgor+k7N9q7ppOtSMWefjFVR0SEPmXqXINNsCi6LpLtNEigIRg==
-----END PUBLIC KEY-----
Successfully generated a new private key.
$ # this is the only time/place we (officially) get this public key
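(Purely to illustrate the sign/verify relationship described above, here is the same idea with a throwaway software key generated by openssl rather than the YubiKey; the filenames are mine, not anything this post assumes.)
$ # illustrative only: an ECDSA P-256 key pair in software, not on the YubiKey
$ openssl ecparam -name prime256v1 -genkey -noout -out priv.pem
$ openssl ec -in priv.pem -pubout -out pub.pem
$ # sign a file's SHA256 digest with the private key, verify with the public key
$ openssl dgst -sha256 -sign priv.pem -out msg.sig msg.txt
$ openssl dgst -sha256 -verify pub.pem -signature msg.sig msg.txt
Verified OK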
With that background, now let's get to the second aspect of "slots" and how X.509 fits. For every aforementioned slot, there is a corresponding "object" (read: place to store arbitrary data), where the correspondence exists only by convention. For all these "key" slots, the (again, by convention) corresponding "object" is explicitly supposed to hold an X.509 certificate (see also the PDF reference linked above). It turns out this is a useful and topical place to store that public key we need to keep handy! It's also an interesting place to shove additional details about what the key in a given slot is being used for, if that's your thing. Converting the raw public key into a (likely self-signed) X.509 certificate is an exercise for the reader, but if you want to follow the conventions, you need some way to convert a given "slot" to the corresponding "object", and that is the lookup table I wish existed in more forms. So, without further ado, here is the anti-climax:
Slot Object Description
0x9A 0x5FC105 X.509 Certificate for PIV Authentication
0x9E 0x5FC101 X.509 Certificate for Card Authentication
0x9C 0x5FC10A X.509 Certificate for Digital Signature
0x9D 0x5FC10B X.509 Certificate for Key Management
0x82 0x5FC10D Retired X.509 Certificate for Key Management 1
0x83 0x5FC10E Retired X.509 Certificate for Key Management 2
0x84 0x5FC10F Retired X.509 Certificate for Key Management 3
0x85 0x5FC110 Retired X.509 Certificate for Key Management 4
0x86 0x5FC111 Retired X.509 Certificate for Key Management 5
0x87 0x5FC112 Retired X.509 Certificate for Key Management 6
0x88 0x5FC113 Retired X.509 Certificate for Key Management 7
0x89 0x5FC114 Retired X.509 Certificate for Key Management 8
0x8A 0x5FC115 Retired X.509 Certificate for Key Management 9
0x8B 0x5FC116 Retired X.509 Certificate for Key Management 10
0x8C 0x5FC117 Retired X.509 Certificate for Key Management 11
0x8D 0x5FC118 Retired X.509 Certificate for Key Management 12
0x8E 0x5FC119 Retired X.509 Certificate for Key Management 13
0x8F 0x5FC11A Retired X.509 Certificate for Key Management 14
0x90 0x5FC11B Retired X.509 Certificate for Key Management 15
0x91 0x5FC11C Retired X.509 Certificate for Key Management 16
0x92 0x5FC11D Retired X.509 Certificate for Key Management 17
0x93 0x5FC11E Retired X.509 Certificate for Key Management 18
0x94 0x5FC11F Retired X.509 Certificate for Key Management 19
0x95 0x5FC120 Retired X.509 Certificate for Key Management 20
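Putting the table to use, here's roughly what following that convention for slot 9c looks like with yubico-piv-tool (flags from memory, so treat this as a sketch rather than gospel; pub.pem is the public key saved from the generate step above, and the subject is a placeholder):
$ # self-sign a certificate with the key in slot 9c, then store it in the
$ # corresponding object (0x5FC10A) so the public key is recoverable later
$ yubico-piv-tool --action verify-pin --action selfsign-certificate --slot 9c \
      --subject '/CN=example/' --input pub.pem --output cert.pem
$ yubico-piv-tool --action import-certificate --slot 9c --input cert.pem
$ # and to get the public key back out later, read the certificate
$ yubico-piv-tool --action read-certificate --slot 9c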
See also "piv-objects.json" for a machine-readable copy of this data. (Major thanks to paultag and jon gzip johnson for helping me learn and generally putting up with me, but especially dealing with my live-stream-of-thoughts while I stumble through the dark. )

4 July 2025

Sahil Dhiman: Secondary Authoritative Name Server Options for Self-Hosted Domains

In the past few months, I have moved the authoritative name servers (NS) of two of my domains (sahilister.net and sahil.rocks) in-house using PowerDNS. Subdomains of sahilister.net see roughly 320,000 hits/day across my IN and DE mirror nodes, so adding secondary name servers with good availability (in addition to my own) was one of my first priorities. I explored the following options for my secondary NS, which also didn't cost me anything:

1984 Hosting

Hurricane Electric

Afraid.org

Puck

NS-Global

Asking friends: Two of my friends and fellow mirror hosts have their own authoritative name server setups, Shrirang (i.e. albony) and Luke. Shrirang gave me another POP in IN, and through Luke (who does have an insane amount of in-house NS, see dig ns jing.rocks +short) I added a JP POP. If we know each other, I would be glad to host a secondary NS for you in IN and/or DE locations.
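On the primary side (I run PowerDNS, as mentioned above), letting such a secondary slave the zone is mostly a matter of zone metadata. A minimal sketch with a hypothetical zone name and secondary IP, roughly per the PowerDNS documentation:
$ # allow the secondary to transfer (AXFR) the zone and notify it on changes
$ pdnsutil set-meta example.net ALLOW-AXFR-FROM 203.0.113.53/32
$ pdnsutil set-meta example.net ALSO-NOTIFY 203.0.113.53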

Some notes
  • Adding a third-party secondary means putting trust in that third party to serve your zone correctly.
  • Hurricane Electric and 1984 Hosting provide multiple NS. One can use some or all of them. Ideally, you can get away with just your own plus the full set from either of these two. Play around with adding and removing secondaries to see what gives you the best results. Using everyone is overkill anyway, unless you have specific reasons for it.
  • Moving NS in-house isn't that hard. Though, be prepared to get it wrong a few times (and some more). I have already faced partial outages because:
    • Recursive resolvers (RR) in the wild behave in weird ways and cache the wrong NS response for longer than the TTL.
    • NS expiry took longer than expected. 2 out of 3 of Netim's NS (my domain registrar) had stopped serving my domain, while RRs in the wild hadn't picked up my new in-house NS. I couldn't really do anything about it, though.
    • The dot at the end is pretty important.
    • With HE.net, I forgot to delegate my domain on their panel and just added it to my NS set, thinking I'd already done so (which I did, but for another domain), leading to a lame server situation.
  • In terms of serving traffic, there's no distinction between primary and secondary NS. RRs don't really care which one they send the query to. So one can have a hidden primary too.
  • I initially thought of adding periodic RIPE Atlas measurements from the global set but decided against it, as I already host a termux mirror, which brings in thousands of queries from around the world, leading to a diverse set of RRs querying my domain already.
  • In most cases, query resolution time will increase with out-of-zone NS servers (which external secondaries most likely are): 1 query vs. 2 queries. Pay close attention to the ADDITIONAL SECTION; Shrirang's case followed by mine:
$ dig ns albony.in
; <<>> DiG 9.18.36 <<>> ns albony.in
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 60525
;; flags: qr rd ra; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 9
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;albony.in.			IN	NS
;; ANSWER SECTION:
albony.in.		1049	IN	NS	ns3.albony.in.
albony.in.		1049	IN	NS	ns4.albony.in.
albony.in.		1049	IN	NS	ns2.albony.in.
albony.in.		1049	IN	NS	ns1.albony.in.
;; ADDITIONAL SECTION:
ns3.albony.in.		1049	IN	AAAA	2a14:3f87:f002:7::a
ns1.albony.in.		1049	IN	A	82.180.145.196
ns2.albony.in.		1049	IN	AAAA	2403:44c0:1:4::2
ns4.albony.in.		1049	IN	A	45.64.190.62
ns2.albony.in.		1049	IN	A	103.77.111.150
ns1.albony.in.		1049	IN	AAAA	2400:d321:2191:8363::1
ns3.albony.in.		1049	IN	A	45.90.187.14
ns4.albony.in.		1049	IN	AAAA	2402:c4c0:1:10::2
;; Query time: 29 msec
;; SERVER: 127.0.0.53#53(127.0.0.53) (UDP)
;; WHEN: Fri Jul 04 07:57:01 IST 2025
;; MSG SIZE  rcvd: 286
vs mine
$ dig ns sahil.rocks
; <<>> DiG 9.18.36 <<>> ns sahil.rocks
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 64497
;; flags: qr rd ra; QUERY: 1, ANSWER: 11, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;sahil.rocks.			IN	NS
;; ANSWER SECTION:
sahil.rocks.		6385	IN	NS	ns5.he.net.
sahil.rocks.		6385	IN	NS	puck.nether.net.
sahil.rocks.		6385	IN	NS	colin.sahilister.net.
sahil.rocks.		6385	IN	NS	marvin.sahilister.net.
sahil.rocks.		6385	IN	NS	ns2.afraid.org.
sahil.rocks.		6385	IN	NS	ns4.he.net.
sahil.rocks.		6385	IN	NS	ns2.albony.in.
sahil.rocks.		6385	IN	NS	ns3.jing.rocks.
sahil.rocks.		6385	IN	NS	ns0.1984.is.
sahil.rocks.		6385	IN	NS	ns1.1984.is.
sahil.rocks.		6385	IN	NS	ns-global.kjsl.com.
;; Query time: 24 msec
;; SERVER: 127.0.0.53#53(127.0.0.53) (UDP)
;; WHEN: Fri Jul 04 07:57:20 IST 2025
;; MSG SIZE  rcvd: 313
  • Theoretically speaking, a small increase/decrease in resolution time would occur based on the chosen TLD and the popularity of the TLD in the query originator's area (already cached vs. fresh recursion).
  • One can get away with having only 3 NS (or be like Google and have 4 anycast NS or like Amazon and have 8 or like Verisign and make it 13 :P).
  • Nowhere is it written that your NS needs to be called dns* or ns1, ns2, etc. Get creative with naming your NS; be deceptive with the naming :D.
  • A good understanding of RR behavior can help engineer a good authoritative NS system.

Further reading

24 June 2025

Matthew Garrett: Why is there no consistent single signon API flow?

Single signon is a pretty vital part of modern enterprise security. You have users who need access to a bewildering array of services, and you want to be able to avoid the fallout of one of those services being compromised and your users having to change their passwords everywhere (because they're clearly going to be using the same password everywhere), or you want to be able to enforce some reasonable MFA policy without needing to configure it in 300 different places, or you want to be able to disable all user access in one place when someone leaves the company, or, well, all of the above. There's any number of providers for this, ranging from it being integrated with a more general app service platform (eg, Microsoft or Google) to a third party vendor (Okta, Ping, any number of bizarre companies). And, in general, they'll offer a straightforward mechanism to either issue OIDC tokens or manage SAML login flows, requiring users to present whatever set of authentication mechanisms you've configured.

This is largely optimised for web authentication, which doesn't seem like a huge deal - if I'm logging into Workday then being bounced to another site for auth seems entirely reasonable. The problem is when you're trying to gate access to a non-web app, at which point consistency in login flow is usually achieved by spawning a browser and somehow managing to submit the result back to the remote server. And this makes some degree of sense - browsers are where webauthn token support tends to live, and it also ensures the user always has the same experience.

But it works poorly for CLI-based setups. There are basically two options - you can use the device code authorisation flow, where you perform authentication on what is nominally a separate machine to the one requesting it (but in this case is actually the same) and as a result end up with a straightforward mechanism to have your users socially engineered into giving Johnny Badman a valid auth token despite webauthn nominally being unphishable (as described years ago), or you reduce that risk somewhat by spawning a local server and POSTing the token back to it - which works locally but doesn't work well if you're dealing with trying to auth on a remote device. The user experience for both scenarios sucks, and it reduces a bunch of the worthwhile security properties that modern MFA supposedly gives us.
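For reference, the device code flow mentioned above is standardised as the OAuth 2.0 device authorisation grant (RFC 8628). A rough sketch of its shape, with placeholder endpoints and client_id, since every provider differs in the details:
$ curl -s https://idp.example.com/oauth2/device/authorize -d client_id=my-cli -d scope=openid
{"device_code":"...","user_code":"ABCD-EFGH","verification_uri":"https://idp.example.com/activate","expires_in":600,"interval":5}
$ # the user opens verification_uri in a browser (nominally on another machine)
$ # and enters the code; the CLI polls the token endpoint until that happens
$ curl -s https://idp.example.com/oauth2/token \
      -d grant_type=urn:ietf:params:oauth:grant-type:device_code \
      -d device_code=... -d client_id=my-cli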

There's a third approach, which is in some ways the obviously good approach and in other ways is obviously a screaming nightmare. All the browser is doing is sending a bunch of requests to a remote service and handling the response locally. Why don't we just do the same? Okta, for instance, has an API for auth. We just need to submit the username and password to that and see what answer comes back. This is great until you enable any kind of MFA, at which point the additional authz step is something that's only supported via the browser. And basically everyone else is the same.

Of course, when we say "That's only supported via the browser", the browser is still just running some code of some form and we can figure out what it's doing and do the same. Which is how you end up scraping constants out of Javascript embedded in the API response in order to submit that data back in the appropriate way. This is all possible but it's incredibly annoying and fragile - the contract with the identity provider is that a browser is pointed at a URL, not that any of the internal implementation remains consistent.

I've done this. I've implemented code to scrape an identity provider's auth responses to extract the webauthn challenges and feed those to a local security token without using a browser. I've also written support for forwarding those challenges over the SSH agent protocol to make this work with remote systems that aren't running a GUI. This week I'm working on doing the same again, because every identity provider does all of this differently.

There's no fundamental reason all of this needs to be custom. It could be a straightforward "POST username and password, receive list of UUIDs describing MFA mechanisms, define how those MFA mechanisms work". That even gives space for custom auth factors (I'm looking at you, Okta Fastpass). But instead I'm left scraping JSON blobs out of Javascript and hoping nobody renames a field, even though I only care about extremely standard MFA mechanisms that shouldn't differ across different identity providers.

Someone, please, write a spec for this. Please don't make it be me.


11 June 2025

John Goerzen: I Learned We All Have Linux Seats, and I'm Not Entirely Pleased

I recently wrote about How to Use SSH with FIDO2/U2F Security Keys, which I now use on almost all of my machines. The last one that needed this was my Raspberry Pi hooked up to my DEC vt510 terminal and IBM mechanical keyboard. Yes I do still use that setup! To my surprise, generating a key on it failed. I very quickly saw that /dev/hidraw0 had incorrect permissions, accessible only to root. On other machines, it looks like this:
crw-rw----+ 1 root root 243, 16 May 24 16:47 /dev/hidraw16
And, if I run getfacl on it, I see:
# file: dev/hidraw16
# owner: root
# group: root
user::rw-
user:jgoerzen:rw-
group::---
mask::rw-
other::---
Yes, something was setting an ACL on it. Thus began the saga to figure out what was doing that. Firing up inotifywatch, I saw it was systemd-udevd or its udev-worker. But cranking up logging on that to maximum only showed me that uaccess was somehow doing this. I started digging. uaccess turned out to be almost entirely undocumented. People say to use it, but there's no description of what it does or how. Its purpose appears to be to grant access to devices to those logged in to a machine, by dynamically adding them to ACLs for devices. OK, that's a nice goal, but why was machine A doing this and not machine B? I dug some more. I came across a hint that uaccess may only do that for a "seat". A seat? I've not heard of that in Linux before. Turns out there's some information (older and newer) about this out there. Sure enough, on the machine with KDE, loginctl list-sessions shows me on seat0, but on the machine where I log in from ttyUSB0, it shows an empty seat. But how to make myself part of the seat? I tried various udev rules to add the seat or master-of-seat tags, but nothing made any difference. I finally gave up and did the old-fashioned rule to just make it work already:
TAG=="security-device",SUBSYSTEM=="hidraw",GROUP="mygroup"
I still don't know how to teach logind to add a seat for ttyUSB0, but oh well. At least I learned something. An annoying something, but hey. This all had a laudable goal, but when there are so many layers of indirection, poorly documented, with poor logging, it gets pretty annoying.
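For anyone poking at the same thing, a few commands for seeing what logind and uaccess are doing (a sketch, not from the original post; exact output varies by machine):
$ loginctl list-sessions                           # is your session on a seat at all?
$ loginctl show-session "$XDG_SESSION_ID" --property=Seat
$ loginctl seat-status seat0                       # devices logind has assigned to that seat
$ getfacl /dev/hidraw0                             # whether uaccess applied an ACL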

21 May 2025

Simon Quigley: Fences and Values

Don't knock the fence down before you know why it's up. I repeat this phrase over and over again, yet the (metaphorical) Homeowner's Association still decides my fence is the wrong color.

Well, now you get to know why the fence is up. If anyone's actually willing to challenge me on this level, I'd welcome it.

The four ideas I'd like to discuss are this: quantum physics, Lutheranism, mental resilience, and psychology. I've been studying these topics intensely for the past decade as a passion project. I'm just going to let my thoughts flow, but I'd like to hear other opinions on this.

Can the mysteries of the mind, the subatomic world, and faith converge to reveal deeper truths?

When it comes to self-taught knowledge on analysis, I'm mostly learned on Freud, with some hints of Jung and Peterson. I've read much of the original source material, and watched countless presentations on it. This all being said, I'm both learned on Rothbard and Marx, so if there is a major flaw in the ways of Freud that is frowned upon, I'd genuinely like to know so I can update my research and juxtapose the two schools of thought.

Alongside this, although probably not directly relevant, I'm learned on John Locke and transcendentalism. What I'd like to focus on here is this: the Id.

The Id is the pleasure-seeking, instinctual part of the psyche. Jung further extends this into the idea of the shadow self, and Peterson maps the meanings of these texts into a combined work (at least in my rudimentary understanding).

In my research, the Id represents the part of your psyche that deals with religious values. As an example, if you're an impulsive person, turning to a spiritual or religious outlet can be highly beneficial. I've been using references from the foundational text of the Judaeo-Christian value system this entire time; feel free to re-read my other blog posts (instead of claiming they don't exist).

Let's tie this into quantum physics. This is the part where I'll struggle most. I've watched several movies about this, read several books, and even learned about it academically, but quantum physics is likely to be my weak spot here.

I did some research, and here are the elements I'm looking for: uncertainty principle, wave-particle duality, quantum entanglement, and the observer effect.

I already know about the cat in the box. And the Cat in the Hat, for that matter. I know about wave-particle duality from an incredibly intelligent high school physics teacher of mine. I know about the uncertainty principle purely in a colloquial sense. The remaining element I need to wrap my head around is quantum entanglement, but it feels like I'm almost there.

These concepts do actually challenge the idea of pure free will. It's almost like we're coming full circle. Some theologians (including myself, if you can call me a self-taught one) do believe the idea of quantum indeterminacy can be a space where divine action may take place. You could also liken the unpredictable nature of the Id to quantum indeterminacy as well. These are ones to think about, because in all reality, they're subjective opinions. I do believe they're interconnected.

In terms of Lutheranism, I'll be short on this one. Please do go read the full history behind Martin Luther and his turbulent relationship with Catholicism. I'm not a Bible thumper, and I actually think this is the first time I've mentioned religion publicly at all.
This being said, now I'm actually ready to defend the points on an academic level.

The Id represents hidden psychological forces, quantum physics reveals subatomic mysteries, and Lutheranism emphasizes faith in the unseen God. Okay, so we have the baseline. Now, time for some mental resilience. When I think of mental resilience, the first people I think of are David Goggins and Jocko Willink. I've also enjoyed Dr. Andrew Huberman's podcast.

The idea there is simple: if you understand exactly how to learn, you know your fundamentals well enough to draw them and explain them vividly on a whiteboard, and you can make it a habit, at that point you're ready to work on your mental resilience. Little by little, gradually, how far can you push the bar towards the ceiling?

There are obviously limits. People sometimes get scared when I mention mental resilience, but obviously that's a bit of a catch-22. There are plenty of satirical videos out there, and of course, I don't believe in Goggins or Jocko wholeheartedly. They're just tools in the toolbox when times get tough.

I wish you all well, and I hope this gets you thinking about those people who just insist there is no God or higher being, and think you're stupid for believing there is one. Those people obviously haven't read analysis, in my own opinion.

Have a great night!

17 May 2025

John Goerzen: How to Use SSH with FIDO2/U2F Security Keys

For many years now, I've been using an old YubiKey along with the free tier of Duo Security to add a second factor to my SSH logins. This is klunky, and has a number of drawbacks (dependency on a cloud service and Internet among them). I decided it was time to upgrade, so I recently bought a couple of YubiKey 5 series security keys. These support FIDO2/U2F, which makes it so much easier to integrate with ssh. But in researching how to do this, I found a lot of pages online with poor instructions. Either they didn't explain what was going on very well, or suggested what I came to learn were insecure practices, or most often both. It turns out this whole process is quite easy. But I wanted to understand how it worked. So, I figured it out, set it up myself, and then put up a new, comprehensive page on my website: https://www.complete.org/easily-using-ssh-with-fido2-u2f-hardware-security-keys/. I hope it helps!
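For the impatient, the core of it looks roughly like this (a sketch only; the linked page has the details, caveats, and server-side configuration; the key path and host below are placeholders):
$ # generate a key pair whose private half can only be used with the token present
$ ssh-keygen -t ed25519-sk -f ~/.ssh/id_ed25519_sk
$ # install the public half on the server as usual
$ ssh-copy-id -i ~/.ssh/id_ed25519_sk.pub user@example.com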

14 May 2025

Jonathan Dowland: Orbital

Orbital at NX, Newcastle in 2023
I'm on a bit of an Orbital kick at the moment. Last year they re-issued their 1991 debut album with 43 extra tracks. Later this month they're doing the same for their 1993 sophomore album. I thought I'd try to narrow down some tracks to recommend. I seem to have settled on roughly 5 in previous posts (for Underworld, The Cure, Coil and Gazelle Twin). This time I've done 6 (I borrowed one from Underworld). As always it's a hard choice. I've tried to select some tracks I really enjoy that don't often come up on best-of compilation albums. For a more conventional choice of best-of tracks, I recommend the recent-ish 30 something "compilation" (of sorts, previously written about).
  1. The Naked and the Dead (1992) From an early EP, Radiccio, which is being re-issued this month. Digital versions of the re-issue will feature a new recording, "Deepest", featuring Tilda Swinton. Sadly this isn't making it onto the pressed version. She performed with them live at Glastonbury 2024. That entire performance was a real pick-me-up during my convalescence, and is recommended. Anyway, I've now written more about a song I haven't recommended than the one I did.
  2. Remind (1993) From the Brown Album. I first heard this as the encore from their "final show", for John Peel, when they split up in 2004. "Remind" wasn't broadcast, but an audience recording was circulated on fan site Loopz. Remarkably, 21 years on, it's still there. In writing this I discovered that it's a re-working of a remix Orbital did for Meat Beat Manifesto: MindStream (Mind The Bend The Mind).
  3. You Lot (2004) From the unfairly-maligned "final" Blue album. Featuring a sample of pre-Doctor Who Christopher Eccleston, from another Russell T Davies production, Second Coming.
  4. Beached (2000) Co-written by Angelo Badalamenti, it's built around a sample of Badalamenti's score for the movie "The Beach". Orbital's re-work adds some grit to the orchestral instrumentation and opens with a monologue, delivered by Leonardo DiCaprio, sampled from the movie.
  5. Spare Parts Express (1999) Critics had started to be quite unfair to Orbital by this point. The band themselves said that they'd run out of ideas (pointing at album closer "Style", built around a Stylophone melody, as proof). Their malaise continued right up to the Blue Album, at which point they split up; ostensibly for good, before regrouping 8 years later. Spare Parts Express is a hatchet job of various bits that they didn't develop into full songs on their own. Despite this I think it works. I love long-form electronica, and this clocks in at 10:07. My favourite segment (06:37) is adjacent to a reference (05:05) to John Baker's theme for the BBC children's program Newsround (sadly they aren't using it today; here's a rundown of Newsround themes over time).
  6. Attached (1994) This originally debuted on a Peel session before appearing on the subsequent album Snivilisation a few months later. An album closer, and a good come-down song to close this list.

26 April 2025

John Goerzen: Memoirs of the Early Internet

The Internet is an amazing place, and occasionally you can find things on the web that have somehow lingered online for decades longer than you might expect. Today I'll take you on a tour of some parts of the early Internet. The Internet, of course, is a network of networks, and part of its early (and continuing) promise was to provide a common protocol that all sorts of networks can use to interoperate with each other. In the early days, UUCP was one of the main ways universities linked with each other, and eventually UUCP and the Internet sort of merged (but that's a long story). Let's start with some Usenet maps, which were an early way to document the UUCP modem links between universities. Start with this PDF. The first page is a Usenet map (which at the time mostly flowed over UUCP) from April of 1981. Notice that ucbvax, a VAX system at Berkeley, was central to the map. ucbvax continued to be a central node for UUCP for more than a decade; on page 5 of that PDF, you'll see that it asks for a "Path from a major node (eg, ucbvax, decvax, harpo, duke)". Pre-Internet email addresses used a path; eg, mark@ucbvax was duke!decvax!ucbvax!mark to someone. You had to specify the route from your system to the recipient on your email To line. If you gave out your email address on a business card, you would start it from a major node like ucbvax, and the assumption was that everyone would know how to get from their system to the major node. On August 19, 1994, ucbvax was finally turned off. TCP/IP had driven UUCP into more obscurity; by then, it was mostly used by people without a dedicated Internet connection to get on the Internet, rather than as an entire communication network of its own. A few days later, Cliff Frost posted a memoir of ucbvax; an obscure bit of Internet lore that is fun to read. UUCP was ad-hoc, and by 1984 there was an effort to make a machine-parsable map to help automate routing on UUCP. This was called the pathalias project, and there was a paper about it. The Linux network administration guide even includes a section on pathalias. Because UUCP mainly flowed over phone lines, long distance fees made it quite expensive. In 1985, the Stargate Project was formed, with the idea of distributing Usenet by satellite. The satellite link was short-lived, but the effort eventually morphed into UUNET. It was initially a non-profit, but eventually became a commercial backbone provider, and later an ISP. Over a long series of acquisitions, UUNET is now part of Verizon. An article in ;login: is another description of this history. IAPS has an Internet in 1990 article, which includes both pathalias data and an interesting map of domain names to UUCP paths. As I was pondering what interesting things a person could do with NNCPNET Internet email, I stumbled across a page on getting FTP files via e-mail. Yes, that used to be a thing! I remember ftpmail@decwrl.dec.com. It turns out that page is from a copy of EFF's (Extended) Guide to the Internet from 1994. Wow, what a treasure! It has entries such as A Slice of Life in my Virtual Community, libraries with telnet access, Gopher, A Statement of Principle by Bruce Sterling, and I could go on. You can also get it as a PDF from the Internet Archive. UUCP is still included with modern Linux and BSD distributions. It was part of how I experienced the PC and Internet revolution in rural America. It lacks modern security, but NNCP is to UUCP what ssh is to telnet.

John Goerzen: NNCPNET Can Optionally Exchange Internet Email

A few days ago, I announced NNCPNET, the email network based atop NNCP. NNCPNET lets anyone run a real mail server on a network that supports all sorts of topologies for transport, from the Internet to USB drives. And verification is done at the NNCP protocol level, so a whole host of Internet email bolt-ons (SPF, DMARC, DKIM, etc.) are unnecessary. Shortly after announcing NNCPNET, I added an Internet bridge. This lets you get your own DOMAIN.nncpnet.org domain, and from there route email to and from the Internet using a gateway node. Simple, effective, and a way to get real email to and from your laptop or Raspberry Pi without having to have a static IP, SPF, DMARC, DKIM, etc. It's a volunteer-run, free service. Give it a try!

10 April 2025

John Goerzen: Announcing the NNCPNET Email Network

From 1995 to 2019, I ran my own mail server. It began with a UUCP link, an expensive long-distance call for me then. Later, I ran a mail server in my apartment, then ran it as a VPS at various places. But running an email server got difficult. You can't just run it on a residential IP. Now there's SPF, DKIM, DMARC, and TLS to worry about. I recently reviewed mail hosting services, and don't get me wrong: I still use one, and probably will, because things like email from my bank are critical. But we've lost the ability to tinker, to experiment, to have fun with email. Not anymore. NNCPNET is an email system that runs atop NNCP. I've written a lot about NNCP, including a less-ambitious article about point-to-point email over NNCP 5 years ago. NNCP is to UUCP what ssh is to telnet: a modernization, with modern security and features. NNCP is an asynchronous, onion-routed, store-and-forward network. It can use as a transport anything from the Internet to a USB stick. NNCPNET is a set of standards, scripts, and tools to facilitate a broader email network using NNCP as the transport. You can read more about NNCPNET on its wiki! The easy mode is to use the Docker container (multi-arch, so you can use it on your Raspberry Pi) I provide. It is open to all. The homepage has a more extensive list of features. I even have mailing lists running on NNCPNET; see the interesting addresses page for more details. There is extensive documentation, and of course the source to the whole thing is available. The gateway to Internet SMTP mail is off by default, but can easily be enabled for any node. It is a full participant, in both directions, with SPF, DKIM, DMARC, and TLS. You don't need any inbound ports for any of this. You don't need an always-on Internet connection. You don't even need an Internet connection at all. You can run it from your laptop and still use Thunderbird to talk to it via its optional built-in IMAP server.

8 April 2025

Petter Reinholdtsen: Some notes on Linux LUKS cracking

A few months ago, I found myself in the unfortunate position that I had to try to recover the password used to encrypt a Linux hard drive. Tonight a few friends of mine asked for details on this effort. I guess it is a good idea to expose the recipe I found to a wider audience, so here are a few relevant links and key findings. I've forgotten a lot, so part of this is taken from memory. I found a good recipe in a blog post written in 2019 by diverto, titled Cracking LUKS/dm-crypt passphrases. I tried both the John the Ripper approach, where it generated password candidates and passed them to cryptsetup, and the luks2jack.py approach (which did not work for me, if I remember correctly), but believe I had the most success with the hashcat approach. I had it running for several days on my Thinkpad X230 laptop from 2012. I do not remember the exact hash rate, but when I tested it again just now on the same machine by running "hashcat -a 0 hashcat.luks longlist --force", I got a hash rate of 7 per second. Testing it on a newer machine with a 32 core AMD CPU, I got a hash rate of 289 per second. Using the ROCM OpenCL approach on the same machine I managed to get a hash rate of 2821 per second.
Session..........: hashcat                                
Status...........: Quit
Hash.Mode........: 14600 (LUKS v1 (legacy))
Hash.Target......: hashcat.luks
Time.Started.....: Tue Apr  8 23:06:08 2025 (1 min, 10 secs)
Time.Estimated...: Tue Apr  8 23:12:49 2025 (5 mins, 31 secs)
Kernel.Feature...: Pure Kernel
Guess.Base.......: File (/usr/share/dict/bokmål)
Guess.Queue......: 1/1 (100.00%)
Speed.#1.........:     2821 H/s (8.18ms) @ Accel:128 Loops:128 Thr:32 Vec:1
Recovered........: 0/1 (0.00%) Digests (total), 0/1 (0.00%) Digests (new)
Progress.........: 0/935405 (0.00%)
Rejected.........: 0/0 (0.00%)
Restore.Point....: 0/935405 (0.00%)
Restore.Sub.#1...: Salt:0 Amplifier:0-1 Iteration:972928-973056
Candidate.Engine.: Device Generator
Candidates.#1....: A-aksje -> fiskebil
Hardware.Mon.#1..: Temp: 73c Fan: 77% Util: 99% Core:2625MHz Mem: 456MHz Bus:16
Note that for this last test I picked the largest word list I had on my machine (dict/bokmål) as a fairly random word list, and not because it is useful for cracking my particular use case from a few months ago. As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
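For reference, a minimal sketch of the candidate-generation-plus-cryptsetup approach mentioned above (device and word list names are placeholders; it is slow, but needs nothing beyond john and cryptsetup):
$ # confirm the device is LUKS (hashcat mode 14600 only handles LUKS v1)
$ cryptsetup luksDump /dev/sdX2
$ # let john generate candidates and test each one against the LUKS header
$ john --wordlist=wordlist.txt --rules --stdout | while read -r pw; do
      printf '%s' "$pw" | cryptsetup open --test-passphrase /dev/sdX2 --key-file=- \
        && { echo "found: $pw"; break; }
  done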

4 April 2025

Guido Günther: Booting an Android custom kernel on a Pixel 3a for QMI debugging

As you might know I'm not much of an Android user (let alone developer), but in order to figure out how something low level works you sometimes need to peek at how vendor kernels handle this. For that it is often useful to add additional debugging. One such case is QMI communication going on in Qualcomm SOCs. Joel Selvaraj wrote some nice tooling for this. To make use of this a rooted device and a small kernel patch are needed, and what would be a no-brainer with Linux Mobile took me a moment to get working on Android. Here are the steps I took on a Pixel 3a to first root the device via Magisk, then build the patched kernel and put that into a boot.img to boot it.

Flashing the factory image
If you still have Android on the device you can skip this step. You can get Android 12 from developers.google.com. I've downloaded sargo-sp2a.220505.008-factory-071e368a.zip. Then put the device into Fastboot mode (Power + Vol-Down), connect it to your PC via USB, unzip/unpack the archive and reflash the phone:
unpack sargo-sp2a.220505.008-factory-071e368a.zip
./flash-all.sh
This wipes your device! I had to run it twice since it would time out on the first run. Note that this unpacked zip contains another zip (image-sargo-sp2a.220505.008.zip) which will become useful below.

Enabling USB debugging
Now boot Android and enable Developer mode by going to Settings → About, then touching Build Number (at the very bottom) 7 times. Go back one level, then go to System → Developer Options and enable "USB Debugging".

Obtaining boot.img
There are several ways to get boot.img. If you just flashed Android above then you can fetch boot.img from the already mentioned image-sargo-sp2a.220505.008.zip:
unzip image-sargo-sp2a.220505.008.zip boot.img
If you want to fetch the exact boot.img from your device you can use TWRP (see the very end of this post).

Becoming root with Magisk
Being able to su via adb will later be useful to fetch kernel logs. For that we first download Magisk as an APK. At the time of writing v28.1 is current. Once downloaded we upload the APK and the boot.img from the previous step onto the phone (which needs to have Android booted):
adb push Magisk-v28.1.apk /sdcard/Download
adb push boot.img /sdcard/Download
In Android open the Files app, navigate to /sdcard/Download and install the Magisk APK by opening the APK. We now want to patch boot.img to get su via adb to work (so we can run dmesg). This happens by hitting Install in the Magisk app, then "Select a file to patch". You then select the boot.img we just uploaded. The installation process will create a magisk_patched-<random>.img in /sdcard/Download. We can pull that file via adb back to our PC:
adb pull /sdcard/Download/magisk_patched-28100_3ucVs.img
Then reboot the phone into fastboot (adb reboot bootloader) and flash it (this is optional see below):
fastboot flash boot magisk_patched-28100_3ucVs.img
Now boot the phone again, open the Magisk app, go to SuperUser at the bottom and enable Shell. If you now connect to your phone via adb again, su should work:
adb shell
su
As noted above if you want to keep your Android installation pristine you don't even need to flash this Magisk enabled boot.img. I've flashed it so I have su access for other operations too. If you don't want to flash it you can still test boot it via:
fastboot boot magisk_patched-28100_3ucVs.img
and then perform the same adb shell su check as above.

Building the custom kernel
For our QMI debugging to work we need to patch the kernel a bit and place that in boot.img too. So let's build the kernel first. For that we install the necessary tools (which are thankfully packaged in Debian) and fetch the Android kernel sources:
sudo apt install repo android-platform-tools-base kmod ccache build-essential mkbootimg
mkdir aosp-kernel && cd aosp-kernel
repo init -u https://android.googlesource.com/kernel/manifest -b android-msm-bonito-4.9-android12L
repo sync
With that we can apply Joel's kernel patches and also compile in the touch controller driver so we don't need to worry if the modules in the initramfs match the kernel. The kernel sources are in private/msm-google. I've just applied the diffs on top with patch and modified the defconfig and committed the changes. The resulting tree is here. We then build the kernel:
PATH=/usr/sbin:$PATH ./build_bonito.sh
The resulting kernel is at ./out/android-msm-pixel-4.9/private/msm-google/arch/arm64/boot/Image.lz4-dtb. In order to boot that kernel I found it simplest to just replace the kernel in the Magisk patched boot.img, as we have that already. In case you have already deleted that for any reason we can always fetch the current boot.img from the phone via TWRP (see below).

Preparing a new boot.img
To replace the kernel in our Magisk enabled magisk_patched-28100_3ucVs.img from above with the just built kernel we can use mkbootimg for that. I basically copied the steps we're using when building the boot.img on the Linux Mobile side:
ARGS=$(unpack_bootimg --format mkbootimg --out tmp --boot_img magisk_patched-28100_3ucVs.img)
CLEAN_PARAMS="$(echo "${ARGS}" | sed -e "s/ --cmdline '.*'//" -e "s/ --board '.*'//")"
cp android-kernel/out/android-msm-pixel-4.9/private/msm-google/arch/arm64/boot/Image.lz4-dtb tmp/kernel
mkbootimg -o "boot.patched.img" ${CLEAN_PARAMS} --cmdline "${ARGS}"
This will give you a boot.patched.img with the just built kernel.

Boot the new kernel via fastboot
We can now boot the new boot.patched.img. There's no need to flash it onto the device for that:
fastboot boot boot.patched.img
Fetching the kernel logs
With that we can fetch the kernel logs with the debug output via adb:
adb shell su -c 'dmesg -t' > dmesg_dump.xml
or already filtering out the QMI commands:
adb shell su -c 'dmesg -t' | grep "@QMI@" | sed -e "s/@QMI@//g" &> sargo_qmi_dump.xml
That's it. You can apply this method for testing out other kernel patches as well. If you want to apply the above to other devices you basically need to make sure you patch the right kernel sources; the other steps should be very similar. In case you just need a rooted boot.img for sargo you can find a patched one here. If this procedure can be improved / streamlined somehow please let me know.

Appendix: Fetching boot.img from the phone
If, for some reason, you lost boot.img somewhere on the way you can always use TWRP to fetch the boot.img currently in use on your phone. First get TWRP for the Pixel 3a. You can boot that directly by putting your device into fastboot mode, then running:
fastboot boot twrp-3.7.1_12-1-sargo.img
Within TWRP select Backup → Boot and back up the file. You can then use adb shell to locate the backup in /sdcard/TWRP/BACKUPS/ and pull it:
adb pull /sdcard/TWRP/BACKUPS/97GAY10PWS/2025-04-02--09-24-24_SP2A220505008/boot.emmc.win
You now have the device's boot.img on your PC and can e.g. replace the kernel or make modifications to the initramfs.

31 March 2025

Dirk Eddelbuettel: Rblpapi 0.3.16 on CRAN: Several Refinements

Version 0.3.16 of the Rblpapi package arrived on CRAN today. Rblpapi provides a direct interface between R and the Bloomberg Terminal via the C++ API provided by Bloomberg (but note that a valid Bloomberg license and installation is required). This is the sixteenth release since the package first appeared on CRAN in 2016. It contains several enhancements. Two contributed PRs improve an error message and extend connection options. We cleaned up a bit of internal code. And this release also makes the build conditional on having a valid build environment. This has been driven by the fact that CRAN continues to build under macOS 13 for x86_64, but Bloomberg no longer supplies a library and headers. And our repeated requests to be able to opt out of the build were, well, roundly ignored. So now the builds will succeed, but on unviable platforms such as that one we will only offer empty functions. But no more build ERRORS yelling at us for three configurations. The detailed list of changes follows below.

Changes in Rblpapi version 0.3.16 (2025-03-31)
  • A quota error message is now improved (Rodolphe Duge in #400)
  • Convert remaining throw into Rcpp::stop (Dirk in #402 fixing #401)
  • Add optional appIdentityKey argument to blpConnect (Kai Lin in #404)
  • Rework build as function of Blp library availability (Dirk and John in #406, #409, #410 fixing #407, #408)

Courtesy of my CRANberries, there is also a diffstat report for this release. As always, more detailed information is at the Rblpapi repo or the Rblpapi page. Questions, comments etc should go to the issue tickets system at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Simon Josefsson: On Binary Distribution Rebuilds

I rebuilt (the top-50 popcon) Debian and Ubuntu packages, on amd64 and arm64, and compared the results a couple of months ago. Since then the Reproduce.Debian.net effort has been launched. Unlike my small experiment, that effort is a full-scale rebuild with more architectures. Their goal is to reproduce what is published in the Debian archive. One difference between these two approaches is the build inputs: the Reproduce.Debian.net effort uses the same build inputs which were used to build the published packages, while I'm using the latest version of published packages for the rebuild. What does that difference imply? I believe reproduce.debian.net will be able to reproduce more of the packages in the archive. If you build a C program using one version of GCC you will get some binary output; and if you use a later GCC version you are likely to end up with a different binary output. This is a good thing: we want GCC to evolve and produce better output over time. However it means that in order to reproduce the binaries we publish and use, we need to rebuild them using whatever build dependencies were used to prepare those binaries. The conclusion is that we need to use the old GCC to rebuild the program, and this appears to be the Reproduce.Debian.net approach. It would be a huge success if the Reproduce.Debian.net effort were to reach 100% reproducibility, and this seems to be within reach. However I argue that we need to go further than that. Being able to rebuild the packages reproducibly using older binary packages only begs the question: can we rebuild those older packages? I fear attempting to do so ultimately leads to a need to rebuild 20+ year old packages, with a non-negligible amount of them being illegal to distribute or unable to build anymore due to bit-rot. We won't solve the Trusting Trust concern if our rebuild effort assumes some initial binary blob that we can no longer build from source code. I've made an illustration of the effort I'm thinking of, to reach something that is stronger than reproducible rebuilds. I am calling this concept an Idempotent Rebuild, which is an old concept that I believe is the same as what John Gilmore described many years ago.
The illustration shows how the Debian main archive is used as input to rebuild another stage #0 archive. This stage #0 archive can be compared with diffoscope to the main archive, and all differences are things that would be nice to resolve. The packages in the stage #0 archive are used to prepare a new container image with build tools, and the stage #0 archive is used as input to rebuild another version of itself, called the stage #1 archive. The differences between stage #0 and stage #1 are also useful to analyse and resolve. This process can be repeated many times. I believe it would be a useful property if this process terminated at some point, where the stage #N archive was identical to the stage #N-1 archive. If that were to happen, I would label the output archive an Idempotent Rebuild of the distribution. How big is N today? The simplest assumption is that it is infinity. Any build timestamp embedded into binary packages will change on every iteration. This will cause the process to never terminate. Fixing embedded timestamps is something that the Reproduce.Debian.net effort will also run into, and will have to resolve. What other causes for differences could there be? It is easy to see that generally if some output is not deterministic, such as the sort order of assembler object code in binaries, then the output will be different. Trivial instances of this problem will be caught by the reproduce.debian.net effort as well. Could there be higher order chains that lead to infinite N? It is easy to imagine the existence of these, but I don't know what they would look like in practice. An ideal would be if we could get down to N=1. Is that technically possible? Compare building GCC: it performs an initial stage 0 build using the system compiler to produce a stage 1 intermediate, which is used to build itself again to stage 2. Stages 1 and 2 are compared, and on success (identical binaries), the compilation succeeds. Here N=2. But this is performed using some unknown system compiler that is normally different from the GCC version being built. When rebuilding a binary distribution, you start with the same source versions. So it seems N=1 could be possible. I'm unhappy to not be able to report any further technical progress now. The next step in this effort is to publish the stage #0 build artifacts in a repository, so they can be used to build stage #1. I already showed that stage #0 was around ~30% reproducible compared to the official binaries, but I didn't save the artifacts in a reusable repository. Since the official binaries were not built using the latest versions, it is to be expected that the reproducibility number is low. But what happens at stage #1? The percentage should go up: we now compare the rebuilds with an earlier rebuild, using the same build inputs. I'm eager to see this materialize, and hope to eventually make progress on this. However, to build stage #1 I believe I need to rebuild a much larger number of packages in stage #0; it could be roughly similar to the build-essentials-depends package set. I believe the ultimate end goal of Idempotent Rebuilds is to be able to re-bootstrap a binary distribution like Debian from some other bootstrappable environment like Guix. In parallel to working on achieving the 100% Idempotent Rebuild of Debian, we can set up a Guix environment that builds Debian packages using Guix binaries. These builds ought to eventually converge to the same Debian binary packages, or there is something deeply problematic happening.
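To make the loop being described concrete, here is a compressed sketch of the iteration; rebuild-archive and same-archive below are hypothetical placeholders for whatever rebuild tooling and archive comparison is used, while diffoscope is the real tool mentioned above:
prev=debian-main
for n in 0 1 2 3; do
  rebuild-archive --input "$prev" --output "stage-$n"      # rebuild everything using the previous stage as build input
  diffoscope "$prev" "stage-$n" > "stage-$n.diff" || true  # differences to analyse and resolve
  same-archive "$prev" "stage-$n" && break                 # archives identical: the rebuild is idempotent at this N
  prev="stage-$n"
done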
This approach to re-bootstrapping a binary distribution like Debian seems simpler than rebuilding all binaries going back to the beginning of time for that distribution. What do you think? PS. I fear that Debian main may have already gone into a state where it is not able to rebuild itself at all anymore: the presence and assumption of non-free firmware and non-Debian signed binaries may have already corrupted the ability for Debian main to rebuild itself. To be able to complete the idempotent and bootstrapped rebuild of Debian, this needs to be worked out.

28 March 2025

John Goerzen: Why You Should (Still) Use Signal As Much As Possible

As I write this in March 2025, there is a lot of confusion about Signal messenger due to the recent news of people using Signal in government, and subsequent leaks. The short version is: there was no problem with Signal here. People were using it because they understood it to be secure, not the other way around. Both the government and the Electronic Frontier Foundation recommend people use Signal. This is an unusual alliance, and in the case of the government, was prompted because it understood other countries had a persistent attack against American telephone companies and SMS traffic. So let's dive in. I'll cover some basics of what security is, what happened in this situation, and why Signal is a good idea. This post isn't for programmers that work with cryptography every day. Rather, I hope it can make some of these concepts accessible to everyone else.

What makes communications secure?
When most people are talking about secure communications, they mean some combination of these properties:
  1. Privacy - nobody except the intended recipient can decode a message.
  2. Authentication - guarantees that the person you are chatting with really is the intended recipient.
  3. Ephemerality - preventing a record of the communication from being stored. That is, making it more like a conversation around the table than a written email.
  4. Anonymity - keeping your set of contacts to yourself and even obfuscating the fact that communications are occurring.
If you think about it, most people care the most about the first two. In fact, authentication is a key part of privacy. There is an attack known as "man in the middle" in which somebody pretends to be the intended recipient. The interceptor reads the messages, and then passes them on to the real intended recipient. So we can't really have privacy without authentication. I'll have more to say about these later. For now, let's discuss attack scenarios.

What compromises security?
There are a number of ways that security can be compromised. Let's think through some of them:

Communications infrastructure snooping
Let's say you used no encryption at all, and connected to public WiFi in a coffee shop to send your message. Who all could potentially see it?
  • The owner of the coffee shop's WiFi
  • The coffee shop's Internet provider
  • The recipient's Internet provider
  • Any Internet providers along the network between the sender and the recipient
  • Any government or institution that can compel any of the above to hand over copies of the traffic
  • Any hackers that compromise any of the above systems
Back in the early days of the Internet, most traffic had no encryption. People were careful about putting their credit cards into webpages and emails because they knew it was easy to intercept them. We have been on a decades-long evolution towards more pervasive encryption, which is a good thing. Text messages (SMS) follow a similar path to the above scenario, and are unencrypted. We know that all of the above are ways people's texts can be compromised; for instance, governments can issue search warrants to obtain copies of texts, and China is believed to have a persistent hack into western telcos. SMS fails all four of our attributes of secure communication above (privacy, authentication, ephemerality, and anonymity). Also, think about what information is collected from SMS and by whom. Texts you send could be retained in your phone, the recipient's phone, your phone company, their phone company, and so forth. They might also live in cloud backups of your devices. You only have control over your own phone's retention. So defenses against this involve things like:
  • Strong end-to-end encryption, so no intermediate party (even the people that make the app) can snoop on it.
  • Using strong authentication of your peers
  • Taking steps to prevent even app developers from being able to see your contact list or communication history
You may see some other apps saying they use strong encryption or use the Signal protocol. But while they may do that for some or all of your message content, they may still upload your contact list, history, location, etc. to a central location where it is still vulnerable to these kinds of attacks. When you think about anonymity, think about it like this: if you send a letter to a friend every week, every postal carrier that transports it (even if they never open it or attempt to peek inside) will be able to read the envelope and know that you communicate on a certain schedule with that friend. The same can be said of SMS, email, or most encrypted chat operators. Signal's design prevents it from retaining even this information, though nation-states or ISPs might still be able to notice patterns (every time you send something via Signal, your contact receives something from Signal a few milliseconds later). It is very difficult to provide perfect anonymity from well-funded adversaries, even if you can provide very good privacy.

Device compromise
Let's say you use an app with strong end-to-end encryption. This takes away some of the easiest ways someone could get to your messages. But it doesn't take away all of them. What if somebody stole your phone? Perhaps the phone has a password, but if an attacker pulled out the storage unit, could they access your messages without a password? Or maybe they somehow trick or compel you into revealing your password. Now what? An even simpler attack doesn't require them to steal your device at all. All they need is a few minutes with it to steal your SIM card. Now they can receive any texts sent to your number - whether from your bank or your friend. Yikes, right? Signal stores your data in an encrypted form on your device. It can protect it in various ways. One of the most important protections is ephemerality - it can automatically delete your old texts. A text that is securely erased can never fall into the wrong hands if the device is compromised later. An actively-compromised phone, though, could still give up secrets. For instance, what if a malicious keyboard app sent every keypress to an adversary? Signal is only as secure as the phone it runs on - but still, it protects against a wide variety of attacks.

Untrustworthy communication partner Perhaps you are sending sensitive information to a contact, but that person doesn't want to keep it in confidence. There is very little you can do about that technologically; with pretty much any tool out there, nothing stops them from taking a picture of your messages and handing the picture off.

Environmental compromise Perhaps your device is secure, but a hidden camera still captures what's on your screen. You can take some steps against things like this, of course.

Human error Sometimes humans make mistakes. For instance, the reason a reporter recently got copies of messages was that a participant in a group chat accidentally added him (presumably that participant meant to add someone else and just selected the wrong name). Phishing attacks can trick people into revealing passwords or other sensitive data. Humans are, quite often, the weakest link in the chain.

Protecting yourself So how can you protect yourself against these attacks? Let's consider:
  • Use a secure app like Signal that uses strong end-to-end encryption, where even the provider can't access your messages
  • Keep your software and phone up-to-date
  • Be careful about phishing attacks and who you add to chat rooms
  • Be aware of your surroundings; don't send sensitive messages where people might be looking over your shoulder, with their eyes or cameras
There are other methods besides Signal. For instance, you could install GnuPG (GPG) on a laptop that has no WiFi card or any other way to connect it to the Internet. You could always type your messages on that laptop, encrypt them, copy the encrypted text to a floppy disk (or USB device), take that USB drive to your Internet computer, and send the encrypted message by email or something. It would be exceptionally difficult to break the privacy of messages in that case (though anonymity would be mostly lost). Even if someone got the password to your secure laptop, it wouldn't do them any good unless they physically broke into your house or something. In some ways, it is probably safer than Signal. (For more on this, see my article How gapped is your air?) But that approach is hard to use. Many people aren't familiar with GnuPG. You don't have the convenience of sending a quick text message from anywhere. Security that is hard to use most often simply isn't used. That is, you and your friends will probably just revert to using insecure SMS instead of this GnuPG approach, because SMS is so much easier. Signal strikes a unique balance of providing very good security while also being practical, easy, and useful. For most people, it is the most secure option available. Signal is also open source; you don't have to trust that it is as secure as it says, because you can inspect it for yourself. Also, while it's not federated, I previously addressed that.
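For the curious, here is a minimal sketch of that air-gapped workflow, assuming GnuPG is installed on the offline laptop and your friend's public key has already been imported there; the email address, filename, and USB mount point are illustrative placeholders, not anything prescribed by a particular tool.

$ # on the offline laptop: encrypt the message for your friend's key
$ gpg --armor --encrypt --recipient friend@example.org message.txt
$ # copy the ASCII-armored result to the USB stick
$ cp message.txt.asc /media/usb/
$ # then, on the Internet-connected machine, paste or attach
$ # message.txt.asc into an email; only the private-key holder can decrypt it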

Government use If you are a government, particularly one that is highly consequential to the world, you can imagine that you are a huge target. Other nations are likely spending billions of dollars to compromise your communications. Signal itself might be secure, but if some other government can add spyware to your phones, or conduct a successful phishing attack, you can still have your communications compromised. I have no direct knowledge, but I think it is generally understood that the US government maintains communications networks that are entirely separate from the Internet and can only be accessed from secure physical locations and secure rooms. These can be even more secure than the average person using Signal because they can protect against things like environmental compromise, human error, and so forth. The scandal in March of 2025 happened because government employees were using Signal rather than official government tools for sensitive information, had taken advantage of Signal's ephemerality (even though laws require records to be kept), and through apparent human error had directly shared this information with a reporter. Presumably a reporter would have lacked access to the restricted communications networks in the first place, so that wouldn't have been possible. This doesn't mean that Signal is bad. It just means that somebody that can spend billions of dollars on security can be more secure than you. Signal is still a great tool for people, and in many cases defeats even those that can spend lots of dollars trying to defeat it. And remember: to use those restricted networks, you have to go to specific rooms in specific buildings. They are still not as convenient as what you carry around in your pocket.

Conclusion Signal is practical security. Do you want phone companies reading your messages? How about Facebook or X? Have those companies demonstrated that they are completely trustworthy throughout their entire history? I say no. So, go install Signal. It's the best, most practical tool we have.
This post is also available on my website, where it may be periodically updated.

31 January 2025

Russell Coker: Links January 2025

  • Aaron Quigley's Everything Open lecture about Intelligent Interfaces is one of the most interesting research reports I've seen in a long time [1]. This one can be understood and appreciated by people who don't have a strong background in computer science.
  • Statites (satellites that don't orbit the sun but use solar sails to hover in place) could be used to catch up to interstellar objects [2].
  • Slashgear has an interesting article about an AI-piloted F16 beating a human-piloted F16 [3]. Given the serious handicaps of flying a plane designed for humans and flying to minimise risk to itself and other crewed aircraft, this is a serious victory. Hopefully crewed military aircraft will be obsolete soon.
  • Amusing video about the performance of cats with MMORPG-style descriptions [4].
  • John Goerzen wrote an interesting blog post about censorship and the changes to Facebook [5].
  • Ron Garret wrote an interesting blog post 15 years ago when going through what he now describes as an existential crisis [6]. A comment on Ron's post references Alan Crowe's blog post about whether the self exists, which is an interesting philosophical post [7]. But I'm still going to think of myself as a person. Another comment on Ron's post references Aaron Swartz's blog post about Noam Chomsky etc [8]. I have to watch Manufacturing Consent: Noam Chomsky and the Media.
  • Ron Garret wrote an interesting blog post about his failed attempts to start a company and how it all worked out well for him anyway [9].
  • Amusing video about a failed crowdfunded e-bike [10].
  • Cory Doctorow wrote an insightful article about how Enshittification is not caused by VCs but by lack of controls [11].

8 January 2025

John Goerzen: Censorship Is Complicated: What Internet History Says about Meta/Facebook

In light of this week's announcement by Meta (Facebook, Instagram, Threads, etc), I have been pondering this question: Why am I, a person that has long been a staunch advocate of free speech and encryption, leery of sites that talk about being "free speech-oriented"? And, more to the point, why am I, a person that has been censored by Facebook for mentioning the Open Source social network Mastodon, not cheering a "lighter touch"? The answers are complicated, and take me back to the early days of social networking. Yes, I mean the 1980s and 1990s.

Before digital communications, there were barriers to reaching a lot of people. Especially money. This led to a sort of self-censorship: it may be legal to write certain things, but would a newspaper publish a letter to the editor containing expletives? Probably not. As digital communications started to happen, suddenly people could have their own communities. Not just free from the same kinds of monetary pressures, but free from outside oversight (parents, teachers, peers, community, etc.) When you have a community that the majority of people lack the equipment to access (and wouldn't understand how to access even if they had the equipment), you have a place where self-expression can be unleashed. And, as J. C. Herz covers in what is now an unintentional history (her book Surfing on the Internet was published in 1995), self-expression WAS unleashed. She enjoyed the wit and expression of everything from odd corners of Usenet to the text-based open world of MOOs and MUDs. She even talks about groups dedicated to insults (flaming) in positive terms.

But as I've seen time and again, if there are absolutely no rules, then whenever a group gets big enough (more than a few dozen people, say), there are troublemakers that ruin it for everyone. Maybe it's trolling, maybe it's vicious attacks, you name it; it will arrive, and it will be poisonous. I remember the debates within the Debian community about this. Debian is one of the pillars of the Internet today, a nonprofit project with free speech in its DNA. And yet there were inevitably the poisonous people. Debian took too long to learn that allowing those people to run rampant was causing more harm than good, because having a well-worn Delete key and a tolerance for insults became a requirement for being a Debian developer, and that drove away people that had no desire to deal with such things. (I should note that Debian strikes a much better balance today.)

But in reality, there were never absolutely no rules. If you joined a BBS, you used it at the whim of the owner (the sysop, or system operator). The sysop may have been a 16-year-old running it from their bedroom, or a retired programmer, but in any case they were letting you use their resources for free, and they could kick you off for any or no reason at all. So if you caused trouble, or perhaps insulted their cat, you're banned. But, in all but the smallest towns, there were other options you could try. On the other hand, sysops enjoyed having people call their BBSs and didn't want to drive everyone off, so there was a natural balance at play. As networks like Fidonet developed, a sort of uneasy approach kicked in: don't be excessively annoying, and don't be easily annoyed. Like it or not, it seemed to generally work. A BBS that repeatedly failed to deal with troublemakers could risk removal from Fidonet. On the more institutional Usenet, you generally got access through your university (or, in a few cases, employer).
Most universities didn't really even know they were running a Usenet server, and you were generally left alone. Until you did something that annoyed somebody enough that they tracked down the phone number for your dean, in which case real-world consequences would kick in. A site might face the Usenet Death Penalty (delinking from the network) if it repeatedly failed to prevent malicious content from flowing through its site. Some BBSs let people from minority communities such as LGBTQ+ thrive in a place of peace from tormentors. A lot of them let people be themselves in a way they couldn't be in "real life". And yes, some harbored trolls and flamers.

The point I am trying to make here is that each BBS, or Usenet site, set its own policies about what its own users could do. These had to be harmonized to a certain extent with the global community, but in a certain sense, with BBSs especially, you could just use a different one if you didn't like what the vibe was at a certain place. That this free speech ethos survived was never inevitable. There were many attempts to regulate the Internet, and it was thanks to the advocacy of groups like the EFF that we have things like strong encryption and a degree of freedom online.

With the rise of the very large platforms (and here I mean CompuServe and AOL at first, and then Facebook, Twitter, and the like later), the low-friction option of just choosing a different place started to decline. You could participate on a Fidonet forum from any of thousands of BBSs, but you could only participate in an AOL forum from AOL. The same goes for Facebook, Twitter, and so forth. Not only that, but as social media became conceived of as very large sites, it became impossible for a person with enough skill, funds, and time to just start a site themselves. Instead of needing a few thousand dollars of equipment, you'd need tens or hundreds of millions of dollars of equipment and employees.

All that means you can't really run Facebook as a nonprofit. It is a business. It should be absolutely clear to everyone that Facebook's mission is not the one they say it is: "[to] give people the power to build community and bring the world closer together." If that was their goal, they wouldn't be creating AI users and AI spam and all the rest. Zuck isn't showing courage; he's sucking up to Trump, and those that will pay the price are those that always do: women and minorities. Really, the point of any large social network isn't to build community. It's to make the owners their next billion. They do that by convincing people to look at ads on their site. Zuck is as much a windsock as anyone else; he will adjust policies in whichever direction he thinks the wind is blowing so as to let him keep putting ads in front of eyeballs, and stomp all over principles, even free speech, doing it. Don't expect anything different from any large commercial social network either. Bluesky is going to follow the same trajectory as all the others.

The problem with a one-size-fits-all content policy is that the world isn't that kind of place. For instance, I am a pacifist. There is a place for a group where pacifists can hang out with each other, free from the noise of the debate about pacifism. And there is a place for the debate. Forcing everyone that signs up for the conversation to sign up for the debate is harmful. Preventing the debate is often also harmful. One company can't square this circle. Beyond that, the fact that we care so much about one company is a problem on two levels.
First, it indicates how susceptible people are to misinformation and such. I don't have much to offer on that point. Secondly, it indicates that we are too centralized. We have a solution there: Mastodon. Mastodon is a modern, open source, decentralized social network. You can join any instance, easily migrate your account from one server to another, and so forth. You pick an instance that suits you. There are thousands of others you can choose from. Some aggressively defederate with instances known to harbor poisonous people; some don't. And, to harken back to the BBS era, if you have some time, some skill, and a few bucks, you can run your own Mastodon instance. Personally, I still visit Facebook on occasion because some people I care about are mainly there. But it is such a terrible experience that I rarely do. Meta is becoming irrelevant to me. They are on a path to becoming irrelevant to many more as well. Maybe this is the moment to go "shrug, this sucks" and try something better. (And when you do, feel free to say hi to me at @jgoerzen@floss.social on Mastodon.)

1 January 2025

Louis-Philippe Véronneau: 2024 A Musical Retrospective

Another musical retrospective. If you enjoy this, I also did a 2022 and a 2023 one. Albums In 2024, I added 88 new albums to my collection; that's a lot! This year again, I bought the vast majority of my music on Bandcamp. To be honest, I'm quite distraught by what's become of that website. Although it stays a wonderful place to buy underground music, Songtradr, the new owner of the platform, has been shown to be viciously anti-union. Money continues to ruin the world, I guess. Concerts I continued to go to a lot of concerts in 2024 (25!). Over the past 3 years, I have been going to more and more concerts, and I think I've reached my "peak". An average of one concert every two weeks is quite a lot :) If you also like music and concerts, but find yourself not going to as many as you would like, the real secret is not to be afraid to go to concerts alone. Going with friends is always fun, but if I restricted myself to only going to concerts in a group, I'd barely see a few each year. Another good piece of advice is to bring a book or something else¹ to pass the time between sets. It can often take 30-45 minutes between sets for the artists to get their instruments ready, which can get quite boring if you just stand there and wait. Anyway, here are the concerts I went to in 2024: Shout out to the Gancio project and to the folks running the Montreal instance. It continues to be a smash hit and most of the interesting concerts end up being advertised there. See you all in 2025!

  1. I bought a Miyoo Mini Plus, a handheld Linux console running OnionOS, for that express reason. So far it's been great and I've been very happy to revisit some childhood classics.

31 December 2024

Chris Lamb: Favourites of 2024

Here are my favourite books and movies that I read and watched throughout 2024. It wasn't quite as stellar a year for books as previous years: there were few of those books that make you want to recommend and/or buy them for all your friends. In subconscious compensation, perhaps, I reread a few classics (e.g. True Grit, Solaris), and I've almost finished my second read of War and Peace.

Books

  • Elif Batuman: Either/Or (2022)
  • Stella Gibbons: Cold Comfort Farm (1932)
  • Michel Faber: Under The Skin (2000)
  • Wallace Stegner: Crossing to Safety (1987)
  • Gustave Flaubert: Madame Bovary (1857)
  • Rachel Cusk: Outline (2014)
  • Sara Gran: The Book of the Most Precious Substance (2022)
  • Anonymous: The Railway Traveller's Handy Book (1862)
  • Natalie Hodges: Uncommon Measure: A Journey Through Music, Performance, and the Science of Time (2022)
  • Gary K. Wolf: Who Censored Roger Rabbit? (1981)

Films Recent releases

Disappointments this year included Blitz (Steve McQueen), Love Lies Bleeding (Rose Glass), The Room Next Door (Pedro Almodóvar) and Emilia Pérez (Jacques Audiard), whilst the worst new film this year was likely The Substance (Coralie Fargeat), followed by Megalopolis (Francis Ford Coppola), Unfrosted (Jerry Seinfeld) and Joker: Folie à Deux (Todd Phillips).
Older releases i.e. films released before 2023, and not including rewatches from previous years. Distinctly unenjoyable watches included The Island of Dr. Moreau (John Frankenheimer, 1996), Southland Tales (Richard Kelly, 2006), Any Given Sunday (Oliver Stone, 1999) & The Hairdresser's Husband (Patrice Leconte, 1990). On the other hand, unforgettable cinema experiences this year included big-screen rewatches of Solaris (Andrei Tarkovsky, 1972), Blade Runner (Ridley Scott, 1982), Apocalypse Now (Francis Ford Coppola, 1979) and Die Hard (John McTiernan, 1988).

28 December 2024

Thomas Goirand: Running a Lenovo Legion pro 7 laptop under Debian

As I was tired of long build times, I convinced my boss to buy me a Lenovo Legion Pro 7. The reason is: this laptop has an AMD Ryzen 9 7945HX that has 16 cores (32 threads). This greatly reduces the time I have to just wait for my laptop to compile or run unit tests, especially for big packages like Ceph, OpenVSwitch, and so on. When buying it, I knew it would not be a good fit for Debian, as this type of laptop is aimed at gaming, and the support under Linux is rather bad. I wish Lenovo had other policies, but that is the way it is: if you're a Linux user, you're apparently not supposed to need a big CPU. Anyway, I have slowly been able to fix all the issues over this year. In this blog post I'll explain how I fixed all the problems, in the hope it can be useful to others. And I'll explain what the src:lenovolegionlinux package (that I now maintain in Debian) does.

Video The laptop comes with an nVidia RTX-4080 and a Radeon. I quickly tried the Radeon, but couldn't make it work with an external monitor. So I gave up on it, disabled it, and now I'm using the proprietary nVidia driver from non-free. I don't like it: the nVidia card drains too much power, and I don't care at all about 3D acceleration. I would have preferred an Intel board, but there was no choice: all laptops with this kind of CPU come with a gamer's 3D card. Anyway, apart from the power issue, it works out well.

Fan control This sounds like a non-issue, but it is a huge one. Indeed, without controlling the fan, it is impossible to get the full potential of the CPUs, which otherwise throttle. One may end up using the laptop at a few hundred MHz instead of 5 GHz+. More on this later.

Sound It took me a really long time to figure out what to do. Indeed, while the sound card works out of the box, the issue was that my laptop came with a TI (Texas Instruments) speaker firmware that isn't enabled by default. I suppose the purpose is to save power when it isn't in use. Anyway, to have sound working on Debian, one needs to run at least kernel 6.10 (which for me means running the Bookworm backport) so that there's a kernel module for the speakers. But that's not all. The speakers also need a proprietary firmware in /lib/firmware/TAS2XXX38*.bin. I was able to find that in the ti.com forum. As I tried so many packages, I wouldn't be able to tell which one was the correct one. Once that was done, the firmware needs to be initialized through the i2c interface. I could find a script that did that, which I pushed into my lenovolegionlinux package (see below).

WiFi WiFi worked out of the box for me, it just wouldn't wake up if I closed the laptop lid. This fixed it for me, in /etc/modprobe.d/rtw8852be.conf:

options rtw89_pci disable_aspm_l1=y disable_aspm_l1ss=y
options rtw89_core disable_ps_mode=y

lenovolegionlinux package I came across https://github.com/johnfanv2/LenovoLegionLinux which I packaged. The result is now 4 binary packages: lenovolegionlinux-dkms, which provides the kernel module for accessing the fan control; and python3-legion-linux, which provides legion_cli and legion_gui, written in Python, that make it possible to control the kernel module. I often use sudo legion_gui, click on "Other options" and then switch the power profile from quiet to balanced. Many things on this GUI do not work for me, like the fancurve thingy, but they should be working for other flavors of Legion laptops. Please feel free to contribute. There's also legiond, which provides a daemon for setting up the fan curve on wake-up. And finally, I pushed my i2c speaker script into a new lenovolegionlinux-sound Debian binary package that I have just uploaded today, in the hope it may be useful for others.

Conclusion Finally, almost everything is (almost) working as expected. Just my webcam (lsusb says it's a Luxvisions Innotech Limited Integrated Camera) went dark at some point (it did work previously). It is now as if it were working, but just transmitting a black picture. If anyone knows how to fix this, please tell me. Also, I only get 40 minutes of battery time if I'm lucky; I hope this can be fixed. But overall, I'm happy with the laptop. Thanks to Ding Shenghao for his support of many people in the ti.com forum. Thanks to the people maintaining LenovoLegionLinux, who helped me a lot with writing this Debian package. Please try lenovolegionlinux in Debian, report issues, and help me improve it. It is in Salsa's debian namespace in the hope that others may push contributions.
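For reference, here is a minimal sketch of how one might install and try these packages on a Debian system, assuming they are available from the configured archive; the package and tool names are the ones listed above, and the exact GUI labels may differ from the author's description:

$ # install the DKMS kernel module and the Python control tools
$ sudo apt install lenovolegionlinux-dkms python3-legion-linux
$ # launch the GUI (as root) and switch the power profile
$ # from quiet to balanced under "Other options"
$ sudo legion_gui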
