Search Results: "ghe"

23 February 2022

Jonathan McDowell: Upgrading my home internet; a story of yak shaving

RB5009
This has ended up longer than I expected. I'll write up posts about some of the individual steps in more detail at some point, but this is an overview of the yak shaving I engaged in. The TL;DR is:

The desire for a faster connection
When I migrated my home connection to FTTP I kept the same 80M/20M profile I'd had on FTTC. I didn't have a pressing need for anything faster, and I saved money because I was no longer paying for the phone line portion. I wanted more, but at the time I think the only option was a 160M/30M profile instead; I didn't need it and it wasn't enough of an improvement to convince me. Time passed and BT rolled out their GigE (really 900M) download option. And again, I didn't need it, but I wanted it. My provider, Aquiss, initially didn't offer this (I think they had up to 330M download options available by this point), so I stayed on 80M/20M. The only time I really wanted it to be faster was when pushing off-site backups to rsync.net. Of course, we've had the pandemic, and that's involved 2 adults working from home with plenty of video calls throughout the day. The 80M/20M connection has proved rock solid for this, so again I didn't feel an upgrade was justified. We got a 4K capable TV last year and while the bandwidth usage for 4K streaming is noticeably higher, again the connection handles it no problem. At some point last year I noticed Aquiss had added speed options all the way up to 900M down. At the end of the year I accepted a new role, which is fully remote, so I came to accept that I wasn't going back into an office any time soon. That combination (and the desire for the increased upload speed) finally allowed me to justify the upgrade to myself.

Testing the current setup for bottlenecks
The first thing to do was to see whether my internal network could cope with an upgrade. I'm mostly running Cat6 GigE so I wasn't worried about that side of things. However I'm using an RB3011 as my core router, and while it has some coprocessors for routing acceleration they're not supported under mainline Linux (and unlikely to be any time soon), so I had to benchmark what it was capable of routing. I run a handful of VLANs within my home network, with stateful firewalling between them, so I felt that would be a good approximation of the maximum speed to the outside world I might get if I had the external connection upgraded. I went for the easy approach and fired up iperf3 on 2 hosts, both connected via ethernet but on separate networks, so routed through the RB3011. That resulted in slightly more than 300Mb/s of throughput. Ok. I confirmed that I could get 900Mb/s+ between 2 hosts on the same network, just to be sure there wasn't some other issue I was missing. Nope, so unsurprisingly the router was the bottleneck. So: to upgrade my internet speed I needed to upgrade my router. I could just buy something off the shelf, but I like being able to run Debian (or OpenWRT) on the router rather than some horrible vendor firmware. Luckily MikroTik launched the RB5009 towards the end of last year. RouterOS is probably more than capable, but what really interested me was the fact it's an ARM64 platform based on an Armada 7040, which is pretty well supported in mainline kernels already. There's a 10G connection from the internal switch to the CPU, as well as a 2.5Gb/s ethernet port and a 10G SFP+ cage. All good stuff. I ordered one just before the New Year. Thankfully the OpenWRT folk had done all of the hard work on getting a mainline kernel booting on the device; Sergey Sergeev and Robert Marko in particular fighting RouterBoot and producing a suitable device tree file to get everything up and running. I ended up soldering up a serial console connection to aid debugging, and lightly patching Rob's u-boot to fix the incorrect RAM size reported by RouterBoot. A few kernel tweaks were necessary to make the networking entirely happy, and at that point it was time to think about actually doing a replacement.
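For anyone wanting to reproduce this kind of test, a minimal sketch of the benchmark; the addresses are placeholders rather than my actual VLAN layout:

# On a host in VLAN A (10.0.1.10 is an assumed address):
iperf3 -s

# On a host in VLAN B, so the traffic is routed (and firewalled) by the RB3011:
iperf3 -c 10.0.1.10 -t 30        # 30 second TCP throughput test
iperf3 -c 10.0.1.10 -t 30 -R     # and again in the reverse direction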

Upgrading to Debian 11 (bullseye)
My RB3011 is currently running Debian 10 (buster); an upgrade has been on my todo list, but with the impending replacement I decided I'd hold off and create a new Debian 11 (bullseye) image for the RB5009. Additionally, I don't actually run off the internal NAND in the RB3011; I have a USB flash drive for the rootfs and just the kernel booting off internal NAND. Originally this was for ease of testing, then it stayed that way due to a combination of needing to figure out a good read-only root solution and a small enough image to fit in the 120M available. For the upgrade I decided to finally look at these pieces. I've ended up with a script that builds me a squashfs image, and the initial rootfs takes care of mounting this and then a tmpfs as an overlay fs. That means I can easily see which pieces are being written to. The RB5009 has a total of 1G NAND so I'm not as space constrained, but the squashfs ends up under 50M. I've added some additional pieces to allow me to pre-populate the overlay fs with updates rather than always needing to rebuild the squashfs image. With that done I decided to try it out on the RB3011; I tweaked the build script to be able to build for armhf (the RB3011) or arm64 (the RB5009) and to deal with some slight differences in configuration between the two (e.g. interface naming). The idea here was to ensure I'd got all the appropriate configuration sorted for the RB5009, in the known-good existing environment. Everything is still on a USB stick at this stage, and the new image has an armhf busybox root, meaning it can be used on either device; the init script detects the architecture to select the appropriate squashfs to mount.
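The general shape of a squashfs-plus-tmpfs overlay root looks something like the following minimal init fragment; the device node and mount points are assumptions, not lifted from my actual script:

# Mount the read-only squashfs and a tmpfs for writes, then overlay them.
mkdir -p /ro /rw /newroot
mount -t squashfs -o ro /dev/sda2 /ro      # squashfs image; device name assumed
mount -t tmpfs tmpfs /rw
mkdir -p /rw/upper /rw/work
mount -t overlay overlay \
    -o lowerdir=/ro,upperdir=/rw/upper,workdir=/rw/work /newroot
exec switch_root /newroot /sbin/init

Anything the running system writes ends up in /rw/upper, which makes it easy to see exactly what has diverged from the pristine squashfs.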

A problem with ESP8266 home automation devices
Everything seemed to work fine - a few niggles with the watchdog, which is overly sensitive on the RB3011, but I got those sorted (and the build script updated) and the device came up and successfully did the PPPoE dance to bring up external connectivity. And then I noticed that my home automation devices were having problems connecting to the mosquitto MQTT server. It turned out it was only the ESP8266 based devices that were failing, and examining the serial debug output on one of my test devices revealed it was hitting an out of memory issue (displaying E:M 280) when establishing the TLS MQTT connection. I rolled back to the Debian 10 image and set about creating a test environment to look at the ESP8266 issues. My first action was to try and reduce my RAM footprint to ensure there was enough spare to establish the connection. I moved a few functions that were still sitting in IRAM into flash. I cleaned up a couple of buffers that are on the stack to be more correctly sized. I tried my new image, and I didn't get the memory issue. Instead I progressed a bit further and got a watchdog reset. Doh! It was obviously something related to the TLS connection, but I couldn't easily see what the difference was; the same x509 cert was in use, and it looked like the initial handshake was the same (trying with openssl s_client looked pretty similar too). I set about instrumenting the ancient Mbed TLS used in the Espressif SDK and discovered that whatever had changed between buster and bullseye meant the ESP8266 was now trying a TLS-DHE-RSA-WITH-AES-256-CBC-SHA256 handshake instead of a TLS-RSA-WITH-AES-256-CBC-SHA256 handshake, and that was causing enough extra CPU usage that it couldn't complete in time and the watchdog kicked in. So I commented out MBEDTLS_KEY_EXCHANGE_DHE_RSA_ENABLED in the config_esp.h for mbedtls and rebuilt things. Hacky, but I'll go back to trying to improve this generally at some point.
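A quick way to see which cipher suite a broker negotiates, and to mimic a client that only offers the static-RSA suite (the hostname and port here are placeholders, not my actual broker):

# Show the protocol and cipher the server settles on for a normal TLS connection:
openssl s_client -connect mqtt.example.org:8883 </dev/null 2>/dev/null | grep -E 'Protocol|Cipher'

# Offer only TLS_RSA_WITH_AES_256_CBC_SHA256 (OpenSSL name AES256-SHA256),
# roughly mimicking the constrained ESP8266 build:
openssl s_client -connect mqtt.example.org:8883 -cipher 'AES256-SHA256' </dev/null 2>/dev/null | grep Cipher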

A detour into interrupt load
Now, my testing of the RB3011 image is generally done at weekends, when I have enough time to tear down and rebuild the connection, rather than doing it in the evening and having limited time to get things working again in time for work in the morning. So at the point I had an image ready to go I pulled the trigger on the line upgrade. I went with the 500M/75M option rather than the full 900M - I suspect I'd have difficulty actually getting that most of the time, and 75M of upload bandwidth seems fairly substantial for now. It only took a couple of days from the order to the point the line was regraded (which involved no real downtime - just a reconnection in the night). Of course this happened just after the weekend I'd discovered the ESP8266 issue.
collectd CPU usage for RB3011
This provided an opportunity to see just what the RB3011 could actually manage. In the configuration I had, it turned out to be not much more than the 80Mb/s speeds I had previously seen. The upload jumped from a solid 20Mb/s to 75Mb/s, so I knew the regrade had actually happened. Looking at CPU utilisation clearly showed the problem: softirqs were using almost 100% of a CPU core. Now, the way the hardware is set up on the RB3011 is that there are two separate 5 port switches, each connected back to the CPU via a separate GigE interface. For various reasons I had everything on a single switch, which meant that all traffic was boomeranging in and out of the same CPU interface. The IPQ8064 has dual cores, so I thought I'd try moving the external connection to the other switch. That puts it on its own GigE CPU interface, which then allows binding the interrupts to a different CPU core. That helps; throughput to the outside world hits 140Mb/s+. Still a long way from the expected max, but proof that we just need more grunt.
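For reference, pinning a NIC's interrupts to a particular core is just a matter of writing to smp_affinity; a minimal sketch with an assumed interface name and IRQ number:

# Find the IRQs for the second CPU-facing GigE interface (name assumed):
grep eth1 /proc/interrupts

# Pin IRQ 31 (example number) to CPU 1 using a bitmask (0x2 = CPU 1):
echo 2 > /proc/irq/31/smp_affinity

# Confirm where the interrupts are now landing:
cat /proc/irq/31/smp_affinity_list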

Success
collectd CPU usage for RB5009
Which brings us to this past weekend, when, having worked out all the other bits, I tried the squashfs root image again on the RB3011. Success! The home automation bits connected to it, the link to the outside world came up, and everything seemed happy. So I double checked my bootloader bits on the RB5009, brought it down to the comms room and plugged it in instead. And, modulo my failing to update the nftables config to allow it to do forwarding, it all came up ok. Some testing with iperf3 internally got a nice 912Mb/s sustained between subnets, and some less scientific testing with wget + speedtest-cli saw speeds of over 460Mb/s to the outside world. Time from ordering the router until it was in service? Just under 8 weeks.
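For completeness, the kind of nftables rules the new router needed before it would forward traffic - a minimal sketch with assumed interface names, not my actual ruleset:

nft add table inet filter
nft add chain inet filter forward '{ type filter hook forward priority 0; policy drop; }'
nft add rule inet filter forward ct state established,related accept
nft add rule inet filter forward iifname "lan0" oifname "pppoe-wan" accept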

7 February 2022

Russ Allbery: Review: Embers of War

Review: Embers of War, by Gareth L. Powell
Series: Embers of War #1
Publisher: Titan Books
Copyright: February 2018
ISBN: 1-78565-519-1
Format: Kindle
Pages: 312
The military leadership of the Outward faction of humanity was meeting on the forest world of Pelapatarn, creating an opportunity for the Conglomeration to win the war at a stroke. Resistance was supposed to be minimal, since the Outward had attempted to keep the conference secret rather than massing forces to protect it. But the Outward resistance was stronger than expected, and Captain Deal's forces would not be able to locate and assassinate the Outward leadership before they could escape. She therefore followed orders from above her and ordered the four incoming Carnivore heavy cruisers to jump past the space battle and bomb the planet. The entire planet. The Carnivores' nuclear and antimatter weaponry reduced the billion-year-old sentient jungle of Pelapatarn to ash. Three years later, Sal Konstanz is a ship captain for the House of Reclamation, a strictly neutral search and rescue force modeled after a long-vanished alien fleet that prioritizes preservation of life above all else. Anyone can join Reclamation, provided that they renounce their previous alliances and devote themselves to the Reclamation cause. Sal and her crew member Alva Clay were Outward. The ship medic was Conglomeration. So was the ship: the Trouble Dog, a sentient AI heavy cruiser built to carry an arsenal of weapons and three hundred crew, and now carrying three humans and a Druff mechanic. More precisely, the Trouble Dog was one of the four Carnivores that destroyed Pelapatarn. Meanwhile, Ona Sudak, a popular war poet, is a passenger on a luxury cruise on the liner Geest van Amsterdam, which is making an unscheduled stop in the star system known as the Gallery. The Gallery is the home of the Objects: seven planets that, ten thousand years earlier, were carved by unknown aliens into immense sculptures for unknown reasons. The Objects appear to be both harmless and mysterious, making them an irresistible tourist attraction for the liner passengers. The Gallery is in disputed space, but no one was expecting serious trouble. They certainly weren't expecting the Geest van Amsterdam to be attacked and brought down on the Object known as the Brain, killing nearly everyone aboard. The Trouble Dog is the closest rescue ship. This book was... fine. It's a perfectly serviceable science fiction novel that didn't stand out for me, which I think says more about the current excellent state of the science fiction field than about this book. When I was a teenager reading Asimov, Niven, and Heinlein, I would have devoured this. It compares favorably to minor Niven or Heinlein (The Integral Trees, for example, or Double Star), but the bar for excellent science fiction is just so much higher now. The best character in this book (and the reason why I read it) is the Trouble Dog. I love science fiction about intelligent space ships, and she did not disappoint. The AI ships in this book are partly made from human and dog neurons, so their viewpoint is mostly human but with some interesting minor variations. And the Trouble Dog would be a great character even if she weren't an intelligent ship: ethical, aggressive, daring, and introspective, with a nuanced relationship with her human crew. Unfortunately, Embers of War has four other viewpoint characters, and Powell chose to write them all in the first person. First person narration depends heavily on a memorable and interesting character because the reader is so thoroughly within their perspective. 
This works great for the Trouble Dog, and is fine for Nod, the Druff who serves as the ship mechanic. (Nod's perspective is intriguing, short, almost free-verse musings, rather than major story segments.) Sal, the ship captain, is a bit of a default character, but I didn't mind her much. Neither of the other two viewpoint characters are interesting enough to warrant the narrative attention. The Conglomerate agent Ashton Childe has such an uninteresting internal monologue that I would have liked the character better if he'd only been seen through other people's viewpoints, and although Powell needs some way to show Ona Sudak's view of events, I didn't think her thoughts added much to the story. The writing is adequate but a bit clunky: slightly flat descriptions, a bit too predictable at the sentence level, and rarely that memorable. There is a bit of fun world-building of the ancient artifact variety and a couple of decent set pieces (and one rather-too-obvious Matrix homage that I didn't think was as effective as the author did), but most of the story is focused on characters navigating their lives and processing trauma from the war. The story kept me turning the pages with interest, but I also doubt it will surprise anyone who has read much science fiction. I suspect a lot of it is setup for the following two books of the trilogy, and there are plenty of hooks for more stories in this universe. I really wanted a first-person story from the perspective of the Trouble Dog, possibly with some tight third-person interludes showing Ona Sudak's story. What I got instead was entertaining but not memorable enough to stand out in the current rich state of science fiction. I think I'm invested enough in this story to want to read the next book, though, so that's still a recommendation of sorts. Followed by Fleet of Knives. Rating: 6 out of 10

27 January 2022

Russ Allbery: Review: I Didn't Do the Thing Today

Review: I Didn't Do the Thing Today, by Madeleine Dore
Publisher: Avery
Copyright: 2022
ISBN: 0-593-41914-6
Format: Kindle
Pages: 291
At least from my narrow view of it, the world of productivity self-help literature is a fascinating place right now. The pandemic overturned normal work patterns and exacerbated schedule inequality, creating vastly different experiences for the people whose work continued to be in-person and the people whose work could become mostly or entirely remote. Self-help literature, which is primarily aimed at the more affluent white-collar class, primarily tracked the latter disruption: newly-remote work, endless Zoom meetings, the impossibility of child care, the breakdown of boundaries between work and home, and the dawning realization that much of the mechanics of day-to-day office work are neither productive nor defensible. My primary exposure these days to the more traditional self-help productivity literature is via Cal Newport. The stereotype of the productivity self-help book is a collection of life hacks and list-making techniques that will help you become a more efficient capitalist cog, but Newport has been moving away from that dead end for as long as I've been reading him, and his recent work focuses more on structural issues with the organization of knowledge work. He also shares with the newer productivity writers a willingness to tell people to use the free time they recover via improved efficiency on some life goal other than improved job productivity. But he's still prickly and defensive about the importance of personal productivity and accomplishing things. He gives lip service on his podcast to the value of the critique of productivity, but then usually reverts to characterizing anti-productivity arguments as saying that productivity is a capitalist invention to control workers. (Someone has doubtless said this on Twitter, but I've never seen a serious critique of productivity make this simplistic of an argument.) On the anti-productivity side, as it's commonly called, I've seen a lot of new writing in the past couple of years that tries to break the connection between productivity and human worth so endemic to US society. This is not a new analysis; disabled writers have been making this point for decades, it's present in both Keynes and in Galbraith's The Affluent Society, and Kathi Weeks's The Problem with Work traces some of its history in Marxist thought. But what does feel new to me is its widespread mainstream appearance in newspaper articles, viral blog posts, and books such as Jenny Odell's How to Do Nothing and Devon Price's Laziness Does Not Exist. The pushback against defining life around productivity is having a moment. Entering this discussion is Madeleine Dore's I Didn't Do the Thing Today. Dore is the author of the Extraordinary Routines blog and host of the Routines and Ruts podcast. Extraordinary Routines began as a survey of how various people organize their daily lives. I Didn't Do the Thing Today is, according to the preface, a summary of the thoughts Dore has had about her own life and routines as a result of those interviews. As you might guess from the subtitle (Letting Go of Productivity Guilt), Dore's book is superficially on the anti-productivity side. Its chapters are organized around gentle critiques of productivity concepts, with titles like "The Hopeless Search for the Ideal Routine," "The Myth of Balance," or "The Harsh Rules of Discipline." But I think anti-productivity is a poor name for this critique; its writers are not opposed to being productive, only to its position as an all-consuming focus and guilt-generating measure of personal worth. 
Dore structures most chapters by naming an aspect, goal, or concern of a life defined by productivity, such as wasted time, ambition, busyness, distraction, comparison, or indecision. Each chapter sketches the impact of that idea and then attempts to gently dismantle the grip that it may have on the reader's life. All of these discussions are nuanced; it's rare for Dore to say that one of these aspects has no value, and she anticipates numerous objections. But her overarching goal is to help the reader be more comfortable with imperfection, more willing to live in the moment, and less frustrated with the limitations of life and the human brain. If striving for productivity is like lifting weights, Dore's diagnosis is that we've tried too hard for too long, and have overworked that muscle until it is cramping. This book is a gentle massage to induce the muscle to relax and let go. Whether this will work is, as with all self-help books, individual. I found it was best read in small quantities, perhaps a chapter per day, since it otherwise began feeling too much the same. I'm also not the ideal audience; Dore is a creative freelancer and primarily interviewed other creative people, which I think has a different sort of productivity rhythm than the work that I do. She's also not a planner to the degree that I am; more on that below. And yet, I found this book worked on me anyway. I can't say that I was captivated all the way through, but I found myself mentally relaxing while I was reading it, and I may re-read some chapters from time to time. How does this relate to the genre of productivity self-help? With less conflict than I think productivity writers believe, although there seems to be one foundational difference of perspective. Dore is not opposed to accomplishing things, or even to systems that help people accomplish things. She is more attuned than the typical productivity writer to the guilt and frustration that can accumulate when one has a day in which one does not do the thing, but her goal is not to talk you out of attempting things. It is, instead, to convince you to hold those attempts and goals more lightly, to allow them to move and shift and change, and to not treat a failure to do the thing today as a reason for guilt. This is wholly compatible with standard productivity advice. It's adding nuance at one level of abstraction higher: how tightly to cling to productivity goals, and what to do when they don't work out. Cramping muscles are not strong muscles capable of lifting heavy things. If one can massage out the cramp, one's productivity by even the strict economic definition may improve. Where I do see a conflict is that most productivity writers are planners, and Dore is not. This is, I think, a significant blind spot in productivity self-help writing. Cal Newport, for example, advocates time-block planning, where every hour of the working day has a job. David Allen advocates a complex set of comprehensive lists and well-defined next actions. Mark Forster builds a flurry of small systems for working through lists. The standard in productivity writing is to add structure to your day and cultivate the self-discipline required to stick to that structure. For many people, including me, this largely works. I'm mostly a planner, and when my life gets chaotic, adding more structure and focusing on that structure helps me. But the productivity writers I've read are quite insistent that their style of structure will work for everyone, and on that point I am dubious.
Newport, for example, advocates time-block planning for everyone without exception, insisting that it is the best way to structure a day. Dore, in contrast, describes spending years trying to perfect a routine before realizing that elastic possibilities work better for her than routines. For those who are more like Dore than Newport, I Didn't Do the Thing Today is more likely to be helpful than Newport's instructions. This doesn't make Newport's ideas wrong; it simply makes them not universal, something that the productivity self-help genre seems to have trouble acknowledging. Even for readers like myself who prefer structure, I Didn't Do the Thing Today is a valuable corrective to the emphasis on ever-better systems. For those who never got along with too much structure, I think it may strike a chord. The standard self-help caveat still applies: Dore has the most to say to people who are in a similar social class and line of work as her. I'm not sure this book will be of much help to someone who has to juggle two jobs with shift work and child care, where the problem is more sharp external constraints than internalized productivity guilt. But for its target audience, I think it's a valuable, calming message. Dore doesn't have a recipe to sort out your life, but may help you feel better about the merits of life unsorted. Rating: 7 out of 10

17 January 2022

Matthew Garrett: Boot Guard and PSB have user-hostile defaults

Compromising an OS without it being detectable is hard. Modern operating systems support the imposition of a security policy or the launch of some sort of monitoring agent sufficiently early in boot that even if you compromise the OS, you're probably going to have left some sort of detectable trace[1]. You can avoid this by attacking the lower layers - if you compromise the bootloader then it can just hotpatch a backdoor into the kernel before executing it, for instance.

This is avoided via one of two mechanisms. Measured boot (such as TPM-based Trusted Boot) makes a tamper-proof cryptographic record of what the system booted, with each component in turn creating a measurement of the next component in the boot chain. If a component is tampered with, its measurement will be different. This can be used to either prevent the release of a cryptographic secret if the boot chain is modified (for instance, using the TPM to encrypt the disk encryption key), or can be used to attest the boot state to another device which can tell you whether you're safe or not. The other approach is verified boot (such as UEFI Secure Boot), where each component in the boot chain verifies the next component before executing it. If the verification fails, execution halts.
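To make the measurement chaining concrete, here's a toy model of PCR extension in shell - purely illustrative, with placeholder file names; on real hardware the extension happens inside the TPM, not in a script:

# PCR extend is: new_value = SHA256( old_value || measurement )
pcr="0000000000000000000000000000000000000000000000000000000000000000"
extend() {   # $1 = hex SHA256 of the component being measured
    pcr=$(printf '%s%s' "$pcr" "$1" | xxd -r -p | sha256sum | cut -d' ' -f1)
}
extend "$(sha256sum bootloader.efi | cut -d' ' -f1)"   # firmware measures the bootloader
extend "$(sha256sum vmlinuz | cut -d' ' -f1)"          # bootloader measures the kernel
echo "$pcr"   # changing any measured component changes this final value

Because each value folds in the previous one, an attacker who tampers with any link in the chain cannot produce the same final PCR value without also controlling everything measured before it.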

In both cases, each component in the boot chain measures and/or verifies the next. But something needs to be the first link in this chain, and traditionally this was the system firmware. Which means you could tamper with the system firmware and subvert the entire process - either have the firmware patch the bootloader in RAM after measuring or verifying it, or just load a modified bootloader and lie about the measurements or ignore the verification. Attackers had already been targeting the firmware (Hacking Team had something along these lines, although this was pre-secure boot so just dropped a rootkit into the OS), and given a well-implemented measured and verified boot chain, the firmware becomes an even more attractive target.

Intel's Boot Guard and AMD's Platform Secure Boot attempt to solve this problem by moving the validation of the core system firmware to an (approximately) immutable environment. Intel's solution involves the Management Engine, a separate x86 core integrated into the motherboard chipset. The ME's boot ROM verifies a signature on its firmware before executing it, and once the ME is up it verifies that the system firmware's bootblock is signed using a public key that corresponds to a hash blown into one-time programmable fuses in the chipset. What happens next depends on policy - it can either prevent the system from booting, allow the system to boot to recover the firmware but automatically shut it down after a while, or flag the failure but allow the system to boot anyway. Most policies will also involve a measurement of the bootblock being pushed into the TPM.

AMD's Platform Secure Boot is slightly different. Rather than the root of trust living in the motherboard chipset, it's in AMD's Platform Security Processor which is incorporated directly onto the CPU die. Similar to Boot Guard, the PSP has ROM that verifies the PSP's own firmware, and then that firmware verifies the system firmware signature against a set of blown fuses in the CPU. If that fails, system boot is halted. I'm having trouble finding decent technical documentation about PSB, and what I have found doesn't mention measuring anything into the TPM - if this is the case, PSB only implements verified boot, not measured boot.

What's the practical upshot of this? The first is that you can't replace the system firmware with anything that doesn't have a valid signature, which effectively means you're locked into firmware the vendor chooses to sign. This prevents replacing the system firmware with either a replacement implementation (such as Coreboot) or a modified version of the original implementation (such as firmware that disables locking of CPU functionality or removes hardware allowlists). In this respect, enforcing system firmware verification works against the user rather than benefiting them.
Of course, it also prevents an attacker from doing the same thing, but while this is a real threat to some users, I think it's hard to say that it's a realistic threat for most users.

The problem is that vendors are shipping with Boot Guard and (increasingly) PSB enabled by default. In the AMD case this causes another problem - because the fuses are in the CPU itself, a CPU that's had PSB enabled is no longer compatible with any motherboards running firmware that wasn't signed with the same key. If a user wants to upgrade their system's CPU, they're effectively unable to sell the old one. But in both scenarios, the user's ability to control what their system is running is reduced.

As I said, the threat that these technologies seek to protect against is real. If you're a large company that handles a lot of sensitive data, you should probably worry about it. If you're a journalist or an activist dealing with governments that have a track record of targeting people like you, it should probably be part of your threat model. But otherwise, the probability of you being hit by a purely userland attack is so ludicrously high compared to you being targeted this way that it's just not a big deal.

I think there's a more reasonable tradeoff than where we've ended up. Tying things like disk encryption secrets to TPM state means that if the system firmware is measured into the TPM prior to being executed, we can at least detect that the firmware has been tampered with. In this case nothing prevents the firmware being modified, there's just a record in your TPM that it's no longer the same as it was when you encrypted the secret. So, here's what I'd suggest:

1) The default behaviour of technologies like Boot Guard or PSB should be to measure the firmware signing key and whether the firmware has a valid signature into PCR 7 (the TPM register that is also used to record which UEFI Secure Boot signing key is used to verify the bootloader).
2) If the PCR 7 value changes, the disk encryption key release will be blocked, and the user will be redirected to a key recovery process. This should include remote attestation, allowing the user to be informed that their firmware signing situation has changed.
3) Tooling should be provided to switch the policy from merely measuring to verifying, and users at meaningful risk of firmware-based attacks should be encouraged to make use of this tooling

This would allow users to replace their system firmware at will, at the cost of having to re-seal their disk encryption keys against the new TPM measurements. It would provide enough information that, in the (unlikely for most users) scenario that their firmware has actually been modified without their knowledge, they can identify that. And it would allow users who are at high risk to switch to a higher security state, and for hardware that is explicitly intended to be resilient against attacks to have different defaults.
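On a Linux system with a TPM you can already inspect PCR 7 and bind a LUKS volume to it, which is roughly the user-visible half of the suggestion above. A hedged example (the device path is a placeholder, and the exact tooling depends on your tpm2-tools and systemd versions):

# Show the current value of PCR 7 (Secure Boot / signing-key related measurements):
tpm2_pcrread sha256:7

# Enroll a LUKS key that is released only while PCR 7 retains its current value:
systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p3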

This is frustratingly close to possible with Boot Guard, but I don't think it's quite there. Before you've blown the Boot Guard fuses, the Boot Guard policy can be read out of flash. This means that you can drop a Boot Guard configuration into flash telling the ME to measure the firmware but not prevent it from running. But there are two problems remaining:

1) The measurement is made into PCR 0, and PCR 0 changes every time your firmware is updated. That makes it a bad default for sealing encryption keys.
2) It doesn't look like the policy is measured before being enforced. This means that an attacker can simply reflash modified firmware with a policy that disables measurement and then make a fake measurement that makes it look like the firmware is ok.

Fixing this seems simple enough - the Boot Guard policy should always be measured, and measurements of the policy and the signing key should be made into a PCR other than PCR 0. If an attacker modified the policy, the PCR value would change. If an attacker modified the firmware without modifying the policy, the PCR value would also change. People who are at high risk would run an app that would blow the Boot Guard policy into fuses rather than just relying on the copy in flash, and enable verification as well as measurement. Now if an attacker tampers with the firmware, the system simply refuses to boot and the attacker doesn't get anything.

Things are harder on the AMD side. I can't find any indication that PSB supports measuring the firmware at all, which obviously makes this approach impossible. I'm somewhat surprised by that, and so wouldn't be surprised if it does do a measurement somewhere. If it doesn't, there's a rather more significant problem - if a system has a socketed CPU, and someone has sufficient physical access to replace the firmware, they can just swap out the CPU as well with one that doesn't have PSB enabled. Under normal circumstances the system firmware can detect this and prompt the user, but given that the attacker has just replaced the firmware we can assume that they'd do so with firmware that doesn't decide to tell the user what just happened. In the absence of better documentation, it's extremely hard to say that PSB actually provides meaningful security benefits.

So, overall: I think Boot Guard protects against a real-world attack that matters to a small but important set of targets. I think most of its benefits could be provided in a way that still gave users control over their system firmware, while also permitting high-risk targets to opt-in to stronger guarantees. Based on what's publicly documented about PSB, it's hard to say that it provides real-world security benefits for anyone at present. In both cases, what's actually shipping reduces the control people have over their systems, and should be considered user-hostile.

[1] Assuming that someone's both turning this on and actually looking at the data produced


11 January 2022

Ritesh Raj Sarraf: ThinkPad AMD Debian

After a hiatus of 6 years, it was nice to be back with the ThinkPad. This blog post briefly touches upon my impressions of the current generation ThinkPad T14 Gen2 AMD variant.
ThinkPad T14 Gen2 AMD

Lenovo
It took 8 weeks to get my hands on the machine. Given the pandemic, restrictions and uncertainties, I'm not sure if I should call it an on-time delivery. This was a CTO - Customise-to-order - so it was nice to get rid of things I really didn't care about or use much. On the other side, it also meant I could save on some power. It also came comparatively cheaper overall.
  • No fingerprint reader
  • No Touch screen
There are still parts where Lenovo could improve, or at least frustrate a customer less. I don't understand why a company would provide a full customization option on their portal while at the same time not providing an explicit option to choose the make/model of the hardware one wants. Lenovo deliberately chooses not to show/specify which WiFi adapter one could choose. So, as I suspected, I ended up with a MEDIATEK Corp. Device 7961 wifi adapter.

AMD
For the first time in my computing life, I'm now using AMD at the core. I was pretty frustrated with annoying Intel graphics bugs, so I decided to take the plunge and give AMD/ATI a shot, knowing that the radeon driver does have decent support. So far, on the graphics side of things, I'm glad that things look bright. The stock in-kernel radeon driver has been working perfectly for my needs and I haven't had to tinker even once so far, in my 30 days of use. On overall system performance, I have not done any benchmarks, nor do I intend to. But on the whole, the system performance is smooth.

Power/Thermal
This is where things need more improvement on the AMD side. This AMD laptop draws a terrible amount of power in suspend mode. And it isn't just this machine, but also the previous T14 Gen1, which has similar problems. I'm not sure if this is a generic ThinkPad problem or an AMD specific problem. But coming from the Dell XPS 13 9370 Intel, this does draw a lot more power. So much that I chose to use hibernation instead. Similarly, on the thermal side, this machine doesn't cool down as well as the Dell XPS Intel one. On an idle machine, its temperatures are comparatively higher. Looking at powertop reports, it shows an average consumption of 10 watts even while idle. I'm hoping these are Linux integration issues and that Lenovo/AMD will improve things in the coming months. But given the user feedback on the ThinkPad T14 Gen1 thread, it may just be wishful thinking.
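A rough way to put a number on the suspend drain, assuming the battery exposes the common sysfs attributes (BAT0 and energy_now are assumptions that vary between machines; some expose charge_now instead):

# Record remaining battery energy (in µWh), suspend, then compare after waking up:
cat /sys/class/power_supply/BAT0/energy_now > /tmp/before
systemctl suspend
# ...after resuming some hours later:
cat /sys/class/power_supply/BAT0/energy_now > /tmp/after
echo "drained: $(( ($(cat /tmp/before) - $(cat /tmp/after)) / 1000 )) mWh"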

Linux
The overall hardware support has been surprisingly decent. The MediaTek WiFi driver had some glitches, but with Linux 5.15+ things have considerably improved. And I hope the trend will continue with forthcoming Linux releases. My previous device driver experience with MediaTek wasn't good, but I took the plunge, considering that in the worst scenario I'd have the option to swap the card. There's a lot of marketing about Linux + Intel, but I took a chance on Linux + AMD. There are glitches, but nothing so far that has been a dealbreaker. If anything, I wish Lenovo/AMD would seriously work on the power/thermal issues.

Migration
Other than what's mentioned above, I haven't had any serious issues. I may have had some rare occasional hangs, but they've been so infrequent that I haven't spent time investigating them. Upon receiving the machine, my biggest requirement was how to switch my current workstation from the Dell XPS to the Lenovo ThinkPad. I've been using btrfs for some time now, and over the years have built up my own practice on how to structure it - things like provisioning [sub]volumes based on use case, keeping separate subvols for cache/temporary data, copy-on-write data, swap, etc. I wish these things could be simplified, either on the btrfs tooling side or with some different tool on top of it. Below is a filtered list of subvols created over the years that were worthy of moving to the new machine.
rrs@priyasi:~$ cat btrfs-volume-layout 
ID 550 gen 19166 top level 5 path home/foo/.cache
ID 552 gen 1522688 top level 5 path home/rrs
ID 553 gen 1522688 top level 552 path home/rrs/.cache
ID 555 gen 1426323 top level 552 path home/rrs/rrs-home/Libvirt-Images
ID 618 gen 1522672 top level 5 path var/spool/news
ID 634 gen 1522670 top level 5 path var/tmp
ID 635 gen 1522688 top level 5 path var/log
ID 639 gen 1522226 top level 5 path var/cache
ID 992 gen 1522670 top level 5 path disk-tmp
ID 1018 gen 1522688 top level 552 path home/rrs/NoBackup
ID 1196 gen 1522671 top level 5 path etc
ID 23721 gen 775692 top level 5 path swap

btrfs send/receive
This did come in handy, but I sorely missed some features. Maybe they aren't there, or maybe they are and I didn't look closely enough. Over the years, different attributes were set on different subvols, and over time I forget what feature was added where. From a migration point of view, it'd be nice to be able to say "take this volume, with all its attributes". I didn't find that functionality in send/receive. There is get/set-property, which I noticed later, but by then it was too late. So some sort of tooling, ideally something like a btrfs migrate sub-command or somesuch, would be nicer. In the file system world, we already have nice tools to take care of similar scenarios; with rsync, for example, I can ask it to carry all file attributes. Also, iirc, send/receive works only on ro volumes. So there's more work one needs to do:
  1. create ro vol
  2. send
  3. receive
  4. don t forget to set rw property
  5. And then somehow find out the other properties set on each individual subvol and [re]apply the same on the destination (see the sketch below)
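A hedged sketch of that last step using btrfs property get/set in a loop; the paths are placeholders, and this only covers properties exposed via btrfs property, not every attribute a subvolume can carry:

SRC=/media/user/TOSHIBA/Migrate
DST=/media/user/BTRFSROOT
for vol in "$SRC"/snapshot-*; do
    name=$(basename "$vol")
    # Copy each property (e.g. ro) reported on the source subvol to the destination:
    sudo btrfs property get -ts "$vol" | while IFS='=' read -r key value; do
        sudo btrfs property set -ts "$DST/$name" "$key" "$value"
    done
done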
I wish this could all be condensed into a sub-command. For my own sake, for this migration, the steps I used were:
user@debian:~$ for volume in $(sudo btrfs sub list /media/user/TOSHIBA/Migrate/ | cut -d ' ' -f9 | grep -v ROOTVOL | grep -v etc | grep -v btrbk); do echo $volume; sudo btrfs send /media/user/TOSHIBA/$volume | sudo btrfs receive /media/user/BTRFSROOT/ ; done
Migrate/snapshot_disk-tmp
At subvol /media/user/TOSHIBA/Migrate/snapshot_disk-tmp
At subvol snapshot_disk-tmp
Migrate/snapshot-home_foo_.cache
At subvol /media/user/TOSHIBA/Migrate/snapshot-home_foo_.cache
At subvol snapshot-home_foo_.cache
Migrate/snapshot-home_rrs
At subvol /media/user/TOSHIBA/Migrate/snapshot-home_rrs
At subvol snapshot-home_rrs
Migrate/snapshot-home_rrs_.cache
At subvol /media/user/TOSHIBA/Migrate/snapshot-home_rrs_.cache
At subvol snapshot-home_rrs_.cache
ERROR: crc32 mismatch in command
Migrate/snapshot-home_rrs_rrs-home_Libvirt-Images
At subvol /media/user/TOSHIBA/Migrate/snapshot-home_rrs_rrs-home_Libvirt-Images
At subvol snapshot-home_rrs_rrs-home_Libvirt-Images
ERROR: crc32 mismatch in command
Migrate/snapshot-var_spool_news
At subvol /media/user/TOSHIBA/Migrate/snapshot-var_spool_news
At subvol snapshot-var_spool_news
Migrate/snapshot-var_lib_machines
At subvol /media/user/TOSHIBA/Migrate/snapshot-var_lib_machines
At subvol snapshot-var_lib_machines
Migrate/snapshot-var_lib_machines_DebianSidTemplate
..... snipped .....
And then, follow-up with:
user@debian:~$ for volume in $(sudo btrfs sub list /media/user/BTRFSROOT/ | cut -d ' ' -f9); do echo $volume; sudo btrfs property set -ts /media/user/BTRFSROOT/$volume ro false; done
ROOTVOL
ERROR: Could not open: No such file or directory
etc
snapshot_disk-tmp
snapshot-home_foo_.cache
snapshot-home_rrs
snapshot-var_spool_news
snapshot-var_lib_machines
snapshot-var_lib_machines_DebianSidTemplate
snapshot-var_lib_machines_DebSidArmhf
snapshot-var_lib_machines_DebianJessieTemplate
snapshot-var_tmp
snapshot-var_log
snapshot-var_cache
snapshot-disk-tmp
And then finally, renaming everything to match properly:
user@debian:/media/user/BTRFSROOT$ for x in snapshot*; do vol=$(echo $x | cut -d '-' -f2 | sed -e "s|_|/|g"); echo $x $vol; sudo mv $x $vol; done
snapshot-var_lib_machines var/lib/machines
snapshot-var_lib_machines_Apertisv2020ospackTargetARMHF var/lib/machines/Apertisv2020ospackTargetARMHF
snapshot-var_lib_machines_Apertisv2021ospackTargetARM64 var/lib/machines/Apertisv2021ospackTargetARM64
snapshot-var_lib_machines_Apertisv2022dev3ospackTargetARMHF var/lib/machines/Apertisv2022dev3ospackTargetARMHF
snapshot-var_lib_machines_BusterArm64 var/lib/machines/BusterArm64
snapshot-var_lib_machines_DebianBusterTemplate var/lib/machines/DebianBusterTemplate
snapshot-var_lib_machines_DebianJessieTemplate var/lib/machines/DebianJessieTemplate
snapshot-var_lib_machines_DebianSidTemplate var/lib/machines/DebianSidTemplate
snapshot-var_lib_machines_DebianSidTemplate_var_lib_portables var/lib/machines/DebianSidTemplate/var/lib/portables
snapshot-var_lib_machines_DebSidArm64 var/lib/machines/DebSidArm64
snapshot-var_lib_machines_DebSidArmhf var/lib/machines/DebSidArmhf
snapshot-var_lib_machines_DebSidMips var/lib/machines/DebSidMips
snapshot-var_lib_machines_JenkinsApertis var/lib/machines/JenkinsApertis
snapshot-var_lib_machines_v2019 var/lib/machines/v2019
snapshot-var_lib_machines_v2019LinuxSupport var/lib/machines/v2019LinuxSupport
snapshot-var_lib_machines_v2020 var/lib/machines/v2020
snapshot-var_lib_machines_v2021dev3Slim var/lib/machines/v2021dev3Slim
snapshot-var_lib_machines_v2021dev3SlimTarget var/lib/machines/v2021dev3SlimTarget
snapshot-var_lib_machines_v2022dev2OspackMinimal var/lib/machines/v2022dev2OspackMinimal
snapshot-var_lib_portables var/lib/portables
snapshot-var_log var/log
snapshot-var_spool_news var/spool/news
snapshot-var_tmp var/tmp

snapper
Entirely independent of this, but indirectly related: I use snapper as my snapshotting tool. It worked perfectly on my previous machine. While everything got migrated, the only thing that fell apart was snapper; it just wouldn't start/run properly. The funny thing is that I just removed the snapper configs and reinitialized with the exact same config again, and voila, snapper was happy.
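For anyone hitting the same thing, the reinitialisation amounts to something like the following (the config name and path are assumptions; keep a copy of the old config before deleting it):

# Drop the broken config, then recreate it for the same subvolume:
sudo snapper -c home delete-config
sudo snapper -c home create-config /home
sudo snapper -c home list     # confirm snapshots can be listed again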

Conclusion
That was pretty much it. With the above done, it was just a matter of also migrating /boot and then chrooting in to install the boot loader. At some point I'd like to explore other boot options, but given that that is such a non-essential task, it is low on the list. The good part was that I booted into my new machine with my exact workstation setup as it was, all the way down to the user cache and the desktop session. So that was nice. But I surely think there's room for a better migration experience here - if not directly as btrfs migrate, then maybe as an independent tool. The problem is that such a tool is going to be used once in years, so I didn't find the motivation to write one. But this surely would be a good use case for the distribution vendors.

Russ Allbery: Review: Hench

Review: Hench, by Natalie Zina Walschots
Publisher: William Morrow
Copyright: September 2020
ISBN: 0-06-297859-4
Format: Kindle
Pages: 403
Anna Tromedlov is a hench, which means she does boring things for terrible people for money. Supervillains need a lot of labor to keep their bases and criminal organizations running, and they get that labor the same way everyone else does: through temporary agencies. Anna does spreadsheets, preferably from home on her couch. On-site work was terrifying and she tried to avoid it, but the lure of a long-term contract was too strong. The Electric Eel, despite being a creepy sleazeball, seemed to be a manageable problem. He needed some support at a press conference, which turns out to be code for being a diversity token in front of the camera, but all she should have to do is stand there. That's how Anna ends up holding the mind control device to the head of the mayor's kid when the superheroes attack, followed shortly by being thrown across the room by Supercollider. Left with a complex fracture of her leg that will take months to heal, a layoff notice and a fruit basket from Electric Eel's company, and a vaguely menacing hospital conversation with the police (including Supercollider in a transparent disguise) in which it's made clear to her that she is mistaken about Supercollider's hand-print on her thigh, Anna starts wondering just how much damage superheroes have done. The answer, when analyzed using the framework for natural disasters, is astonishingly high. Anna's resulting obsession with adding up the numbers leads to her starting a blog, the Injury Report, with a growing cult following. That, in turn, leads to a new job and a sponsor: the mysterious supervillain Leviathan. To review this book properly, I need to talk about Watchmen. One of the things that makes superheroes interesting culturally is the straightforwardness of their foundational appeal. The archetypal superhero story is an id story: an almost pure power fantasy aimed at teenage boys. Like other pulp mass media, they reflect the prevailing cultural myths of the era in which they're told. World War II superheroes are mostly all-American boy scouts who punch Nazis. 1960s superheroes are a more complex mix of outsider misfits with a moral code and sarcastic but earnestly ethical do-gooders. The superhero genre is vast, with numerous reinterpretations, deconstructions, and alternate perspectives, but its ur-story is a good versus evil struggle of individual action, in which exceptional people use their powers for good to defeat nefarious villains. Watchmen was not the first internal critique of the genre, but it was the one that everyone read in the 1980s and 1990s. It takes direct aim at that moral binary. The superheroes in Watchmen are not paragons of virtue (some of them are truly horrible people), and they have just as much messy entanglement with the world as the rest of us. It was superheroes re-imagined for the post-Vietnam, post-Watergate era, for the end of the Cold War when we were realizing how many lies about morality we had been told. But it still put superheroes and their struggles with morality at the center of the story. Hench is a superhero story for the modern neoliberal world of reality TV and power inequality in the way that Watchmen was a superhero story for the Iran-Contra era and the end of the Cold War. Whether our heroes have feet of clay is no longer a question. Today, a better question is whether the official heroes, the ones that are celebrated as triumphs of individual achievement, are anything but clay. 
Hench doesn't bother asking whether superheroes have fallen short of their ideal; that answer is obvious. What Hench asks instead is a question familiar to those living in a world full of televangelists, climate denialism, manipulative advertising, and Facebook: are superheroes anything more than a self-perpetuating scam? Has the good superheroes supposedly do ever outweighed the collateral damage? Do they care in the slightest about the people they're supposedly protecting? Or is the whole system of superheroes and supervillains a performance for an audience, one that chews up bystanders and spits them out mangled while delivering simplistic and unquestioned official morality? This sounds like a deeply cynical premise, but Hench is not a cynical book. It is cynical about superheroes, which is not the same thing. The brilliance of Walschots's approach is that Anna has a foot in both worlds. She works for a supervillain and, over the course of the book, gains access to real power within the world of superheroic battles. But she's also an ordinary person with ordinary problems: not enough money, rocky friendships, deep anger at the injustices of the world and the way people like her are discarded, and now a disability and PTSD. Walschots perfectly balances the tension between those worlds and maintains that tension straight to the end of the book. From the supervillain world, Anna draws support, resources, and a mission, but all of the hope, true morality, and heart of this book comes from the ordinary side. If you had the infrastructure of a supervillain at your disposal, what would you do with it? Anna's answer is to treat superheroes as a destructive force like climate change, and to do whatever she can to drive them out of the business and thus reduce their impact on the world. The tool she uses for that is psychological warfare: make them so miserable that they'll snap and do something too catastrophic to be covered up. And the raw material for that psychological warfare is data. That's the foot in the supervillain world. In descriptions of this book, her skills with data are often called her superpower. That's not exactly wrong, but the reason why she gains power and respect is only partly because of her data skills. Anna lives by the morality of the ordinary people world: you look out for your friends, you treat your co-workers with respect as long as they're not assholes, and you try to make life a bit better for the people around you. When Leviathan gives her the opportunity to put together a team, she finds people with skills she admires, funnels work to people who are good at it, and worries about the team dynamics. She treats the other ordinary employees of a supervillain as people, with lives and personalities and emotions and worth. She wins their respect. Then she uses their combined skills to destroy superhero lives. I was fascinated by the moral complexity in this book. Anna and her team do villainous things by the morality of the superheroic world (and, honestly, by the morality of most readers), including some things that result in people's deaths. By the end of the book, one could argue that Anna has been driven by revenge into becoming an unusual sort of supervillain. And yet, she treats the people around her so much better than either the heroes or the villains do. Anna is fiercely moral in all the ordinary person ways, and that leads directly to her becoming a villain in the superhero frame. 
Hench doesn't resolve that conflict; it just leaves it on the page for the reader to ponder. The best part about this book is that it's absurdly grabby, unpredictable, and full of narrative momentum. Walschots's pacing kept me up past midnight a couple of times and derailed other weekend plans so that I could keep reading. I had no idea where the plot was going even at the 80% mark. The ending is ambiguous and a bit uncomfortable, just like the morality throughout the book, but I liked it the more I thought about it. One caveat, unfortunately: Hench has some very graphic descriptions of violence and medical procedures, and there's an extended torture sequence with some incredibly gruesome body horror that I thought went on far too long and was unnecessary to the plot. If you're a bit squeamish like I am, there are some places where you'll want to skim, including one sequence that's annoyingly intermixed with important story developments. Otherwise, though, this is a truly excellent book. It has a memorable protagonist with a great first-person voice, an epic character arc of empowerment and revenge, a timely take on the superhero genre that uses it for sharp critique of neoliberal governance and reality TV morality, a fascinatingly ambiguous and unsettled moral stance, a gripping and unpredictable plot, and some thoroughly enjoyable competence porn. I had put off reading it because I was worried that it would be too cynical or dark, but apart from the unnecessary torture scene, it's not at all. Highly recommended. Rating: 9 out of 10

9 January 2022

Russell Coker: Video Conferencing (LCA)

I've just done a tech check for my LCA lecture. I had initially planned to do what I had done before and use my phone for recording audio and video and my PC for other stuff. The problem is that I wanted to get an external microphone going, and plugging in a USB microphone turned off the speaker in the phone (it seemed to direct audio to a non-existent USB audio output). I tried using bluetooth headphones with the USB microphone and that didn't work. Eventually a viable option seemed to be using USB headphones on my PC with the phone for camera and microphone. Then it turned out that my phone (Huawei Mate 10 Pro) didn't support resolutions higher than VGA with Chrome (it didn't have the advanced settings menu to select resolution); this is probably an issue of Android build features. So the best option is to use a webcam on the PC. I was recommended a Logitech C922 but OfficeWorks only has a Logitech C920, which is apparently OK. The free connection test from freeconference.com [1] is good for testing out how your browser works for videoconferencing. It tests each feature separately and is easy to run. After buying the C920 webcam I found that it sometimes worked and sometimes caused a kernel panic like the following (partial panic log included for the benefit of people Googling this Logitech C920 problem):
[95457.805417] BUG: kernel NULL pointer dereference, address: 0000000000000000
[95457.805424] #PF: supervisor read access in kernel mode
[95457.805426] #PF: error_code(0x0000) - not-present page
[95457.805429] PGD 0 P4D 0 
[95457.805431] Oops: 0000 [#1] SMP PTI
[95457.805435] CPU: 2 PID: 75486 Comm: v4l2src0:src Not tainted 5.15.0-2-amd64 #1  Debian 5.15.5-2
[95457.805438] Hardware name: HP ProLiant ML110 Gen9/ProLiant ML110 Gen9, BIOS P99 02/17/2017
[95457.805440] RIP: 0010:usb_ifnum_to_if+0x3a/0x50 [usbcore]
...
[95457.805481] Call Trace:
[95457.805484]  
[95457.805485]  usb_hcd_alloc_bandwidth+0x23d/0x360 [usbcore]
[95457.805507]  usb_set_interface+0x127/0x350 [usbcore]
[95457.805525]  uvc_video_start_transfer+0x19c/0x4f0 [uvcvideo]
[95457.805532]  uvc_video_start_streaming+0x7b/0xd0 [uvcvideo]
[95457.805538]  uvc_start_streaming+0x2d/0xf0 [uvcvideo]
[95457.805543]  vb2_start_streaming+0x63/0x100 [videobuf2_common]
[95457.805550]  vb2_core_streamon+0x54/0xb0 [videobuf2_common]
[95457.805555]  uvc_queue_streamon+0x2a/0x40 [uvcvideo]
[95457.805560]  uvc_ioctl_streamon+0x3a/0x60 [uvcvideo]
[95457.805566]  __video_do_ioctl+0x39b/0x3d0 [videodev]
It turns out that Ubuntu Launchpad bug #1827452 has great information on this problem [2]. Apparently if the device decides it doesn't have enough power then it will reconnect and get a different USB bus device number, and this often happens when the kernel is initialising it. There's a race condition in the kernel code in which the code to initialise the device won't realise that the device has been detached, and will dereference a NULL pointer and then mess up other things in USB device management. The end result for me is that all USB devices become unusable in this situation, commands like lsusb hang, and a regular shutdown/reboot hangs because it can't kill the user session because something is blocked on USB. One of the comments on the Launchpad bug is that a powered USB hub can alleviate the problem while a USB extension cable (which I had been using) can exacerbate it. Officeworks currently advertises only one powered USB hub; it's described as USB 3 but also as maximum speed 480Mbps (USB 2 speed). So basically they are selling a USB 2 hub for 4* the price that USB 2 hubs used to sell for. When debugging this I used the cheese webcam utility program and ran it in a KVM virtual machine. The KVM parameters -device qemu-xhci -usb -device usb-host,hostbus=1,hostaddr=2 (where 1 and 2 are replaced by the Bus and Device numbers from lsusb) allow the USB device to be passed through to the VM. Doing this meant that I didn't have to reboot my PC every time a webcam test failed. For audio I'm using the Sades Wand gaming headset I wrote about previously [3].
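Putting that together, a minimal sketch of the passthrough workflow (the bus/device numbers and the disk image are placeholders):

# Find the webcam's Bus and Device numbers:
lsusb | grep -i logitech

# Pass it straight through to a test VM (here Bus 001, Device 002):
kvm -m 2048 -drive file=test.qcow2,format=qcow2 \
    -device qemu-xhci -usb -device usb-host,hostbus=1,hostaddr=2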

5 January 2022

Reproducible Builds: Reproducible Builds in December 2021

Welcome to the December 2021 report from the Reproducible Builds project! In these reports, we try and summarise what we have been up to over the past month, as well as what else has been occurring in the world of software supply-chain security. As a quick recap of what reproducible builds is trying to address, whilst anyone may inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries. The motivation behind the reproducible builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised. As always, if you would like to contribute to the project, please get in touch with us directly or visit the Contribute page on our website.
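As a rough, hand-waved illustration of that promise (the package name is hypothetical and this is not part of the report), reproducibility can be spot-checked by building the same source twice and comparing checksums of the resulting artifacts:

$ dpkg-buildpackage -us -uc && sha256sum ../hello_1.0-1_amd64.deb
$ # rebuild later, ideally on a different machine, from the same source
$ dpkg-buildpackage -us -uc && sha256sum ../hello_1.0-1_amd64.deb

A reproducible package yields the same hash both times; any difference can then be investigated with tools such as diffoscope.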
Early in December, Julien Voisin blogged about setting up a rebuilderd instance in order to reproduce Tails images. Building on previous work from 2018, Julien has now set up a public-facing instance which is providing build attestations. As Julien dryly notes in his post, "Currently, this isn't really super-useful to anyone, except maybe some Tails developers who want to check that the release manager didn't backdoor the released image." Naturally, we would contend sincerely that this is indeed useful.
The secure/anonymous Tor browser now supports reproducible source releases. According to the project's changelog, version 0.4.7.3-alpha of Tor can now build reproducible tarballs via the make dist-reprod command. This issue was tracked via Tor issue #26299.
Fabian Keil posted a question to our mailing list this month asking how they might analyse differences in images produced with the FreeBSD and ElectroBSD's mkimg and makefs commands:
After rebasing ElectroBSD from FreeBSD stable/11 to stable/12
I recently noticed that the "memstick" images are unfortunately
still not 100% reproducible.
Fabian s original post generated a short back-and-forth with Chris Lamb regarding how diffoscope might be able to support the particular format of images generated by this command set.
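As a sketch of what that might look like in practice (the image file names below are invented), diffoscope can be pointed directly at two such images and asked for an HTML report of whatever differences it manages to decode:

$ diffoscope --html report.html memstick-a.img memstick-b.img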

diffoscope diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 195, 196, 197 and 198 to Debian, and made the following changes:
  • Support showing Ordering differences only within .dsc field values. [ ]
  • Add support for XMLb files. [ ]
  • Also add, for example, /usr/lib/x86_64-linux-gnu to our local binary search path. [ ]
  • Support OCaml versions 4.11, 4.12 and 4.13. [ ]
  • Drop some unnecessary has_same_content_as logging calls. [ ]
  • Replace token variable with an anonymously-named variable instead to remove extra lines. [ ]
  • Don't use the runtime platform's native endianness when unpacking .pyc files. This fixes test failures on big-endian machines. [ ]
Mattia Rizzolo also made a number of changes to diffoscope this month as well, such as:
  • Also recognize GnuCash files as XML. [ ]
  • Support the pgpdump PGP packet visualiser version 0.34. [ ]
  • Ignore the new Lintian tag binary-with-bad-dynamic-table. [ ]
  • Fix the Enhances field in debian/control. [ ]
Finally, Brent Spillner fixed the version detection for Black uncompromising code formatter [ ], Jelle van der Waa added an external tool reference for Arch Linux [ ] and Roland Clobus added support for reporting when the GNU_BUILD_ID field has been modified [ ]. Thank you for your contributions!

Distribution work In Debian this month, 70 reviews of packages were added, 27 were updated and 41 were removed, adding to our database of knowledge about specific issues. A number of new issue types were created as well. Separately, strip-nondeterminism version 1.13.0-1 was uploaded to Debian unstable by Holger Levsen. It included contributions already covered in previous months as well as new ones from Mattia Rizzolo, particularly that the dh_strip_nondeterminism Debian integration interface uses the new get_non_binnmu_date_epoch() utility when available: this is important to ensure that strip-nondeterminism does not break some kinds of binNMUs.
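As background (and as a sketch only; the file name is hypothetical and this invocation is not taken from the report), strip-nondeterminism can also be run by hand on a single build artifact to normalise common sources of non-determinism such as embedded timestamps:

$ strip-nondeterminism build/libexample.jar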
In the world of openSUSE, however, Bernhard M. Wiedemann posted his monthly reproducible builds status report.
In NixOS, work towards the longer-term goal of making the graphical installation image reproducible is ongoing. For example, Artturin made the gnome-desktop package reproducible.

Upstream patches The Reproducible Builds project attempts to fix as many currently-unreproducible packages as possible. In December, we wrote a large number of such patches, including:

Testing framework The Reproducible Builds project runs a significant testing framework at tests.reproducible-builds.org, to check packages and other artifacts for reproducibility. This month, the following changes were made:
  • Holger Levsen:
    • Run the Debian scheduler less often. [ ]
    • Fix the name of the Debian testing suite. [ ]
    • Detect builds that are being rescheduled due to problems with the diffoscope container. [ ]
    • No longer special-case particular machines having a different /boot partition size. [ ]
    • Automatically fix failed apt-daily and apt-daily-upgrade services [ ], failed e2scrub_all.service & user@ systemd units [ ][ ] as well as generic build failures [ ].
    • Simplify a script to powercycle arm64 architecture nodes hosted at/by codethink.co.uk. [ ]
    • Detect if the udd-mirror.debian.net service is down. [ ]
    • Various miscellaneous node maintenance. [ ][ ]
  • Roland Clobus (Debian live image generation):
    • If the latest snapshot is not complete yet, try to use the previous snapshot instead. [ ]
    • Minor: whitespace correction + comment correction. [ ]
    • Use unique folders and reports for each Debian version. [ ]
    • Turn off debugging. [ ]
    • Add a better error description for incorrect/missing arguments. [ ]
    • Report non-reproducible issues in Debian sid images. [ ]
Lastly, Mattia Rizzolo updated the automatic logfile parsing rules in a number of ways (eg. to ignore a warning about the Python setuptools deprecation) [ ][ ] and Vagrant Cascadian adjusted the config for the Squid caching proxy on a node. [ ]

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

31 December 2021

Russell Coker: Links December 2021

Wired magazine has many short documentary films on YouTube; this one about How Photography is Affecting Our Brains is particularly good [1]. Matt Blaze wrote an informative blog post about Faraday cages for phones [2]. It seems that the commercial shielded bags are all pretty good, while doing it yourself with aluminium foil may get similar results or may get much worse results with no obvious difference in the quality of the wrapping. Aluminium foil doesn't protect that well and doesn't protect consistently. A metal biscuit tin performed quite well and consistently, so that's a cheap option for reducing signals. Umair Haque wrote an insightful article about the single word that describes most of the problems the world faces right now [3]. Forbes has an informative article about the early days of the Ford company when they doubled wages; it proves that they didn't do so to enable workers to afford cars but to avoid staff turnover (which is expensive) [4]. Also the Ford company had a fascistic approach to employees, controlling what they were allowed to do in their spare time if they wanted the bonus payment. The wages weren't doubled; there was a bonus payment that would double the salary if the employee was eligible for the bonus. One thing that Forbes gets wrong is the claim that only having higher pay than other companies provided a benefit and that a higher minimum wage wouldn't; the problem with that idea is that a higher minimum wage would discourage people from having multiple jobs and allow more families to not have the mother working (a condition for a man to get the Ford bonus was for his wife to not work). The WSJ has an interesting article about Intel's datacenter for running all the different configurations of CPUs that they have supported over the last 10 years for security tests [5]. My Thinkpad (which is less than 10yo) is vulnerable to one of the SPECTRE family of exploits as Intel hasn't released microcode to fix it; getting fixed microcode out for all the systems from major vendors like Lenovo would be a good idea if they want to improve their security. NPR has an interesting article about the correlation between support for Trump in counties of the US with lack of vaccination and Covid19 deaths [6]. No surprises, but it's good to see the graphs. Cory Doctorow wrote an interesting article on the lack of slack in the current American education system [7]. It's not that bad in Australia but we are unfortunately moving in the American direction. Teen Vogue has an insightful article about the problems with the focus on resilience [8]: while resilience is good, we should make it a higher priority to avoid putting people in situations where they need to be resilient than to encourage resilience.

29 December 2021

Chris Lamb: Favourite books of 2021: Memoir/biography

Just as I did for 2020, I won't publicly disclose exactly how many books I read in 2021, but they evidently provoked enough thoughts that I felt it worth splitting my yearly writeup into separate posts. I will reveal, however, that I got through more books than the previous year, and, like before, I enjoyed the books I read this year even more in comparison as well. How much of this is due to refining my own preferences over time, and how much can be ascribed to feeling less pressure to read particular books? It's impossible to say, and the question is complicated further by the fact I found many of the classics I read well worthy of their entry into the dreaded canon. But enough of the throat-clearing. In today's post I'll be looking at my favourite books filed under memoir and biography, in no particular order. Books that just missed the cut here include: Bernard Crick's celebrated 1980 biography of George Orwell, if nothing else because it was a pleasure to read; Hilary Mantel's exhilaratingly bitter early memoir, Giving up the Ghost (2003); and Patricia Lockwood's hilarious Priestdaddy (2017). I also had a soft spot for Tim Kreider's We Learn Nothing (2012) as well, despite not knowing anything about the author in advance, likely a sign of good writing. The strangest book in this category I read was definitely Michelle Zauner's Crying in H Mart. Based on a highly-recommended 2018 essay in the New Yorker, its rich broth of genuine yearning for a departed mother made my eyebrows raise numerous times when I encountered inadvertent extra details about Zauner's relationships.

Beethoven: A Life in Nine Pieces (2020) Laura Tunbridge Whilst it might immediately present itself as a clickbait conceit, organising an overarching narrative around just nine compositions by Beethoven turns out to be an elegant way of saying something fresh about this grizzled old bear. Some of Beethoven's most famous compositions are naturally included in the nine (eg. the Eroica and the Hammerklavier piano sonata), but the book raises itself above conventional Beethoven fare when it highlights, for instance, his Septet, Op. 20, an early work that is virtually nobody's favourite Beethoven piece today. The insight here is that it was widely popular in its time, played again and again around Vienna for the rest of his life. No doubt many contemporary authors can relate to this inability to escape being artistically haunted by an earlier runaway success. The easiest way to say something interesting about Beethoven in the twenty-first century is to talk about the myth of Beethoven instead. Or, as Tunbridge implies, perhaps that should really be 'Beethoven' in leaden quotation marks, given so much about what we think we know about the man is a quasi-fictional construction. Take Anton Schindler, Beethoven's first biographer and occasional amanuensis, who destroyed and fabricated details about Beethoven's life, casting himself in a favourable light and exaggerating his influence with the composer. Only a few decades later, the idea of a 'heroic' German was to be politically useful as well; the Anglosphere often need reminding that Germany did not exist as a nation-state prior to 1871, so it should be unsurprising to us that the late nineteenth-century saw a determined attempt to create a uniquely 'German' culture ex nihilo. (And the less we say about Immortal Beloved the better, even though I treasure that film.) Nevertheless, Tunbridge cuts through Beethoven's substantial legacy using surgical precision that not only avoids feeling like it is settling a score, but it also does so in a way that is unlikely to completely alienate anyone emotionally dedicated to some already-established idea of the man to bring forth the tediously predictable sentiment that Beethoven has 'gone woke'. With Alex Ross on the cult of Wagner, it seems that books about the 'myth of X' are somewhat in vogue right now. And this pattern within classical music might fit into some broader trend of deconstruction in popular non-fiction too, especially when we consider the numerous contemporary books on the long hangover of the Civil Rights era (Robin DiAngelo's White Fragility, etc.), the multifarious ghosts of Empire (Akala's Natives, Sathnam Sanghera's Empireland, etc.) or even the 'transmogrification' of George Orwell into myth. But regardless of its place in some wider canon, A Life in Nine Pieces is beautifully printed in hardback form (worth acquiring for that very reason alone), and it is one of the rare good books about classical music that can be recommended to both the connoisseur and the layperson alike.

Sea State (2021) Tabitha Lasley In her mid-30s and jerking herself out of a terrible relationship, Tabitha Lasley left London and put all her savings into a six-month lease on a flat within a questionable neighbourhood in Aberdeen, Scotland. She left to make good on a lukewarm idea for a book about oil rigs and the kinds of men who work on them: "I wanted to see what men were like with no women around," she claims. The result is Sea State, a forthright examination of the life of North Sea oil riggers, and an unsparing portrayal of loneliness, masculinity, female desire and the decline of industry in Britain. (It might almost be said that Sea State is an update of a sort to George Orwell's visit to the mines in the North of England.) As bracing as the North Sea air, Sea State spoke to me on multiple levels, but I found it additionally interesting to compare and contrast with Julian Barnes' The Man in the Red Coat (see below). Women writers are rarely thought to be using fiction for higher purposes: it is assumed that, unlike men, whatever women commit to paper is confessional without any hint of artfulness. Indeed, it seems to me that the reaction against the decades-old genre of autofiction only really took hold when it became the domain of millennial women. (By contrast, as a 75-year-old male writer with a firmly established reputation in the literary establishment, Julian Barnes is allowed wide latitude in what he does with his sources and his writing can be imbued with supremely confident airs as a result.) Furthermore, women are rarely allowed metaphor or exaggeration for dramatic effect, and they certainly aren't permitted to emphasise darker parts in order to explore them... hence some of the transgressive gratification of reading Sea State. Sea State is admittedly not a work of autofiction, but the sense that you are reading about an author writing a book is pleasantly unavoidable throughout. It frequently returns to the topic of oil workers who live multiple lives, and Lasley admits to living two lives herself: she may be in love but she's also on assignment, and a lot of the pleasure in this candid and remarkably accessible book lies in the way these states become slowly inseparable.

Twilight of Democracy (2020) Anne Applebaum For the uninitiated, Anne Applebaum is a staff writer for The Atlantic magazine who won a Pulitzer Prize for her 2004 book on the Soviet Gulag system. Her latest book, however, Twilight of Democracy, is part memoir and part political analysis, and discusses democratic decline and the rise of right-wing populism. This, according to Applebaum, displays distinctly authoritarian tendencies, and who am I to disagree? Applebaum does this through three main case studies (Poland, the United Kingdom and the United States), but the book also touches on Hungary as well. The strongest feature of this engaging book is that Applebaum's analysis focuses on the intellectual classes and how they provide significant justification for a descent into authoritarianism. This is always an important point to be remembered, especially as much of the folk understanding of the rise of authoritarian regimes tends to place exaggerated responsibility on the ordinary and everyday citizen: the blame placed on the working-class in the Weimar Republic or the scorn heaped upon the 'white trash' of the contemporary Rust Belt, for example. Applebaum is uniquely poised to discuss these intellectuals because, well, she actually knows a lot of them personally. Or at least, she used to know them. Indeed, the narrative of the book revolves around two parties she hosted, both in the same house in northwest Poland. The first party, on 31 December 1999, was attended by friends from around the Western world, but most of the guests were Poles from the broad anti-communist alliance. They all agreed about democracy, the rule of law and the route to prosperity whilst toasting in the new millennium. (I found it amusing to realise that War and Peace also starts with a party.) But nearly two decades later, many of the attendees have ended up as supporters of the problematic 'Law and Justice' party which currently governs the country. Applebaum would now cross the road to avoid them, and they would do the same to her, let alone behave themselves at a cordial reception. The result of this autobiographical detail is that by personalising the argument, Applebaum avoids the trap of making too much of a high-minded abstract argument for 'democracy', and additionally makes her book compellingly spicy too. Yet the strongest part of this book is also its weakest. By individualising the argument, it often feels that Applebaum is settling a number of personal scores. She might be very well justified in doing this, but at times it feels like the reader has walked in halfway through some personal argument and is being asked to judge who is in the right. Furthermore, Applebaum's account of contemporary British politics sometimes deviates into the cartoonish: nothing was egregiously incorrect in any of her summations, but her explanation of the Brexit referendum result didn't read as completely sound. Nevertheless, this is a lively and entertaining book that can be read with profit, even if you disagree with significant portions of it, and its highly-personal approach makes it a refreshing change from similar contemporary political analysis (eg. David Runciman's How Democracy Ends) which reaches for that more 'objective' line.

The Man in the Red Coat (2019) Julian Barnes As rich as the eponymous red coat that adorns his cover, Julian Barnes' quasi-biography of French gynaecologist Samuel-Jean Pozzi (1846 1918) is at once illuminating, perplexing and downright hilarious. Yet even that short description is rather misleading, for this book evades classification in any number of ways. For instance, it is unclear that, with the biographer's narrative voice so obviously manifest, it is even a biography in the useful sense of the word. After all, doesn't the implied pact between author and reader require the biographer to at least pretend that they are hiding from the reader? Perhaps this is just what happens when an author of very fine fiction turns his hand to non-fiction history, and, if so, it represents a deeper incursion into enemy territory after his 1984 metafictional Flaubert's Parrot. Indeed, upon encountering an intriguing mystery in Pozzi's life crying out for a solution, Barnes baldly turns to the reader, winks and states: "These matters could, of course, be solved in a novel." Well, quite. Perhaps Barnes' broader point is that, given that it's impossible for the author to completely melt into air, why not simply put down your cards and have a bit of fun whilst you're at it? If there's any biography that makes the case for a rambling and lightly polemical treatment, then it is this one. Speaking of having fun, however, two qualities you do not expect in a typical biography are how witty it can be, and how it can have something of the whiff of the thriller about it. A bullet might be mentioned in an early chapter, but given the name and history of Monsieur Pozzi is not widely known, one is unlikely to learn how he lived his final years until the closing chapters. (Or what happened to that turtle.) Humour is primarily incorporated into the book in two main ways: first, by explicitly citing the various wits of the day ("What is a vice? Merely a taste you don't share.", etc.), but perhaps more powerful are the gentle ironies, bon mots and observations in Barnes' entirely unflappable prose style, along with the satire implicit in him writing this moreish pseudo-biography to begin with. The opening page, with its steadfast refusal to even choose where to begin, is somewhat characteristic of Barnes' method, so if you don't enjoy the first few pages then you are unlikely to like the rest. (Indeed, the whole enterprise may be something of an acquired taste. Like Campari.) For me, though, I was left wryly grinning and often couldn't wait to turn the page. Indeed, at times it reminded me of being at a dinner party with an extremely charming guest at the very peak of his form as a wit and raconteur, delighting the party with his rambling yet well-informed discourse on his topic du jour. A significant book, and a book of significance.

12 December 2021

Andrej Shadura: Coffee gear upgrade

Two weeks ago I decided to make myself a combined birthday and Christmas present and upgrade my coffee gear. I got my first espresso machine back in 2013: it was a cheap Saeco Philips Poemia, which made reasonably drinkable coffee, but not being able to make good coffee made me increasingly unhappy about it. However, since it worked, I wasn't motivated enough to change anything until it stopped working. One day the nut holding the shower screen broke, and I couldn't replace it. Having no coffee machine is arguably worse than having a mediocre one, so I started looking for a new one in the budget range. Having spent about two months reading reviews for all sorts of manual espresso machines, I realised the best thing I could probably do for the money I was willing to spend at the time was to buy a second-hand Gaggia Classic. Which is what I did: I paid €260 to a person who apparently decided they prefer to press a button to get their espresso rather than have to prepare it themselves. My first attempts at making espresso weren't very successful, as my hand grinder couldn't produce the right grind for espresso (without using pressurised baskets), so I quickly upgraded it to the €50 De Longhi electric grinder, which was much better for espresso.
Gaggia Classic 2015 and De Longhi grinder
This setup worked for me for nearly 4 years, but over time the Gaggia started malfunctioning. See, this particular Gaggia Classic is the 2015 model, which came out of the overhaul of the design after Gaggia was acquired by Philips. They replaced the boiler, changed the exterior design a bit, and, importantly for me, replaced the fully metal group head with the metal and plastic version typically found in cheap espresso machines like my old Poemia.
The Gaggia Classic 2015 group head
The trouble with this one is that the plastic bit (barely seen in the picture, but it's inserted into the notches on the sides of the group head) gets damaged over time, especially when the portafilter is inserted very tightly. The more damaged it gets, the tighter the portafilter needs to be inserted to avoid leakage, and the more damaged it gets, and so on. At one point, the Gaggia was leaking water every time I was making coffee, affecting the quality of the brew and making a mess in the kitchen. I made a mistake and removed the plastic bit, only to realise it cannot be purchased separately and nobody knows how to put it back once it's been removed; I ended up paying more than a hundred euro to replace the group head as a whole. Once I got the Gaggia back, I became so conscious of the potential damage I could do by over-tightening the portafilter that I decided it was probably time to get a new coffee machine.
Gaggia Classic 2015 vs 2019
The makers of Gaggia listened to the critics and undid the 2015 changes to the Gaggia Classic design, reverting to the previous one and fixing it: they basically merged the fixes that many owners of the old Gaggia had made themselves. The group head is now without any plastic, so I don't have to worry as much about damaging it accidentally. A friend pointed out that my grinder was probably not good enough and recommended a couple of models to me; I checked Kev's Coffee Blog and found a grinder, the Sage Dose Control Pro, which was available on sale in my local shop for a reasonable price. I've also got a portafilter holder to make tamping more comfortable (I used to tamp against the edge of the sink):
The final setup:
Gaggia Classic 2019 and Sage BCG600 Dose Control Pro
What I learnt from this is that the grinder does indeed make a huge difference. I am now able to consistently produce brews I would only occasionally get with the old De Longhi grinder. There is one downside to the new grinder, though. Happy with my new purchase, I went to read a review of the grinder at Alza.sk and thought to myself: lucky me, my grinder is absolutely quiet! And then I realised that the noise in my kitchen was not, in fact, produced by the fridge, but by the grinder. Well, a cheap switched plug solved the issue completely (I wish sockets here each had a switch like they usually do in the UK!).

11 December 2021

Neil Williams: Diversity and gender

As a follow on to a previous blog entry of mine, Free and Open, I feel it worthwhile to do my bit to dismantle the pseudo-science and over simplification in the idea that gender is binary at a biological level.
TL;DR: Science simply does not support binary sexes or binary genders. Truth is a bit more complicated.
There is certainty and there are binary answers in mathematics. Things get less definitive in physics, certainly as soon as quantum is broached. Processes become more of an equilibrium between states in chemistry, never wholly one or the other. Yes, there is the oddity of absolute zero but no experiment has yet achieved that fully. It is accurate to describe physics as a development of applied mathematics and to view chemistry as applied physics. Biology, at the biochemical level, is applied chemistry. The sciences build on each other, "on the shoulders of giants", but at each level, some certainty is lost, some amount of uncertainty is expanded and measurements become probabilities, proportions and percentages. Biology is dependent on biochemistry - chemistry is how a biological change results in a different organism. Physics is how that chemical change occurs - temperature, pressure and physical states are inherent to all chemical changes. Outside laboratory constraints, few chemical reactions, especially in organic chemistry, produce one and only one result from two or more known reagents. In biology, everyone is familiar with genetic mutations but a genetic mutation only happens because a biochemical reaction (hydrogen bonding of nucleobases) does not always produce the expected result. Every cell division, every viral infection, there is a finite probability that a change will occur. It might be a small number but it is never zero and can never be dismissed. This is obvious in the current Covid pandemic - genetic mutations result in new variants. Some variants are inviable, some variants produce no net change in the way that the viral particles infect adjacent cells. Sometimes, a mutation happens that changes everything. These mutations are not mistakes - these are simply changes with undetermined outcomes. Genetic changes are the foundation of biodiversity and variety is what allows lifeforms of all kinds to survive changes in environmental factors and/or changes in prevalent diseases. It is precisely the same in humans, particularly in one of the principle spheres of human life that involves replicating genetic material - the creation of gametes for sexual reproduction. Every single time any DNA is copied, there is a finite chance that a different base will be put in place compared to the original. Copying genetic material is therefore non-binary. Given precisely the same initial conditions, the result is not always predictable and the range of how the results vary from one to another increases with every iteration. Let me stress that - at the molecular level, no genetic operation in any biological lifeform has a truly binary result. Repeat that operation sufficiently often and an unexpected result WILL inevitably occur. It is a mathematical certainty that genetic changes will arise by attempting precisely the same genetic operation enough times. Genetic changes are fundamental to how lifeforms survive changing conditions. Life would likely have died out a long time ago on this planet if every genetic operation was perfect. Diversity is life. Similarity leads to extinction. Viral load is interesting at this point. Someone can be infected with a virus, including coronavirus, by encountering a small number of viral particles. Some viruses, it may be a few hundred, some viruses may need a few thousand particles to infect a vulnerable host. 
But here's the thing, for that host to be at risk of infecting another host, the virus needs the host to produce billions upon billions of copies of the virus by taking over the genetic machinery within a huge number of cells in the host. This, as is accepted with Covid, is before the virus has been copied enough times to produce symptoms in the host. Before those symptoms become serious, billions more copies will be made. The numbers become unimaginable - and that is within a single host, let alone the 265 million (and counting) hosts in the current Covid19 pandemic. It's also no wonder that viral infections cause tiredness, the infection is diverting huge resources to propagating itself - before even considering the activity of the immune system. It is idiocy of the highest order to expect all those copies to be identical. The rise of variants is inevitable - indeed essential - in all spheres of biology. A single viral particle is absolutely no threat of any kind - it must first get inside and then copy the genetic information in a host cell. This is where the complexity lies in the definition of life itself. A virus can be considered a lifeform but it is only able to reproduce using another, more complex, lifeform. In truth, a viral particle does not and cannot mutate. The infected host mutates the virus. The longer it takes that host to clear the infection, the more mutations that host will create and then potentially spread to others. Now apply this to the creation of gametes in humans. With seven billion humans, the amount of copying of genetic material is not as large as the pandemic but it is still easy for everyone to understand that children do not merely combine the DNA of both parents. Changes happen. Human sexual reproduction is not as simple as 1 + 1 = 2. Sometimes, the copying of the genetic material produces an unexpected result. Sexual reproduction itself is non-binary. Sexual reproduction is not easy or simple for lifeforms to adopt - the diversity which results from the non-binary operations are exactly why so many lifeforms invest so much energy in reproducing in this way. Whilst many genetic changes in humans will be benign or beneficial, I d like to take an example of a genetic disorder that results from the non-binary nature of sex. Humans can be born with the XY phenotype - i.e. at a genetic level, the individual has the same combination of chromosomes as another XY individual but there are changes within the genes in those chromosomes. We accept this, some children of blonde parents do not have blonde hair, etc. There are also genetic changes where an XY phenotype is not binary. Some people, who at a genetic level would be almost identical to another person who is genetically male, have a genetic mutation which makes it impossible for the cells of that individual to respond to androgens (testosterone). (See Androgen insensitivity syndrome). Genetically, that individual has an X and a Y chromosome, just like many other individuals. However, due to a change in how the genes on those chromosomes were copied, that individual is biologically incapable of constructing the secondary sexual characteristics of a male. At a genetic level, the individual has the XY phenotype of a male. At the physical level, the individual has all the sexual characteristics of a female and none of the sexual characteristics of a male. The gender of that individual is not binary. 
Treatment is centred on supporting the individual and minimising some risks from the inactive genes on the Y chromosome. Human sexual reproduction is non-binary. The results of any sexual reproduction in humans will not always produce the binary option of male or female. It is a lie to claim that human gender is binary. The science is in plain view and cannot be ignored. Identifying as non-binary is not a "cop out" - it can be a biological, genetic, scientific fact. Human sexuality and gender are malleable. Where genetic changes result in symptoms, these can be ameliorated by treatment with human sex hormones, like oestrogen and testosterone. There are valid medical uses for anabolic steroids and hormone replacement therapies to help individuals who, at a genetic level, have non-binary gender. These treatments can help align the physical outer signs with the personality and identity of the individual, whether with or without surgery. It is unacceptable to abandon such people to suffer life long discrimination and harassment by imposing a binary definition that has no basis in science. When a human being has an XY phenotype, that human being is not necessarily male. That individual will be on a spectrum from female (left unaffected by sex hormones in the womb, the foetus will be female, even with an X and a Y chromosome), to various degrees of male. So, at a genetic, biological level, it is a scientific fact that human beings do not have binary gender. There is no evidence that this is new to the modern era, there is no scientific basis for thinking that copying of genetic material was somehow perfectly reliable in earlier history, or that such mutations are specific to homo sapiens. Changes in genetic material provide the diversity to fight infections and adapt to changing environmental factors. Species have and will continue to go extinct if this diversity is absent. With that out of the way, it is no longer a stretch to encompass other aspects of human non-binary genders beyond the known genetic syndromes based on changes in the XY phenotype. Science has not uncovered all of the ways that genes affect personality, behaviour, or identity. How other, less studied, genetic changes affect the much more subtle human facets, especially anything to do with consciousness, identity, personality, sexuality and behaviour, is guesswork. All of these facets can and likely are being affected by genetic factors as well as environmental factors in an endless range of permutations. Personality traits are a beautiful and largely unknowable blend of genes and environment. Genetic information has a finite probability of changes at each and every iteration. Environmental factors are more akin to chaos theory. The idea that the results will fit into binary constructs is laughable. Human society puts huge emphasis on societal norms. Individuals who do not fit into those norms suffer discrimination. The norms themselves have evolved over time as a response to various influences on human civilisation but most are not based on science. It is up to all humans in that society to call out discrimination, to call for changes in the accepted norms and support those who are marginalised. It is a precarious balance, one that humans rarely get right, but it must be based on an acceptance that variation is the natural state. Artificial constraints, like binary genders, must be dismantled because human beings and human sexual reproduction are not binary. To those who think, "well it is for 99%", think again about Covid. 
99% (or closer to 98%) of infected humans recover without notable after effects. That has still crippled the nations of the globe and humbled all those who tried to deny it. Five million human beings are dead because "most infected people recover". Just because something only affects a proportion of human beings does not invalidate the suffering of those humans and the discrimination that those humans will face. Societal norms are not necessarily correct. Religious and other influences typically obscure and ignore scientific fact and undermine human kindness. The scientific truth of life on this planet is that gender is not binary. The more complex the lifeform, the more factors will affect where on the spectrum any one individual will appear. Just because we do not yet fully understand how genes affect human personality and sexuality, does not invalidate the science that variation is the natural order. My previous blog about diversity is not just about male vs female, one nationality vs another, one ethnicity compared to another. Diversity is diverse. Diversity requires accepting that every facet of humanity is subject to variation. That leads to tension at times, it is inevitable. Tension against societal norms, tension against discrimination, tension around those individuals who would abuse the tolerance of others for their own gratification or from their own ignorance. None of us are perfect, none of us have any of this fully sorted and all of us will make mistakes. Personally, I try to respect those around me. I will use whatever pronouns and other conventions that the person requests, from their perspective and not mine. To do otherwise is to deny the natural order and to deny the science. Celebrate all diversity, it is the very stuff of life. The discussions around (typically female) bathroom facilities often miss the point. The concern is not about individuals who describe themselves as non-binary. The concern is about individuals who are fully certain of their own sexuality and who act as sexual predators for their own gratification. These people are acting out a lie for their own ends. The problem people are the predators, so stop blaming the victims who are just as at risk as anyone else who identifies as female. Maybe the best people to spot such predators are those who are non-binary, who have had to pretend to fit into societal norms. Just as travel can be a good antidote to racism, openness and discussion can be a tool to undermine the lies of sexual predators and reassure those who are justifiably fearful. There can never be a biological binary test of gender, there can never be any scientific justification for binary division of facilities. Humanity itself is not binary, even life itself has blurry borders around comas, suspended animation and locked-in syndrome. Legal definitions of human death vary around the world. The only common thread I have ever found is: Be kind to each other. If you find anything above objectionable, then I can only suggest that you reconsider the science and learn to be kind to your fellow humans. None of us are getting out of this alive. I Think You ll Find It s a Bit More Complicated Than That - Ben Goldacre ISBN 978-0-00-750514-2 https://www.amazon.co.uk/dp/B00HATQA8K/ https://en.wikipedia.org/wiki/Androgen_insensitivity_syndrome https://www.bbc.co.uk/news/world-51235105 https://en.wikipedia.org/wiki/Nucleobase My degree is in pharmaceutical sciences and I practised community and hospital pharmacy for 20 years before moving into programming. 
I have direct experience of supporting people who were prescribed hormones to transition their physical characteristics to match their personal identity. I had a Christian upbringing but my work showed me that those religious norms were incompatible with being kind to others, so I rejected religion and I now consider myself a secular humanist.

6 December 2021

Matthias Klumpp: New things in AppStream 0.15

On the road to AppStream 1.0, a lot of items from the long todo list have been done so far; only one major feature remains, external release descriptions, which is a tricky one to implement and specify. For AppStream 1.0 it needs to be present or be rejected though, as it would be a major change in how release data is handled in AppStream. Besides 1.0 preparation work, the recent 0.15 release and the releases before it come with their very own large set of changes that are worth a look and may be interesting for your application to support. But first, for a change that affects the implementation and not the XML format: 1. Completely rewritten caching code Keeping all AppStream data in memory is expensive, especially if the data is huge (as on Debian and Ubuntu with their large repositories generated from desktop-entry files as well) and if processes using AppStream are long-running. The latter is more and more the case: not only does GNOME Software run in the background, KDE uses AppStream in KRunner and Phosh will use it too for reading form factor information. Therefore, AppStream via libappstream provides an on-disk cache that is memory-mapped, so data is only consuming RAM if we are actually doing anything with it. Previously, AppStream used an LMDB-based cache in the background, with indices for fulltext search and other common search operations. This was a very fast solution, but it also came with limitations: LMDB's maximum key size of 511 bytes became a problem quite often, adjusting the maximum database size (since it has to be set at opening time) was annoyingly tricky, and building dedicated indices for each search operation was very inflexible. In addition to that, the caching code was changed multiple times in the past to allow system-wide metadata to be cached per-user, as some distributions didn't (want to) build a system-wide cache and therefore ran into performance issues when XML was parsed repeatedly for generation of a temporary cache. In addition to all that, the cache was designed around the concept of "one cache for data from all sources", which meant that we had to rebuild it entirely if just a small aspect changed, like a MetaInfo file being added to /usr/share/metainfo, which was very inefficient. To shorten a long story, the old caching code was rewritten with the new concepts of caches not necessarily being system-wide and caches existing for more fine-grained groups of files in mind. The new caching code uses Richard Hughes' excellent libxmlb internally for memory-mapped data storage. Unlike LMDB, libxmlb knows about the XML document model, so queries can be much more powerful and we do not need to build indices manually. The library is also already used by GNOME Software and fwupd for parsing of (refined) AppStream metadata, so it works quite well for that use case. As a result, search queries via libappstream are now a bit slower (it very much depends on the query, roughly 20% on average), but can be much more powerful. The caching code is a lot more robust, which should speed up startup time of applications. And in addition to all of that, the AsPool class has gained a flag to allow it to monitor AppStream source data for changes and refresh the cache fully automatically and transparently in the background. All software written against the previous version of the libappstream library should continue to work with the new caching code, but to make use of some of the new features, software using it may need adjustments.
A lot of methods have also been deprecated now. 2. Experimental compose support Compiling MetaInfo and other metadata into AppStream collection metadata, extracting icons, language information, refining data and caching media is an involved process. The appstream-generator tool does this very well for data from Linux distribution sources, but the tool is also pretty heavyweight, with lots of knobs to adjust, an underlying database and a complex algorithm for icon extraction. Embedding it into other tools via anything else but its command-line API is also not easy (due to D's GC initialization, and because it was never written with that feature in mind). Sometimes a simpler tool is all you need, so the libappstream-compose library as well as appstreamcli compose are being developed at the moment. The library contains building blocks for developing a tool like appstream-generator, while the cli tool allows you to simply extract metadata from any directory tree, which can be used by e.g. Flatpak. For this to work well, a lot of appstream-generator's D code is being translated into plain C, so the implementation stays identical but the language changes. Ultimately, the generator tool will use libappstream-compose for any general data refinement, and only implement things necessary to extract data from the archive of distributions. New applications (e.g. for new bundling systems and other purposes) can then use the same building blocks to implement new data generators similar to appstream-generator with ease, sharing much of the code that would be identical between implementations anyway. 3. Supporting user input controls Want to advertise that your application supports touch input? Keyboard input? Has support for graphics tablets? Gamepads? Sure, nothing is easier than that with the new control relation item and supports relation kind (since 0.12.11 / 0.15.0, details):
<supports>
  <control>pointing</control>
  <control>keyboard</control>
  <control>touch</control>
  <control>tablet</control>
</supports>
4. Defining minimum display size requirements Some applications are unusable below a certain window size, so you do not want to display them in a software center that is running on a device with a small screen, like a phone. In order to encode this information in a flexible way, AppStream now contains a display_length relation item to require or recommend a minimum (or maximum) display size that the described GUI application can work with. For example:
<requires>
  <display_length compare="ge">360</display_length>
</requires>
This will make the application require a display length greater than or equal to 360 logical pixels. A logical pixel (also device independent pixel) is the amount of pixels that the application can draw in one direction. Since screens, especially phone screens but also screens on a desktop, can be rotated, the display_length value will be checked against the longest edge of a display by default (by explicitly specifying the shorter edge, this can be changed). This feature is available since 0.13.0, details. See also Tobias Bernard's blog entry on this topic. 5. Tags This is a feature that was originally requested for the LVFS/fwupd, but one of the great things about AppStream is that we can take very project-specific ideas and generalize them so something comes out of them that is useful for many. The new tags tag allows people to tag components with an arbitrary namespaced string. This can be useful for project-internal organization of applications, as well as to convey certain additional properties to a software center, e.g. an application could mark itself as featured in a specific software center only. Metadata generators may also add their own tags to components to improve organization. AppStream gives no recommendations as to how these tags are to be interpreted except for them being a strictly optional feature. So any meaning is something clients and metadata authors need to negotiate. It therefore is a more specialized use case of the already existing custom tag, and I expect it to be primarily useful within larger organizations that produce a lot of software components that need sorting. For example:
<tags>
  <tag namespace="lvfs">vendor-2021q1</tag>
  <tag namespace="plasma">featured</tag>
</tags>
This feature is available since 0.15.0, details. 6. MetaInfo Creator changes The MetaInfo Creator (source) tool is a very simple web application that provides you with a form to fill out and will then generate MetaInfo XML to add to your project after you have answered all of its questions. It is an easy way for developers to add the required metadata without having to read the specification or any guides at all. Recently, I added support for the new control and display_length tags, resolved a few minor issues and also added a button to instantly copy the generated output to the clipboard so people can paste it into their project. If you want to create a new MetaInfo file, this tool is the best way to do it! The creator tool will also not transfer any data out of your web browser; it is strictly a client-side application. And that is about it for the most notable changes in AppStream land! Of course there is a lot more: additional tags for the LVFS and content rating have been added, lots of bugs have been squashed, the documentation has been refined a lot and the library has gained a lot of new API to make building software centers easier. Still, there is a lot to do and quite a few open feature requests too. Onwards to 1.0!
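As a small usage note (the file name here is just an example, not from the post), a freshly generated MetaInfo file can be sanity-checked with the validator that ships alongside the library before it is committed to a project:

$ appstreamcli validate org.example.MyApp.metainfo.xml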

Paul Tagliamonte: Proxying Ethernet Frames to PACKRAT (Part 5/5)

This post is part of a series called "PACKRAT". If this is the first post you've found, it'd be worth reading the intro post first and then looking over all posts in the series.
In the last post, we left off at being able to send and receive PACKRAT frames to and from devices. Since we can transport IPv4 packets over the network, let's go ahead and see if we can read/write Ethernet frames from a Linux network interface, and on the backend, read and write PACKRAT frames over the air. This has the benefit of continuing to allow Linux userspace tools to work (like cURL, as we'll try!), which means we don't have to do a lot of work to implement higher level protocols or tactics to get a connection established over the link. Given that this post is less RF and more Linuxy, I'm going to include more code snippets than in prior posts, and those snippets are closer to runnable Go, but still not complete examples. There's also a lot of different ways to do this; I've just picked the easiest one for me to implement and debug given my existing tooling. For you, you may find another approach easier to implement! Again, deviation here is very welcome, and since this segment is the least RF centric post in the series, the pace and tone is going to feel different. If you feel lost here, that's OK. This isn't the most important part of the series, and is mostly here to give a concrete ending to the story arc. Any way you want to finish your own journey is the best way for you to finish it!

Implement Ethernet conversion code This assumes an importable package with a Frame struct, which we can use to convert a Frame to/from Ethernet. Given that the PACKRAT frame has a field that Ethernet doesn't (namely, Callsign), that will need to be explicitly passed in when turning an Ethernet frame into a PACKRAT Frame.
...
// ToPackrat will create a packrat frame from an Ethernet frame.
func ToPackrat(callsign [8]byte, frame *ethernet.Frame) (*packrat.Frame, error) {
	var frameType packrat.FrameType
	switch frame.EtherType {
	case ethernet.EtherTypeIPv4:
		frameType = packrat.FrameTypeIPv4
	default:
		return nil, fmt.Errorf("ethernet: unsupported ethernet type %x", frame.EtherType)
	}
	return &packrat.Frame{
		Destination: frame.Destination,
		Source:      frame.Source,
		Type:        frameType,
		Callsign:    callsign,
		Payload:     frame.Payload,
	}, nil
}

// FromPackrat will create an Ethernet frame from a Packrat frame.
func FromPackrat(frame *packrat.Frame) (*ethernet.Frame, error) {
	var etherType ethernet.EtherType
	switch frame.Type {
	case packrat.FrameTypeRaw:
		return nil, fmt.Errorf("ethernet: unsupported packrat type 'raw'")
	case packrat.FrameTypeIPv4:
		etherType = ethernet.EtherTypeIPv4
	default:
		return nil, fmt.Errorf("ethernet: unknown packrat type %x", frame.Type)
	}
	// We lose the Callsign here, which is sad.
	return &ethernet.Frame{
		Destination: frame.Destination,
		Source:      frame.Source,
		EtherType:   etherType,
		Payload:     frame.Payload,
	}, nil
}
Our helpers, ToPackrat and FromPackrat, can now be used to transmogrify PACKRAT into Ethernet, or Ethernet into PACKRAT. Let's put them to use!

Implement a TAP interface On Linux, the networking stack can be exposed to userland using TUN or TAP interfaces. TUN devices allow a userspace program to read and write data at the Layer 3 / IP layer. TAP devices allow a userspace program to read and write data at the Layer 2 Data Link / Ethernet layer. Writing data at Layer 2 is what we want to do, since we're looking to transform our Layer 2 into Ethernet's Layer 2 Frames. Our first job here is to create the actual TAP interface, set the MAC address, and set the IP range to our pre-coordinated IP range.
...
import (
"net"
"github.com/mdlayher/ethernet"
"github.com/songgao/water"
"github.com/vishvananda/netlink"
)
...
config := water.Config{DeviceType: water.TAP}
config.Name = "rat0"
iface, err := water.New(config)
...
netIface, err := netlink.LinkByName("rat0")
...
// Pick a range here that works for you!
//
// For my local network, I'm using some IPs
// that AMPR (ampr.org) was nice enough to
// allocate to me for ham radio use. Thanks,
// AMPR!
//
// Let's just use 10.* here, though.
//
ip, cidr, err := net.ParseCIDR("10.0.0.1/24")
...
cidr.IP = ip
err = netlink.AddrAdd(netIface, &netlink.Addr{
	IPNet: cidr,
	Peer:  cidr,
})
...
// Add all our neighbors to the ARP table
for _, neighbor := range neighbors {
	netlink.NeighAdd(&netlink.Neigh{
		LinkIndex:    netIface.Attrs().Index,
		Type:         netlink.FAMILY_V4,
		State:        netlink.NUD_PERMANENT,
		IP:           neighbor.IP,
		HardwareAddr: neighbor.MAC,
	})
}

// Pick a MAC that is globally unique here, this is
// just used as an example!
addr, err := net.ParseMAC("FA:DE:DC:AB:LE:01")
...
netlink.LinkSetHardwareAddr(netIface, addr)
...
err = netlink.LinkSetUp(netIface)

var frame = &ethernet.Frame{}
var buf = make([]byte, 1500)
for {
	n, err := iface.Read(buf)
	...
	err = frame.UnmarshalBinary(buf[:n])
	...
	// process frame here (to come)
}
...
Now that our network stack can resolve an IP to a MAC Address (via ip neigh according to our pre-defined neighbors), and send that IP packet to our daemon, it's now on us to send IPv4 data over the airwaves. Here, we're going to take packets coming in from our TAP interface, and marshal the Ethernet frame into a PACKRAT Frame and transmit it. As with the rest of the RF code, we'll leave that up to the implementer, of course, using what was built during Part 2: Transmitting BPSK symbols and Part 4: Framing data.
...
for {
	// continued from above

	n, err := iface.Read(buf)
	...
	err = frame.UnmarshalBinary(buf[:n])
	...
	switch frame.EtherType {
	case 0x0800:
		// ipv4 packet
		pack, err := ToPackrat(
			// Add my callsign to all Frames, for now
			[8]byte{'K', '3', 'X', 'E', 'C'},
			frame,
		)
		...
		err = transmitPacket(pack)
		...
	}
}
...
Now that we have transmitting covered, let's go ahead and handle the receive path here. We're going to listen on frequency using the code built in Part 3: Receiving BPSK symbols and Part 4: Framing data. The Frames we decode from the airwaves are expected to come back from the call packratReader.Next in the code below, and the exact way that works is up to the implementer.
...
for {
	// pull the next packrat frame from
	// the symbol stream as we did in the
	// last post
	packet, err := packratReader.Next()
	...
	// check for CRC errors and drop invalid
	// packets
	err = packet.Check()
	...
	if bytes.Equal(packet.Source, addr) {
		// if we've heard ourselves transmitting,
		// let's avoid looping back
		continue
	}
	// create an ethernet frame
	frame, err := FromPackrat(packet)
	...
	buf, err := frame.MarshalBinary()
	...
	// and inject it into the tap
	_, err = iface.Write(buf)
	...
}
...
Phew. Right. Now we should be able to listen for PACKRAT frames on the air and inject them into our TAP interface.

Putting it all Together After all this work (weeks of work!) we can finally get around to putting some real packets over the air. For me, this was an incredibly satisfying milestone, and tied together months of learning! I was able to start up a UDP server on a remote machine with an RTL-SDR dongle attached to it, listening on the TAP interface's host IP with my defined MAC address, and send UDP packets to that server via PACKRAT using my laptop, /dev/udp and an Ettus B210, sending packets into the TAP interface. Now that UDP was working, I was able to get TCP to work using two PlutoSDRs, which allowed me to run the cURL command I pasted in the first post (both simultaneously listening and transmitting on behalf of my TAP interface). It's my hope that someone out there will be inspired to implement their own Layer 1 and Layer 2 as a learning exercise, and gets the same sense of gratification that I did! If you're reading this, and at a point where you've been able to send IP traffic over your own Layer 1 / Layer 2, please get in touch! I'd be thrilled to hear all about it. I'd love to link to any posts or examples you publish here!

4 December 2021

Jonathan Dowland: Haskell mortgage calculator

A few months ago I was trying to compare two mortgage offers, and ended up writing a small mortgage calculator to help me. Both mortgages were fixed-term for the same time period (5 years). One of the mortgages had a lower rate than the other, but much higher arrangement fees. A broker recommended the mortgage with the higher rate but lower fee, on an affordability basis for the fixed term: overall, we would spend less money within the fixed term on that deal than the other. (I thought) this left one bit of information missing: what remaining balance would there be at the end of the term? The mortgages I want to model are defined in terms of a monthly repayment figure and an annual interest rate for the fixed period. I think interest is usually recalculated on a daily basis, so I convert the annual rate down to a daily rate. Repayments only happen once a month. Months are not all the same size. Using mod 30 on the 'day' approximates a monthly payment. Over 5 years, there would be 60 months, meaning 60 repayments. (I'm ignoring leap years)
 > length . filter id . take (5*365) $ [ x `mod` 30 == 0 | x <- [1..]]
60
Here's what I came up with. I was a little concerned the repayment approximation was too far out so I compared the output with a more precise (but boring) spreadsheet and they agreed to within an acceptable tolerance. The numbers that follow are all made up to illustrate the function and don't reflect my actual mortgage. :)
borrowed = 1000000 -- day 0 amount outstanding
aer   = 0.89
repay = 1000
der   = aer / 365
owed n | n == 0           = borrowed
       | n `mod` 30 == 0  = last + interest - repay
       | otherwise        = last + interest
    where
        last     = owed (n - 1)
        interest = last * der
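With this definition, the number I was after at the start (the balance left at the end of the 5-year fixed term) is just owed (5*365).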

Paul Tagliamonte: Receiving BPSK symbols (Part 3/5)

This post is part of a series called "PACKRAT". If this is the first post you've found, it'd be worth reading the intro post first and then looking over all posts in the series.
In the last post, we worked through how to generate a BPSK signal, and hopefully transmit it using one of our SDRs. Let's take that and move on to Receiving BPSK and turning that back into symbols! Demodulating BPSK data is a bit more tricky than transmitting BPSK data, mostly due to tedious facts of life such as space, time, and hardware built with compromises because not doing that makes the problem impossible. Unfortunately, it's now our job to work within our imperfect world to recover perfect data. We need to handle the addition of noise, differences in frequency, clock synchronization and interference in order to recover our information. This makes life a lot harder than when we transmit information, and as a result, a lot more complex.

Coarse Sync Our starting point for this section will be working from a capture of a number of generated PACKRAT packets as heard by a PlutoSDR (xz compressed interleaved int16, 2,621,440 samples per second). Every SDR has its own oscillator, which eventually controls a number of different components of an SDR, such as the IF (if it's a superheterodyne architecture) and the sampling rate. Drift in oscillators leads to drifts in frequency, such that what one SDR may think is 100MHz may be 100.01MHz for another radio. Even if the radios were perfectly in sync, other artifacts such as doppler time dilation due to motion can cause the frequency to appear higher or lower than it was transmitted. All this is a long way of saying: we need to determine when we see a strong signal that's close-ish to our tuned frequency, and take steps to roughly correct it to our center frequency (on the order of 100s of Hz to kHz) in order to acquire a phase lock on the signal and attempt to decode the information contained within. The easiest way of detecting the loudest signal of interest is to use an FFT. Getting into how FFTs work is out of scope of this post, so if this is the first time you're seeing mention of an FFT, it may be a good place to take a quick break to learn a bit about the time domain (which is what the IQ data we've been working with so far is), the frequency domain, and how the FFT and iFFT operations can convert between them. Lastly, because FFTs average power over the window, if the transmitted wave has the same number of in-phase and inverted-phase symbols, the power would wind up averaging to zero. This is not helpful, so I took a tip from Dr. Marc Lichtman's PySDR project and used complex squaring to drive our BPSK signal into a single detectable carrier by squaring the IQ data. Because points are on the unit circle at tau/2 angles apart (specifically, tau/(2^1) for BPSK, tau/(2^2) for QPSK), and given that squaring has the effect of doubling the angle, and angles are all mod tau, this will drive our wave comprised of two opposite phases back into a continuous wave, effectively removing our BPSK modulation and making it much easier to detect in the frequency domain. Thanks to Tom Bereknyei for helping me with that!
...
var iq []complex
var freq []complex
for i := range iq {
	iq[i] = iq[i] * iq[i]
}

// perform an fft, computing the frequency
// domain vector in `freq` given the iq data
// contained in `iq`.
fft(iq, freq)

// get the array index of the max value in the
// freq array given the magnitude value of the
// complex numbers.
var binIdx = max(abs(freq))
...
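If you want to see that pseudocode as something that actually compiles, here is a minimal sketch of the squaring-and-peak-search step using Gonum's fourier package; the helper name coarseDetectBin and the choice of Gonum are mine, not part of PACKRAT.

import (
	"math/cmplx"

	"gonum.org/v1/gonum/dsp/fourier"
)

// coarseDetectBin squares the IQ samples to strip the BPSK
// modulation, runs a complex FFT, and returns the index of the
// bin with the largest magnitude (zero-first layout).
func coarseDetectBin(iq []complex128) int {
	sq := make([]complex128, len(iq))
	for i, s := range iq {
		sq[i] = s * s // squaring doubles the angle, collapsing both phases
	}
	freq := fourier.NewCmplxFFT(len(sq)).Coefficients(nil, sq)
	binIdx, maxMag := 0, 0.0
	for i, c := range freq {
		if m := cmplx.Abs(c); m > maxMag {
			binIdx, maxMag = i, m
		}
	}
	return binIdx
}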
Now, most FFT operations will lay the frequency domain data out a bit differently than you may expect (as a human), which is that the 0th element of the FFT is 0Hz, not the most negative frequency (like in a waterfall). Generally speaking, zero-first is the most common frequency domain layout (and generally the safest assumption if there's no other documentation on FFT layout). Negative-first is usually used when the FFT is being rendered for human consumption, such as a waterfall plot. Given that we now know which FFT bin (which is to say, which index into the FFT array) contains the strongest signal, we'll go ahead and figure out what frequency that bin relates to. In the time domain, each complex number is the next time instant. In the frequency domain, each bin is a discrete frequency or, more specifically, a frequency range. The bandwidth of the bin is a function of the sampling rate and the number of time domain samples used to do the FFT operation. As you increase the amount of time used to perform the FFT, the more precise the FFT measurement of frequency can be, but it will cover the same bandwidth, as defined by the sampling rate.
...
var sampleRate = 2621440

// bandwidth is the range of frequencies
// contained inside a single FFT bin,
// measured in Hz.
var bandwidth = sampleRate / len(freq)
...
Now that we know we have a zero-first layout and the bin bandwidth, we can compute what our frequency offset is in Hz.
...
// binIdx is the index into the freq slice
// containing the frequency domain data.
var binIdx = 0

// binFreq is the frequency of the bin
// denoted by binIdx
var binFreq = 0

if binIdx > len(freq)/2 {
	// This branch covers the case where the bin
	// is past the middle point - which is to say,
	// if this is a negative frequency.
	binFreq = bandwidth * (binIdx - len(freq))
} else {
	// This branch covers the case where the bin
	// is in the first half of the frequency array,
	// which is to say - if this frequency is
	// a positive frequency.
	binFreq = bandwidth * binIdx
}
...
However, since we squared the IQ data, we're off in frequency by twice the actual frequency: if we are reading 12kHz, the bin is actually 6kHz. We need to adjust for that before continuing with processing.
...
var binFreq = 0
...
// [compute the binFreq as above]
 ...
// Adjust for the squaring of our IQ data
 binFreq = binFreq / 2
...
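Pulled together, the bin-to-frequency conversion (including the divide-by-two to undo the squaring) fits in one small helper; this is my own consolidation of the snippets above, with binToHz as a made-up name, assuming float64 arithmetic.

// binToHz converts an FFT bin index (zero-first layout) into a
// frequency offset in Hz, then halves it to undo the earlier
// squaring of the IQ data.
func binToHz(binIdx, nBins int, sampleRate float64) float64 {
	bandwidth := sampleRate / float64(nBins)
	var binFreq float64
	if binIdx > nBins/2 {
		// bins past the midpoint are negative frequencies
		binFreq = bandwidth * float64(binIdx-nBins)
	} else {
		binFreq = bandwidth * float64(binIdx)
	}
	return binFreq / 2
}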
Finally, we need to shift the frequency by the inverse of the binFreq by generating a carrier wave at a specific frequency and rotating every sample by our carrier wave so that a wave at the same frequency will slow down (or stand still!) as it approaches 0Hz relative to the carrier wave.
var tau = pi * 2

// ts tracks where in time we are (basically: phase)
var ts float

// inc is the amount we step forward in time (seconds)
// each sample.
var inc float = (1 / sampleRate)

// amount to shift frequencies, in Hz,
// in this case, shift +12 kHz to 0Hz
var shift = -12000

for i := range iq {
	ts += inc
	if ts > tau {
		// not actually needed, but keeps ts within
		// 0 to 2*pi (since it is modulus 2*pi anyway)
		ts -= tau
	}
	// Here, we're going to create a carrier wave
	// at the provided frequency (in this case,
	// -12kHz)
	cwIq = complex(cos(tau*shift*ts), sin(tau*shift*ts))
	iq[i] = iq[i] * cwIq
}
Now we've got the strong signal we've observed (which may or may not be our BPSK modulated signal!) close enough to 0Hz that we ought to be able to Phase Lock the signal in order to begin demodulating the signal.
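Before moving on, here is a compact, self-contained take on that mixing step (my own sketch, not the post's code): cmplx.Exp can generate the carrier directly from the sample index, which avoids tracking ts by hand.

import (
	"math"
	"math/cmplx"
)

// shiftFrequency multiplies the IQ stream by a complex carrier at
// shiftHz, moving the whole spectrum by that amount; with shiftHz
// set to -12000, a signal sitting at +12kHz lands at 0Hz.
func shiftFrequency(iq []complex128, shiftHz, sampleRate float64) {
	tau := 2 * math.Pi
	for i := range iq {
		ts := float64(i) / sampleRate
		iq[i] *= cmplx.Exp(complex(0, tau*shiftHz*ts))
	}
}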

Filter After we're roughly in the neighborhood of a few kHz, we can now take some steps to cut out any high frequency components (both positive and negative high frequencies). The normal way to do this would be to do an FFT, apply the filter in the frequency domain, and then do an iFFT to turn it back into time series data. This will work in loads of cases, but I've found it to be incredibly tricky to get right when doing PSK. As such, I've opted to do this the old fashioned way in the time domain. I've again opted to go simple rather than correct, and haven't used nearly any of the advanced-level trickery I've come across, for fear of using it wrong. As a result, our process here is going to be generating a sinc filter by computing a number of taps, and applying that in the time domain directly on the IQ stream.
// Generate sinc taps

func sinc(x float) float {
	if x == 0 {
		return 1
	}
	var v = pi * x
	return sin(v) / v
}
...
var dst []float
var length = float(len(dst))
if int(length)%2 == 0 {
	length++
}
for j := range dst {
	i := float(j)
	dst[j] = sinc(2 * cutoff * (i - (length-1)/2))
}
...
then we apply it in the time domain
...
// Apply sinc taps to an IQ stream

var iq []complex
// taps as created in `dst` above
var taps []float
var delay = make([]complex, len(taps))
for i := range iq {
	// let's shift the next sample into
	// the delay buffer
	copy(delay[1:], delay)
	delay[0] = iq[i]
	var phasor complex
	for j := range delay {
		// for each sample in the buffer, let's
		// weight them by the tap values, and
		// create a new complex number based on
		// filtering the real and imag values.
		phasor += complex(
			taps[j] * real(delay[j]),
			taps[j] * imag(delay[j]),
		)
	}
	// now that we've run this sample
	// through the filter, we can go ahead
	// and scale it back (since we multiply
	// above) and drop it back into the iq
	// buffer.
	iq[i] = complex(
		real(phasor) / len(taps),
		imag(phasor) / len(taps),
	)
}
...
After running IQ samples through the taps and back out, we'll have a signal that's been filtered to the shape of our designed sinc filter, which will cut out captured high frequency components (both positive and negative). Astute observers will note that we're using the real (float) valued taps on both the real and imaginary values independently. I'm sure there's a way to apply taps using complex numbers, but it was a bit confusing to work through without being positive of the outcome. I may revisit this in the future!
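For reference, here is the same tap generation and time-domain filtering collapsed into two small, self-contained Go functions; the names and the float64/complex128 types are my choices, and the end-of-loop scaling mirrors the snippet above rather than any textbook normalization.

import "math"

// lowPassTaps builds a plain (unwindowed) sinc low-pass filter.
// cutoff is a fraction of the sample rate, between 0 and 0.5.
func lowPassTaps(numTaps int, cutoff float64) []float64 {
	if numTaps%2 == 0 {
		numTaps++ // keep the tap count odd so the filter is symmetric
	}
	sinc := func(x float64) float64 {
		if x == 0 {
			return 1
		}
		return math.Sin(math.Pi*x) / (math.Pi * x)
	}
	taps := make([]float64, numTaps)
	for i := range taps {
		taps[i] = sinc(2 * cutoff * (float64(i) - float64(numTaps-1)/2))
	}
	return taps
}

// applyFIR runs the taps over the IQ stream in the time domain,
// filtering the real and imaginary parts independently.
func applyFIR(iq []complex128, taps []float64) {
	delay := make([]complex128, len(taps))
	for i := range iq {
		copy(delay[1:], delay)
		delay[0] = iq[i]
		var acc complex128
		for j, tap := range taps {
			acc += complex(tap*real(delay[j]), tap*imag(delay[j]))
		}
		iq[i] = acc / complex(float64(len(taps)), 0)
	}
}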

Downsample Now, post-filter, we've got a lot of extra RF bandwidth being represented in our IQ stream at our high sample rate. All the high frequency values are now filtered out, which means we can reduce our sampling rate without losing much information at all. We can either do nothing about it and process at the fairly high sample rate we're capturing at, or we can drop the sample rate down and help reduce the volume of numbers coming our way. There are two big ways of doing this: either you can take every Nth sample (e.g., take every other sample to halve the sample rate, or take every 10th to decimate the sample stream to a 10th of what it originally was), which is the easiest to implement (and easy on the CPU too), or you can average a number of samples to create a new sample. A nice bonus to averaging samples is that you can trade off some CPU time for a higher effective number of bits (ENOB) in your IQ stream, which helps reduce noise, among other things. Some hardware does exactly this (called "oversampling"), and like many things, it has some pros and some cons. I've opted to treat our IQ stream like an oversampled IQ stream and average samples to get a marginal bump in ENOB. Taking a group of 4 samples and averaging them results in a bit of added precision. That means that a stream of IQ data at 8 ENOB can be bumped to 9 ENOB of precision after the process of oversampling and averaging. That resulting stream will be at 1/4 of the sample rate, and this process can be repeated: 4 samples can again be taken for a bit of added precision, which gives 1/4 of the sample rate (again), or 1/16 of the original sample rate. If we again take a group of 4 samples, we'll wind up with another bit and a sample rate that's 1/64 of the original sample rate. A minimal sketch of that average-and-decimate step follows.
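This helper is something I'm introducing here (not code from the post), reducing the sample rate by a fixed factor by averaging each group of samples.

// decimate averages each group of `factor` samples into a single
// output sample, dropping the sample rate by `factor` and picking
// up a little effective precision in the process.
func decimate(iq []complex128, factor int) []complex128 {
	out := make([]complex128, 0, len(iq)/factor)
	for i := 0; i+factor <= len(iq); i += factor {
		var sum complex128
		for _, s := range iq[i : i+factor] {
			sum += s
		}
		out = append(out, sum/complex(float64(factor), 0))
	}
	return out
}

Running it three times with a factor of 4 gives the 1/64-rate stream described above.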

Phase Lock Our starting point for this section is the same capture as above, but post-coarse sync, filtering and downsampling (xz compressed interleaved float32, 163,840 samples per second). The PLL in PACKRAT was one of the parts I spent the most time stuck on. There's no shortage of discussions of how hardware PLLs work, or even a few software PLLs, but very little by way of how to apply them and/or troubleshoot them. After getting frustrated trying to follow the well-worn path, I decided to cut my own way through the bush using what I had learned about the concept, and hope that it works well enough to continue on. PLLs, in concept, are fairly simple: you generate a carrier wave at a frequency, compare the real-world SDR IQ sample to where your carrier wave is in phase, and use the difference between the local wave and the observed wave to adjust the frequency and phase of your carrier wave. Eventually, if all goes well, that delta is driven as small as possible, and your carrier wave can be used as a reference clock to determine if the observed signal changes in frequency or phase. In reality, tuning PLLs is a total pain, and basically no one outlines how to apply them to BPSK signals in a descriptive way. I've had to steal an approach I've seen in hardware to implement my software PLL, with any hope it's close enough that this isn't a hazard to learners. The concept is to generate the carrier wave (as above) and store some rolling averages to tune the carrier wave over time. I use two constants, alpha and beta (which appear to be traditional PLL variable names for this function), which control how quickly the frequency and phase are changed according to observed mismatches. Alpha is set fairly high, which means discrepancies between our carrier and observed data are quickly applied to the phase, and a lower constant for beta, which will take long-term errors and attempt to use that to match frequency. This is all well and good. Getting to this point isn't all that obscure, but the trouble comes when processing a BPSK signal. Phase changes kick the PLL out of alignment and it tends to require some time to get back into phase lock, when we really shouldn't even be losing it in the first place. My attempt is to generate two predicted samples, one for each phase of our BPSK signal. The delta is compared, and the lower error of the two is used to adjust the PLL, but the carrier wave itself is used to rotate the sample.
var alpha = 0.1
var beta = (alpha * alpha) / 2
var phase = 0.0
var frequency = 0.0
...
for i := range iq {
	predicted = complex(cos(phase), sin(phase))
	sample = iq[i] * conj(predicted)
	delta = phase(sample)

	predicted2 = complex(cos(phase+pi), sin(phase+pi))
	sample2 = iq[i] * conj(predicted2)
	delta2 = phase(sample2)

	if abs(delta2) < abs(delta) {
		// note that we do not update 'sample'.
		delta = delta2
	}

	phase += alpha * delta
	frequency += beta * delta

	// adjust the iq sample to the PLL rotated
	// sample.
	iq[i] = sample
}
...
If all goes well, this loop has the effect of driving a BPSK signal's imaginary values to 0, and its real values to between +1 and -1.

Average Idle / Carrier Detect Our starting point for this section is the same capture as above, but post-PLL (xz compressed interleaved float32, 163,840 samples per second). When we start out, we have IQ samples that have been mostly driven to an imaginary component of 0 and a real value range between +1 and -1 for each symbol period. Our goal now is to determine if we're receiving a signal, and if so, determine if it's +1 or -1. This is a deceptively hard problem, given it spans a lot of other similarly entertaining hard problems. I've opted to not solve the hard problems involved and hope that in practice my very haphazard implementation works well enough. This turns out to be both good (not solving a problem is a great way to not spend time on it) and bad (turns out it does materially impact performance). This segment is the one I plan on revisiting first. Expect more here at some point! Given that I want to be able to encapsulate three states in the output from this section (our Symbols are "no carrier detected" ("0"), "real value 1" ("1") or "real value -1" ("-1")), spending cycles to determine what the baseline noise is, to try and identify when a signal breaks through the noise, becomes incredibly important.
var idleThreshold float
var thresholdFactor = 10
...
// sigThreshold is used to determine if the symbol
// is -1, +1 or 0. It's 1.3 times the idle signal
// threshold.
var sigThreshold = (idleThreshold * 0.3) + idleThreshold

// iq contains a single symbol's worth of IQ samples.
// clock alignment isn't really considered; so we'll
// get a bad packet if we have a symbol transition
// in the middle of this buffer. No attempt is made
// to correct for this yet.
var iq []complex

// avg is used to average a chunk of samples in the
// symbol buffer.
var avg float
var mid = len(iq) / 2

// midNum is used to determine how many samples to
// average at the middle of the symbol.
var midNum = len(iq) / 50

for j := mid; j < mid+midNum; j++ {
	avg += real(iq[j])
}
avg /= midNum

var symbol float
switch {
case avg > sigThreshold:
	symbol = 1
case avg < -sigThreshold:
	symbol = -1
default:
	symbol = 0
	// update the idleThreshold using the thresholdFactor
	// to average the idleThreshold over more samples to
	// get a better idea of average noise.
	idleThreshold = (
		(idleThreshold*(thresholdFactor-1) + symbol) /
			thresholdFactor
	)
}
// write symbol to output somewhere
...

Next Steps Now that we have a stream of values that are either +1, -1 or 0, we can frame / unframe the data contained in the stream, and decode Packets contained inside, coming next in Part 4!

30 November 2021

Russell Coker: Links November 2021

The Guardian has an amusing article by Sophie Elmhirst about Libertarians buying a cruise ship to make a seasteading project off the coast of Panama [1]. It turns out that you need permits etc to do this and maintaining a ship is expensive. Also you wouldn't want to mine cryptocurrency in a ship cabin as most cabins are small and don't have enough air conditioning to remain pleasant if you dump 1kW or more into the air. NPR has an interesting article about the reaction of the NRA to the Columbine shootings [2]. Seems that some NRA person isn't a total asshole and is sharing their private information, maybe they are dying and are worried about going to hell. David Brin wrote an insightful blog post about the singleton hypothesis where he covers some of the evidence of autocratic societies failing [3]. I think he makes a convincing point about a single centralised government for human society not being viable. But something like the EU on a world wide scale could work well. Ken Shirriff wrote an interesting blog post about reverse engineering the Yamaha DX7 synthesiser [4]. The New York Times has an interesting article about a Baboon troop that became less aggressive after the alpha males all died at once from tuberculosis [5]. They established a new more peaceful culture that has outlived the beta males who avoided tuberculosis. The Guardian has an interesting article about how sequencing the genomes of the entire population can save healthcare costs while improving the health of the population [6]. This is something wealthy countries should offer for free to the world population. At a bit under $1000 per test that's only about $7 trillion to test everyone, and of course the price should drop significantly if there were billions of tests being done. The Strategy Bridge has an interesting article about SciFi books that have useful portrayals of military strategy [7]. The co-author is Major General Mick Ryan of the Australian Army, which is noteworthy as Major General is the second highest rank in use by the Australian Army at this time. Vice has an interesting article about the co-evolution of penises and vaginas and how a lot of that evolution is based on avoiding impregnation from rape [8]. Cory Doctorow wrote an insightful Medium article about the way that governments could force interoperability through purchasing power [9]. Cory Doctorow wrote an insightful article for Locus Magazine about imagining life after capitalism and how capitalism might be replaced [10]. We need a Star Trek future! Arstechnica has an informative article about new developments in the rowhammer category of security attacks on DRAM [11]. It seems that DDR4 with ECC is the best current mitigation technique and that DDR3 with ECC is harder to attack than non-ECC RAM. So the thing to do is use ECC on all workstations and avoid doing security critical things on laptops because they can't use ECC RAM.

Russell Coker: Your Device Has Been Improved

I've just started a Samsung tablet downloading a 770MB update, the description says:
Technically I have no doubt that both those claims are true and accurate. But according to common understanding of the English language I think they are both misleading. By "stability improved" they mean "fixed some bugs that made it unstable", and no technical person would imagine that after a certain number of such updates the number of bugs will ever reach zero and the tablet will be perfectly reliable. In fact you should consider yourself lucky if they fix more bugs than they add. It's not THAT uncommon for phones and tablets to be bricked (rendered unusable by software) by an update. In the past I got a Huawei Mate9 as a warranty replacement for a Nexus 6P because an update caused so many Nexus 6P phones to fail that they couldn't be replaced with an identical phone [1]. By "security improved" they usually mean "fixed some security flaws that were recently discovered to make it almost as secure as it was designed to be". Note that I deliberately say "almost as secure" because it's sometimes impossible to fix a security flaw without making significant changes to interfaces, which requires more work than desired for an old product and also gives a higher probability of things going wrong. So it's sometimes better to aim for "almost as secure" or alternatively "just as secure but with some features disabled". Device manufacturers (and most companies in the Android space make the same claims while having the exact same bugs to deal with, Samsung is no different from the others in this regard) are not making devices more secure or more reliable than when they were initially released. They are aiming to make them almost as secure and reliable as when they were released. They don't have much incentive to try too hard in this regard, Samsung won't suffer if I decide my old tablet isn't reliable enough and buy a new one, which will almost certainly be from Samsung because they make nice tablets. As a thought experiment, consider if car repairers did the same thing. "Getting us to service your car will improve fuel efficiency", great, how much more efficient will it be than when I purchased it? As another thought experiment, consider if car companies stopped providing parts for car repair a few years after releasing a new model. This is effectively what phone and tablet manufacturers have been doing all along, software updates for "stability and security" are to devices what changing oil etc is for cars.

21 November 2021

Julian Andres Klode: APT Z3 Solver Basics

Z3 is a theorem prover developed at Microsoft Research and available as a dynamically linked C++ library in Debian-based distributions. While the library is a whopping 16 MB and the solver is a tad slow, its permissive licensing and the number of tactics offered give it huge potential for use in solving dependencies in a wide variety of applications. Z3 does not need normalized formulas, but offers higher level abstractions like atmost and atleast and implies, which we will make use of together with boolean variables to translate the dependency problem into a form Z3 understands. In this post, we'll see how we can apply Z3 to the dependency resolution in APT. We'll only discuss the basics here; a future post will explore optimization criteria and recommends.

Translating the universe APT's package universe consists of 3 relevant things: packages (the tuple of name and architecture), versions (basically a .deb), and dependencies between versions. While we could translate our entire universe to Z3 problems, we instead will construct a root set from packages that were manually installed and versions marked for installation, and then build the transitive root set from it by translating all versions reachable from the root set. For each package P in the transitive root set, we create a boolean literal P. We then translate each version P1, P2, and so on. Translating a version means building a boolean literal for it, e.g. P1, and then translating the dependencies as shown below. We now need to create two more clauses to satisfy the basic requirements for debs:
  1. If a version is installed, the package is installed; and vice versa. We can encode this requirement for P above as P == atleast({P1,P2}, 1).
  2. There can only be one version installed. We add an additional constraint of the form atmost({P1,P2}, 1).
We also encode the requirements of the operation.
  1. For each package P that is manually installed, add a constraint P.
  2. For each version V that is marked for install, add a constraint V.
  3. For each package P that is marked for removal, add a constraint !P.

Dependencies Packages in APT have dependencies of two basic forms: Depends and Conflicts, as well as variations like Breaks (identical to Conflicts in solving terms), and Recommends (soft Depends) - we'll ignore those for now. We'll discuss Conflicts in the next section. Let's take a basic dependency list: A Depends: X | Y, Z. To represent that dependency, we expand each name to a list of versions that can satisfy the dependency, for example X1 | X2 | Y1, Z1. Translating this dependency list to our Z3 solver, we create boolean variables X1, X2, Y1, Z1 and define two rules:
  1. A implies atleast({X1,X2,Y1}, 1)
  2. A implies atleast({Z1}, 1)
If there actually was nothing that satisfied the Z requirement, we'd have added a rule not A. It would be possible to simply not tell Z3 about the version at all as an optimization, but that adds more complexity, and the not A constraint should not cause too many problems.

Conflicts Conflicts cannot have "or" in them. A dependency B Conflicts: X, Y means that only one of B, X, and Y can be installed. We can directly encode this in Z3 by using the constraint atmost({B,X,Y}, 1). This is an optimized encoding of the constraint: we could have encoded each conflict in the form !B or !X, !B or !Y, and so on. Usually this leads to worse performance as it introduces additional clauses.

Complete example Let's assume we start with an empty install and want to install the package a below.
Package: a
Version: 1
Depends: c | b
Package: b
Version: 1
Package: b
Version: 2
Conflicts: x
Package: d
Version: 1
Package: x
Version: 1
The translation in Z3 rules looks like this:
  1. Package rules for a:
    1. a == atleast({a1}, 1) - package is installed iff one version is
    2. atmost({a1}, 1) - only one version may be installed
    3. a - a must be installed
  2. Dependency rules for a:
    1. implies(a1, atleast({b2, b1}, 1)) - the translated dependency above. Note that c is gone, it's not reachable.
  3. Package rules for b:
    1. b == atleast({b1,b2}, 1) - package is installed iff one version is
    2. atmost({b1, b2}, 1) - only one version may be installed
  4. Dependencies for b (= 2):
    1. atmost({b2, x1}, 1) - the conflicts between x and b (= 2) above
  5. Package rules for x:
    1. x == atleast({x1}, 1) - package is installed iff one version is
    2. atmost({x1}, 1) - only one version may be installed
The package d is not translated, as it is not reachable from the root set {a1}; the transitive root set is {a1,b1,b2,x1}.

Next iteration: Optimization We have now constructed the basic set of rules that allows us to solve our dependency problems (equivalent to SAT), however it might lead to suboptimal solutions where it removes automatically installed packages, or installs more packages than necessary, to name a few examples. In our next iteration, we have to look at introducing optimization; for example, having the minimum number of removals, the minimal number of changed packages, or satisfying as many recommends as possible. We will also look at the upgrade problem (upgrade as many packages as possible) and the autoremove problem (remove as many automatically installed packages as possible).

16 November 2021

Paul Tagliamonte: Measuring the Power Output of my SDRs

Over the last few years, I've often wondered what the true power output of my SDRs is. It's a question with a shocking amount of complexity in the response, due to a number of factors (mostly frequency). The ranges given in spec sheets are often extremely vague, and if I'm being honest with myself, not incredibly helpful for being able to determine what specific filters and amplifiers I'll need to get a clean signal transmitted.
Hey, heads up! - This post contains extremely unvalidated and back of the napkin quality work to understand how my equipment works. Hopefully this work can be of help to others, but please double check any information you need for your own work!
I was specifically interested in what gain output (in dBm) looks like across the frequency range, in particular, how variable the output dBm is when I change frequencies. The second question I had was understanding how linear the output gain is when adjusting the requested gain from the radio. Does a 2 dB increase on a HackRF API mean 2 dB of gain in dBm, no matter what the absolute value of the gain stage is? I've finally bitten the bullet and undertaken work to characterize the hardware I do have, with some outdated laboratory equipment I found on eBay. Of course, if it's worth doing, it's worth overdoing, so I spent a bit of time automating a handful of components in order to collect the data that I need from my SDRs. I bought an HP 437B, which is the cutting edge of 30 years ago, but still accurate to within 0.01dBm. I paired this Power Meter with an Agilent 8481A Power Sensor (-30 dBm to 20 dBm from 10MHz to 18GHz). For some of my radios, I was worried about exceeding the 20 dBm mark, so I used a 20dB attenuator while I waited for a higher-power power sensor. Finally, I was able to find a GPIB to USB interface, and get that interface working with the GPIB Kernel driver on my system. With all that out of the way, I was able to write Go bindings to my HP 437B to allow for totally headless and automated control in sync with my SDR's RF output. This allowed me to script the transmission of a sine wave at a controlled amplitude across a defined gain range and frequency range and read the Power Sensor's measured dBm output to characterize the gain across frequency and configured gain.
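To give a feel for the shape of that automation, here is a rough sketch of the sweep loop. Every identifier in it (the Transmitter and PowerMeter interfaces and their methods) is hypothetical, standing in for whatever SDR control code and GPIB bindings are actually in use; it is not the code behind these measurements.

// Transmitter and PowerMeter are hypothetical stand-ins for the
// SDR control code and the HP 437B GPIB bindings.
type Transmitter interface {
	Tune(freqHz float64)
	SetGain(gainDB float64)
	TransmitTone() // steady sine wave at the tuned frequency
}

type PowerMeter interface {
	ReadDBm() float64
}

// sweep records the measured output power (in dBm) for every
// combination of frequency and requested gain.
func sweep(sdr Transmitter, meter PowerMeter, freqs, gains []float64) map[[2]float64]float64 {
	results := map[[2]float64]float64{}
	for _, freq := range freqs {
		for _, gain := range gains {
			sdr.Tune(freq)
			sdr.SetGain(gain)
			sdr.TransmitTone()
			results[[2]float64{freq, gain}] = meter.ReadDBm()
		}
	}
	return results
}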

HackRF Looking at configured gain against output power, the requested gain appears to have a fairly linear relation to the output signal power. The measured dBm ranged from the sensor noise floor to approx +13dBm. The average standard deviation of all tested gain values over the frequency range swept was +/-2dBm, with a minimum standard deviation of +/-0.8dBm, and a maximum of +/-3dBm. When looking at output power over the frequency range swept, the HackRF contains a distinctive (and frankly jarring) ripple across the frequency range, with a clearly visible jump in gain somewhere around 2.1GHz. I have no idea what is causing this massive jump in output gain, nor what is causing these distinctive ripples. I'd love to know more if anyone's familiar with the HackRF's RF internals!

PlutoSDR The power output is very linear when operating above -20dB TX channel gain, but can get quite erratic the lower the output power is configured. The PlutoSDR's output power is directly related to the configured power level, and is generally predictable once a minimum power level is reached. The measured dBm ranged from the noise floor to 3.39 dBm, with an average standard deviation of +/-1.98 dBm, a minimum standard deviation of +/-0.91 dBm and a maximum standard deviation of +/-3.37 dBm. Generally, the power output is quite stable, and looks to have very even and wideband gain control. There are a few artifacts, which I have not confidently isolated to the SDR TX gain, noise (transmit artifacts such as intermodulation) or to my test setup. They appear fairly narrowband, so I'm not overly worried about them yet. If anyone has any ideas what this could be, I'd very much appreciate understanding why they exist!

Ettus B210 The power output on the Ettus B210 is higher (in dBm) than any of my other radios, but it has a very odd quirk where the power becomes nonlinear somewhere around -55dB TX channel gain. After that point, adding gain has no effect on the measured signal output in dBm up to 0 dB gain. The measured dBm ranged from the noise floor to 18.31 dBm, with an average standard deviation of +/-2.60 dBm, a minimum of +/-1.39 dBm and a maximum of +/-5.82 dBm. When the gain is somewhere around the noise floor, the measured gain is incredibly erratic, which throws the maximum standard deviation off significantly. I haven't isolated that to my test setup or the radio itself. I'm inclined to believe it's my test setup. The radio has a fairly even and wideband gain, and so long as you're operating between -70dB to -55dB, it is fairly linear as well.

Summary Of all my radios, the Ettus B210 has the highest output (in dBm) over the widest frequency range, but the HackRF is a close second, especially after the gain bump kicks in around 2.1GHz. The PlutoSDR feels the most predictable and consistent, but also has a very low output comparatively - right around 0 dBm.
Name      Max dBm   avg stdev dBm   min stdev dBm   max stdev dBm
HackRF    +12.6     +/-2.0          +/-0.8          +/-3.0
PlutoSDR  +3.3      +/-2.0          +/-0.9          +/-3.7
B210      +18.3     +/-2.6          +/-1.4          +/-6.0
