Search Results: "tach"

1 June 2025

Emmanuel Kasper: ARM64 desktop as daily driver

I have bought myself an expensive ARM64 workstation, the System76 Thelio Astra, that I intend to use as my main desktop computer for the next 15 years, running Debian. The box is basically a server motherboard repurposed in a good desktop chassis. In Europe it seems you can order similar ready-made systems here. The hardware is well supported by Debian 12 and Debian testing. I had some initial issues with graphics, due to the board being designed for server use, but I am solving these as we go. Annoyances I got so far:
  • When you power on the machine using the power supply switch, you have to wait for the BMC to finish its startup sequence before the front power button does anything. As starting the BMC can take 90 seconds, I initially thought the machine was dead on arrival.
  • The default graphical output is redirected to the BMC's serial-over-LAN, which means that if you want to install Debian using an attached display you need to force the output to the attached display by passing console=tty0 as an installer parameter.
  • Finally, the Xorg Nouveau driver does not work with the Nvidia A400 GPU I got with the machine. After passing nomodeset as a kernel parameter, I can force Xorg to use an unaccelerated framebuffer, which at least displays something. I passed this parameter to the installer, so that I could install in graphical mode. The driver from Nvidia works, but I'd very much like to get Nouveau running. (See the sketch below for how these parameters can be passed.)
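As a concrete sketch of passing both parameters (the kernel path here is an assumption that depends on the installer image): at the debian-installer boot menu, press 'e' to edit the entry and append the parameters to the line that loads the kernel, e.g.
linux /install.a64/vmlinuz console=tty0 nomodeset --- quiet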
Ugly point
  • A server motherboard, we said. This means there is NO suspend to RAM; you have to power off if you don't want to keep the machine on all the time. As the boot sequence is long (server board again), I am pondering setting a startup time in the UEFI firmware to turn the box on at my usual usage times (see the sketch below for a userspace alternative).
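A userspace alternative (a sketch; it assumes the board's RTC wake alarm is exposed to Linux, which server firmware does not always allow) would be to program the RTC alarm just before powering off:
# arm the RTC to wake the machine at 08:00 tomorrow, then shut down
sudo rtcwake -m no -t "$(date -d 'tomorrow 08:00' +%s)"
sudo poweroff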
Good points
  • The firmware of the machine is standard EFI, which means you can use the Debian arm64 installer on a USB stick straight away, without any kind of device tree / bootloader fiddling.
  • The 3 NICs, Wifi, and Bluetooth were all recognized on first boot.
  • I was afraid the machine would be loud. However it is quiet: you hear the humming of a fan, but it is quieter than most desktops I have owned, from the Atari TT to an all-in-one Lenovo M92z I used for 10 years. I am certainly not a hardware and cooling specialist, but meseems the quietness comes from slowly rotating but very large fans.
  • Due to the clean design of Linux and Debian, thousands of packages work correctly on ARM64, starting with the Gnome desktop environment and Firefox.
  • The documentation from System76 is fine; their Ubuntu 20.04 setup guide was helpful for understanding the needed parameters mentioned above.
Update: The display is working correctly with the nouveau driver after installing the non-free Nvidia firmware. See the Debian wiki.
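A sketch of that fix on Debian 12 or later (an assumption based on the standard Debian packaging of the Nouveau GPU firmware; check the wiki page for your exact card): make sure the non-free-firmware component is enabled in /etc/apt/sources.list, then
sudo apt update && sudo apt install firmware-misc-nonfree
and reboot so Nouveau can load the GPU firmware.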

30 May 2025

Russell Coker: Machine Learning Security

I just read an interesting blog post about ML security recommended by Bruce Schneier [1]. This approach of having 2 AI systems, where one processes user input and the second performs actions on quarantined data, is good and solves some real problems. But I think the bigger issue is the need to do this. Why not have a multi-stage approach? Instead of a single user input to do everything (the example given is "Can you send Bob the document he requested in our last meeting? Bob's email and the document he asked for are in the meeting notes file") you could have "get Bob's email address from the meeting notes file" followed by "create a new email to that address" and "find the document", etc.

A major problem with many plans for ML systems is that they are based around automating relatively simple tasks. The example of sending an email based on meeting notes is a trivial task that's done many times a day, but expressing it verbally isn't much faster than doing it the usual way. The usual way of doing such things (manually finding the email address from the meeting notes etc) can be accelerated without ML by having a recent-documents access method that gets the notes, having the email address be a hot link to the email program (i.e. the wordprocessor or note-taking program being able to call the MUA), having a "put all data objects of type X into the clipboard" operation (where X can be email address, URL, filename, or whatever), and maybe optimising the MUA UI. The problems that people are talking about solving via ML, by treating everything as text to be arbitrarily parsed, can in many cases be solved by having the programs dealing with the data know what they have and having support for calling system services accordingly.

The blog post suggests a problem of user fatigue from asking the user to confirm all actions. That is a real concern if the system is going to automate everything, such that the user gives a verbal description of the problem and then says "yes" many times to confirm it. But if the user is at every step of the way pushing the process ("take this email address", "attach this file") it won't be a series of yes operations with a risk of saying yes once too often.

I think that one thing that should be investigated is better integration between services to allow working live on data. If in an online meeting someone says "I'll work on task A, please send me an email at the end of the meeting with all issues related to it", then you should be able to click on their email address in the meeting software to bring up the MUA to send a message and then just paste stuff in. The user could then not immediately send the message, and clicking on the email address again would bring up the message in progress to allow adding to it (the behaviour of most MUAs of creating a new message for every click on a mailto: URL is usually not what you desire). In this example you could of course use ALT-TAB or other methods to switch windows to the email, but imagine the situation of having 5 people in the meeting who are to be emailed about different things; that wouldn't scale. Another thing for the meeting example is that having a text chat for a video conference is a standard feature now, and being able to directly message individuals is available in BBB and probably some other online meeting systems.
It shouldn't be hard to add a feature to BBB and similar programs to have each user receive an email at the end of the meeting with the contents of every DM chat they were involved in, and have everyone in the meeting receive an emailed transcript of the public chat. In conclusion, I think that there are real issues with ML security and something like this technology is needed. But for most cases the best option is to just not have ML systems do such things. Also there is significant scope for improving the integration of various existing systems in a non-ML way.

22 May 2025

Scarlett Gately Moore: KDE Application Snaps 25.04.1 with Major Bug Fix!, Life (Good news finally!)

Snaps!

I actually released last week, I just haven't had time to blog, but today is my birthday and I am taking some time to myself! This release came with a major bugfix. As it turns out, our applications were very crashy on non-KDE platforms, including Ubuntu proper. Unfortunately this had been the case for years, and I didn't know. Developers were closing the bug reports as invalid because users couldn't provide a stacktrace. I have now convinced most developers to assign snap bugs to the Snap platform so I at least get a chance to try and fix them. So with that said, if you tried our snaps in the past and gave up in frustration, please do try them again! I also spent some time cleaning up our snaps to only have current releases in the store, as rumor has it snapcrafters will be responsible for any security issues. With the 200+ snaps I maintain, that is a lot of responsibility. We'll see if I can pull it off.

Life!

My last surgery was a success! I am finally healing and out of a sling for the first time in almost a year. I have also lined up a good amount of web work for next month and hopefully beyond. I have decided to drop the piece work for donations and will only accept per-project proposals for open source work. I will continue to maintain KDE snaps for as long as time allows. A big thank you to everyone who has donated over the last year to fund my survival during this broken arm fiasco. I truly appreciate it!
With that said, if you want to drop me a donation for my work, birthday, or well-being until I get paid for the aforementioned web work, please do so here:

19 May 2025

Simon Quigley: Toolboxes and Hammers Be You

Everyone has a story. We all started from somewhere, and we're all going somewhere.

Ten years ago this summer, I first heard of Ubuntu. It took me time to learn how to properly pronounce the word, although I'm glad I learned that early on. I was less fortunate when it came to the pronunciation of the acronym for the Ubuntu Code of Conduct. I had spent time and time again breaking my computer, and I'd wanted to start fresh. I've actually talked about this in an interview before, which you can find here (skip to 5:02-6:12 for my short explanation, I'm in orange): https://medium.com/media/ad59becdbd06d230b875fb1512df1921/href

I've also done a few interviews over the years, here are some of the more recent ones: https://medium.com/media/83bda448d5f2a979f848e17f04376aa6/href Ask Noah Show 377

Lastly, I did a few talks at SCaLE 21x (which the Ubuntu community donation funds helped me attend, thank you for that!): https://medium.com/media/0fbde7ef0ed83c2272a8653a5ea38b67/href https://medium.com/media/4d18f1770dc7eed6c7a9d711ff6a6e89/href

My story is fairly simple to summarize, if you don't have the time to go through all the clips. I started in the Ubuntu project at 13 years old, as a middle school student living in Green Bay, WI. I'm now 23 years old, still living in Green Bay, but I became an Ubuntu Core Developer, Lubuntu's Release Manager, and worked up to a very great and comfortable spot.

So, Simon, what advice would you give to someone at 13 who wants to do the same thing? Here are a few tips:
* Don't be afraid to be yourself. If you put on a mask, it hinders your growth, and you'll end up paying for it later anyway.
* Find a mentor. Someone who is okay working with someone your age, and ideally someone who works well with people your age (quick shoutout to Aaron Prisk and Walter Lapchynski for always being awesome to me and other folks starting out in high school). This is probably the most important part.
* Ask questions. Tons of them. Ask questions until you're blue in the face. Ask questions until you get a headache so bad that your weekend needs to come early. Okay, maybe don't go that far, but at the very least, always stay curious.
* Own up to your mistakes. Even the most experienced people you know have made tons of mistakes. It's not about the mistake itself, it's about how you handle it and grow as a person.

Now, after ten years, I've seen many people come and go in Ubuntu. I was around for the transition from upstart to systemd. I was around for the transition from Unity to GNOME. I watched Kubuntu as a flavor recover from the arguments only a few years before I first started, only to jump in and help years later when the project started to trend downwards again.

I have deep love, respect, and admiration for Ubuntu and its community. I also have deep love, respect, and admiration for Canonical as a company. It's all valuable work. That being said, I need to recognize where my own limits are, and it's not what you'd think. This isn't some big burnout rant.

Some of you may have heard rumors about an argument between me and the Ubuntu Community Council. I refuse to go into the private details of that, but what I'll tell you is this: in retrospect, it was in good faith. The entire thing, from both my end and theirs, was to try to either help me as a person, or the entire community. If you think any part of this was bad faith from either side, you're fooling yourself. Plus, tons of great work and stories actually came out of this.

The Ubuntu Community Council really does care. And so does Mark Shuttleworth.

Now, I won't go into many specifics. If you want specifics, I'd direct you to the Ubuntu Community Council, who would be more than happy to answer any questions (actually, they'd probably stay silent. Nevermind.) That being said, I can't really talk about any of this without mentioning how great Mark has become.

Remember, I was around for a few different major changes within the project. I've heard and seen stories about Mark that actually match what Reddit says about him. But in 2025, out of the bottom of my heart, I'm here to tell you that you're all wrong now.

See, Mark didn't just side with somebody and be done with it. He actually listened, and I could tell, he cares very very deeply. I really enjoyed reading ogra's recent blog post, you should seriously check it out. Of course, I'm only 23 years old, but I have to say, my experiences with Mark match that too.

Now, as for what happens from here. I'm taking a year off from Ubuntu. I talked this over with a wide variety of people, and I think it's the right decision. People who know me personally know that I'm not one to make a major decision like this without a very good reason to. Well, I'd like to share my reasons with you, because I think they'd help.

People who contribute time to open source find it to be very rewarding. Sometimes so rewarding, in fact, that no matter how many economics and finance books they read, they still haven't figured out how to balance that with a job that pays money. I'm sure everyone deeply involved in this space has had the urge to quit their job at least once or twice to pursue their passions.

Here's the other element too: I've had a handful of romantic relationships before, and they've never really panned out. I found the woman that I truly believe I'm going to marry. Is it going to be a rough road ahead of us? Absolutely, and to be totally honest, there is still a (small, at this point) chance it doesn't work out.

That being said, I remain optimistic. I'm not taking a year off because I'm in some kind of trouble. I haven't burned any bridge here except for one.

You know who you are. You need help.
I'd be happy to reconnect with you once you realize that it's not okay to do what you did. An apology letter is all I want. I don't want Mutually Assured Destruction, I don't want to sit and battle on this for years on end. Seriously dude, just back off. Please.

I hate having to take out the large hammer. But sometimes, you just have to do it. I've quite enjoyed Louis Rossmann's (very not-safe-for-work) videos on BwE. https://medium.com/media/ab64411c41e65317f271058f56bb2aba/href

I genuinely enjoy being nice to people. I want to see everyone be successful and happy, in that order (but with both being very important). I'm not perfect, I'm a 23-year-old who just happened to stumble into this space at the right time.

To this specific person only, I tell you, please, let me go take my year off in peace. I don't wish you harm, and I won't make anything public, including your name, if you just back off.

Whew. Okay. Time to be happy again.

Again, I want to see people succeed. That goes for anyone in Ubuntu, Lubuntu, Kubuntu, Canonical, you name it. I'm going to remain detached from Ubuntu for at least a year. If circumstances change, or if I feel the timing just isn't right, I'll wait longer. My point is, I'll be back, the when of it will just never be public before it happens.

In the meantime, you're welcome to reach out to me. It'll take me some time to bootstrap things, more than I originally thought, but I'm hoping it'll be quick. After all, I've had practice.

I'm also going to continue writing. About what? I don't know yet. But, I'll just keep writing. I want to share all of the useful tips I've learned over the years. If you actually liked this post, or if you've enjoyed my work in the Ubuntu project, please do subscribe to my personal blog, which will be here on Medium (unless someone can give me an open source alternative with a funding model). This being said, while I'd absolutely take any donations people would like to provide, at the end of the day, I don't do this for the money. I do this for the people just like me, out of love. So you, just like me, can make your dreams happen.

Don't give up, it'll come. Just be patient with yourself. As for me, I have business to attend to. What business is that, exactly? Read Walden, and you'll find out.

I wish you all well, even the person I called out. I sincerely hope you find what you're looking for in life. It takes time. Sometimes you have to listen to some music to pass the time, so I created a conceptual mixtape if you want to listen to some of the same music as me. I'll do another blog post soon, don't worry. Be well. Much, much more to come.

14 May 2025

Jonathan Dowland: Orbital

Orbital at NX, Newcastle in 2023
I'm on a bit of an Orbital kick at the moment. Last year they re-issued their 1991 debut album with 43 extra tracks. Later this month they're doing the same for their 1993 sophomore album. I thought I'd try to narrow down some tracks to recommend. I seem to have settled on roughly 5 in previous posts (for Underworld, The Cure, Coil and Gazelle Twin). This time I've done 6 (I borrowed one from Underworld). As always it's a hard choice. I've tried to select some tracks I really enjoy that don't often come up on best-of compilation albums. For a more conventional choice of best-of tracks, I recommend the recent-ish "30 something" compilation (of sorts, previously written about).
  1. The Naked and the Dead (1992) From an early EP, Radiccio, which is being re-issued this month. Digital versions of the re-issue will feature a new recording, "Deepest", featuring Tilda Swinton. Sadly this isn't making it onto the pressed version. She performed with them live at Glastonbury 2024. That entire performance was a real pick-me-up during my convalescence, and is recommended. Anyway, I've now written more about a song I haven't recommended than the one I did.
  2. Remind (1993) From the Brown Album. I first heard this as the encore from their "final show", for John Peel, when they split up in 2004. "Remind" wasn't broadcast, but an audience recording was circulated on fan site Loopz. Remarkably, 21 years on, it's still there. In writing this I discovered that it's a re-working of a remix Orbital did for Meat Beat Manifesto: MindStream (Mind The Bend The Mind).
  3. You Lot (2004) From the unfairly-maligned "final" Blue album. Featuring a sample of pre-Doctor Who Christopher Eccleston, from another Russell T Davies production, Second Coming.
  4. Beached (2000) Co-written by Angelo Badalamenti, it's built around a sample of Badalamenti's score for the movie "The Beach". Orbital's re-work adds some grit to the orchestral instrumentation and opens with a monologue, delivered by Leonardo DiCaprio, sampled from the movie.
  5. Spare Parts Express (1999) Critics had started to be quite unfair to Orbital by this point. The band themselves said that they'd run out of ideas (pointing at album closer "Style", built around a Stylophone melody, as proof). Their malaise continued right up to the Blue Album, at which point they split up; ostensibly for good, before regrouping 8 years later. Spare Parts Express is a hatchet job of various bits that they didn't develop into full songs on their own. Despite this I think it works. I love long-form electronica, and this clocks in at 10:07. My favourite segment (06:37) is adjacent to a reference (05:05) to John Baker's theme for the BBC children's program Newsround (sadly they aren't using it today; here's a rundown of Newsround themes over time).
  6. Attached (1994) This originally debuted on a Peel session before appearing on the subsequent album, Snivilisation, a few months later. An album closer, and a good come-down song to close this list.

21 April 2025

Gunnar Wolf: Want your title? Here, have some XML!

As it seems ChatGPT would phrase it: Sweet Mother of God! I received a mail from my University's Scholar Administrative division informing me my Doctor degree has been granted and issued (yayyyyyy!), and that before printing the corresponding documents, I should review that all of the information is correct. Attached to the mail, I found they sent me a very friendly and welcoming XML file, which stated it followed the schema at https://www.siged.sep.gob.mx/titulos/schema.xsd Wait! There is nothing to be found at that address! Well, never mind, I can make sense out of an XML document, right?

[XML sample]

Of course, who needs an XSD schema? Everybody can parse through the data in an XML document, right? Of course, it took me close to five seconds to spot a minor mistake (in the finish and start dates of my previous degree), for which I mailed the relevant address. But what happens if I try to understand the world as seen by 9.8 out of 10 people getting a title from UNAM, in all of its different disciplines (scientific, engineering, humanities)? Some people will have no clue about what to do with an XML file. Fortunately, the mail has a link to a very useful tutorial (roughly translated by myself):
The attached file has an XML extension, so in order to visualize it, you must open it with a text editor such as Notepad or Sublime Text. In case you have any questions on how to open the file, please refer to the following guide: https://www.dgae.unam.mx/guia_abrir_xml.html
Seriously! Asking people getting a title in just about any area of knowledge to install Sublime Text to validate the content of an XML file (that includes the oh-so-very-readable signature of some university bureaucrat). Of course, for many years Mexican people have been getting XML files by mail (for any declared monetary exchange, i.e. buying goods or offering services), but those are always sent together with a render of the XML to a personalized PDF. And yes, the PDF is there only to give the human receiving the file an easier time understanding it. Who thought a bare XML was a good idea?
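For the technically inclined, a shell sketch of how one might at least read such a file (titulo.xml is a hypothetical file name; xmllint ships in Debian's libxml2-utils):
# pretty-print the bare XML so a human has a fighting chance
xmllint --format titulo.xml
# validation would look like this, if the XSD were actually published
xmllint --noout --schema schema.xsd titulo.xml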

15 April 2025

Russell Coker: What Desktop PCs Need

It seems to me that we haven't had much change in the overall design of desktop PCs since floppy drives were removed, and modern PCs still have bays the size of 5.25″ floppy drives despite having nothing modern that can fit in such spaces other than DVD drives (which aren't really modern) and carriers for 4*2.5″ drives, both of which most people don't use. We had the PC System Design Guide [1], which was last updated in 2001 and should have been updated more recently to address some of these issues; the thing that most people will find familiar in that standard is the colours for audio ports. Microsoft developed the Legacy Free PC [2] concept, which was a good one. There's a lot of things that could be added to the list of legacy stuff to avoid: TPM 1.2, 5.25″ drive bays, inefficient PSUs, hardware that doesn't sleep when idle or which prevents the CPU from sleeping, VGA and DVI ports, ethernet slower than 2.5Gbit, and video that doesn't include HDMI 2.1 or DisplayPort 2.1 for 8K support. There are recently released high-end PCs on sale right now with 1Gbit ethernet as standard, and hardly any PCs support resolutions above 4K properly. Here are some of the things that I think should be in a modern PC System Design Guide.

Power Supply

The power supply is a core part of the computer and its central location dictates the layout of the rest of the PC. GaN PSUs are more power efficient and therefore require less cooling. A 400W USB power supply is about 1/4 the size of a standard PC PSU and doesn't have a cooling fan. A new PC standard should include less space for the PSU except for systems with multiple CPUs or that are designed for multiple GPUs. A Dell T630 server has an option of a 1600W PSU that is 20*8.5*4cm = 680cc. The typical dimensions of an ATX PSU are 15*8.6*14cm = 1806cc. The SFX (small form factor variant of ATX) PSU is 12.5*6.3*10cm = 787cc. There is a reason for the ATX and SFX PSUs having a much worse ratio of power to size, and that is airflow. Server class systems are designed for good airflow and can efficiently cool the PSU with less space, and they are also designed for uses where people are less concerned about fan noise. But the 680cc used for a 1600W Dell server PSU that predates GaN technology could be used for a modern GaN PSU that supplies the ~600W needed for a modern PC while being quiet. There are several different smaller-size PSUs for name-brand PCs (where compatibility with other systems isn't needed) that have been around for ~20 years, but there hasn't been a standard, so all white-box PC systems have had really large PSUs.

PCs need USB-C PD ports that can charge a laptop etc. There are phones that can draw 80W for fast charging and it's not unreasonable to expect a PC to be able to charge a phone at its maximum speed. GPUs should have USB-C alternate mode output and support full USB functionality over the cable as well as PD that can power the monitor. Having a monitor with a separate PSU, a HDMI or DP cable to the PC, and a USB cable between PC and monitor is an annoyance. There should be one cable between PC and monitor, and then keyboard, mouse, etc should connect to the monitor. All devices that are connected to a PC should use USB-C for power connection. That includes monitors that are using HDMI or DisplayPort for video, desktop switches, home Wifi APs, printers, and speakers (even when using line-in for the audio signal). The European Commission Common Charger Directive is really good, but it only covers portable devices, keyboards, and mice.
Motherboard Features

Latest versions of Wifi and Bluetooth on the motherboard (this is becoming a standard feature). On-motherboard video that supports 8K resolution. An option of a PCIe GPU is a good thing to have, but it would be nice if the motherboard had enough video capabilities to satisfy most users. There are several options for video that have a higher resolution than 4K, and making things just work at 8K means that there will be less e-waste in future. ECC RAM should be a standard feature on all motherboards; having a single bit error cause a system crash is a MS-DOS thing, we need to move past that.

There should be built-in hardware for monitoring the system status that is better than BIOS beeps on boot. Lenovo laptops have a feature for having the BIOS play a tune on a serious error, with an Android app to decode the meaning of the tune; we could have a standard for this. For desktop PCs there should be a standard for LCD status displays similar to the ones on servers; this would be cheap if everyone did it.

Case Features

The way the Framework Laptop can be expanded with modules is really good [3]. There should be something similar for PC cases. While you can buy USB devices for these things they are messy and risk getting knocked out of their sockets when moving cables around. While the Framework laptop expansion cards are much more expensive than other devices with similar functions that are aimed at a mass market, if there was a standard for PCs then the devices to fit them would become cheap.

The PC System Design Guide specifies colors for ports (which is good) but not the feel of them. While some ports like Ethernet ports allow someone to feel which way the connector should go, it isn't possible to easily feel which way a HDMI or DisplayPort connector should go. It would be good if there was a standard that required plastic spikes on one side or some other way of feeling which way a connector should go.

GPU Placement

In modern systems it's fairly common to have a high heatsink on the CPU with a fan to blow air in at the front and out the back of the PC. The GPU (which often dissipates twice as much heat as the CPU) has fans blowing air in sideways and not out the back. This gives some sort of compromise between poor cooling and excessive noise. What we need is to have air blown directly through a GPU heatsink and out of the case. One option for a tower case that needs minimal changes is to have the PCIe slot nearest the bottom of the case used for the GPU and have a grille in the bottom to allow air to go out; the case could have feet to keep it a few cm above the floor or desk. Another possibility is to have a PCIe slot parallel to the rear surface of the case (at right angles to the other PCIe slots).

A common case with desktop PCs is to have the GPU use more than half the total power of the PC. The placement of the GPU shouldn't be an afterthought, it should be central to the design. Is a PCIe card even a good way of installing a GPU? Could we have a standard GPU socket on the motherboard next to the CPU socket and use the same type of heatsink and fan for GPU and CPU?

External Cooling

There are a range of aftermarket cooling devices for laptops that push cool air in the bottom or suck it out the side. We need to have similar options for desktop PCs. I think it would be ideal to have standard attachments for airflow on the front and back of tower PCs. The larger a fan is, the slower it can spin to give the same airflow, and therefore the less noise it will produce.
Instead of just relying on 10cm fans at the front and back of a PC to push air in and suck it out, you could have a conical rubber duct connected to a 30cm diameter fan. That would allow quieter fans to do most of the work in pushing air through the PC and also allow the hot air to be directed somewhere suitable. When doing computer work in summer it's not great to have a PC sending 300+W of waste heat into the room you are in. If it could be directed out a window that would be good.

Noise

For restricting noise of PCs we have industrial relations legislation that seems to basically require that workers not be exposed to noise louder than a blender, so if a PC is quieter than that then it's OK. For name-brand PCs there are specs about how much noise is produced, but there are usually caveats like "under typical load" or "with a typical feature set" that excuse them from liability if the noise is louder than expected. It doesn't seem possible for someone to own a PC, determine that the noise from it is what is acceptable, and then buy another that is close to the same. We need regulations about this, and the EU seems the best jurisdiction for it as they cover the purchase of a lot of computer equipment that is also sold without change in other countries. The regulations need to also cover updates; for example I have a Dell T630 which is unreasonably loud and Dell support doesn't have much incentive to be particularly helpful about it. BIOS updates routinely tweak things like fan speeds without the developers having an incentive to keep it as quiet as it was when it was sold.

What Else?

Please comment about other things you think should be standard PC features.

5 April 2025

Russell Coker: More About the HP ML110 Gen9 and z640

In May 2021 I bought a ML110 Gen9 to use as a deskside workstation [1]. I started writing this post in April 2022 when it had been my main workstation for almost a year. While this post was in a draft state in Feb 2023 I upgraded it to an 18 core E5-2696 v3 CPU [2]. It's now March 2025 and I have replaced it.

Hardware Issues

My previous state with this was not having adequate cooling to allow it to boot and not having a PCIe power cable for a video card. As an experiment I connected the CPU fan to the PCIe fan power and discovered that all power and monitoring wires for the CPU and PCIe fans are identical. This allowed me to buy a CPU fan which was cheaper ($26.09 including postage) and easier to obtain than a PCIe fan (presumably due to CPU fans being more commonly used and manufactured in larger quantities). I had to be creative in attaching the CPU fan as its cable wasn't long enough to reach the usual location for a PCIe fan. The PCIe fan also required a baffle to direct the air to the right place, which annoyingly HP apparently doesn't ship with the low end servers, so I made one from a Corn Flakes packet and duct tape.

The Wikipedia page listing AMD GPUs lists many newer ones that draw less than 80W and don't need a PCIe power cable. I ordered a Radeon RX560 4G video card which cost $246.75. It only uses 8 lanes of PCIe but that's enough for me; the only 3D game I play is Warzone 2100, which works well at 4K resolution on that card. It would be really annoying if I had to just spend $246.75 to get the system working, but I had another system in need of a better video card which had a PCIe power cable, so the effective cost was small. I think of it as upgrading 2 systems for $123 each.

The operation of the PCIe video card was a little different than non-server systems. The built-in VGA card displayed the hardware status at the start and then kept displaying that after the system had transitioned to PCIe video. This could be handy in some situations if you know what it's doing, but was confusing initially.

Booting

One insidious problem is that when booting in legacy mode the boot process takes an unreasonably long time and often hangs; the UEFI implementation on this system seems much more reliable and also supports booting from NVMe. Even with UEFI the boot process on this system was slow. Also the early stage of the power-on process involves fans being off and the power light flickering, which leads you to think that it's not booting and needs to have the power button pressed again, which turns it off. The Dell power-on sequence of turning most LEDs on and instantly running the fans at high speed leaves no room for misunderstanding. This is also something that companies making electric cars could address. When turning on a machine you should never be left wondering if it is actually on.

Noise

This was always a noisy system. When I upgraded the CPU from an 8 core with 85W typical TDP to an 18 core with 145W typical TDP it became even louder. Then over time as dust accumulated inside the machine it became louder still, until it was annoyingly loud outside the room when all 18 cores were busy.

Replacement

I recently blogged about options for getting 8K video to work on Linux [3]. This requires PCIe power, which the z640s have (all the ones I have seen have it, though I don't know if all that HP made have it) and which the cheaper models in the ML-110 line don't have. Since then I have ordered an Intel Arc card which apparently has 190W TDP.
There are adaptors to provide PCIe power from SATA or SAS power which I could have used, but having a E5-2696 v3 CPU that draws 145W [4] and a GPU that draws 190W [4] in a system with a 350W PSU doesn't seem viable. I replaced it with one of the HP z640 workstations I got in 2023 [5]. The current configuration of the z640 has 3*32G RDIMMs compared to the ML110 having 8*32G; going from 256G to 96G is a significant decrease, but most tasks run well enough like that. A limitation of the z640 is that when run with a single CPU it only has 4 DIMM slots, which gives a maximum of 512G if you get 128G LRDIMMs, but as all DDR4 DIMMs larger than 32G are unreasonably expensive at this time, the practical limit is 128G (which costs about $120AU). In this case I have 96G because the system I'm using has a motherboard problem which makes the fourth DIMM slot unusable. Currently my desire to get more than 96G of RAM is less than my desire to avoid swapping CPUs.

At this time I'm not certain that I will make my main workstation the one that talks to an 8K display. But I really want to keep my options open, and there are other benefits. The z640 boots faster. It supports PCIe bifurcation (with a recent BIOS), so I now have 4 NVMe devices in a single PCIe slot. It is very quiet; the difference is shocking. I initially found it disconcertingly quiet. The biggest problem with the z640 is having only 4 DIMM sockets, and the particular one I'm using has a problem limiting it to 3. Another problem with the z640 when compared to the ML110 Gen9 is that it runs the RAM at 2133 while the ML110 runs it at 2400; that's a significant performance reduction. But the benefits outweigh the disadvantages.

Conclusion

I have no regrets about buying the ML-110. It was the only DDR4 ECC system that was in the price range I wanted at the time. If I knew that the z640 systems would run so quietly then I might have replaced it earlier. But it was only late last year that 32G DIMMs became affordable; before then I had 8*16G DIMMs to give 128G because I had some issues of programs running out of memory when I had less.

1 April 2025

Michael Ablassmeier: qmpbackup 0.46 - add image fleecing

I've released qmpbackup 0.46, which now utilizes the image fleecing technique for backup. Usually, during backup, Qemu will use a so-called copy-before-write filter so that data for new guest writes is sent to the backup target first; the guest write blocks until this operation is finished. If the backup target is flaky, or becomes unavailable during backup operation, this could lead to high I/O wait times or even complete VM lockups. To fix this, a so-called fleecing image is introduced during backup, being used as a temporary cache for write operations by the guest. This image can be placed on the same storage as the virtual machine disks, so it is independent from the backup target performance. The documentation on which steps are required to get this going using the Qemu QMP protocol is, let's say... lacking. The following examples show the general functionality, but should be enhanced to use transactions where possible. All commands are in qmp-shell command format. Let's start with a full backup:
# create a new bitmap
block-dirty-bitmap-add node=disk1 name=bitmap persistent=true
# add the fleece image to the virtual machine (same size as original disk required)
blockdev-add driver=qcow2 node-name=fleecie file={"driver":"file","filename":"/tmp/fleece.qcow2"}
# add the backup target file to the virtual machine
blockdev-add driver=qcow2 node-name=backup-target-file file={"driver":"file","filename":"/tmp/backup.qcow2"}
# enable the copy-before-writer for the first disk attached, utilizing the fleece image
blockdev-add driver=copy-before-write node-name=cbw file=disk1 target=fleecie
# "blockdev-replace": make the copy-before-writer filter the major device (use "query-block" to get path parameter value, qdev node)
qom-set path=/machine/unattached/device[20] property=drive value=cbw
# add the snapshot-access filter backing the copy-before-writer
blockdev-add driver=snapshot-access file=cbw node-name=snapshot-backup-source
# create a full backup
blockdev-backup device=snapshot-backup-source target=backup-target-file sync=full job-id=test
[ wait until block job finishes]
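# (an aside, not part of the original sequence: one way to wait is to poll
# the job list; an empty result from query-block-jobs means the job is done)
query-block-jobs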
# remove the snapshot access filter from the virtual machine
blockdev-del node-name=snapshot-backup-source
# switch back to the regular disk
qom-set path=/machine/unattached/device[20] property=drive value=node-disk1
# remove the copy-before-writer
blockdev-del node-name=cbw
# remove the backup-target-file
blockdev-del node-name=backup-target-file
# detach the fleecing image
blockdev-del node-name=fleecie
After this process, the temporary fleecing image can be deleted/recreated. Now let's go for an incremental backup:
# add the fleecing and backup target image, like before
blockdev-add driver=qcow2 node-name=fleecie file={"driver":"file","filename":"/tmp/fleece.qcow2"}
blockdev-add driver=qcow2 node-name=backup-target-file file={"driver":"file","filename":"/tmp/backup-incremental.qcow2"}
# add the copy-before-write filter, but utilize the bitmap created during full backup
blockdev-add driver=copy-before-write node-name=cbw file=disk1 target=fleecie bitmap={"node":"disk1","name":"bitmap"}
# switch device to the copy-before-write filter
qom-set path=/machine/unattached/device[20] property=drive value=cbw
# add the snapshot-access filter
blockdev-add driver=snapshot-access file=cbw node-name=snapshot-backup-source
# merge the bitmap created during full backup to the snapshot-access device so
# the backup operation can access it. (you better use a transaction here)
block-dirty-bitmap-add node=snapshot-backup-source name=bitmap
block-dirty-bitmap-merge node=snapshot-backup-source target=bitmap bitmaps=[{"node":"disk1","name":"bitmap"}]
# create incremental backup (you better use a transaction here)
blockdev-backup device=snapshot-backup-source target=backup-target-file job-id=test sync=incremental bitmap=bitmap
 [ wait until backup has finished ]
 [ cleanup like before ]
# clear the dirty bitmap (you better use a transaction here)
block-dirty-bitmap-clear node=disk1 name=bitmap
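As a sketch of the "use a transaction" hints above (raw QMP JSON in the style of the reproducer below, reusing the node and bitmap names from this walkthrough; not taken from the qmpbackup docs), the temporary bitmap creation, the merge and the backup job can be grouped so they either all take effect or none do:
{"execute": "transaction", "arguments": {"actions": [
  {"type": "block-dirty-bitmap-add", "data": {"node": "snapshot-backup-source", "name": "bitmap"}},
  {"type": "block-dirty-bitmap-merge", "data": {"node": "snapshot-backup-source", "target": "bitmap", "bitmaps": [{"node": "disk1", "name": "bitmap"}]}},
  {"type": "blockdev-backup", "data": {"device": "snapshot-backup-source", "target": "backup-target-file", "sync": "incremental", "bitmap": "bitmap", "job-id": "test"}}
]}}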
Or, use a simple reproducer by directly passing QMP commands via stdio:
#!/usr/bin/bash
qemu-img create -f raw disk 1M
qemu-img create -f raw fleece 1M
qemu-img create -f raw backup 1M
qemu-system-x86_64 -drive node-name=disk,file=disk,format=file -qmp stdio -nographic -nodefaults <<EOF
 "execute": "qmp_capabilities" 
 "execute": "block-dirty-bitmap-add", "arguments":  "node": "disk", "name": "bitmap" 
 "execute": "blockdev-add", "arguments":  "node-name": "fleece", "driver": "file", "filename": "fleece" 
 "execute": "blockdev-add", "arguments":  "node-name": "backup", "driver": "file", "filename": "backup" 
 "execute": "blockdev-add", "arguments":  "node-name": "cbw", "driver": "copy-before-write", "file": "disk", "target": "fleece", "bitmap":  "node": "disk", "name": "bitmap" 
 "execute": "query-block" 
 "execute": "qom-set", "arguments":  "path": "/machine/unattached/device[4]", "property": "drive", "value": "cbw" 
 "execute": "blockdev-add", "arguments":  "node-name": "snapshot", "driver": "snapshot-access", "file": "cbw" 
 "execute": "block-dirty-bitmap-add", "arguments":  "node": "snapshot", "name": "tbitmap" 
 "execute": "block-dirty-bitmap-merge", "arguments":  "node": "snapshot", "target": "tbitmap", "bitmaps": [ "node": "disk", "name": "bitmap" ] 
[..]
 "execute": "quit" 
EOF

24 March 2025

Jonathan McDowell: Who pays the cost of progress in software?

I am told, by friends who have spent time at Google, about the reason Google Reader finally disappeared. Apparently it had become a 20% project for those who still cared about it internally, and there was some major change happening to one of its upstream dependencies that was either going to cause a significant amount of work rearchitecting Reader to cope, or create additional ongoing maintenance burden. It was no longer viable to support it as a side project, so it had to go. This was a consequence of an internal culture at Google where service owners are able to make changes that can break downstream users, and the downstream users are the ones who have to adapt.

My experience at Meta goes the other way. If you own a service or other dependency and you want to make a change that will break things for the users, it's on you to do the migration, or at the very least provide significant assistance to those who own the code. You don't just get to drop your new release and expect others to clean up; doing that tends to lead to changes being reverted. The culture flows the other way; if you break it, you fix it (nothing is someone else's problem).

There are pluses and minuses to both approaches. Users having to drive the changes to things they own stops them from blocking progress. Service/code owners having to drive the changes avoids the situation where a widely used component drops a new release that causes a lot of high priority work for folk in order to adapt.

I started thinking about this in the context of Debian a while back, and a few incidents since have resulted in my feeling that we're closer to the Google model than the Meta model. Anyone can upload a new version of their package to unstable, and that might end up breaking all the users of it. It's not quite as extreme as rolling out a new service, because it's unstable that gets affected (the clue is in the name, I really wish more people would realise that), but it can still result in release critical bugs for lots of other Debian contributors.

A good example of this are toolchain changes. Major updates to GCC and friends regularly result in FTBFS issues in lots of packages. Now in this instance the maintainer is usually diligent about a heads-up before the default changes, but it's still a whole bunch of work for other maintainers to adapt (see the list of FTBFS bugs for GCC 15 for instance - these are important, but not serious yet). Worse is when a dependency changes and hasn't managed to catch everyone who might be affected, so by the time it's discovered it's release critical, because at least one package no longer builds in unstable.

Commercial organisations try to avoid this with a decent CI/CD setup that either vendors all dependencies, or tracks changes to them and tries rebuilds before allowing things to land. This is one of the instances where a monorepo can really shine; if everything you need is in there, it's easier to track the interconnections between different components. Debian doesn't have a CI/CD system that runs for every upload, allowing us to track exact causes of regressions. Instead we have Lucas, who does a tremendous job of running archive-wide rebuilds to make sure we can still build everything. Unfortunately that means I am often unfairly grumpy at him; my heart sinks when I see a bug come in with his name attached, because it often means one of my packages has a new RC bug where I'm going to have to figure out what changed elsewhere to cause it.
However he's just (very usefully) surfacing an issue someone else created, rather than actually being the cause of the problem. I don't know if I have a point to this post. I think it's probably that I wish folk in Free Software would try and be mindful of the incompatible changes they might be introducing, and the toil they create for other volunteer developers, often not directly visible to the person making the change. The approach taken by the Debian toolchain maintainers strikes me as a good balance; they do a bunch of work up front to try and flag all the places that might need to make changes, far enough in advance of the breaking change actually landing. However they don't then allow a tardy developer to block progress.

20 March 2025

C.J. Collier: Installing a desktop environment on the HP Omen

dmidecode | grep -A8 '^System Information' tells me that the Manufacturer is HP and the Product Name is OMEN Transcend Gaming Laptop 14-fb0xxx. I'm provisioning a new piece of hardware for my eng consultant and it's proving more difficult than I expected. I must admit guilt for some of this difficulty. Instead of installing using the debian installer on my keychain, I dd'd the pv block device of the 16 inch 2023 version onto the partition set aside for it. I then rebooted into rescue mode and cleaned up the grub config, corrected the EFI boot partition's path in /etc/fstab, ran the grub installer from the rescue menu, and rebooted.

On the initial boot of the system, X or Wayland or whatever is supposed to be talking to this vast array of GPU hardware in this device is unable to do more than create a black screen on vt1. It's easy enough to switch to vt2 and get a shell on the installed system. So I'm doing that and investigating what's changed in Trixie. It seems like it's pretty significant. Did they just throw out Keith Packard's and Behdad Esfahbod's work on font rendering? I don't understand what's happening in this effort to abstract to a simpler interface. I'll probably end up reading more about it.

In an effort to have Debian re-configure the system for desktop use, I have uninstalled as many packages as I could find that were in the display and human interface category, or were firmware/drivers for devices not present in this laptop's SoC. Some commands I used to clear these packages and re-install cinnamon follow:
 
dpkg -S /etc/X11
dpkg -S /usr/lib/firmware
apt-get purge $(dpkg -l | grep -i \
  -e gnome -e gtk -e x11-common -e xfonts- -e libvdpau -e dbus-user-session -e gpg-agent \
  -e bluez -e colord -e cups -e fonts -e drm -e xf86 -e mesa -e nouveau -e cinnamon \
  -e avahi -e gdk -e pixel -e desktop -e libreoffice -e x11 -e wayland -e xorg \
  -e firmware-nvidia-graphics -e firmware-amd-graphics -e firmware-mediatek -e firmware-realtek \
  | awk '{print $2}')
apt-get autoremove
apt-get purge $(dpkg -l | grep '^r' | awk '{print $2}')
tasksel install cinnamon-desktop
 
And then I rebooted. When it came back up, I was greeted with a login prompt, and Trixie looks to be fully functional on this device, including the attached wifi radio, tethering to my Android, and the thunderbolt-attached Marvell SFP+ enclosure. I'm also installing libvirt and fetched the DVD ISO material for Debian, Ubuntu and Rocky in case we have a need of building VMs during the development process. These are the platforms that I target at work with GCP Dataproc, so I'm pretty good at performing maintenance operations on them at this point.

18 March 2025

Matthew Garrett: Failing upwards: the Twitter encrypted DM failure

Almost two years ago, Twitter launched encrypted direct messages. I wrote about their technical implementation at the time, and to the best of my knowledge nothing has changed. The short story is that the actual encryption primitives used are entirely normal and fine - messages are encrypted using AES, and the AES keys are exchanged via NIST P-256 elliptic curve asymmetric keys. The asymmetric keys are each associated with a specific device or browser owned by a user, so when you send a message to someone you encrypt the AES key with all of their asymmetric keys and then each device or browser can decrypt the message again. As long as the keys are managed appropriately, this is infeasible to break.

But how do you know what a user's keys are? I also wrote about this last year - key distribution is a hard problem. In the Twitter DM case, you ask Twitter's server, and if Twitter wants to intercept your messages they replace your key. The documentation for the feature basically admits this - if people with guns showed up there, they could very much compromise the protection in such a way that all future messages you sent were readable. It's also impossible to prove that they're not already doing this without every user verifying that the public keys Twitter hands out to other users correspond to the private keys they hold, something that Twitter provides no mechanism to do.

This isn't the only weakness in the implementation. Twitter may not be able to read the messages, but every encrypted DM is sent through exactly the same infrastructure as the unencrypted ones, so Twitter can see the time a message was sent, who it was sent to, and roughly how big it was. And because pictures and other attachments in Twitter DMs aren't sent in-line but are instead replaced with links, the implementation would encrypt the links but not the attachments - this is "solved" by simply blocking attachments in encrypted DMs. There's no forward secrecy - if a key is compromised it allows access to not only all new messages created with that key, but also all previous messages. If you log out of Twitter the keys are still stored by the browser, so they can potentially be extracted and used to decrypt your communications. And there's no group chat support at all, which is more a functional restriction than a conceptual one.

To be fair, these are hard problems to solve! Signal solves all of them, but Signal is the product of a large number of highly skilled experts in cryptography, and even so it's taken years to achieve all of this. When Elon announced the launch of encrypted DMs he indicated that new features would be developed quickly - he's since publicly mentioned the feature a grand total of once, in which he mentioned further feature development that just didn't happen. None of the limitations mentioned in the documentation have been addressed in the 22 months since the feature was launched.

Why? Well, it turns out that the feature was developed by a total of two engineers, neither of whom is still employed at Twitter. The tech lead for the feature was Christopher Stanley, who was actually a SpaceX employee at the time. Since then he's ended up at DOGE, where he apparently set off alarms when attempting to install Starlink, and who today is apparently being appointed to the board of Fannie Mae, a government-backed mortgage company.

Anyway. Use Signal.


10 March 2025

Joachim Breitner: Extrinsic termination proofs for well-founded recursion in Lean

A few months ago I explained that one reason why this blog has become more quiet is that all my work on Lean is covered elsewhere. This post is an exception, because it is an observation that is (arguably) interesting, but does not lead anywhere, so where else to put it than my own blog? Want to share your thoughts about this? Please join the discussion on the Lean community zulip!

Background

When defining a function recursively in Lean that has nested recursion, e.g. a recursive call that is in the argument to a higher-order function like List.map, then extra attention used to be necessary so that Lean can see that xs.map applies its argument only to elements of the list xs. The usual idiom is to write xs.attach.map instead, where List.attach attaches to the list elements a proof that they are in that list. You can read more about this in my Lean blog post on recursive definitions and our new shiny reference manual; look for "Example: Nested Recursion in Higher-order Functions".

To make this step less tedious I taught Lean to automatically rewrite xs.map to xs.attach.map (where suitable) within the construction of well-founded recursion, so that nested recursion just works (issue #5471). We already do such a rewriting to change if c then ... else ... to the dependent if h : c then ... else ..., but the attach-introduction is much more ambitious (the rewrites are not definitionally equal, there are higher-order arguments etc.). Rewriting the terms in a way that we can still prove the connection later when creating the equational lemmas is hairy at best. Also, we want the whole machinery to be extensible by the user, setting up their own higher order functions to add more facts to the context of the termination proof. I implemented it like this (PR #6744) and it ships with 4.18.0, but in the course of this work I thought about a quite different and maybe better way to do this, and well-founded recursion in general:

A simpler fix

Recall that to use WellFounded.fix
WellFounded.fix : (hwf : WellFounded r) (F : (x : α) → ((y : α) → r y x → C y) → C x) (x : α) : C x
we have to rewrite the functorial of the recursive function, which naturally has type
F : ((y : α) → C y) → ((x : α) → C x)
to the one above, where all recursive calls take the termination proof r y x. This is a fairly hairy operation, mangling the type of matchers' motives and whatnot. Things are simpler for recursive definitions using the new partial_fixpoint machinery, where we use Lean.Order.fix
Lean.Order.fix : [CCPO α] (F : α → α) (hmono : monotone F) : α
so the functorial's type is unmodified (here α will be ((x : α) → C x)), and everything else is in the propositional side-condition monotone F. For this predicate we have a syntax-guided compositional tactic, and it's easily extensible, e.g. by
theorem monotone_mapM (f : γ → α → m β) (xs : List α) (hmono : monotone f) :
    monotone (fun x => xs.mapM (f x))
Once given, we don't care about the content of that proof. In particular, proving the unfolding theorem only deals with the unmodified F that closely matches the function definition as written by the user. Much simpler!

Isabelle has it easier

Isabelle also supports well-founded recursion, and has great support for nested recursion. And it's much simpler! There, all you have to do to make nested recursion work is to define a congruence lemma of the form, for List.map something like our List.map_congr_left
List.map_congr_left : (h : ∀ a ∈ l, f a = g a) :
    List.map f l = List.map g l
This is because in Isabelle, too, the termination proof is a side-condition that essentially states "the functorial F calls its argument f only on smaller arguments".

Can we have it easy, too?

I had wished we could do the same in Lean for a while, but that form of congruence lemma just isn't strong enough for us. But maybe there is a way to do it, using an existential to give a witness that F can alternatively be implemented using the more restrictive argument. The following callsOn P F predicate can express that F calls its higher-order argument only on arguments that satisfy the predicate P:
section setup

variable {α : Sort u}
variable {β : α → Sort v}
variable {γ : Sort w}

def callsOn (P : α → Prop) (F : (∀ y, β y) → γ) :=
  ∃ (F' : (∀ y, P y → β y) → γ), ∀ f, F' (fun y _ => f y) = F f

variable (R : α → α → Prop)
variable (F : (∀ y, β y) → (∀ x, β x))

local infix:50 " ≺ " => R

def recursesVia : Prop := ∀ x, callsOn (· ≺ x) (fun f => F f x)

noncomputable def fix (wf : WellFounded R) (h : recursesVia R F) : (∀ x, β x) :=
  wf.fix (fun x => (h x).choose)

def fix_eq (wf : WellFounded R) h x :
    fix R F wf h x = F (fix R F wf h) x := by
  unfold fix
  rw [wf.fix_eq]
  apply (h x).choose_spec
This allows nice compositional lemmas to discharge callsOn predicates:
theorem callsOn_base (y : α) (hy : P y) :
    callsOn P (fun (f : ∀ x, β x) => f y) := by
  exists fun f => f y hy
  intros; rfl

@[simp]
theorem callsOn_const (x : γ) :
    callsOn P (fun (_ : ∀ x, β x) => x) :=
  ⟨fun _ => x, fun _ => rfl⟩

theorem callsOn_app
    {γ₁ : Sort uu} {γ₂ : Sort ww}
    (F₁ : (∀ y, β y) → γ₂ → γ₁) -- can this also support dependent types?
    (F₂ : (∀ y, β y) → γ₂)
    (h₁ : callsOn P F₁)
    (h₂ : callsOn P F₂) :
    callsOn P (fun f => F₁ f (F₂ f)) := by
  obtain ⟨F₁', h₁⟩ := h₁
  obtain ⟨F₂', h₂⟩ := h₂
  exists (fun f => F₁' f (F₂' f))
  intros; simp_all

theorem callsOn_lam
    {γ₁ : Sort uu}
    (F : γ₁ → (∀ y, β y) → γ) -- can this also support dependent types?
    (h : ∀ x, callsOn P (F x)) :
    callsOn P (fun f x => F x f) := by
  exists (fun f x => (h x).choose f)
  intro f
  ext x
  apply (h x).choose_spec

theorem callsOn_app2
    {γ₁ : Sort uu} {γ₂ : Sort ww}
    (g : γ₁ → γ₂ → γ)
    (F₁ : (∀ y, β y) → γ₁) -- can this also support dependent types?
    (F₂ : (∀ y, β y) → γ₂)
    (h₁ : callsOn P F₁)
    (h₂ : callsOn P F₂) :
    callsOn P (fun f => g (F₁ f) (F₂ f)) := by
  apply_rules [callsOn_app, callsOn_const]
With this setup, we can have the following, possibly user-defined, lemma expressing that List.map calls its argument only on elements of the list:
theorem callsOn_map (δ : Type uu) (γ : Type ww)
    (P : α → Prop) (F : (∀ y, β y) → δ → γ) (xs : List δ)
    (h : ∀ x, x ∈ xs → callsOn P (fun f => F f x)) :
    callsOn P (fun f => xs.map (fun x => F f x)) := by
  suffices callsOn P (fun f => xs.attach.map (fun ⟨x, h⟩ => F f x)) by
    simpa
  apply callsOn_app
  · apply callsOn_app
    · apply callsOn_const
    · apply callsOn_lam
      intro ⟨x', hx'⟩
      dsimp
      exact (h x' hx')
  · apply callsOn_const

end setup
So here is the (manual) construction of a nested map for trees:
section examples

structure Tree (α : Type u) where
  val : α
  cs : List (Tree α)

-- essentially
-- def Tree.map (f : α → β) : Tree α → Tree β :=
--   fun t => ⟨f t.val, t.cs.map Tree.map⟩

noncomputable def Tree.map (f : α → β) : Tree α → Tree β :=
  fix (sizeOf · < sizeOf ·) (fun map t => ⟨f t.val, t.cs.map map⟩)
    (InvImage.wf (sizeOf ·) WellFoundedRelation.wf) <| by
  intro ⟨v, cs⟩
  dsimp only
  apply callsOn_app2
  · apply callsOn_const
  · apply callsOn_map
    intro t' ht'
    apply callsOn_base
    -- ht' : t' ∈ cs
    -- ⊢ sizeOf t' < sizeOf { val := v, cs := cs }
    decreasing_trivial

end examples
This makes me happy! All details of the construction are now contained in a proof that can proceed by a syntax-driven tactic and that's easily and (likely robustly) extensible by the user. It also means that we can share a lot of code paths (e.g. everything related to equational theorems) between well-founded recursion and partial_fixpoint. I wonder if this construction is really as powerful as our current one, or if there are certain (likely dependently typed) functions where this doesn't fit, but the above is dependent, so it looks good. With this construction, functions defined by well-founded recursion will reduce even worse in the kernel, I assume. This may be a good thing.

The cake is a lie What unfortunately kills this idea, though, is the generation of the functional induction principles, which I believe is not (easily) possible with this construction: The functional induction principle is proved by massaging F to return a proof, but since the extra assumptions (e.g. for ite or List.map) only exist in the termination proof, they are not available in F. Oh wey, how anticlimactic.

PS: Path dependencies Curiously, if we didn't have functional induction at this point yet, then very likely I'd change Lean to use this construction, and then we'd either not get functional induction, or it would be implemented very differently, maybe a more syntactic approach that would re-prove termination. I guess that's called path dependence.

5 March 2025

Reproducible Builds: Reproducible Builds in February 2025

Welcome to the second report in 2025 from the Reproducible Builds project. Our monthly reports outline what we've been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. As usual, however, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. Table of contents:
  1. Reproducible Builds at FOSDEM 2025
  2. Reproducible Builds at PyCascades 2025
  3. Does Functional Package Management Enable Reproducible Builds at Scale?
  4. reproduce.debian.net updates
  5. Upstream patches
  6. Distribution work
  7. diffoscope & strip-nondeterminism
  8. Website updates
  9. Reproducibility testing framework

Reproducible Builds at FOSDEM 2025 Similar to last year's event, there was considerable activity regarding Reproducible Builds at FOSDEM 2025, held on 1st and 2nd February this year in Brussels, Belgium. We count at least four talks related to reproducible builds. (You can also read our news report from last year's event, in which Holger Levsen presented in the main track.)
Jelle van der Waa, Holger Levsen and kpcyrd presented in the Distributions track on A Tale of several distros joining forces for a common goal. In this talk, three developers from two different Linux distributions (Arch Linux and Debian) discuss this goal, which is, of course, reproducible builds. The presenters discuss both what is shared and what differs between the two efforts, touching on the history and future challenges alike. The slides of this talk are available to view, as is the full video (30m02s). The talk was also discussed on Hacker News.
Zbigniew Jędrzejewski-Szmek presented in the ever-popular Python track on Rewriting .pyc files for fun and reproducibility, i.e. the bytecode files generated by Python in order to speed up module imports: "It's been known for a while that those are not reproducible: on different architectures, the bytecode for exactly the same sources ends up slightly different." The slides of this talk are available, as is the full video (28m32s).
In the Nix and NixOS track, Julien Malka presented on the Saturday asking How reproducible is NixOS: "We know that the NixOS ISO image is very close to being perfectly reproducible thanks to reproducible.nixos.org, but there doesn't exist any monitoring of Nixpkgs as a whole. In this talk I'll present the findings of a project that evaluated the reproducibility of Nixpkgs as a whole by mass rebuilding packages from revisions between 2017 and 2023 and comparing the results with the NixOS cache." Unfortunately, no video of the talk is available, but there is a blog post and an article on the results.
Lastly, Simon Tournier presented in the Open Research track on the confluence of GNU Guix and Software Heritage: Source Code Archiving to the Rescue of Reproducible Deployment. Simon's talk describes the design and implementation we came up with, and reports on the archival coverage for package source code with data collected over five years. It opens onto some remaining challenges toward better open and reproducible research. The slides for the talk are available, as is the full video (23m17s).

Reproducible Builds at PyCascades 2025 Vagrant Cascadian presented at this year's PyCascades conference, which was held on February 8th and 9th in Portland, OR, USA. PyCascades is a regional instance of PyCon held in the Pacific Northwest. Vagrant's talk, entitled Re-Py-Ducible Builds, caught the audience's attention with the following abstract:
Crank your Python best practices up to 11 with Reproducible Builds! This talk will explore Reproducible Builds by highlighting issues identified in Python projects, from the simple to the seemingly inscrutable. Reproducible Builds is basically the crazy idea that when you build something, and you build it again, you get the exact same thing; or, even more important, if someone else builds it, they get the exact same thing too.
More info is available on the talk's page.

Does Functional Package Management Enable Reproducible Builds at Scale? On our mailing list last month, Julien Malka, Stefano Zacchiroli and Théo Zimmermann of Télécom Paris' in-house research laboratory, the Information Processing and Communications Laboratory (LTCI), announced that they had published an article asking the question: Does Functional Package Management Enable Reproducible Builds at Scale? (PDF). This month, however, Ludovic Courtès followed up to the original announcement on our mailing list mentioning, amongst other things, the Guix Data Service and how it shows the reproducibility of GNU Guix over time, as described in a GNU Guix blog post back in March 2024.

reproduce.debian.net updates The last few months have seen the introduction of reproduce.debian.net. Announced first at the recent Debian MiniDebConf in Toulouse, reproduce.debian.net is an instance of rebuilderd operated by the Reproducible Builds project. Powering this work is rebuilderd, our server which monitors the official package repositories of Linux distributions and attempts to reproduce the observed results there. This month, however, Holger Levsen:
  • Split packages that are not specific to any architecture away from the amd64.reproducible.debian.net service into a new all.reproducible.debian.net page.
  • Increased the number of riscv64 nodes to a total of 4, and added a new amd64 node thanks to our (now 10-year) sponsor, IONOS.
  • Discovered an issue in the Debian build service where some new incoming build-dependencies do not end up historically archived.
  • Uploaded the devscripts package, incorporating changes from Jochen Sprickerhof to the debrebuild script, specifically to fix the handling of the Rules-Requires-Root header in Debian source packages.
  • Uploaded a number of Rust dependencies of rebuilderd (rust-libbz2-rs-sys, rust-actix-web, rust-actix-server, rust-actix-http, rust-actix-web-codegen and rust-time-tz) after they were prepared by kpcyrd.
Jochen Sprickerhof also updated the sbuild package to:
  • Obey requests from the user/developer for a different temporary directory.
  • Use the root/superuser for some values of Rules-Requires-Root.
  • Don't pass --root-owner-group to old versions of dpkg.
and additionally requested that many Debian packages be rebuilt by the build servers in order to work around bugs found on reproduce.debian.net. [ ][ ][ ]
Lastly, kpcyrd has also worked towards getting rebuilderd packaged in NixOS, and Jelle van der Waa picked up the existing pull request for Fedora support within rebuilderd and made it work with the existing Koji rebuilderd script. The server is being packaged for Fedora in an unofficial copr repository, and will be in the official repositories once all the dependencies are packaged.

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Distribution work There has been the usual work in various distributions this month. In Debian, 17 reviews of Debian packages were added, 6 were updated and 8 were removed this month, adding to our knowledge about identified issues.
Fedora developers Davide Cavalca and Zbigniew Jędrzejewski-Szmek gave a talk on Reproducible Builds in Fedora (PDF), touching on SRPM-specific issues as well as the current status and future plans.
Thanks to an investment from the Sovereign Tech Agency, the FreeBSD project's work on unprivileged and reproducible builds continued this month. Notable fixes include:
The Yocto Project has been struggling to upgrade to the latest Go and Rust releases due to reproducibility problems in the newer versions. Hongxu Jia tracked down the issue with Go which meant that the project could upgrade from the 1.22 series to 1.24, with the fix being submitted upstream for review (see above). For Rust, however, the project was significantly behind, but has made recent progress after finally identifying the blocking reproducibility issues. At time of writing, the project is at Rust version 1.82, with patches under review for 1.83 and 1.84 and fixes being discussed with the Rust developers. The project hopes to improve the tests for reproducibility in the Rust project itself in order to try and avoid future regressions. Yocto continues to maintain its ability to binary reproduce all of the recipes in OpenEmbedded-Core, regardless of the build host distribution or the current build path.
Finally, Douglas DeMaio published an article on the openSUSE blog announcing that the Reproducible-openSUSE (RBOS) Project Hits [Significant] Milestone. In particular:
The Reproducible-openSUSE (RBOS) project, which is a proof-of-concept fork of openSUSE, has reached a significant milestone after demonstrating that a usable Linux distribution can be built with 100% bit-identical packages.
This news was also announced on our mailing list by Bernhard M. Wiedemann, who also published another report for openSUSE.

diffoscope & strip-nondeterminism diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 288 and 289 to Debian:
  • Add asar to DIFFOSCOPE_FAIL_TESTS_ON_MISSING_TOOLS in order to address Debian bug #1095057. [ ]
  • Catch a CalledProcessError when calling html2text. [ ]
  • Update the minimal Black version. [ ]
Additionally, Vagrant Cascadian updated diffoscope in GNU Guix to version 287 [ ][ ] and 288 [ ][ ] as well as submitted a patch to update to 289 [ ]. Vagrant also fixed an issue that was breaking reprotest on Guix [ ][ ]. strip-nondeterminism is our sister tool to remove specific non-deterministic results from a completed build. This month version 1.14.1-2 was uploaded to Debian unstable by Holger Levsen.

Website updates There were a large number of improvements made to our website this month, including:

Reproducibility testing framework The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. This month, a number of changes were made by Holger Levsen, including:
  • reproduce.debian.net-related:
    • Add a helper script to manually schedule packages. [ ][ ][ ][ ][ ]
    • Fix a link in the website footer. [ ]
    • Strip the emojis from package names on the manual rebuilder in order to ease copy-and-paste. [ ]
    • On the various statistics pages, provide the number of affected source packages [ ][ ] as well as provide various totals [ ][ ].
    • Fix graph labels for the various architectures [ ][ ] and make them clickable too [ ][ ][ ].
    • Break the displayed HTML in blocks of 256 packages in order to address rendering issues. [ ][ ]
    • Add monitoring jobs for riscv64 architecture nodes and integrate them elsewhere in our infrastructure. [ ][ ]
    • Add riscv64 architecture nodes. [ ][ ][ ][ ][ ]
    • Update much of the documentation. [ ][ ][ ]
    • Make a number of improvements to the layout and style. [ ][ ][ ][ ][ ][ ][ ]
    • Remove direct links to JSON and database backups. [ ]
    • Drop a Blues Brothers reference from the frontpage. [ ]
  • Debian-related:
    • Deal with /boot/vmlinuz* being called vmlinux* on the riscv64 architecture. [ ]
    • Add a new ionos17 node. [ ][ ][ ][ ][ ]
    • Install debian-repro-status on all Debian trixie and unstable jobs. [ ]
  • FreeBSD-related:
    • Switch to running the latest branch of FreeBSD. [ ]
  • Misc:
    • Fix /etc/cron.d and /etc/logrotate.d permissions for Jenkins nodes. [ ]
    • Add support for riscv64 architecture nodes. [ ][ ]
    • Grant Jochen Sprickerhof access to the o4 node. [ ]
    • Disable the janitor-setup-worker. [ ][ ]
In addition:
  • kpcyrd fixed the /all/api/ API endpoints on reproduce.debian.net by altering the nginx configuration. [ ]
  • James Addison updated reproduce.debian.net to display the so-called "bad reasons" hyperlink inline [ ] and merged the "Categorized issues" links into the "Reproduced builds" column [ ].
  • Jochen Sprickerhof also made some reproduce.debian.net-related changes, adding support for detecting a bug in the mmdebstrap package [ ] as well as updating some documentation [ ].
  • Roland Clobus continued their work on reproducible live images for Debian, making changes related to new clustering of jobs in openQA. [ ]
And finally, both Holger Levsen [ ][ ][ ] and Vagrant Cascadian performed significant node maintenance. [ ][ ][ ][ ][ ]
If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:

23 February 2025

Valhalla's Things: Water Resistant Hood

Posted on February 23, 2025
Tags: madeof:atoms, craft:sewing, FreeSoftWear
[image: a person wearing a relatively boxy water resistant jacket with pockets and a zipper, and a detached hood with a big square cowl that reaches mid-torso.] Many years ago I made myself a vest with lots of pockets [1] in a few layers of cheap cotton, and wore the hell out of it, for the added warmth, but most importantly for the convenience provided by the pockets. [image: the same person showing just the vest, with two applied pockets on the bust, closed with buttons, and two big flaps covering two welted pockets at waist level, plus a strip of fabric with loops where things may be attached.] Then a few years ago the cheap cotton had started to get worn, and I decided I needed to replace it. I found a second-choice (and thus cheaper :) ) version of a water-repellent cotton and made another vest, lined with regular cotton, for a total of just two layers. [image: the same person, this time with two sleeves, attached to the vest with big snaps, the outline of which can be seen on the vest. they are significantly less faded than the vest.] This time I skipped a few pockets that I had found I didn't use that much, and I didn't add a hood, which didn't play that well when worn over a hoodie, but I added some detached sleeves, for additional wind protection. This left about 60 cm and some odd pieces of leftover fabric in my stash, for which I had no plan. [image: the hood pulled down on the back, showing the big square cowl.] And then February [2] came, and I needed a quick, simple, mindless handsewing project for the first weekend. I saw the vest (which I'm wearing as much as the old one) and the sleeves (which have been used much less, but I'd like to change this) and thought about making a matching hood for it, using my square hood pattern. Since the etaproof is a bit stiff and not that nice to the touch I decided to line [3] it with the same cotton as the vest and sleeves, and in the style of the pattern I did so by finishing each panel with its own lining (with regular cotton thread) and then whipstitching the panels together with the corespun cotton/poly thread recommended by the seller of the fabric. I'm not sure this is the best way to construct something that is supposed to resist the rain, but if I notice issues I can always add some sealing tape afterwards. I do have a waterproof cape to wear in case of real rain, so this is only supposed to work for light rain anyway, and it may prove not to be an issue. As something designed to be worn in light rain, this is also something likely to be worn in low light conditions, where 100% black may not be the wisest look. On the vest I had added reflective piping to the armscyes, but I was out of the same piping. [image: from the front; a flash was used to take the picture, making the border of the cowl very visible.] I did however have a spool of reflector thread made of glass fibre by Rico Design, which I think was originally sold to be worked into knitting or crochet projects (it is now discontinued) and I had never used. I decided to try and sew a decorative blanket stitch border, a decision I had reasons to regret, since the thread broke and tangled like crazy, but in the end it was done, I like how it looks, and it seems pretty functional. I hope it won't break with time and use, and if it does I'll either fix it or try to redo it with something else. Of course, the day I finished sewing the reflective border it stopped raining, so I haven't worn it yet, but I hope I'll be able to, and if it is a horrible failure I'll make sure to update this post.

  1. and I've just realized that I haven't migrated that pattern to my pattern website, and I should do that. just don't hold your breath for it to happen O:-). And for the time being it will not have step-by-step pictures, as I currently don't need another vest.
  2. and February of course means a weekend in front of a screen that is showing a live-streamed conference.
  3. and of course I updated the pattern with instructions on how to add a lining.

20 February 2025

Paul Tagliamonte: boot2kier

I can't remember exactly the joke I was making at the time in my work's slack instance (I'm sure it wasn't particularly funny, though; and not even worth re-reading the thread to work out), but it wound up with me writing a UEFI binary for the punchline. Not to spoil the ending, but it worked - no pesky kernel, no messing around with "userland". I guess the only part of this you really need to know for the setup here is that it was a Severance joke, which is some fantastic TV. If you haven't seen it, this post will seem perhaps weirder than it actually is. I promise I haven't joined any new cults. For those who have seen it, the payoff to my joke is that I wanted my machine to boot directly to an image of Kier Eagan. As for how to do it, I figured I'd give the uefi crate a shot, and see how it is to use, since this is a low-stakes way of trying it out. In general, this isn't the sort of thing I'd usually post about, except this wound up being easier and way cleaner than I thought it would be. That alone is worth sharing, in the hopes someone comes across this in the future and feels like they, too, can write something fun targeting the UEFI. First things first: gotta create a Rust project (I'll leave that part to you depending on your life choices) and add the uefi crate to your Cargo.toml. You can either use cargo add or add a line like this by hand:
uefi = { version = "0.33", features = ["panic_handler", "alloc", "global_allocator"] }
We also need to teach cargo about how to go about building for the UEFI target, so we need to create a rust-toolchain.toml with one (or both) of the UEFI targets we're interested in:
[toolchain]
targets = ["aarch64-unknown-uefi", "x86_64-unknown-uefi"]
Unfortunately, I wasn't able to use the image crate, since it won't build against the uefi target. This looks like it's because rustc had no way to compile the required floating point operations within the image crate without hardware floating point instructions specifically. Rust tends to punt a lot of that to libm usually, so this isn't entirely shocking given we're no_std for a non-hardfloat target. So-called "softening" requires a software floating point implementation that the compiler can use to polyfill (feels weird to use the term polyfill here, but I guess it's spiritually right?) the lack of hardware floating point operations, which Rust hasn't implemented for this target yet. As a result, I changed tactics, and figured I'd use ImageMagick to pre-compute the pixels from a jpg, rather than doing it at runtime. A bit of a bummer, since I need to do more out-of-band pre-processing and hardcoding, and updating the image kinda sucks as a result, but it's entirely manageable.
$ convert -resize 1280x900 kier.jpg kier.full.jpg
$ convert -depth 8 kier.full.jpg rgba:kier.bin
This will take our input file (kier.jpg), resize it to get as close to the desired resolution as possible while maintaining aspect ratio, then convert it from a jpg to a flat array of 4-byte RGBA pixels. Critically, it's also important to remember that the size of the kier.full.jpg file may not actually be the requested size; it will not change the aspect ratio, so be sure to make a careful note of the resulting size of the kier.full.jpg file. The last step with the image is to compile it into our Rust binary, since we don't want to struggle with trying to read this off disk, which is thankfully real easy to do.
const KIER: &[u8] = include_bytes!("../kier.bin");
const KIER_WIDTH: usize = 1280;
const KIER_HEIGHT: usize = 641;
const KIER_PIXEL_SIZE: usize = 4;
Remember to use the width and height from the final kier.full.jpg file as the values for KIER_WIDTH and KIER_HEIGHT. KIER_PIXEL_SIZE is 4, since we have 4-byte wide values for each pixel as a result of our conversion step into RGBA. We'll only use RGB, and if we ever drop the alpha channel, we can drop that down to 3. I don't entirely know why I kept alpha around, but I figured it was fine. My kier.full.jpg image winds up shorter than the requested height (which is also qemu's default resolution for me), which means we'll get a semi-annoying black band under the image when we go to run it, but it'll work. Anyway, now that we have our image as bytes, we can get down to work, and write the rest of the code to handle moving bytes around from in-memory as a flat block of pixels, and request that they be displayed using the UEFI GOP. We'll just need to hack up a container for the image pixels and teach it how to blit to the display.
/// RGB Image to move around. This isn't the same as an
/// `image::RgbImage`, but we can associate the size of
/// the image along with the flat buffer of pixels.
struct RgbImage {
    /// Size of the image as a tuple, as the
    /// (width, height)
    size: (usize, usize),
    /// raw pixels we'll send to the display.
    inner: Vec<BltPixel>,
}

impl RgbImage {
    /// Create a new `RgbImage`.
    fn new(width: usize, height: usize) -> Self {
        RgbImage {
            size: (width, height),
            inner: vec![BltPixel::new(0, 0, 0); width * height],
        }
    }

    /// Take our pixels and request that the UEFI GOP
    /// display them for us.
    fn write(&self, gop: &mut GraphicsOutput) -> Result {
        gop.blt(BltOp::BufferToVideo {
            buffer: &self.inner,
            src: BltRegion::Full,
            dest: (0, 0),
            dims: self.size,
        })
    }
}

impl Index<(usize, usize)> for RgbImage {
    type Output = BltPixel;

    fn index(&self, idx: (usize, usize)) -> &BltPixel {
        let (x, y) = idx;
        &self.inner[y * self.size.0 + x]
    }
}

impl IndexMut<(usize, usize)> for RgbImage {
    fn index_mut(&mut self, idx: (usize, usize)) -> &mut BltPixel {
        let (x, y) = idx;
        &mut self.inner[y * self.size.0 + x]
    }
}
We also need to do some basic setup to get a handle to the UEFI GOP via the UEFI crate (using uefi::boot::get_handle_for_protocol and uefi::boot::open_protocol_exclusive for the GraphicsOutput protocol), so that we have the object we need to pass to RgbImage in order for it to write the pixels to the display. The only trick here is that the display on the booted system can really be any resolution, so we need to do some capping to ensure that we don't write more pixels than the display can handle. Writing fewer than the display's maximum seems fine, though.
fn praise() -> Result {
    let gop_handle = boot::get_handle_for_protocol::<GraphicsOutput>()?;
    let mut gop = boot::open_protocol_exclusive::<GraphicsOutput>(gop_handle)?;

    // Get the (width, height) that is the minimum of
    // our image and the display we're using.
    let (width, height) = gop.current_mode_info().resolution();
    let (width, height) = (width.min(KIER_WIDTH), height.min(KIER_HEIGHT));

    let mut buffer = RgbImage::new(width, height);
    for y in 0..height {
        for x in 0..width {
            let idx_r = ((y * KIER_WIDTH) + x) * KIER_PIXEL_SIZE;
            let pixel = &mut buffer[(x, y)];
            pixel.red = KIER[idx_r];
            pixel.green = KIER[idx_r + 1];
            pixel.blue = KIER[idx_r + 2];
        }
    }
    buffer.write(&mut gop)?;
    Ok(())
}
Not so bad! A bit tedious: we could solve some of this by turning KIER into an RgbImage at compile-time using some clever Cow and const tricks, and implement blitting a sub-image of the image, but this will do for now. This is a joke, after all; let's not go nuts. All that's left with our code is for us to write our main function and try and boot the thing!
#[entry]
fn main() -> Status {
    uefi::helpers::init().unwrap();
    praise().unwrap();
    boot::stall(100_000_000);
    Status::SUCCESS
}
If you're following along at home and so interested, the final source is over at gist.github.com. We can go ahead and build it using cargo (as is our tradition) by targeting the UEFI platform.
$ cargo build --release --target x86_64-unknown-uefi

Testing the UEFI Blob While I can definitely get my machine to boot these blobs to test, I figured I'd save myself some time by using QEMU to test without a full boot. If you've not done this sort of thing before, we'll need two packages, qemu and ovmf. It's a bit different than most invocations of qemu you may see out there, so I figured it'd be worth writing this down, too.
$ doas apt install qemu-system-x86 ovmf
qemu has a nice feature where it'll create us an EFI partition as a drive and attach it to the VM off a local directory, so let's construct an EFI partition file structure, and drop our binary into the conventional location. If you haven't done this before, and are only interested in running this in a VM, don't worry too much about it; a lot of it is convention and this layout should work for you.
$ mkdir -p esp/efi/boot
$ cp target/x86_64-unknown-uefi/release/*.efi \
 esp/efi/boot/bootx64.efi
With all this in place, we can kick off qemu, booting it in UEFI mode using the ovmf firmware, attaching our EFI partition directory as a drive to our VM to boot off of.
$ qemu-system-x86_64 \
 -enable-kvm \
 -m 2048 \
 -smbios type=0,uefi=on \
 -bios /usr/share/ovmf/OVMF.fd \
 -drive format=raw,file=fat:rw:esp
If all goes well, soon you'll be met with the all-knowing gaze of the Chosen One, Kier Eagan. The thing that really impressed me about all this is that this program worked first try; it all went so boringly normal. Truly, kudos to the uefi crate maintainers, it's incredibly well done.

Booting a live system Sure, we could stop here, but anyone can open up an app window and see a picture of Kier Eagan, so I knew I needed to finish the job and boot a real machine up with this. In order to do that, we need to format a USB stick. BE SURE /dev/sda IS CORRECT IF YOU'RE COPYING AND PASTING. All my drives are NVMe, so BE CAREFUL if you use SATA; it may very well be your hard drive! Please do not destroy your computer over this.
$ doas fdisk /dev/sda
Welcome to fdisk (util-linux 2.40.4).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): n
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-4014079, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-4014079, default 4014079):
Created a new partition 1 of type 'Linux' and of size 1.9 GiB.
Command (m for help): t
Selected partition 1
Hex code or alias (type L to list all): ef
Changed type of partition 'Linux' to 'EFI (FAT-12/16/32)'.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
Once that looks good (depending on your flavor of udev you may or may not need to unplug and replug your USB stick), we can go ahead and format our new EFI partition (BE CAREFUL THAT /dev/sda IS YOUR USB STICK) and write our EFI directory to it.
$ doas mkfs.fat /dev/sda1
$ doas mount /dev/sda1 /mnt
$ cp -r esp/efi /mnt
$ find /mnt
/mnt
/mnt/efi
/mnt/efi/boot
/mnt/efi/boot/bootx64.efi
Of course, naturally, devotion to Kier shouldn't mean backdooring your system. Disabling Secure Boot runs counter to the Core Principles, such as Probity, and not doing this would surely run counter to Verve, Wit and Vision. This bit does require that you've taken the step to enroll a MOK and know how to use it; right about now is when we can use sbsign to sign our UEFI binary we want to boot from, to continue enforcing Secure Boot. The details for how this command should be run specifically are likely something you'll need to work out depending on how you've decided to manage your MOK.
$ doas sbsign \
 --cert /path/to/mok.crt \
 --key /path/to/mok.key \
 target/x86_64-unknown-uefi/release/*.efi \
 --output esp/efi/boot/bootx64.efi
I figured I'd leave a signed copy of boot2kier at /boot/efi/EFI/BOOT/KIER.efi on my Dell XPS 13, with Secure Boot enabled and enforcing. It just took a matter of going into my BIOS to add the right boot option, which was no sweat. I'm sure there is a way to do it using efibootmgr, but I wasn't smart enough to do that quickly. I let 'er rip, and it booted up and worked great! It was a bit hard to get a video of my laptop, though; but lucky for me, I have a Minisforum Z83-F sitting around (which, until a few weeks ago, was running the annual http server to control my christmas tree), so I grabbed it out of the christmas bin, wired it up to a video capture card I have sitting around, and figured I'd grab a video of me booting a physical device off the boot2kier USB stick.
Attentive readers will notice the image of Kier is smaller than on the qemu-booted system, which just means our real machine has a larger GOP display resolution than qemu, which makes sense! We could write some fancy resize code (sounds annoying), center the image (can't be assed, but should be the easy way out here) or resize the original image (pretty hardware-specific workaround). Additionally, you can make out the image being written to the display before us (the Minisforum logo) behind Kier, which is really cool stuff. If we were real fancy we could write blank pixels to the display before blitting Kier, but, again, I don't think I care to do that much work.

But now I must away If I wanted to keep this joke going, I'd likely try and find a copy of the original video when Helly 100%s her file and boot into that, or maybe play a terrible midi PC speaker rendition of Kier, Chosen One, Kier after rendering the image. I, unfortunately, don't have any friends involved with production (yet?), so I reckon all that's out for now. I'll likely stop playing with this; the joke was done, and I'm only writing this post because of how great everything was along the way. All in all, this reminds me so much of building a homebrew kernel to boot a system into, but like, good, though, and it's a nice reminder of both how fun this stuff can be, and how far we've come. UEFI protocols are light-years better than how we did it in the dark ages, and the tooling for this is SO much more mature. Booting a custom UEFI binary is miles ahead of trying to boot your own kernel, and I can't believe how good the uefi crate is specifically. Praise Kier! Kudos to everyone involved in making this so delightful.

4 February 2025

Valhalla's Things: Conference Talk Timeout Ring, Part One

Posted on February 4, 2025
Tags: madeof:atoms, madeof:bits
[video: a ring of RGB LEDs in front of a computer; the LEDs light up one at a time in colours that go from yellow to red.] A while ago I may have accidentally bought a ring of 12 RGB LEDs; I soldered temporary leads on it, connected it to a CircuitPython-supported board and played around with it for a while. Then we had a couple of friends come over to remote FOSDEM together, and I had talked with one of them about WS2812 / NeoPixels, so I brought them to the living room, in case there was a chance to show them in sort-of-use. Then I was dealing with playing the various streams as we moved from one room to the next, which led to me being called "video team", which led to me wearing a video team shirt (from an old DebConf, not FOSDEM, but still video team), which led to somebody asking me whether I also had the sheet with the countdown to the end of the talk, and the answer was sort-of-yes (I should have the ones we used to use for our Linux Day), but not handy. But I had a thing with twelve things in a clock-like circle. A bit of fiddling on the CircuitPython REPL resulted, if I remember correctly, in something like:
import board
import neopixel
import time
num_pixels = 12
pixels = neopixel.NeoPixel(board.GP0, num_pixels)
pixels.brightness = 0.1
def end(min):
    pixels.fill((0, 0, 0))  # start with all LEDs off
    for i in range(12):
        # colour shifts from yellow towards red as time runs out
        pixels[i] = (127 + 10 * i, 8 * (12 - i), 0)
        # turn off the previous LED, so a single LED is lit at a time
        pixels[i-1] = (0, 0, 0)
        time.sleep(min * 5)  # min * 60 / 12: each LED covers 1/12 of the total
Now, I wasn't very consistent in running end, especially since I wasn't sure whether I wanted to run it at the beginning of the talk with the full duration or just in the last 5 - 10 minutes depending on the length of the slot, but I've had at least one person agree that the general idea has potential, so I'm taking these notes to be able to work on it in the future. One thing that needs to be fixed is the fact that with the ring just attached with temporary wires and left on the table it isn't clear which LED is number 0, so it will need a bit of a case or something, but that's something that can be dealt with before the next FOSDEM. And I should probably add some input interface, so that it is self-contained and not tethered to a computer and run from the REPL; a sketch of one possible approach follows below. (And then I may also have a vague idea for putting that ring into some wearable thing: good thing that I actually bought two :D )
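For the input interface, a minimal sketch of one possible approach, assuming a push button wired between a free pin (GP1, chosen arbitrarily here) and ground, and reusing the end function from above:
import board
import digitalio
import time

button = digitalio.DigitalInOut(board.GP1)
button.switch_to_input(pull=digitalio.Pull.UP)

while True:
    # active-low: pressing the button connects the pin to ground
    if not button.value:
        end(5)  # run a 5-minute countdown
    time.sleep(0.05)  # polling / debounce interval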

2 February 2025

Dave Hibberd: SOTA Trip Reports: Feb 02, 2025 - Bennachie

This was originally posted on SOTA Forums. It's here for completeness of my writing.
To quote @MM0EFI and the GM0ESS gang, today was a particularly Amateur showing! Having spent all weekend locked in the curling rink, ruining my knees and inflicting mild liver damage in the Aberdeen City Open competition, I needed some outside time away from people to stretch the legs and loosen my knees. With my teammates/guests shipped off early on account of our quality performance, and the days fair drawin' out now, I found myself with a free afternoon to have a quick run up something nearby before a 1640 sunset! Up the back of Bennachie is a quick steady ascent, and in 13 years of living up here I've never summited the big hill! Now is as good a time as any. In SOTA terms, this hill is GM/ES-061. In geographical terms, it's around 20 miles inland from Aberdeen city. I've been experimenting with these AliExpress whips since the end of last year and the forecast wind was low enough to take one into the hills. I cut and terminated 8x 2.5m radials for an effective ground plane last week and wanted to try that against the flat ribbon that it came with. The ascent was pleasant enough; I got to the summit in good time, and out came my Quansheng radio to get the GM/ES society on 2m. First my Nagoya whip: called CQ and heard nothing. With generally poor reports in WhatsApp I opted to get the slim-g up my AliExpress fibreglass mast. In an amateur showing last week, I broke the tip of the mast on Cat Law helping 2M0HSK do his first activation, due to the wind, and had forgotten this until I summited this week. Squeezing my antenna on was tough, and after many failed attempts to get it up (the mast kept collapsing as I was rushing and not getting the friction hold on each section correctly) and still not hearing anything at all, I changed location and tried again. In my new position, I received 2M0RVZ 4/4 at best, but he was hearing me 5/9. Similarly, GM5ALX and GM4JXP were patiently receiving me loud and clear but I couldn't hear them at all. I fiddled with settings and decided the receive path of the Quansheng must be fried or sad somehow, but I don't yet have a full set of diagnostics run. I'll take my Anytone on the next hill and compare them against each other, I think. I gave up and moved to HF, getting my whip and new radials into the ground. [photo] Quick to deploy, which is what I was after. My new 5m of coax with a choke fitted attached to the radio and we were off to the races. A convenient thing of beauty when it's up: [photo] I've made a single guy with a sotabeams top insulator to brace against wind if need be, but that didn't need to be used today. I hit tune, and the G90 spent ages clicking away. In fact, tuning to 14.074, I could only see the famed FT8 signals at S2. What could be wrong here? Was it my new radials? The whip has behaved before. Minutes turned into tens of minutes playing with everything, and eventually I worked out what was up: my coax only passed signal when I held the PL259 connector at the antenna juuuust right. Once I did that, I could take the tuner out of the system and work 20 spectacularly well. Until now, I'd been tuning the coax only. Another "Quality Hibby Build Job". That's what's wrong! I managed to struggle my way through a touch of QRM and my wonky cable woes to make enough contacts with some very patient chasers and a summit-to-summit before my frustration at the situation won out, and down the hill I went after a quick pack-up period.
I managed to beat the sunset - I think if the system had worked fine, I'd have stayed on the hill for sunset. I think it's time for a new mast and a coax retermination!

Joachim Breitner: Coding on my eInk Tablet

For many years I wished I had a setup that would allow me to work (that is, code) productively outside in the bright sun. It's winter right now, but when it's summer again it's always a bit... This weekend I got closer to that goal. TL;DR: Using code-server on a beefy machine seems to be quite neat.
[image: Passively lit coding]

Personal history Looking back at my own old blog entries I find one from 10 years ago describing how I bought a Kobo eBook reader with the intent of using it as an external monitor for my laptop. It seems that I got a proof-of-concept setup working, using VNC, but it was tedious to set up, and I never actually used that. I subsequently noticed that the eBook reader is rather useful for reading eBooks, and it has been in heavy use for that ever since. Four years ago I gave this old idea another shot and bought an Onyx BOOX Max Lumi. This is an A4-sized tablet running Android, with the very promising feature of an HDMI input. So hopefully I'd attach it to my laptop and it would "just work". Turns out that this never worked as well as I hoped: even if I set the resolution to exactly the tablet's screen's resolution I got blurry output, and it also drained the battery a lot, so I gave up on this. I subsequently noticed that the tablet is rather useful for taking notes, and it has been in sporadic use for that. Going off on this tangent: I later learned that the HDMI input of this device appears to the system like a camera input, and I don't have to use Boox's monitor app but could use other apps like FreeDCam as well. This somehow managed to fix the resolution issues, but the setup still wasn't convenient enough to be used regularly. I also played around with pure terminal approaches, e.g. SSH'ing into a system, but since my usual workflow was never purely text-based (I was at least used to using a window manager instead of a terminal multiplexer like screen or tmux) that never led anywhere either.

VSCode, working remotely Since these attempts I have started a new job working on the Lean theorem prover, and working on or with Lean basically means using VSCode. (There is a very good neovim plugin as well, but I'm using VSCode nevertheless, if only to make sure I am dogfooding our default user experience.) My colleagues have said good things about using VSCode with the remote SSH extension to work on a beefy machine, so I gave this a try now as well, and while it's not a complete game changer for me, it does make certain tasks (rebuilding everything after switching branches, running the test suite) very convenient. And it's a bit spooky to run these workloads without the laptop's fan spinning up. In this setup, the workspace is remote, but VSCode still runs locally. But it made me wonder about my old goal of being able to work reasonably efficiently on my eInk tablet. Can I replicate this setup there? VSCode itself doesn't run on Android directly. There are projects that run a Linux chroot or in termux on the Android system, and then you can use VNC to connect to it (e.g. on Andronix), but that did not seem promising. It seemed fiddly, and I probably should take it easy on the tablet's system.

code-server, running remotely A more promising option is code-server. This is a fork of VSCode (actually of VSCodium) that runs completely on the remote machine; the client machine just needs a browser. I set that up this weekend and found that I was able to do a little bit of work reasonably well.

Access With code-server one has to decide how to expose it safely enough. I decided against the tunnel-over-SSH option, as I expected that to be somewhat tedious to set up (both initially and for each session) on the Android system, and I liked the idea of being able to use any device to work in my environment. I also decided against the more involved "reverse proxy behind proper hostname with SSL" setups, because they involve a few extra steps, and some of them I cannot do, as I do not have root access on the shared beefy machine I wanted to use. That left me with the option of using code-server's built-in support for self-signed certificates and a password:
$ cat .config/code-server/config.yaml
bind-addr: 1.2.3.4:8080
auth: password
password: xxxxxxxxxxxxxxxxxxxxxxxx
cert: true
With trust-on-first-use this seems reasonably secure. Update: I noticed that the browsers would forget that I trust this self-signed cert after restarting the browser, and also that I cannot install the page (as a Progressive Web App) unless it has a valid certificate. But since I don't have superuser access to that machine, I can't just follow the official recommendation of using a reverse proxy on port 80 or 443 with automatic certificates. Instead, I pointed a hostname that I control to that machine, obtained a certificate manually on my laptop (using acme.sh) and copied the files over, so the configuration now reads as follows:
bind-addr: 1.2.3.4:3933
auth: password
password: xxxxxxxxxxxxxxxxxxxxxxxx
cert: .acme.sh/foobar.nomeata.de_ecc/foobar.nomeata.de.cer
cert-key: .acme.sh/foobar.nomeata.de_ecc/foobar.nomeata.de.key
(This is getting very specific to my particular needs and constraints, so I'll spare you the details.)

Service To keep code-server running I created a systemd service that's managed by my user's systemd instance:
~ $ cat ~/.config/systemd/user/code-server.service
[Unit]
Description=code-server
After=network-online.target
[Service]
Environment=PATH=/home/joachim/.nix-profile/bin:/nix/var/nix/profiles/default/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
ExecStart=/nix/var/nix/profiles/default/bin/nix run nixpkgs#code-server
[Install]
WantedBy=default.target
(I am using nix as a package manager on a Debian system there, hence the additional PATH and complex ExecStart. If you have a more conventional setup then you do not have to worry about Environment and can likely use ExecStart=code-server.) For this to survive me logging out I had to ask the system administrator to run loginctl enable-linger joachim, so that systemd allows my jobs to linger.
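With the unit file in place, reloading the user instance and enabling the service should look roughly like this (a small sketch, assuming the user-level setup above):
$ systemctl --user daemon-reload
$ systemctl --user enable --now code-server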

Git credentials The next issue to be solved was how to access the git repositories. The work is all on public repositories, but I still need a way to push my work. With the classic VSCode-SSH-remote setup from my laptop, this is no problem: my local SSH key is forwarded using the SSH agent, so I can seamlessly use that on the other side. But with code-server there is no SSH key involved. I could create a new SSH key and store it on the server. That did not seem appealing, though, because SSH keys on GitHub always have full access. It wouldn't be horrible, but I still wondered if I can do better. I thought of creating fine-grained personal access tokens that only allow pushing code to specific repositories, and nothing else, and just storing them permanently on the remote server. Still a neat and convenient option, but creating PATs for our org requires approval and I didn't want to bother anyone on the weekend. So I am experimenting with GitHub's git-credential-manager now. I have configured it to use git's credential cache with an elevated timeout, so that once I log in, I don't have to again for one workday.
$ nix-env -iA nixpkgs.git-credential-manager
$ git-credential-manager configure
$ git config --global credential.credentialStore cache
$ git config --global credential.cacheOptions "--timeout 36000"
To log in, I have to visit https://github.com/login/device on an authenticated device (e.g. my phone) and enter an 8-character code. Not too shabby in terms of security. I only wish that webpage would not require me to press Tab after each character. This still grants rather broad permissions to the code-server, but at least only temporarily.

Android setup On the client side I could now open https://host.example.com:8080 in Firefox on my eInk Android tablet, click through the warning about self-signed certificates, log in with the fixed password mentioned above, and start working! I switched to a theme that supposedly is eInk-optimized (eInk by Mufanza). It's not perfect (e.g. git diffs are unhelpful because it is not possible to distinguish deleted from added lines), but it's a start. There are more eInk themes on the official Visual Studio Marketplace, but because code-server is a fork it cannot use that marketplace, and for example this theme isn't on Open-VSX. For some reason the F11 key doesn't work, but going fullscreen is crucial, because screen estate is scarce in this setup. I can go fullscreen using VSCode's command palette (Ctrl-P) and invoking the command there, but Firefox often jumps out of the fullscreen mode, which is annoying. I still have to pay attention to when that's happening; maybe it's the Esc key, which I am of course using a lot due to me using vim bindings. A more annoying problem was that on my Boox tablet, sometimes the on-screen keyboard would pop up, which is seriously annoying! It took me a while to track this down: the Boox has two virtual keyboards installed, the usual Google AOSP keyboard and the Onyx Keyboard. The former is clever enough to stay hidden when there is a physical keyboard attached, but the latter isn't. Moreover, pressing Shift-Ctrl on the physical keyboard rotates through the virtual keyboards. Now, VSCode has many keyboard shortcuts that require Shift-Ctrl (especially on an eInk device, where you really want to avoid using the mouse). And the limited settings exposed by the Boox Android system do not allow you to configure that or disable the Onyx keyboard! To solve this, I had to install the KISS Launcher, which allowed me to see more Android settings, and in particular allowed me to disable the Onyx keyboard. So this is fixed. I was hoping to improve the experience even more by opening the web page as a Progressive Web App (PWA), as described in the code-server FAQ. Unfortunately, that did not work. Firefox on Android did not recognize the site as a PWA (even though it recognizes a PWA test page). And I couldn't use Chrome either because (unlike Firefox) it would not consider a site with a self-signed certificate as a secure context, and then code-server does not work fully. Maybe this is just some bug that gets fixed in later versions. Now that I use a proper certificate, I can use it as a Progressive Web App, and with Firefox on Android this starts the app in full-screen mode (no system bars, no location bar). The F11 key still doesn't work, and using the command palette to enter fullscreen does nothing visible, but then Esc leaves that fullscreen mode and I suddenly have the system bars again. But maybe if I just don't do that I get the full-screen experience. We'll see. I did not work enough with this yet to assess how much the smaller screen estate, the lack of colors and the slower refresh rate will bother me. I probably need to hide Lean's InfoView more often, and maybe use the Error Lens extension, to avoid having to split my screen vertically. I also cannot easily work on a park bench this way, with a tablet and a separate external keyboard. I'd need at least a table, or some additional piece of hardware that turns tablet + keyboard into some laptop-like structure that I can put on my, well, lap.
There are cases for Onyx products that include a keyboard, and maybe they work on the lap, but they don't have the TrackPoint that I have on my ThinkPad TrackPoint Keyboard II, and how can you live without that?

Conclusion After this initial setup, chances are good that entering and using this environment is convenient enough for me to actually use it; we will see when it gets warmer. A few bits could be better. In particular, logging in and authenticating GitHub access could be both more convenient and more safe. I could imagine that when I open the page I confirm that on my phone (maybe with a fingerprint), and that temporarily grants access to the code-server and to specific GitHub repositories only. Is that easily possible?

28 January 2025

Bits from Debian: New Debian Developers and Maintainers (November and December 2024)

The following contributors got their Debian Developer accounts in the last two months: The following contributors were added as Debian Maintainers in the last two months: Congratulations!
