Search Results: "andy"

18 April 2024

Jonathan McDowell: Sorting out backup internet #2: 5G modem

Having set up recursive DNS it was time to actually sort out a backup internet connection. I live in a Virgin Media area, but I still haven't forgiven them for my terrible Virgin experiences when moving here. Plus it involves a bigger contractual commitment. There are no altnets locally (though I'm watching youfibre, who have already rolled out in a few Belfast exchanges), so I decided to go for a 5G modem. That gives some flexibility, and is a bit easier to get up and running. I started by purchasing a ZTE MC7010. This had the advantage of being reasonably cheap off eBay, not having any wifi functionality I would just have to disable (it's going to plug into the same router the FTTP connection terminates on), being outdoor mountable should I decide to go that way, and, finally, being powered via PoE.

For now this device sits on the window sill in my study, which is at the top of the house. I printed a table stand for it which mostly does the job (though not as well with a normal, rather than flat, network cable). The router lives downstairs, so I've extended a dedicated VLAN through the study switch, down to the core switch and out to the router. The PoE study switch can only do GigE, not 2.5Gb/s, but at present that's far from the limiting factor on the speed of the connection. The device is 3 branded, and, as it happens, I've ended up with a 3 SIM in it. Up until recently my personal phone was with them, but they've kicked me off Go Roam, so I've moved. Going with 3 for the backup connection provides some slight extra measure of resiliency; we now have devices on all 4 major UK networks in the house. The SIM is a preloaded data-only SIM good for a year; I don't expect to use all of the data allowance, but I didn't want to have to worry about unexpected excess charges.

Performance turns out to be disappointing; I end up locking the device to 4G as the 5G signal is marginal - leaving it enabled results in constantly switching between 4G + 5G and a significant extra latency. The smokeping graph below shows a brief period where I removed the 4G lock and allowed 5G:

[Smokeping graph: 4G vs 5G]

(There's a handy zte.js script to allow doing this from the device web interface.) I get about 10Mb/s sustained downloads out of it. EE/Vodafone did not lead to significantly better results, so for now I'm accepting it is what it is. I tried relocating the device to another part of the house (a little tricky while still providing switch-based PoE, but I have an injector), without much improvement. Equally, pinning the 4G to certain bands provided a short-term improvement (I got up to 40-50Mb/s sustained), but not reliably so.

[speedtest.net results]

This is disappointing, but if it turns out to be a problem I can look at mounting it externally. I also assume that as 5G is gradually rolled out further things will naturally improve, but that might be wishful thinking on my part.

Rather than wait until my main link had a problem I decided to try a day working over the 5G connection. I spend a lot of my time either in browser-based apps or accessing remote systems via SSH, so I'm reasonably sensitive to a jittery or otherwise flaky connection. I picked a day that I did not have any meetings planned, but as it happened I ended up with an ad-hoc video call arranged. I'm pleased to say that it all worked just fine; definitely noticeable as slower than the FTTP connection (to be expected), but all workable, and even the video call was fine (at least from my end).

Looking at the traffic graph shows the expected ~10Mb/s peak (actually a little higher, and looking at the FTTP stats for previous days not out of keeping with what we see there), and you can just about see the ~3Mb/s symmetric use by the video call at 2pm:

[Graph: 4G traffic during the work day]

The test run also helped iron out the fact that the content filter was still enabled on the SIM, but that was easily resolved. Up next, vaguely automatic failover.

14 March 2024

Dirk Eddelbuettel: ciw 0.0.1 on CRAN: New Package!

Happy to share that ciw is now on CRAN! I had tooted a little bit about it, e.g., here. What it provides is a single (efficient) function incoming() which summarises the state of the incoming directories at CRAN. I happen to like having these things at my (shell) fingertips, so it goes along with (still draft) wrapper ciw.r that will be part of the next littler release. For example, when I do this right now as I type this, I see
edd@rob:~$ ciw.r
    Folder                   Name                Time   Size          Age
    <char>                 <char>              <POSc> <char>   <difftime>
1: waiting   maximin_1.0-5.tar.gz 2024-03-13 22:22:00    20K   2.48 hours
2: inspect    GofCens_0.97.tar.gz 2024-03-13 21:12:00    29K   3.65 hours
3: inspect verbalisr_0.5.2.tar.gz 2024-03-13 20:09:00    79K   4.70 hours
4: waiting    rnames_1.0.1.tar.gz 2024-03-12 15:04:00   2.7K  33.78 hours
5: waiting  PCMBase_1.2.14.tar.gz 2024-03-10 12:32:00   406K  84.32 hours
6: pending        MPCR_1.1.tar.gz 2024-02-22 11:07:00   903K 493.73 hours
edd@rob:~$ 
which is rather compact as CRAN kept busy! This call runs in about (or just over) one second, which includes launching r. Good enough for me. From a well-connected EC2 instance it is about 800ms on the command-line. When I do it from here inside an R session it is maybe 700ms. And doing it over in Europe is faster still. (I am using ping=FALSE for these to omit the default sanity check of can I haz networking? to speed things up. The check adds another 200ms or so.) The function (and the wrapper) offer a ton of options too; this is ridiculously easy to do thanks to the docopt package:
edd@rob:~$ ciw.r -x
Usage: ciw.r [-h] [-x] [-a] [-m] [-i] [-t] [-p] [-w] [-r] [-s] [-n] [-u] [-l rows] [-z] [ARG...]

-m --mega           use 'mega' mode of all folders (see --usage)
-i --inspect        visit 'inspect' folder
-t --pretest        visit 'pretest' folder
-p --pending        visit 'pending' folder
-w --waiting        visit 'waiting' folder
-r --recheck        visit 'waiting' folder
-a --archive        visit 'archive' folder
-n --newbies        visit 'newbies' folder
-u --publish        visit 'publish' folder
-s --skipsort       skip sorting of aggregate results by age
-l --lines rows     print top 'rows' of the result object [default: 50]
-z --ping           run the connectivity check first
-h --help           show this help text
-x --usage          show help and short example usage 

where ARG... can be one or more file name, or directories or package names.

Examples:
  ciw.r -ip                            # run in 'inspect' and 'pending' mode
  ciw.r -a                             # run with mode 'auto' resolved in incoming()
  ciw.r                                # run with defaults, same as '-itpwr'

When no argument is given, 'auto' is selected which corresponds to 'inspect', 'waiting',
'pending', 'pretest', and 'recheck'. Selecting '-m' or '--mega' are select as default.

Folder selecting arguments are cumulative; but 'mega' is a single selections of all folders
(i.e. 'inspect', 'waiting', 'pending', 'pretest', 'recheck', 'archive', 'newbies', 'publish').

ciw.r is part of littler which brings 'r' to the command-line.
See https://dirk.eddelbuettel.com/code/littler.html for more information.
edd@rob:~$ 
The README at the git repo and the CRAN page offer a screenshot movie showing some of the options in action. I have been using the little tool quite a bit over the last two or three weeks since I first put it together and find it quite handy. With that, again a big Thank You! of appreciation for all that CRAN does, which this week included letting this past the newbies desk in under 24 hours. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

12 February 2024

Andrew Cater: Lessons from (and for) colleagues - and, implicitly, how NOT to get on

I have had excellent colleagues both at my day job and, especially, in Debian over the last thirty-odd years. Several have attempted to give me good advice - others have been exemplars. People retire: sadly, people die. What impression do you want to leave behind when you leave here? Belatedly, I've come to realise that obduracy, sheer bloody-mindedness, force of will and obstinacy will only get you so far. The following began very much as a tongue-in-cheek private memo to myself a good few years ago. I showed it to a colleague who suggested at the time that I should share it with a wider audience.

SOME ADVICE YOU MAY BENEFIT FROM

Personal conduct
  • Never argue with someone you believe to be arguing idiotically - a dispassionate bystander may have difficulty telling who's who.
  • You can't make yourself seem reasonable by behaving unreasonably
  • It does not matter how correct your point of view is if you get people's backs up
  • They may all be #####, @@@@@, %%%% and ******* - saying so out loud doesn't help improve matters and may make you seem intemperate.

Working with others
  • Be the change you want to be and behave the way you want others to behave in order to achieve the desired outcome.
  • You can demolish someone's argument constructively and add weight to good points rather than tearing down their ideas and hard work and being ultra-critical and negative - no-one likes to be told "You know - you've got a REALLY ugly baby there"
  • It's easier to work with someone than to work against them and have to apologise repeatedly.
  • Even when you're outstanding and superlative, even you had to learn it all once. Be generous to help others learn: you shouldn't have to teach too many times if you teach correctly once and take time in doing so.

Getting the message across
  • Stop: think: write: review: (peer review if necessary): publish.
  • Clarity is all: just because you understand it doesn't mean anyone else will.
  • It does not matter how correct your point of view is if you put it across badly.
  • If you're giving advice: make sure it is:
  • Considered
  • Constructive
  • Correct as far as you can (and)
  • Refers to other people who may be able to help
  • Say thank you promptly if someone helps you and be prepared to give full credit where credit's due.

Work is like that
  • You may not know all the answers or even have the whole picture - consult, take advice - LISTEN TO THE ADVICE
  • Sometimes the right answer is not the immediately correct answer
  • Corollary: Sometimes the right answer for the business is not your suggested/preferred outcome
  • Corollary: Just because you can do it like that in the real world doesn't mean that you can do it that way inside the business. [This realisation is INTENSELY frustrating but you have to learn to deal with it]
  • DON'T ALWAYS DO IT YOURSELF - Attempt to fix the system, sometimes allow the corporate monster to fail - then do it yourself and fix it. It is always easier and tempting to work round the system and Just Flaming Do It but it doesn't solve problems in the longer term and may create more problems and ill-feeling than it solves.

    [Worked out for Andy Cater for himself after many years of fighting the system as a misguided missile - though he will freely admit that he doesn't always follow them as often as he should :) ]


29 January 2024

Russ Allbery: Review: Bluebird

Review: Bluebird, by Ciel Pierlot
Publisher: Angry Robot
Copyright: 2022
ISBN: 0-85766-967-2
Format: Kindle
Pages: 458
Bluebird is a stand-alone far-future science fiction adventure. Ten thousand years ago, a star fell into the galaxy carrying three factions of humanity. The Ascetics, the Ossuary, and the Pyrites each believe that only their god survived and the other two factions are heretics. Between them, they have conquered the rest of the galaxy and its non-human species. The only thing the factions hate worse than each other are those who attempt to stay outside the faction system. Rig used to be a Pyrite weapon designer before she set fire to her office and escaped with her greatest invention. Now she's a Nightbird, a member of an outlaw band that tries to help refugees and protect her fellow Kashrini against Pyrite genocide. On her side, she has her girlfriend, an Ascetic librarian; her ship, Bluebird; and her guns, Panache and Pizzazz. And now, perhaps, the mysterious Ginka, a Zazra empath and remarkably capable fighter who helps Rig escape from an ambush by Pyrite soldiers. Rig wants to stay alive, help her people, and defy the factions. Pyrite wants Rig's secrets and, as leverage, has her sister. What Ginka wants is not entirely clear even to Ginka. This book is absurd, but I still had fun with it. It's dangerous for me to compare things to anime given how little anime that I've watched, but Bluebird had that vibe for me: anime, or maybe Japanese RPGs or superhero comics. The storytelling is very visual, combat-oriented, and not particularly realistic. Rig is a pistol sharpshooter and Ginka is the type of undefined deadly acrobatic fighter so often seen in that type of media. In addition to her ship, Rig has a gorgeous hand-maintained racing hoverbike with a beautiful paint job. It's that sort of book. It's also the sort of book where the characters obey cinematic logic designed to maximize dramatic physical confrontations, even if their actions make no logical sense. There is no facial recognition or screening, and it's bizarrely easy for the protagonists to end up in same physical location as high-up bad guys. One of the weapon systems that's critical to the plot makes no sense whatsoever. At critical moments, the bad guys behave more like final bosses in a video game, picking up weapons to deal with the protagonists directly instead of using their supposedly vast armies of agents. There is supposedly a whole galaxy full of civilizations with capital worlds covered in planet-spanning cities, but politics barely exist and the faction leaders get directly involved in the plot. If you are looking for a realistic projection of technology or society, I cannot stress enough that this is not the book that you're looking for. You probably figured that out when I mentioned ten thousand years of war, but that will only be the beginning of the suspension of disbelief problems. You need to turn off your brain and enjoy the action sequences and melodrama. I'm normally good at that, and I admit I still struggled because the plot logic is such a mismatch with the typical novels I read. There are several points where the characters do something that seems so monumentally dumb that I was sure Pierlot was setting them up for a fall, and then I got wrong-footed because their plan worked fine, or exploded for unrelated reasons. I think this type of story, heavy on dramatic eye-candy and emotional moments with swelling soundtracks, is a lot easier to pull off in visual media where all the pretty pictures distract your brain. 
In a novel, there's a lot of time to think about the strategy, technology, and government structure, which for this book is not a good idea. If you can get past that, though, Rig is entertainingly snarky and Ginka, who turns out to be the emotional heart of the book, is an enjoyable character with a real growth arc. Her background is a bit simplistic and the villains are the sort of pure evil that you might expect from this type of cinematic plot, but I cared about the outcome of her story. Some parts of the plot dragged and I think the editing could have been tighter, but there was enough competence porn and banter to pull me through. I would recommend Bluebird only cautiously, since you're going to need to turn off large portions of your brain and be in the right mood for nonsensically dramatic confrontations, but I don't regret reading it. It's mostly in primary colors and the emotional conflicts are not what anyone would call subtle, but it delivers a character arc and a somewhat satisfying ending. Content warning: There is a lot of serious physical injury in this book, including surgical maiming. If that's going to bother you, you may want to give this one a pass. Rating: 6 out of 10

22 January 2024

Paul Tagliamonte: Writing a simulator to check phased array beamforming

Interested in future updates? Follow me on mastodon at @paul@soylent.green. Posts about hz.tools will be tagged #hztools.

If you're on the Fediverse, I'd very much appreciate boosts on my toot!
While working on hz.tools, I started to move my beamforming code from 2-D (meaning, beamforming to some specific angle on the X-Y plane for waves on the X-Y plane) to 3-D. I'll have more to say about that once I get around to publishing the code as soon as I'm sure it's not completely wrong, but in the meantime I decided to write a simple simulator to visually check the beamformer against the textbooks. The results were pretty rad, so I figured I'd throw together a post since it's interesting all on its own outside of beamforming as a general topic. I figured I'd write this in Rust, since I've been using Rust as my primary language over at zoo, and it's a good chance to learn the language better.
This post has some large GIFs

It may take a little bit to load depending on your internet connection. Sorry about that, I'm not clever enough to do better without doing tons of complex engineering work. They may be choppy while they load or something. I tried to compress and ensmall them, so if they're loaded but fuzzy, click on them to load a slightly larger version.
This post won't cover the basics of how phased arrays work or the specifics of calculating the phase offsets for each antenna, but I'll dig into how I wrote a simple simulator and how I wound up checking my phase offsets to generate the renders below.

Assumptions

I didn't want to build a general purpose RF simulator, anything particularly generic, or something that would solve for any more than the things right in front of me. To do this simply (and quickly: all this code took about a day to write, including the beamforming math) I had to reduce the amount of work in front of me. Given that I was concerned with visualizing what the antenna pattern would look like in 3-D given some antenna geometry, operating frequency and configured beam, I made the following assumptions:
  • All antennas are perfectly isotropic: they receive a signal that is exactly the same strength no matter what direction the signal originates from.
  • There's a single point-source isotropic emitter in the far field (I modeled this as being 1 million meters, or 1,000 kilometers, away) of the antenna system.
  • There is no noise, multipath, loss or distortion in the signal as it travels through space.
  • Antennas will never interfere with each other.
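To make the setup concrete, here is a minimal sketch, not taken from the post's code, of how a 1x4 array and a far-field source matching these assumptions might be laid out; the Vec3 type and all values are hypothetical stand-ins for whatever the real simulator uses.

#[derive(Clone, Copy)]
struct Vec3 { x: f64, y: f64, z: f64 }

impl Vec3 {
    fn sub(self, o: Vec3) -> Vec3 { Vec3 { x: self.x - o.x, y: self.y - o.y, z: self.z - o.z } }
    fn magnitude(self) -> f64 { (self.x * self.x + self.y * self.y + self.z * self.z).sqrt() }
}

fn main() {
    let wavelength = 0.125;         // metres; roughly a 2.4 GHz carrier
    let spacing = wavelength / 4.0; // quarter-wavelength element spacing

    // 1x4 array: elements aligned on y and z, offset along x.
    let antennas: Vec<Vec3> = (0..4)
        .map(|n| Vec3 { x: n as f64 * spacing, y: 0.0, z: 0.0 })
        .collect();

    // Single isotropic point source in the far field, ~1,000 km away.
    let tx = Vec3 { x: 0.0, y: 1_000_000.0, z: 0.0 };

    for a in &antennas {
        println!("distance to tx: {:.3} m", tx.sub(*a).magnitude());
    }
}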

2-D Polar Plots

The last time I wrote something like this, I generated 2-D GIFs which show a radiation pattern, not unlike the polar plots you'd see on a microphone. These are handy because they let you visualize what the directionality of the antenna looks like, as well as in what directions emissions are captured, and in what directions emissions are nulled out. You can see these plots on spec sheets for antennas in both 2-D and 3-D form. Now, let's port the 2-D approach to 3-D and see how well it works out.

Writing the 3-D simulator

As an EM wave travels through free space, the place at which you sample the wave controls the phase you observe at each time-step. This means that, assuming perfectly synchronized clocks, a transmitter and receiver exactly one RF wavelength apart will observe a signal in phase, but a transmitter and receiver a half wavelength apart will observe a signal 180 degrees out of phase. This means that if we take the distance between our point source and an antenna element and divide it by the wavelength, we can use the fractional part of the resulting number to determine the phase observed. If we multiply that number (in the range of 0 to just under 1) by tau, we can generate a complex number by taking the cos and sin of the multiplied phase (in the range of 0 to tau), assuming the transmitter is emitting a carrier wave at a static amplitude and all clocks are in perfect sync.
let observed_phases: Vec<Complex> = antennas
    .iter()
    .map(|antenna| {
        let distance = (antenna - tx).magnitude();
        let distance = distance - (distance as i64 as f64); // drop the whole-number part
        (distance / wavelength) * TAU
    })
    .map(|phase| Complex(phase.cos(), phase.sin()))
    .collect();
At this point, given some synthetic transmission point and each antenna, we know what the expected complex sample would be at each antenna. We can then adjust the phase of each antenna according to the beamforming phase offset configuration, and add up every sample in order to determine what sample the entire system would collectively produce.
let beamformed_phases: Vec<Complex> = ...;
let magnitude = beamformed_phases
    .iter()
    .zip(observed_phases.iter())
    .map(|(beamformed, observed)| observed * beamformed)
    .reduce(|acc, el| acc + el)
    .unwrap()
    .abs();
Armed with this information, it's straightforward to generate some number of (azimuth, elevation) points to sample, generate a transmission point far away in that direction, resolve what the resulting Complex sample would be, take its magnitude, and use that to create an (x, y, z) point at (azimuth, elevation, magnitude). The color attached to that point is based on its distance from (0, 0, 0). I opted to use the Life Aquatic table for this one. After this process is complete, I have a point cloud of ((x, y, z), (r, g, b)) points. I wrote a small program using kiss3d to render the point cloud using tons of small spheres, and write out the frames to a set of PNGs, which get compiled into a GIF. Now for the fun part, let's take a look at some radiation patterns!
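The post doesn't show the mapping from a sampled direction to a point-cloud coordinate. One plausible reading of "an (x, y, z) point at (azimuth, elevation, magnitude)" is an ordinary spherical-to-Cartesian conversion with the magnitude used as the radius; here is a small sketch of that idea, with the function and all constants being mine rather than the author's.

use std::f64::consts::{PI, TAU};

// Map a sampled direction and its beamformed magnitude to a Cartesian
// point, treating the magnitude as the radius in that direction.
fn to_point(azimuth: f64, elevation: f64, magnitude: f64) -> (f64, f64, f64) {
    let x = magnitude * elevation.cos() * azimuth.cos();
    let y = magnitude * elevation.cos() * azimuth.sin();
    let z = magnitude * elevation.sin();
    (x, y, z)
}

fn main() {
    let steps = 64;
    for i in 0..steps {
        for j in 0..steps {
            let az = TAU * (i as f64) / (steps as f64);        // 0..360 degrees
            let el = PI * ((j as f64) / (steps as f64) - 0.5); // -90..+90 degrees
            let mag = 1.0; // stand-in for the |sum| computed from the array
            let (x, y, z) = to_point(az, el, mag);
            // A real renderer would also colour the point by its distance
            // from the origin before handing it to something like kiss3d.
            println!("{:.3} {:.3} {:.3}", x, y, z);
        }
    }
}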

1x4 Phased Array

The first configuration is a phased array where all the elements are in perfect alignment on the y and z axes, and separated by some offset on the x axis. This configuration can sweep 180 degrees (not the full 360), but can't be steered in elevation at all. Let's take a look at what this looks like for a well-constructed 1x4 phased array: And now let's take a look at the renders as we play with the configuration of this array and make sure things look right. Our initial quarter-wavelength spacing is very effective and has some outstanding performance characteristics. Let's check to see that everything looks right as a first test. Nice. Looks perfect. When pointing forward at (0, 0), we'd expect to see a torus, which we do. As we sweep between 0 and 360, astute observers will notice the pattern is mirrored along the axis of the antennas: when the beam is facing forward to 0 degrees, it'll also receive at 180 degrees just as strongly. There's a small sidelobe that forms when it's configured along the array, but it also becomes the most directional, and the sidelobes remain fairly small.
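The beamforming weights themselves (the beamformed_phases elided in the code above) aren't reproduced in the post. As a rough illustration only, a textbook steering vector for a 1x4 linear array with quarter-wavelength spacing might look like the following; the Complex type and every value here are hypothetical stand-ins, not the author's code.

use std::f64::consts::TAU;

#[derive(Clone, Copy)]
struct Complex(f64, f64);

fn main() {
    let wavelength = 0.125;            // metres, as in the earlier sketch
    let spacing = wavelength / 4.0;    // quarter-wavelength element spacing
    let steer = 30.0_f64.to_radians(); // desired beam angle off boresight

    // Element n sits at n * spacing along x; conjugating the expected
    // arrival phase for that direction points the main lobe at `steer`.
    let beamformed_phases: Vec<Complex> = (0..4)
        .map(|n| {
            let phase = TAU * (n as f64 * spacing / wavelength) * steer.sin();
            Complex(phase.cos(), -phase.sin()) // e^(-j * phase)
        })
        .collect();

    for (n, w) in beamformed_phases.iter().enumerate() {
        println!("element {}: ({:+.3}, {:+.3})", n, w.0, w.1);
    }
}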

Long compared to the wavelength (1λ)

Let's try again, but rather than spacing each antenna ¼ of a wavelength apart, let's see about spacing each antenna 1 wavelength apart instead. The main lobe is a lot more narrow (not a bad thing!), but some significant sidelobes have formed (not ideal). This can cause a lot of confusion when doing things that require a lot of directional resolution unless they're compensated for.

Going from ¼λ to 5λ

The last model begs the question - what do things look like when you separate the antennas from each other but without moving the beam? Let's simulate moving our antennas but not adjusting the configured beam or operating frequency. Very cool. As the spacing becomes longer in relation to the operating frequency, we can see the sidelobes start to form out of the end of the antenna system.

2x2 Phased Array

The second configuration I want to try is a phased array where the elements are in perfect alignment on the z axis, and separated from their neighbor by a fixed offset in either the x or y axis, forming a square when viewed along the x/y axis. Let's take a look at what this looks like for a well-constructed 2x2 phased array: Let's do the same as above and take a look at the renders as we play with the configuration of this array and see what things look like. This configuration should suppress the sidelobes and give us good performance, and even give us some amount of control in elevation while we're at it. Sweet. Heck yeah. The array is quite directional in the configured direction, and can even sweep a little bit in elevation, a definite improvement over the 1x4 above.

Long compared to the wavelength (1λ)

Let's do the same thing as the 1x4 and take a look at what happens when the distance between elements is long compared to the wavelength of operation - say, 1 wavelength apart. What happens to the sidelobes given this spacing when the frequency of operation is much different than the physical geometry? Mesmerising. This is my favorite render. The sidelobes are very fun to watch come in and out of existence. It looks absolutely other-worldly.

Going from ¼λ to 5λ

Finally, for completeness' sake, what do things look like when you separate the antennas from each other just as we did with the 1x4? Let's simulate moving our antennas but not adjusting the configured beam or operating frequency. Very, very cool. The sidelobes wind up turning the very blobby cardioid into an electromagnetic dog toy. I think we've proven to ourselves that using a phased array much outside its designed frequency of operation seems like a real bad idea.

Future Work

Now that I have a system to test things out, I'm a bit more confident that my beamforming code is close to right! I'd love to push that code over the line and blog about it, since it's a really interesting topic on its own. Once I'm sure the code involved isn't full of lies, I'll put it up on the hztools org, and post about it here and on mastodon.

7 January 2024

Jonathan McDowell: Free Software Activities for 2023

This year was hard from a personal and work point of view, which impacted the amount of Free Software bits I ended up doing - even when I had the time I often wasn't in the right head space to make progress on things. However, writing up this annual recap has been a useful exercise, as I achieved more than I realised. For previous years see 2019, 2020, 2021 + 2022.

Conferences

The only Free Software related conference I made it to this year was DebConf23 in Kochi, India. Changes with projects at work meant I couldn't justify anything work related. This year I'm planning to make it to FOSDEM, and haven't made a decision on DebConf24 yet.

Debian

Most of my contributions to Free Software continue to happen within Debian. I started the year working on retrogaming with Kodi on Debian. I got this to a much better state for bookworm, with it being possible to run the bsnes-mercury emulator under Kodi using RetroArch. There are a few other libretro backends available for RetroArch, but Kodi needs some extra controller mappings packaged up first. Plenty of uploads were involved, though some of this was aligning all the dependencies and generally cleaning things up in iterations.

I continued to work on a few packages within the Debian Electronics Packaging Team. OpenOCD produced a new release in time for the bookworm release, so I uploaded 0.12.0-1. There were a few minor sigrok cleanups - sigrok 0.3, libsigrokdecode 0.5.3-4 + libsigrok 0.5.2-4 / 0.5.2-5. While I didn't manage to get the work completed, I did some renaming of the ESP8266 related packages - gcc-xtensa-lx106 (which saw a 13 upload pre-bookworm) has become gcc-xtensa (with 14) and binutils-xtensa-lx106 has become binutils-xtensa (with 6). Binary packages remain the same, but this is intended to allow for the generation of ESP32 compiler toolchains from the same source. onak saw 0.6.3-1 uploaded to match the upstream release. I also uploaded libgpg-error 1.47-1 (though I can claim no credit for any of the work in preparing the package) to help move things forward on updating gnupg2 in Debian. I NMUed tpm2-pkcs11 1.9.0-0.1 to fix some minor issues pre-bookworm release; I use this package myself to store my SSH key within my laptop TPM, so I care about it being in a decent state. sg3-utils also saw a bit of love with 1.46-2 + 1.46-3 - I don't work in the storage space these days, but I'm still listed as an uploader and there was an RC bug around the library package naming that I was qualified to fix and test pre-bookworm. Related to my retroarch work I sponsored uploads of mgba for Ryan Tandy: 0.10.0+dfsg-1, 0.10.0+dfsg-2, 0.10.1+dfsg-1, 0.10.2+dfsg-1, mgba 0.10.1+dfsg-1+deb12u1.

As part of the Data Protection Team I responded to various inbound queries to that team, both from project members and those external to the project. I continue to keep an eye on Debian New Members, even though I'm mostly inactive as an application manager - we generally seem to have enough available recently. Mostly my involvement is via Front Desk activities, helping out with queries to the team alias, and contributing to internal discussions as well as our panel at DebConf23. Finally, the 3-month rotation for Debian Keyring continues to operate smoothly. I dealt with 2023.03.24, 2023.06.26, 2023.06.29, 2023.09.10, 2023.09.24 + 2023.12.24.

Linux

I had a few minor patches accepted to the kernel this year. A pair of safexcel cleanups (improved error logging for firmware load failure and cleanup on load failure) came out of upgrading the kernel running on my RB5009. The rest were related to my work on repurposing my C.H.I.P. The AXP209 driver needed to be extended to support GPIO3 (with an associated DT schema update). That allowed Bluetooth to be enabled. Adding the AXP209 internal temperature ADC as an iio-hwmon node means it can be tracked using the normal sensor monitoring framework. And finally I added the pinmux settings for mmc2, which I use to support an external microSD slot on my C.H.I.P.

Personal projects

2023 saw another minor release of onak, 0.6.3, which resulted in a corresponding Debian upload (0.6.3-1). It has a couple of bug fixes (including a particularly annoying, if minor, one around systemd socket activation that felt very satisfying to get to the bottom of), but I still lack the time to do any of the major changes I would like to. I wrote listadmin3 to allow easy manipulation of moderation queues for Mailman3. It's basic, but it's drastically improved my timeliness on dealing with held messages.

3 January 2024

John Goerzen: Live Migrating from Raspberry Pi OS bullseye to Debian bookworm

I've been getting annoyed with Raspberry Pi OS (Raspbian) for years now. It's a fork of Debian, but manages to omit some of the most useful things. So I've decided to migrate all of my Pis to run pure Debian. These are my reasons:
  1. Raspberry Pi OS has, for years now, specified that there is no upgrade path. That is, to get to a newer major release, it's a reinstall. While I have sometimes worked around this, for a device that is frequently installed in hard-to-reach locations, this is even more important than usual. It's common for me to upgrade machines for a decade or more across Debian releases and there's no reason that it should be so much more difficult with Raspbian.
  2. As I noted in Consider Security First, the security situation for Raspberry Pi OS isn t as good as it is with Debian.
  3. Raspbian lags behind Debian, often by 6 months or more for major releases, and days or weeks for bug fixes and security patches.
  4. Raspbian has no direct backports support, though Raspberry Pi 3 and above can use Debian's backports (per my instructions in Installing Debian Backports on Raspberry Pi)
  5. Raspbian uses a custom kernel without initramfs support
It turns out it is actually possible to do an in-place migration from Raspberry Pi OS bullseye to Debian bookworm. Here I will describe how. Even if you don't have a Raspberry Pi, this might still be instructive on how Raspbian and Debian packages work.

WARNINGS

Before continuing, back up your system. This process isn't for the neophyte and it is entirely possible to mess up your boot device to the point that you have to do a fresh install to get your Pi to boot. This isn't a supported process at all.

Architecture Confusion

Debian has three ARM-based architectures:
  • armel, for the lowest-end 32-bit ARM devices without hardware floating point support
  • armhf, for the higher-end 32-bit ARM devices with hardware float (hence hf )
  • arm64, for 64-bit ARM devices (which all have hardware float)
Although the Raspberry Pi 0 and 1 do support hardware float, they lack support for other CPU features that Debian's armhf architecture assumes. Therefore, the Raspberry Pi 0 and 1 could only run Debian's armel architecture. Raspberry Pi 3 and above are capable of running 64-bit, and can run both armhf and arm64. Prior to the release of the Raspberry Pi 5 / Raspbian bookworm, Raspbian only shipped the armhf architecture. Well, it was an architecture they called armhf, but it was different from Debian's armhf in that everything was recompiled to work with the more limited set of features on the earlier Raspberry Pi boards. It was really somewhere between Debian's armel and armhf archs. You could run Debian armel on those, but it would run more slowly, due to doing floating point calculations without hardware support. Debian's raspi FAQ goes into this a bit.

What I am going to describe here is going from Raspbian armhf to Debian armhf with a 64-bit kernel. Therefore, it will only work with Raspberry Pi 3 and above. It may theoretically be possible to take a Raspberry Pi 2 to Debian armhf with a 32-bit kernel, but I haven't tried this and it may be more difficult. I have seen conflicting information on whether armhf really works on a Pi 2. (If you do try it on a Pi 2, ignore everything about arm64 and 64-bit kernels below, and just go with the linux-image-armmp-lpae kernel per the ARMMP page.)

There is another wrinkle: Debian doesn't support running 32-bit ARM kernels on 64-bit ARM CPUs, though it does support running a 32-bit userland on them. So we will wind up with a system with kernel packages from arm64 and everything else from armhf. This is a perfectly valid configuration, as arm64, like x86_64, is multiarch (that is, the CPU can natively execute both the 32-bit and 64-bit instructions). (It is theoretically possible to crossgrade a system from 32-bit to 64-bit userland, but that felt like a rather heavy lift for dubious benefit on a Pi; nevertheless, if you want to make this process even more complicated, refer to the CrossGrading page.)

Prerequisites and Limitations

In addition to the need for a Raspberry Pi 3 or above in order for this to work, there are a few other things to mention. If you are using the GPIO features of the Pi, I don't know if those work with Debian. I think Raspberry Pi OS modified the desktop environment more than other components. All of my Pis are headless, so I don't know if this process will work if you use a desktop environment.

I am assuming you are booting from a MicroSD card, as is typical in the Raspberry Pi world. The Pi's firmware looks for a FAT partition (MBR type 0x0c) and looks within it for boot information. Depending on how long ago you first installed an OS on your Pi, your /boot may be too small for Debian. Use df -h /boot to see how big it is. I recommend 200MB at minimum. If your /boot is smaller than that, stop now (or use some other system to shrink your root filesystem and rearrange your partitions; I've done this, but it's outside the scope of this article.)

You need to have stable power. Once you begin this process, your Pi will mostly be left in a non-bootable state until you finish. (You did make a backup, right?)

Basic idea

The basic idea here is that since bookworm has almost entirely newer packages than bullseye, we can just switch over to it and let the Debian packages replace the Raspbian ones as they are upgraded. Well, it's not quite that easy, but that's the main idea.

Preparation

First, make a backup. Even an image of your MicroSD card might be nice. OK, I think I've said that enough now. It would be a good idea to have an HDMI cable (with the appropriate size of connector for your particular Pi board) and an HDMI display handy so you can troubleshoot any bootup issues with a console.

Preparation: access

The Raspberry Pi OS by default sets up a user named pi that can use sudo to gain root without a password. I think this is an insecure practice, but assuming you haven't changed it, you will need to ensure it still works once you move to Debian. Raspberry Pi OS had a patch in their sudo package to enable it, and that will be removed when Debian's sudo package is installed. So, put this in /etc/sudoers.d/010_picompat:
pi ALL=(ALL) NOPASSWD: ALL
Also, there may be no password set for the root account. It would be a good idea to set one; it makes it easier to log in at the console. Use the passwd command as root to do so.

Preparation: bluetooth

Debian doesn't correctly identify the Bluetooth hardware address. You can save it off to a file by running hcitool dev > /root/bluetooth-from-raspbian.txt. I don't use Bluetooth, but this should let you develop a script to bring it up properly.

Preparation: Debian archive keyring

You will next need to install Debian's archive keyring so that apt can authenticate packages from Debian. Go to the bookworm download page for debian-archive-keyring and copy the URL for one of the files, then download it on the Pi. For instance:
wget http://http.us.debian.org/debian/pool/main/d/debian-archive-keyring/debian-archive-keyring_2023.3+deb12u1_all.deb
Use sha256sum to verify the checksum of the downloaded file, comparing it to the package page on the Debian site. Now, you'll install it with:
dpkg -i debian-archive-keyring_2023.3+deb12u1_all.deb

Package first steps

From here on, we are making modifications to the system that can leave it in a non-bootable state. Examine /etc/apt/sources.list and all the files in /etc/apt/sources.list.d. Most likely you will want to delete or comment out all lines in all files there. Replace them with something like:
deb http://deb.debian.org/debian/ bookworm main non-free-firmware contrib non-free
deb http://security.debian.org/debian-security bookworm-security main non-free-firmware contrib non-free
deb https://deb.debian.org/debian bookworm-backports main non-free-firmware contrib non-free
(you might leave off contrib and non-free depending on your needs) Now, we're going to tell it that we'll support arm64 packages:
dpkg --add-architecture arm64
And finally, download the bookworm package lists:
apt-get update
If there are any errors from that command, fix them and don't proceed until you have a clean run of apt-get update.

Moving /boot to /boot/firmware

The boot FAT partition I mentioned above is mounted at /boot by Raspberry Pi OS, but Debian's scripts assume it will be at /boot/firmware. We need to fix this. First:
umount /boot
mkdir /boot/firmware
Now, edit /etc/fstab and change the reference to /boot to be /boot/firmware. Then:
mount -v /boot/firmware
cd /boot/firmware
mv -vi * ..
This mounts the filesystem at the new location, and moves all its contents back to where apt believes it should be. Debian's packages will populate /boot/firmware later.

Installing the first packages

Now we start by installing the first of the needed packages. Eventually we will wind up with roughly the same set Debian uses.
apt-get install linux-image-arm64
apt-get install firmware-brcm80211=20230210-5
apt-get install raspi-firmware
If you get errors relating to firmware-brcm80211 from any commands, run that install firmware-brcm80211 command and then proceed. There are a few packages that Raspbian marked as newer than the version in bookworm (whether or not they really are), and that's one of them.

Configuring the bootloader

We need to configure a few things in /etc/default/raspi-firmware before proceeding. Edit that file. First, uncomment (or add) a line like this:
KERNEL_ARCH="arm64"
Next, in /boot/cmdline.txt you can find your old Raspbian boot command line. It will say something like:
root=PARTUUID=...
Save off the bit starting with PARTUUID. Back in /etc/default/raspi-firmware, set a line like this:
ROOTPART=PARTUUID=abcdef00
(substituting your real value for abcdef00). This is necessary because the microSD card device name often changes from /dev/mmcblk0 to /dev/mmcblk1 when switching to Debian's kernel. raspi-firmware will encode the current device name in /boot/firmware/cmdline.txt by default, which will be wrong once you boot into Debian's kernel. The PARTUUID approach lets it work regardless of the device name.

Purging the Raspbian kernel

Run:
dpkg --purge raspberrypi-kernel

Upgrading the system

At this point, we are going to run the procedure beginning at section 4.4.3 of the Debian release notes. Generally, you will do:
apt-get -u upgrade
apt full-upgrade
Fix any errors at each step before proceeding to the next. Now, to remove some cruft, run:
apt-get --purge autoremove
Inspect the list to make sure nothing important is going to be removed.

Removing Raspbian cruft

You can list some of the cruft with:
apt list '~o'
And remove it with:
apt purge '~o'
I also don't run Bluetooth, and it seemed to sometimes hang on boot because I didn't bother to fix it, so I did:
apt-get --purge remove bluez

Installing some packages

This makes sure some basic Debian infrastructure is available:
apt-get install wpasupplicant parted dosfstools wireless-tools iw alsa-tools
apt-get --purge autoremove

Installing firmware

Now run:
apt-get install firmware-linux

Resolving firmware package version issues

If it gives an error about the installed version of a package, you may need to force it to the bookworm version. For me, this often happened with firmware-atheros, firmware-libertas, and firmware-realtek. Here's how to resolve it, with firmware-realtek as an example:
  1. Go to https://packages.debian.org/PACKAGENAME, for instance https://packages.debian.org/firmware-realtek. Note the version number in bookworm; in this case, 20230210-5.
  2. Now, you will force the installation of that package at that version:
    apt-get install firmware-realtek=20230210-5
    
  3. Repeat with every conflicting package until done.
  4. Rerun apt-get install firmware-linux and make sure it runs cleanly.
Also, in the end you should be able to:
apt-get install firmware-atheros firmware-libertas firmware-realtek firmware-linux

Dealing with other Raspbian packages

The Debian release notes discuss removing non-Debian packages. There will still be a few of those. Run:
apt list '?narrow(?installed, ?not(?origin(Debian)))'
Deal with them; mostly you will need to force the installation of a bookworm version using the procedure in the section Resolving firmware package version issues above (even if it's not for a firmware package). For non-firmware packages, you might possibly want to add --mark-auto to your apt-get install command line to allow the package to be autoremoved later if the things depending on it go away. If you aren't going to use Bluetooth, I recommend apt-get --purge remove bluez as well. Sometimes it can hang at boot if you don't fix it up as described above.

Set up networking

We'll be switching to the Debian method of networking, so we'll create some files in /etc/network/interfaces.d. First, eth0 should look like this:
allow-hotplug eth0
iface eth0 inet dhcp
iface eth0 inet6 auto
And wlan0 should look like this:
allow-hotplug wlan0
iface wlan0 inet dhcp
    wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
Raspbian is inconsistent about using eth0/wlan0 or renamed interfaces. Run ifconfig or ip addr. If you see a long-named interface such as enx<something> or wlp<something>, copy the eth0 file to the one named after the enx interface, or the wlan0 file to the one named after the wlp interface, and edit the internal references to eth0/wlan0 in this new file to use the long interface name. If using wifi, verify that your SSIDs and passwords are in /etc/wpa_supplicant/wpa_supplicant.conf. It should have lines like:
network={
   ssid="NetworkName"
   psk="passwordHere"
}
(This is where Raspberry Pi OS put them).

Deal with DHCP

Raspberry Pi OS used dhcpcd, whereas bookworm normally uses isc-dhcp-client. Verify the system is in the correct state:
apt-get install isc-dhcp-client
apt-get --purge remove dhcpcd dhcpcd-base dhcpcd5 dhcpcd-dbus

Set up LEDs

To set up the LEDs to trigger on MicroSD activity as they did with Raspbian, follow the Debian instructions. Run apt-get install sysfsutils. Then put this in a file at /etc/sysfs.d/local-raspi-leds.conf:
class/leds/ACT/brightness = 1
class/leds/ACT/trigger = mmc1

Prepare for boot

To make sure all the /boot/firmware files are updated, run update-initramfs -u. Verify that root in /boot/firmware/cmdline.txt references the PARTUUID as appropriate. Verify that /boot/firmware/config.txt contains the lines arm_64bit=1 and upstream_kernel=1. If not, go back to the section on modifying /etc/default/raspi-firmware and fix it up.

The moment arrives

Cross your fingers and try rebooting into your Debian system:
reboot
For some reason, I found that the first boot into Debian seems to hang for 30-60 seconds during bootstrap. I'm not sure why; don't panic if that happens. It may be necessary to power cycle the Pi for this boot.

Troubleshooting

If things don't work out, hook up the Pi to an HDMI display and see what's up. If I had anticipated a particular problem, I would have documented it here (a lot of the things I documented here are because I ran into them!), so I can't give specific advice other than to watch boot messages on the console. If you don't even get kernel messages going, then there is some problem with your partition table or /boot/firmware FAT partition. Otherwise, you've at least got the kernel going and can troubleshoot like usual from there.

28 December 2023

Antonio Terceiro: Debian CI: 10 years later

It was 2013, and I was on a break from work between Christmas and New Year of 2013. I had been working at Linaro for well over a year, on the LAVA project. I was living and breathing automated testing infrastructure, mostly for testing low-level components such as kernels and bootloaders, on real hardware. At this point I was also a Debian contributor of quite some years, and had become an official project member two years prior. Most of my involvement was in the Ruby team, where we were already consistently running upstream test suites during package builds. During that break, I put these two contexts together, and came to the conclusion that Debian needed a dedicated service that would test the contents of the Debian archive. I was aware of the existence of autopkgtest, and started working on a very simple service that would later become Debian CI. In January 2014, debci was initially announced on that month's Misc Developer News, and later uploaded to Debian. It's been continuously developed for the last 10 years, evolved from a single shell script running tests in a loop into a distributed system with 47 geographically-distributed machines as of writing this piece, became part of the official Debian release process gating migrations to testing, had 5 Summer of Code and Outreachy interns working on it, and processed beyond 40 million test runs. In these years, Debian CI has received contributions from a lot of people, but I would like to give special credit to the following:

27 November 2023

Andrew Cater: 20231123 - UEFI install on a Raspberry Pi 4 - step by step instructions to a modified d-i

Motivation
Andy (RattusRattus) and I have been formalising instructions for using Pete Batard's version of Tianocore (and therefore UEFI booting) for the Raspberry Pi 4 together with a Debian arm64 netinst to make a modified Debian installer on a USB stick which "just works" for a Raspberry Pi 4.
Thanks also to Steve McIntyre for initial notes that got this working for us and also to Emmanuele Rocca for putting up some useful instructions for copying.

Recipe

Plug in a USB stick - use dmesg or your favourite method to see how it is identified.

Make a couple of mount points under /mnt - /mnt/data and /mnt/cdrom


1. Grab a USB stick, Partition using MBR. Make a single VFAT
partition, type 0xEF (i.e. EFI System Partition)

For a USB stick (identified as sdX) below:
$ sudo parted --script /dev/sdX mklabel msdos
$ sudo parted --script /dev/sdX mkpart primary fat32 0% 100%
$ sudo mkfs.vfat /dev/sdX1
$ sudo mount /dev/sdX1 /mnt/data/

Download an arm64 netinst.iso

https://cdimage.debian.org/debian-cd/current/arm64/iso-cd/debian-12.2.0-arm64-netinst.iso

2. Copy the complete contents of partition *1* from a Debian arm64
installer image into the filesystem (partition 1 is the installer
stuff itself) on the USB stick, in /

$ sudo kpartx -v -a debian-12.2.0-arm64-netinst.iso

# Mount the first partition on the ISO and copy its contents to the stick
$ sudo mount /dev/mapper/loop0p1 /mnt/cdrom/
$ sudo rsync -av /mnt/cdrom/ /mnt/data/
$ sudo umount /mnt/cdrom

3. Copy the complete contents of partition *2* from that Debian arm64
installer image into that filesystem (partition 2 is the ESP) on
the USB stick, in /

# Same story with the second partition on the ISO

$ sudo mount /dev/mapper/loop0p2 /mnt/cdrom/

$ sudo rsync -av /mnt/cdrom/ /mnt/data/
$ sudo umount /mnt/cdrom

$ sudo kpartx -d debian-12.2.0-arm64-netinst.iso
$ sudo umount /mnt/data


4. Grab the rpi edk2 build from https://github.com/pftf/RPi4/releases
(I used 1.35) and extract it. I copied the files there into *2*
places for now on the USB stick:

/ (so the Pi will boot using it)
/rpi4 (so we can find the files again later)

5. Add the preseed.cfg file (attached) into *both* of the two initrd
files on the USB stick

- /install.a64/initrd.gz and
- /install.a64/gtk/initrd.gz

cpio is an awful tool to use :-(. In each case:

$ cp /path/to/initrd.gz .
$ gunzip initrd.gz
$ echo preseed.cfg | cpio -H newc -o -A -F initrd

$ gzip -9v initrd

$ cp initrd.gz /path/to/initrd.gz

If you look at the preseed file, it will do a few things:

- Use an early_command to unmount /media (to work around Debian bug
#1051964)

- Register a late_command call for /cdrom/finish-rpi (the next
file - see below) to run at the end of the installation.

- Force grub installation also to the EFI removable media path,
needed as the rpi doesn't store EFI boot variables.

- Stop the installer asking for firmware from removable media (as
the rpi4 will ask for broadcom bluetooth fw that we can't
ship. Can be ignored safely.)

6. Copy the finish-rpi script (attached) into / on the USB stick. It
will be run at the end of the installation, triggered via the
preseed. It does a couple of things:

- Copy the edk2 firmware files into the ESP on the system that's
just been installed

- Remove shim-signed from the installed systems, as there's a bug
that causes it to fail on rpi4. I need to dig into this to see
what the issue is.

That's it! Run the installer as normal, all should Just Work (TM).

Bluetooth didn't quite work: raspberrypi-firmware didn't install until adding a symlink for boot/efi to /boot/firmware

20231127 - This may not be necessary because raspberrypi-firmware path has been fixed

Preseed.cfg
# The preseed file itself causes a problem - the installer medium is
# left mounted on /media so things break in cdrom-detect. Let's see if
# we can fix that!
d-i preseed/early_command string umount /media || true

# Run our command to do rpi setup before reboot
d-i preseed/late_command string /cdrom/finish-rpi

# Force grub installation to the RM path
grub-efi-arm64 grub2/force_efi_extra_removable boolean true

# Don't prompt for missing firmware from removable media,
# e.g. broadcom bluetooth on the rpi.
d-i hw-detect/load_firmware boolean false

Finish.rpi
#!/bin/sh

set -x

grep -q -a RPI4 /sys/firmware/acpi/tables/CSRT
if [ $? -ne 0 ]; then
echo "Not running on a Pi 4, exit!"
exit 0
fi

# Copy the rpi4 firmware binaries onto the installed system.
# Assumes the installer media is mounted on /cdrom.
cp -vr /cdrom/rpi4/. /target/boot/efi/.

# shim-signed doesn't seem happy on rpi4, so remove it
mount --bind /sys /target/sys
mount --bind /proc /target/proc
mount --bind /dev /target/dev

in-target apt-get remove --purge --autoremove -y shim-signed




26 November 2023

Ian Jackson: Hacking my filter coffee machine

I hacked my coffee machine to let me turn it on from upstairs in bed :-). Read on for explanation, circuit diagrams, 3D models, firmware source code, and pictures.

Background: the Morphy Richards filter coffee machine

I have a Morphy Richards filter coffee machine. It makes very good coffee. But the display and firmware are quite annoying: Also, I'm lazy and wanted to be able to cause coffee to exist from upstairs in bed, without having to make a special trip down just to turn the machine on.

Planning

My original feeling was I can't be bothered dealing with the coffee machine innards, so I thought I would make a mechanical contraption to physically press the coffee machine's on button. I could have my contraption press the button to turn the machine on (timed, or triggered remotely), and then periodically in pairs to reset the 25-minute keep-warm timer. But a friend pointed me at a blog post by Andy Bradford, where Andy recounts modifying his coffee machine, adding an ESP8266 and connecting it to his MQTT-based Home Assistant setup. I looked at the pictures and they looked very similar to my machine. I decided to take a look inside.

Inside the Morphy Richards filter coffee machine

My coffee machine seemed to be very similar to Andy's. His disassembly report was very helpful. Inside I found the high-voltage parts with the heating elements, and the front panel with the display and buttons. I spent a while poking about, measuring things, and so on.

Unexpected electrical hazard

At one point I wanted to use my storage oscilloscope to capture the duration and amplitude of the beep signal. I needed to connect the scope ground to the UI board's ground plane, but then when I switched the coffee machine on at the wall socket, it tripped the house's RCD. It turns out that the low voltage UI board is coupled to the mains. In my setting, there's an offset of about 8V between the UI board ground plane and true earth. (In my house the neutral is about 2-3V away from true earth.) This alarmed me rather. To me, this means that my modifications needed to still properly electrically isolate everything connected to the UI board from anything external to the coffee machine's housing. In Andy's design, I think the internal UI board ground plane is directly brought out to an external USB-A connector. This means that if there were a neutral fault, the USB-A connector would be at live potential, possibly creating an electrocution or fire hazard. I made a comment on Andy Bradford's blog, reporting this issue, but it doesn't seem to have appeared. This is all quite alarming. I hope Andy is OK!

Design approach

I don't have an MQTT setup at home, or an installation of Home Assistant. I didn't feel like adding a lot of complicated software to my life, if I could avoid it. Nor did I feel like writing a web UI myself. I've done that before, but I'm lazy and in this case my requirements were quite modest. Also, the need for electrical isolation would further complicate any attempt to do something sophisticated (that could, for example, sense the state of the coffee machine). I already had a Tasmota-based cloud-free smart plug, which controls the fairy lights on our gazebo. We just operate that through its web UI. So, I decided I would add a small and stupid microcontroller. The microcontroller would be powered via a smart plug and an off-the-shelf USB power supply. The microcontroller would have no inputs. It would simply simulate an on button press once at startup, and thereafter two presses every 24 minutes. After the 4th double press the microcontroller would stop, leaving the coffee machine to time out itself, after a total period of about 2h. (A sketch of this timing logic appears at the end of this post.)

Implementation - hardware

I used a DigiSpark board with an ATTiny85. One of the GPIOs is connected to an optoisolator, whose output transistor is wired across the UI board's on button. circuit diagram; board layout diagram; (click for diagram scans as pdfs). The DigiSpark has just a USB tongue, which is very wobbly in a normal USB socket. I designed a 3D printed case which also had an approximation of the rest of the USB A plug. The plug is out of spec; our printer won't go fine enough - and anyway, the shield is supposed to be metal, not fragile plastic. But it fit in the USB PSU I was using, satisfactorily if a bit stiffly, and also into the connector for programming via my laptop. Inside the coffee machine, there's the boundary between the original, coupled-to-mains UI board, and the isolated low voltage of the microcontroller. I used a reasonably substantial cable to bring out the low voltage connection, past all the other hazardous innards, to make sure it stays isolated. I added a drain power supply resistor on another of the GPIOs. This is enabled, with a draw of about 30mA, when the microcontroller is soon going to off/on cycle the coffee machine. That reduces the risk that the user turns off the smart plug, and so the machine, but the microcontroller then turns the coffee machine back on again using the remaining power from the USB PSU. Empirically, in my setup it reduces the time from smart plug off to microcontroller stop from about 2-3s to more like 1s.

Optoisolator board (inside coffee machine) pictures

(Click through for full size images.) optoisolator board, front; optoisolator board, rear; optoisolator board, fitted.

Microcontroller board (in USB-plug-ish housing) pictures

microcontroller board, component side; microcontroller board, wiring side, part fitted; microcontroller in USB-plug-ish housing.

Implementation - software

I originally used the Arduino IDE, writing my program in C. I had a bad time with that and rewrote it in Rust. The firmware is in a repository on Debian's gitlab.

Results

I can now cause the coffee to start, from my phone. It can be programmed more than 12h in advance. And it stays warm until we've drunk it.

UI is worse

There's one aspect of the original Morphy Richards machine that I haven't improved: the user interface is still poor. Indeed, it's now even worse: To turn the machine on, you probably want to turn on the smart plug instead. Unhappily, the power button for that is invisible in its installed location. In particular, in the usual case, if you want to turn it off, you should ideally turn off both the smart plug (which can be done with the button on it) and the coffee machine itself. If you forget to turn off the smart plug, the machine can end up being turned on, very briefly, a handful of times, over the next hour or two.

Epilogue

We had used the new features a handful of times when one morning the coffee machine just wouldn't make coffee. The UI showed it turning on, but it wouldn't get hot, so no coffee. I thought oh no, I've broken it! But, on investigation, I found that the machine's heating element was open circuit (ie, completely broken). I didn't mess with that part. So, hooray! Not my fault. Probably, just being inverted a number of times and generally lightly jostled had precipitated a latent fault. The machine was a number of years old.
Happily I found an identical replacement machine online. I've transplanted my modification and now it all works well.

Bonus pictures
(Click through for full size images.) probing the innards; machine base showing new cable route.
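As an aside on the smart plug side of this: a Tasmota plug can also be driven over plain HTTP rather than through its web UI, which suits the small-and-stupid approach. A minimal sketch, assuming a hypothetical plug address of 192.168.1.50, Tasmota's standard web command API, and no web password set:

# turn the plug (and hence the whole on-at-startup sequence) on or off
curl 'http://192.168.1.50/cm?cmnd=Power%20On'
curl 'http://192.168.1.50/cm?cmnd=Power%20Off'
# query the current state; Tasmota answers with a short JSON status
curl 'http://192.168.1.50/cm?cmnd=Power'

That makes it easy to wire the whole thing into a phone shortcut or a shell alias, without any MQTT or Home Assistant in the picture.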
edited 2023-11-26 14:59 UTC in an attempt to fix TOC links



23 November 2023

Andrew Cater: Arm Cambridge - mini-Debcamp 23 November 2023


At Arm for two days before the mini-Debconf this weekend. First time at Arm for a few years: huge new buildings, shiny lecture theatre. Arm have made us very welcome. A superb buffet lunch and unlimited coffee plus soft drinks - I think they know what Debian folk are like.
Not enough power blocks laid out at the beginning - only one per table - but we soon fixed that.
The room is full of Debian folk: some I know, some new faces. Reminiscing about meeting some of them from 25 years ago - and the chance to thank people for help over a long time.
Andy (RattusRattus) and I have been working out the bugs on an install script using UEFI for a Raspberry Pi 4. More on that in the next post, maybe. As ever, it's the sort of place where "I can't get into the wiki" is sorted by walking three metres across the room or where an "I can't find where to get X for Raspberry Pi" can be solved by asking the person who builds Raspbian. "Did you try and sign up to the Debian wiki last week - you didn't follow the instructions to mail wiki@ - I _know_ you didn't because I didn't see the mail ... "

My kind of place and my kind of people, as ever.

Thanks again to Arm who are one of our primary sponsors for this mini-Debconf.

21 November 2023

Mike Hommey: How I (kind of) killed Mercurial at Mozilla

Did you hear the news? Firefox development is moving from Mercurial to Git. While the decision is far from being mine, and I was barely involved in the small incremental changes that ultimately led to this decision, I feel I have to take at least some responsibility. And if you are one of those who would rather use Mercurial than Git, you may direct all your ire at me. But let's take a step back and review the past 25 years leading to this decision. You'll forgive me for skipping some details and any possible inaccuracies. This is already a long post, while I could have been more thorough, even I think that would have been too much. This is also not an official Mozilla position, only my personal perception and recollection as someone who was involved at times, but mostly an observer from a distance. From CVS to DVCS From its release in 1998, the Mozilla source code was kept in a CVS repository. If you're too young to know what CVS is, let's just say it's an old school version control system, with its set of problems. Back then, it was mostly ubiquitous in the Open Source world, as far as I remember. In the early 2000s, the Subversion version control system gained some traction, solving some of the problems that came with CVS. Incidentally, Subversion was created by Jim Blandy, who now works at Mozilla on completely unrelated matters. In the same period, the Linux kernel development moved from CVS to Bitkeeper, which was more suitable to the distributed nature of the Linux community. BitKeeper had its own problem, though: it was the opposite of Open Source, but for most pragmatic people, it wasn't a real concern because free access was provided. Until it became a problem: someone at OSDL developed an alternative client to BitKeeper, and licenses of BitKeeper were rescinded for OSDL members, including Linus Torvalds (they were even prohibited from purchasing one). Following this fiasco, in April 2005, two weeks from each other, both Git and Mercurial were born. The former was created by Linus Torvalds himself, while the latter was developed by Olivia Mackall, who was a Linux kernel developer back then. And because they both came out of the same community for the same needs, and the same shared experience with BitKeeper, they both were similar distributed version control systems. Interestingly enough, several other DVCSes existed: In this landscape, the major difference Git was making at the time was that it was blazing fast. Almost incredibly so, at least on Linux systems. That was less true on other platforms (especially Windows). It was a game-changer for handling large codebases in a smooth manner. Anyways, two years later, in 2007, Mozilla decided to move its source code not to Bzr, not to Git, not to Subversion (which, yes, was a contender), but to Mercurial. The decision "process" was laid down in two rather colorful blog posts. My memory is a bit fuzzy, but I don't recall that it was a particularly controversial choice. All of those DVCSes were still young, and there was no definite "winner" yet (GitHub hadn't even been founded). It made the most sense for Mozilla back then, mainly because the Git experience on Windows still wasn't there, and that mattered a lot for Mozilla, with its diverse platform support. As a contributor, I didn't think much of it, although to be fair, at the time, I was mostly consuming the source tarballs. 
Personal preferences Digging through my archives, I've unearthed a forgotten chapter: I did end up setting up both a Mercurial and a Git mirror of the Firefox source repository on alioth.debian.org. Alioth.debian.org was a FusionForge-based collaboration system for Debian developers, similar to SourceForge. It was the ancestor of salsa.debian.org. I used those mirrors for the Debian packaging of Firefox (cough cough Iceweasel). The Git mirror was created with hg-fast-export, and the Mercurial mirror was only a necessary step in the process. By that time, I had converted my Subversion repositories to Git, and switched off SVK. Incidentally, I started contributing to Git around that time as well. I apparently did this not too long after Mozilla switched to Mercurial. As a Linux user, I think I just wanted the speed that Mercurial was not providing. Not that Mercurial was that slow, but the difference between a couple seconds and a couple hundred milliseconds was a significant enough difference in user experience for me to prefer Git (and Firefox was not the only thing I was using version control for) Other people had also similarly created their own mirror, or with other tools. But none of them were "compatible": their commit hashes were different. Hg-git, used by the latter, was putting extra information in commit messages that would make the conversion differ, and hg-fast-export would just not be consistent with itself! My mirror is long gone, and those have not been updated in more than a decade. I did end up using Mercurial, when I got commit access to the Firefox source repository in April 2010. I still kept using Git for my Debian activities, but I now was also using Mercurial to push to the Mozilla servers. I joined Mozilla as a contractor a few months after that, and kept using Mercurial for a while, but as a, by then, long time Git user, it never really clicked for me. It turns out, the sentiment was shared by several at Mozilla. Git incursion In the early 2010s, GitHub was becoming ubiquitous, and the Git mindshare was getting large. Multiple projects at Mozilla were already entirely hosted on GitHub. As for the Firefox source code base, Mozilla back then was kind of a Wild West, and engineers being engineers, multiple people had been using Git, with their own inconvenient workflows involving a local Mercurial clone. The most popular set of scripts was moz-git-tools, to incorporate changes in a local Git repository into the local Mercurial copy, to then send to Mozilla servers. In terms of the number of people doing that, though, I don't think it was a lot of people, probably a few handfuls. On my end, I was still keeping up with Mercurial. I think at that time several engineers had their own unofficial Git mirrors on GitHub, and later on Ehsan Akhgari provided another mirror, with a twist: it also contained the full CVS history, which the canonical Mercurial repository didn't have. This was particularly interesting for engineers who needed to do some code archeology and couldn't get past the 2007 cutoff of the Mercurial repository. I think that mirror ultimately became the official-looking, but really unofficial, mozilla-central repository on GitHub. On a side note, a Mercurial repository containing the CVS history was also later set up, but that didn't lead to something officially supported on the Mercurial side. Some time around 2011~2012, I started to more seriously consider using Git for work myself, but wasn't satisfied with the workflows others had set up for themselves. 
I really didn't like the idea of wasting extra disk space keeping a Mercurial clone around while using a Git mirror. I wrote a Python script that would use Mercurial as a library to access a remote repository and produce a git-fast-import stream. That would allow the creation of a git repository without a local Mercurial clone. It worked quite well, but it was not able to incrementally update. Other, more complete tools existed already, some of which I mentioned above. But as time was passing and the size and depth of the Mercurial repository was growing, these tools were showing their limits and were too slow for my taste, especially for the initial clone. Boot to Git In the same time frame, Mozilla ventured in the Mobile OS sphere with Boot to Gecko, later known as Firefox OS. What does that have to do with version control? The needs of third party collaborators in the mobile space led to the creation of what is now the gecko-dev repository on GitHub. As I remember it, it was challenging to create, but once it was there, Git users could just clone it and have a working, up-to-date local copy of the Firefox source code and its history... which they could already have, but this was the first officially supported way of doing so. Coincidentally, Ehsan's unofficial mirror was having trouble (to the point of GitHub closing the repository) and was ultimately shut down in December 2013. You'll often find comments on the interwebs about how GitHub has become unreliable since the Microsoft acquisition. I can't really comment on that, but if you think GitHub is unreliable now, rest assured that it was worse in its beginning. And its sustainability as a platform also wasn't a given, being a rather new player. So on top of having this official mirror on GitHub, Mozilla also ventured in setting up its own Git server for greater control and reliability. But the canonical repository was still the Mercurial one, and while Git users now had a supported mirror to pull from, they still had to somehow interact with Mercurial repositories, most notably for the Try server. Git slowly creeping in Firefox build tooling Still in the same time frame, tooling around building Firefox was improving drastically. For obvious reasons, when version control integration was needed in the tooling, Mercurial support was always a no-brainer. The first explicit acknowledgement of a Git repository for the Firefox source code, other than the addition of the .gitignore file, was bug 774109. It added a script to install the prerequisites to build Firefox on macOS (still called OSX back then), and that would print a message inviting people to obtain a copy of the source code with either Mercurial or Git. That was a precursor to current bootstrap.py, from September 2012. Following that, as far as I can tell, the first real incursion of Git in the Firefox source tree tooling happened in bug 965120. A few days earlier, bug 952379 had added a mach clang-format command that would apply clang-format-diff to the output from hg diff. Obviously, running hg diff on a Git working tree didn't work, and bug 965120 was filed, and support for Git was added there. That was in January 2014. A year later, when the initial implementation of mach artifact was added (which ultimately led to artifact builds), Git users were an immediate thought. But while they were considered, it was not to support them, but to avoid actively breaking their workflows. Git support for mach artifact was eventually added 14 months later, in March 2016. 
From gecko-dev to git-cinnabar Let's step back a little here, back to the end of 2014. My user experience with Mercurial had reached a level of dissatisfaction that was enough for me to decide to take that script from a couple years prior and make it work for incremental updates. That meant finding a way to store enough information locally to be able to reconstruct whatever the incremental updates would be relying on (guess why other tools hid a local Mercurial clone under hood). I got something working rather quickly, and after talking to a few people about this side project at the Mozilla Portland All Hands and seeing their excitement, I published a git-remote-hg initial prototype on the last day of the All Hands. Within weeks, the prototype gained the ability to directly push to Mercurial repositories, and a couple months later, was renamed to git-cinnabar. At that point, as a Git user, instead of cloning the gecko-dev repository from GitHub and switching to a local Mercurial repository whenever you needed to push to a Mercurial repository (i.e. the aforementioned Try server, or, at the time, for reviews), you could just clone and push directly from/to Mercurial, all within Git. And it was fast too. You could get a full clone of mozilla-central in less than half an hour, when at the time, other similar tools would take more than 10 hours (needless to say, it's even worse now). Another couple months later (we're now at the end of April 2015), git-cinnabar became able to start off a local clone of the gecko-dev repository, rather than clone from scratch, which could be time consuming. But because git-cinnabar and the tool that was updating gecko-dev weren't producing the same commits, this setup was cumbersome and not really recommended. For instance, if you pushed something to mozilla-central with git-cinnabar from a gecko-dev clone, it would come back with a different commit hash in gecko-dev, and you'd have to deal with the divergence. Eventually, in April 2020, the scripts updating gecko-dev were switched to git-cinnabar, making the use of gecko-dev alongside git-cinnabar a more viable option. Ironically(?), the switch occurred to ease collaboration with KaiOS (you know, the mobile OS born from the ashes of Firefox OS). Well, okay, in all honesty, when the need of syncing in both directions between Git and Mercurial (we only had ever synced from Mercurial to Git) came up, I nudged Mozilla in the direction of git-cinnabar, which, in my (biased but still honest) opinion, was the more reliable option for two-way synchronization (we did have regular conversion problems with hg-git, nothing of the sort has happened since the switch). One Firefox repository to rule them all For reasons I don't know, Mozilla decided to use separate Mercurial repositories as "branches". With the switch to the rapid release process in 2011, that meant one repository for nightly (mozilla-central), one for aurora, one for beta, and one for release. And with the addition of Extended Support Releases in 2012, we now add a new ESR repository every year. Boot to Gecko also had its own branches, and so did Fennec (Firefox for Mobile, before Android). There are a lot of them. And then there are also integration branches, where developer's work lands before being merged in mozilla-central (or backed out if it breaks things), always leaving mozilla-central in a (hopefully) good state. Only one of them remains in use today, though. I can only suppose that the way Mercurial branches work was not deemed practical. 
It is worth noting, though, that Mercurial branches are used in some cases, to branch off a dot-release when the next major release process has already started, so it's not a matter of not knowing the feature exists or some such. In 2016, Gregory Szorc set up a new repository that would contain them all (or at least most of them), which eventually became what is now the mozilla-unified repository. This would e.g. simplify switching between branches when necessary. 7 years later, for some reason, the other "branches" still exist, but most developers are expected to be using mozilla-unified. Mozilla's CI also switched to using mozilla-unified as base repository. Honestly, I'm not sure why the separate repositories are still the main entry point for pushes, rather than going directly to mozilla-unified, but it probably comes down to switching being work, and not being a top priority. Also, it probably doesn't help that working with multiple heads in Mercurial, even (especially?) with bookmarks, can be a source of confusion. To give an example, if you aren't careful, and do a plain clone of the mozilla-unified repository, you may not end up on the latest mozilla-central changeset, but rather, e.g. one from beta, or some other branch, depending which one was last updated. Hosting is simple, right? Put your repository on a server, install hgweb or gitweb, and that's it? Maybe that works for... Mercurial itself, but that repository "only" has slightly over 50k changesets and less than 4k files. Mozilla-central has more than an order of magnitude more changesets (close to 700k) and two orders of magnitude more files (more than 700k if you count the deleted or moved files, 350k if you count the currently existing ones). And remember, there are a lot of "duplicates" of this repository. And I didn't even mention user repositories and project branches. Sure, it's a self-inflicted pain, and you'd think it could probably(?) be mitigated with shared repositories. But consider the simple case of two repositories: mozilla-central and autoland. You make autoland use mozilla-central as a shared repository. Now, you push something new to autoland, it's stored in the autoland datastore. Eventually, you merge to mozilla-central. Congratulations, it's now in both datastores, and you'd need to clean-up autoland if you wanted to avoid the duplication. Now, you'd think mozilla-unified would solve these issues, and it would... to some extent. Because that wouldn't cover user repositories and project branches briefly mentioned above, which in GitHub parlance would be considered as Forks. So you'd want a mega global datastore shared by all repositories, and repositories would need to only expose what they really contain. Does Mercurial support that? I don't think so (okay, I'll give you that: even if it doesn't, it could, but that's extra work). And since we're talking about a transition to Git, does Git support that? You may have read about how you can link to a commit from a fork and make-pretend that it comes from the main repository on GitHub? At least, it shows a warning, now. That's essentially the architectural reason why. So the actual answer is that Git doesn't support it out of the box, but GitHub has some backend magic to handle it somehow (and hopefully, other things like Gitea, Girocco, Gitlab, etc. have something similar). Now, to come back to the size of the repository. A repository is not a static file. It's a server with which you negotiate what you have against what it has that you want. 
Then the server bundles what you asked for based on what you said you have. Or in the opposite direction, you negotiate what you have that it doesn't, you send it, and the server incorporates what you sent it. Fortunately the latter is less frequent and requires authentication. But the former is more frequent and CPU intensive. Especially when pulling a large number of changesets, which, incidentally, cloning is. "But there is a solution for clones" you might say, which is true. That's clonebundles, which offload the CPU intensive part of cloning to a single job scheduled regularly. Guess who implemented it? Mozilla. But that only covers the cloning part. We actually had laid the ground to support offloading large incremental updates and split clones, but that never materialized. Even with all that, that still leaves you with a server that can display file contents, diffs, blames, provide zip archives of a revision, and more, all of which are CPU intensive in their own way. And these endpoints are regularly abused, and cause extra load to your servers, yes plural, because of course a single server won't handle the load for the number of users of your big repositories. And because your endpoints are abused, you have to close some of them. And I'm not mentioning the Try repository with its tens of thousands of heads, which brings its own sets of problems (and it would have even more heads if we didn't fake-merge them once in a while). Of course, all the above applies to Git (and it only gained support for something akin to clonebundles last year). So, when the Firefox OS project was stopped, there wasn't much motivation to continue supporting our own Git server, Mercurial still being the official point of entry, and git.mozilla.org was shut down in 2016. The growing difficulty of maintaining the status quo Slowly, but steadily in more recent years, as new tooling was added that needed some input from the source code manager, support for Git was more and more consistently added. But at the same time, as people left for other endeavors and weren't necessarily replaced, or more recently with layoffs, resources allocated to such tooling have been spread thin. Meanwhile, the repository growth didn't take a break, and the Try repository was becoming an increasing pain, with push times quite often exceeding 10 minutes. The ongoing work to move Try pushes to Lando will hide the problem under the rug, but the underlying problem will still exist (although the last version of Mercurial seems to have improved things). On the flip side, more and more people have been relying on Git for Firefox development, to my own surprise, as I didn't really push for that to happen. It just happened organically, by ways of git-cinnabar existing, providing a compelling experience to those who prefer Git, and, I guess, word of mouth. I was genuinely surprised when I recently heard the use of Git among moz-phab users had surpassed a third. I did, however, occasionally orient people who struggled with Mercurial and said they were more familiar with Git, towards git-cinnabar. I suspect there's a somewhat large number of people who never realized Git was a viable option. But that, on its own, can come with its own challenges: if you use git-cinnabar without being backed by gecko-dev, you'll have a hard time sharing your branches on GitHub, because you can't push to a fork of gecko-dev without pushing your entire local repository, as they have different commit histories. 
And switching to gecko-dev when you weren't already using it requires some extra work to rebase all your local branches from the old commit history to the new one. Clone times with git-cinnabar have also started to go a little out of hand in the past few years, but this was mitigated in a similar manner as with the Mercurial cloning problem: with static files that are refreshed regularly. Ironically, that made cloning with git-cinnabar faster than cloning with Mercurial. But generating those static files is increasingly time-consuming. As of writing, generating those for mozilla-unified takes close to 7 hours. I was predicting clone times over 10 hours "in 5 years" in a post from 4 years ago, I wasn't too far off. With exponential growth, it could still happen, although to be fair, CPUs have improved since. I will explore the performance aspect in a subsequent blog post, alongside the upcoming release of git-cinnabar 0.7.0-b1. I don't even want to check how long it now takes with hg-git or git-remote-hg (they were already taking more than a day when git-cinnabar was taking a couple hours). I suppose it's about time that I clarify that git-cinnabar has always been a side-project. It hasn't been part of my duties at Mozilla, and the extent to which Mozilla supports git-cinnabar is in the form of taskcluster workers on the community instance for both git-cinnabar CI and generating those clone bundles. Consequently, that makes the above git-cinnabar specific issues a Me problem, rather than a Mozilla problem. Taking the leap I can't talk for the people who made the proposal to move to Git, nor for the people who put a green light on it. But I can at least give my perspective. Developers have regularly asked why Mozilla was still using Mercurial, but I think it was the first time that a formal proposal was laid out. And it came from the Engineering Workflow team, responsible for issue tracking, code reviews, source control, build and more. It's easy to say "Mozilla should have chosen Git in the first place", but back in 2007, GitHub wasn't there, Bitbucket wasn't there, and all the available options were rather new (especially compared to the then 21 years-old CVS). I think Mozilla made the right choice, all things considered. Had they waited a couple years, the story might have been different. You might say that Mozilla stayed with Mercurial for so long because of the sunk cost fallacy. I don't think that's true either. But after the biggest Mercurial repository hosting service turned off Mercurial support, and the main contributor to Mercurial going their own way, it's hard to ignore that the landscape has evolved. And the problems that we regularly encounter with the Mercurial servers are not going to get any better as the repository continues to grow. As far as I know, all the Mercurial repositories bigger than Mozilla's are... not using Mercurial. Google has its own closed-source server, and Facebook has another of its own, and it's not really public either. With resources spread thin, I don't expect Mozilla to be able to continue supporting a Mercurial server indefinitely (although I guess Octobus could be contracted to give a hand, but is that sustainable?). Mozilla, being a champion of Open Source, also doesn't live in a silo. At some point, you have to meet your contributors where they are. And the Open Source world is now majoritarily using Git. 
I'm sure the vast majority of new hires at Mozilla in the past, say, 5 years, know Git and have had to learn Mercurial (although they arguably didn't need to). Even within Mozilla, with thousands(!) of repositories on GitHub, Firefox is now actually the exception rather than the norm. I should even actually say Desktop Firefox, because even Mobile Firefox lives on GitHub (although Fenix is moving back in together with Desktop Firefox, and the timing is such that that will probably happen before Firefox moves to Git). Heck, even Microsoft moved to Git! With a significant developer base already using Git thanks to git-cinnabar, and all the constraints and problems I mentioned previously, it actually seems natural that a transition (finally) happens. However, had git-cinnabar or something similarly viable not existed, I don't think Mozilla would be in a position to take this decision. On one hand, it probably wouldn't be in the current situation of having to support both Git and Mercurial in the tooling around Firefox, nor the resource constraints related to that. But on the other hand, it would be farther from supporting Git and being able to make the switch in order to address all the other problems. But... GitHub? I hope I made a compelling case that hosting is not as simple as it can seem, at the scale of the Firefox repository. It's also not Mozilla's main focus. Mozilla has enough on its plate with the migration of existing infrastructure that does rely on Mercurial to understandably not want to figure out the hosting part, especially with limited resources, and with the mixed experience hosting both Mercurial and git has been so far. After all, GitHub couldn't even display things like the contributors' graph on gecko-dev until recently, and hosting is literally their job! They still drop the ball on large blames (thankfully we have searchfox for those). Where does that leave us? Gitlab? For those criticizing GitHub for being proprietary, that's probably not open enough. Cloud Source Repositories? "But GitHub is Microsoft" is a complaint I've read a lot after the announcement. Do you think Google hosting would have appealed to these people? Bitbucket? I'm kind of surprised it wasn't in the list of providers that were considered, but I'm also kind of glad it wasn't (and I'll leave it at that). I think the only relatively big hosting provider that could have made the people criticizing the choice of GitHub happy is Codeberg, but I hadn't even heard of it before it was mentioned in response to Mozilla's announcement. But really, with literal thousands of Mozilla repositories already on GitHub, with literal tens of millions repositories on the platform overall, the pragmatic in me can't deny that it's an attractive option (and I can't stress enough that I wasn't remotely close to the room where the discussion about what choice to make happened). "But it's a slippery slope". I can see that being a real concern. LLVM also moved its repository to GitHub (from a (I think) self-hosted Subversion server), and ended up moving off Bugzilla and Phabricator to GitHub issues and PRs four years later. As an occasional contributor to LLVM, I hate this move. I hate the GitHub review UI with a passion. At least, right now, GitHub PRs are not a viable option for Mozilla, for their lack of support for security related PRs, and the more general shortcomings in the review UI. That doesn't mean things won't change in the future, but let's not get too far ahead of ourselves. 
The move to Git has just been announced, and the migration has not even begun yet. Just because Mozilla is moving the Firefox repository to GitHub doesn't mean it's locked in forever or that all the eggs are going to be thrown into one basket. If bridges need to be crossed in the future, we'll see then. So, what's next? The official announcement said we're not expecting the migration to really begin until six months from now. I'll swim against the current here, and say this: the earlier you can switch to git, the earlier you'll find out what works and what doesn't work for you, whether you already know Git or not. While there is not one unique workflow, here's what I would recommend anyone who wants to take the leap off Mercurial right now: As there is no one-size-fits-all workflow, I won't tell you how to organize yourself from there. I'll just say this: if you know the Mercurial sha1s of your previous local work, you can create branches for them with:
$ git branch <branch_name> $(git cinnabar hg2git <hg_sha1>)
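If you have several such heads, the same mapping can simply be looped over. A small sketch, with purely illustrative branch names and the same kind of placeholder sha1s as above:

# one branch per old Mercurial head; substitute your own names and sha1s
for pair in "topic-a:<hg_sha1_a>" "topic-b:<hg_sha1_b>"; do
  git branch "${pair%%:*}" "$(git cinnabar hg2git "${pair##*:}")"
done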
At this point, you should have everything available on the Git side, and you can remove the .hg directory. Or move it into some empty directory somewhere else, just in case. But don't leave it here, it will only confuse the tooling. Artifact builds WILL be confused, though, and you'll have to ./mach configure before being able to do anything. You may also hit bug 1865299 if your working tree is older than this post. If you have any problem or question, you can ping me on #git-cinnabar or #git on Matrix. I'll put the instructions above somewhere on wiki.mozilla.org, and we can collaboratively iterate on them. Now, what the announcement didn't say is that the Git repository WILL NOT be gecko-dev, doesn't exist yet, and WON'T BE COMPATIBLE (trust me, it'll be for the better). Why did I make you do all the above, you ask? Because that won't be a problem. I'll have you covered, I promise. The upcoming release of git-cinnabar 0.7.0-b1 will have a way to smoothly switch between gecko-dev and the future repository (incidentally, that will also allow to switch from a pure git-cinnabar clone to a gecko-dev one, for the git-cinnabar users who have kept reading this far). What about git-cinnabar? With Mercurial going the way of the dodo at Mozilla, my own need for git-cinnabar will vanish. Legitimately, this begs the question whether it will still be maintained. I can't answer for sure. I don't have a crystal ball. However, the needs of the transition itself will motivate me to finish some long-standing things (like finalizing the support for pushing merges, which is currently behind an experimental flag) or implement some missing features (support for creating Mercurial branches). Git-cinnabar started as a Python script, it grew a sidekick implemented in C, which then incorporated some Rust, which then cannibalized the Python script and took its place. It is now close to 90% Rust, and 10% C (if you don't count the code from Git that is statically linked to it), and has sort of become my Rust playground (it's also, I must admit, a mess, because of its history, but it's getting better). So the day to day use with Mercurial is not my sole motivation to keep developing it. If it were, it would stay stagnant, because all the features I need are there, and the speed is not all that bad, although I know it could be better. Arguably, though, git-cinnabar has been relatively stagnant feature-wise, because all the features I need are there. So, no, I don't expect git-cinnabar to die along Mercurial use at Mozilla, but I can't really promise anything either. Final words That was a long post. But there was a lot of ground to cover. And I still skipped over a bunch of things. I hope I didn't bore you to death. If I did and you're still reading... what's wrong with you? ;) So this is the end of Mercurial at Mozilla. So long, and thanks for all the fish. But this is also the beginning of a transition that is not easy, and that will not be without hiccups, I'm sure. So fasten your seatbelts (plural), and welcome the change. To circle back to the clickbait title, did I really kill Mercurial at Mozilla? Of course not. But it's like I stumbled upon a few sparks and tossed a can of gasoline on them. I didn't start the fire, but I sure made it into a proper bonfire... and now it has turned into a wildfire. And who knows? 15 years from now, someone else might be looking back at how Mozilla picked Git at the wrong time, and that, had we waited a little longer, we would have picked some yet to come new horse. 
But hey, that's the tech cycle for you.

7 November 2023

Jonathan Dowland: gitsigns (useful neovim plugins)

gitsigns is a Neovim plugin which adds a wonderfully subtle colour annotation in the left-hand gutter to reflect changes in the buffer since the last git commit [1]. My long-term habit with Vim and Git is to frequently background Vim (^Z) to invoke the git command directly and then foreground Vim again (fg). Over the last few years I've been trying more and more to call vim-fugitive from within Vim instead. (I still do rebases and most merges the old-fashioned way). For the most part Gitsigns is a nice passive addition to that, but it can also do a lot of useful things that Fugitive also does. Previewing changed hunks in a little floating window, in particular when resolving an awkward merge conflict, is very handy.
An example screenshot of the Gitsigns plugin
The above picture shows it in action. I've changed two lines and added another three in the top-left buffer. Gitsigns tries to choose colours from the currently active colour scheme (in this case, tender).

  1. by default Gitsigns shows changes since the last commit, as git diff does, but you can easily switch out the base from which it compares.

22 October 2023

Aigars Mahinovs: Figuring out finances part 3

So now that I have something that looks very much like a budgeting setup going, I am going to ... delete it! Why? Well, at the end of the last part of this, the Firefly III instance was running on a tiny Debian server in a Docker container right next to another Docker container that is running the main user of this server - a Home Assistant instance that has been managing my home for several years already. So why change that? See, there is one bit of knowledge that is very crucial to your Home Assistant experience, which is not really emphasised enough in the Home Assistant documentation. In fact, back when I was getting into Home Assistant, both the main documentation and basically all the guides around were just coming off the hype of Docker disrupting everything, and that is a big reason why everyone suggested installing and using Home Assistant as a Docker container on top of any kind of stable OS. In fact I used to run it for years on my TerraMaster NAS, just so that I don't have a separate home server running 24/7 at home and just have everything inside the very compact NAS case. So here is the thing you NEED to know - Home Assistant Container is a DEMO version of Home Assistant! If you want to have a full Home Assistant experience and use the knowledge of the huge community around the HA space, you have to use the Home Assistant OS. Ideally on dedicated hardware. Ideally on the HA Green box, but any tiny PC would also work great. Raspberry Pi 4+ is common, but quite weak as the network size grows, and especially the SD card for storage gets old very fast. Get a really small x86 PC with at least 4GB of RAM and an NVMe SSD (eMMC is fine too). You want to have an Ethernet port and a few free USB ports. I would also suggest immediately getting the HA SkyConnect adapter that can do Zigbee networking and will do Matter soon (tm). I am making do with a Sonoff Zigbee gateway, but it is quite hacky to get working and your whole Zigbee communication breaks down if the WiFi goes down - suboptimal. So I took a backup of the Home Assistant instance using its built-in tools. I took an export of my fully configured Firefly III instance and proceeded to wipe the drive of the NUC. That was not a smart idea. :D On the Home Assistant side I was really frustrated by the documentation that was really focused on users that are (likely) using Windows and are using an SD card in something like a Raspberry Pi to get Home Assistant OS running. It recommended downloading Etcher to write the image to the boot medium. That is a really weird piece of software that managed to actually crash consistently when I was trying to run it from Debian Live or Ubuntu Live on my NUC. It took me way too long to give up and try something much simpler - dd.
xzcat haos_generic-x86-64-11.0.img.xz | dd of=/dev/mmcblk0 bs=1M
That just worked, perfectly and really fast. If you want to use a GUI in a live environment, then just using the gnome-disk-utility ("Disks" in the Gnome menu) and using "Restore Disk Image ..." on a partition would work just as well. It even supports decompressing the XZ images directly while writing. But that image is small, will it not have a ton of unused disk space behind the fixed install partition? Yes, it will ... until first boot. The HA OS takes over the empty space after its install partition on the first boot-up and just grows its main partition to take up all the remaining space. Smart.
After first boot is completed, the first boot wizard can be accessed via your web browser, and one of the prominent buttons there is restoring from backup. So you just give it the backup file and wait. Sadly the restore does not actually give any kind of progress, so your only way to figure out when it is done is opening the same web address in another browser tab and refreshing periodically - after restoring from backup it just boots into the same config as it had before - all the settings, all the devices, all the history is preserved. Even authentication tokens are preserved, so if you had the Home Assistant Mobile app installed on your phone (both for remote access and to send location info and phone state, like charging, to HA to trigger automations) then it will just suddenly start working again without further actions needed from your side. That is an almost perfect backup/restore experience. The first thing you get for using the OS version of HA is easy automatic updates that also automatically take a backup before upgrading, so if anything breaks you can roll back with one click. There is also a command-line tool that allows you to upgrade, but also downgrade, ha-core and other modules. I had to use it today as HA version 23.10.4 actually broke support for the Sonoff bridge that I am using to control Zigbee devices, which are like 90% of all smart devices in my home. Really helpful stuff, but not a must-have. What is a must-have, and what you can (really) only get with Home Assistant Operating System, are Addons. Some addons are just normal servers you can run alongside HA on the same HA OS server, like MariaDB or Plex or a file server. That is not the most important bit, but even there the software comes pre-configured for use in a home server configuration and has a very simple config UI to pre-configure key settings, like users, passwords and database accesses for MariaDB - you can literally, in a few clicks and a few strings, make several users each with its own access to its own database. A couple more clicks and the DB is running and will be kept restarted in case of failures. But the real gems in the Home Assistant Addon Store are modules that extend Home Assistant core functionality in a way that would be really hard or near impossible to configure in Home Assistant Container manually, especially because no documentation has ever existed for such manual config - everyone just tells you to install the addon from the HA Addon store or from HACS. Or you can read the addon metadata in various repos and figure out what containers it actually runs with what settings and configs and what hooks it puts into the HA Core to make them cooperate. And then do it all over again when a new version breaks everything 6 months later, when you have already forgotten everything. Among the addons that show up immediately after installation are the new Matter server, a MariaDB and MQTT server (that other addons can use for data storage and message exchange), Z-Wave support and ESPHome integration, a very handy File manager that includes editors to edit Home Assistant configs directly in the browser, and an SSH/Terminal addon that both allows SSH connections and provides a web-based terminal that gives access to the OS itself and to a command line interface, for example to do package downgrades if needed or see detailed logs. That is also where you can get the features that are the focus this year for HA developers - voice enablers. However, that is only the beginning.
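For reference, the upgrade/downgrade mentioned above is done with the ha command-line tool available from the SSH/Terminal addon. A hedged sketch (the subcommand syntax is from memory, and the version string is a placeholder, so check ha help before relying on it):

# show the currently installed Home Assistant Core version
ha core info
# roll Core back to a known-good release
ha core update --version <known-good-version>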
Like in Debian, you can add additional repositories to expand your list of available addons. Unlike Debian, most of the amazing software that is available for Home Assistant is outside the main, official addon store. For now I have added the most popular addon repository - HACS (Home Assistant Community Store) - and the repository maintained by Alexbelgium. The first includes things like NodeRED (a workflow-based automation programming UI), Tailscale/Wirescale for VPN servers, motionEye for CCTV control, Plex for home streaming. HACS also includes a lot of HA UI enhancement modules, like themes, custom UI control panels like Mushroom or mini-graph-card, and integrations that provide more advanced functions but also require more knowledge to use, like Local Tuya - that is harder to set up, but allows fully local control of (normally) cloud-based devices. And it has AppDaemon - basically a Python-based automation framework where you put in Python scripts that get run in a special environment where they get fed events from Home Assistant and can trigger back events that can control everything HA can and also do anything Python can do. This I will need to explore later. And the repository by Alex includes the thing that is actually the focus of this blog post (I know :D) - a Firefly III addon and a Firefly Importer addon that you can then add to your Home Assistant OS with a few clicks. It also has all kinds of addons for NAS management, photo/video servers, book servers and Portainer, which lets us set up and run any Docker container inside the HA OS structure. HA OS will detect this and warn you about unsupported processes running on your HA OS instance (nice security feature!), but you can just dismiss that. This will be very helpful very soon. This whole environment of OS and containers and apps really made me think - what was missing in Debian that made the talented developers behind all of that spend the immense time and effort to set up a completely new OS and app infrastructure and develop a completely parallel developer community for Home Assistant apps, interfaces and configurations? Is there anything that can still be done to bring the HA community and the general open source and Debian communities closer together? HA devs are not doing anything wrong: they are using the best open source can provide, they bring it to people who could not install and use it otherwise, they are contributing fixes and improvements as well. But there must be some way to do this better, together. So I installed MariaDB, created a user and database for Firefly. I installed Firefly III and configured it to use MariaDB via the web config UI. When I went into the Firefly III web UI I was confronted with the normal wizard to set up a new instance. And no reference to any backup restore. Hmm, ok. Maybe that goes via the Importer? So I made an access token again, configured the Importer to use that, configured the Nordlinger bank connection settings. Then I tried to import the export that I downloaded from Firefly III before. The importer did not auto-recognise the format. Turns out it is just a list of transactions ... It can only be barely useful if you first manually create all the asset accounts with the same names as before, and even then you'll again have to deal with resolving the problem of transfers showing up twice. And all of your categories (that have not been used yet) are gone, your automation rules and bills are gone, your budgets and piggy banks are gone. Boooo.
It will be easier for me to recreate my account data from bank exports again than to resolve data in that transaction export. Turns out that the Firefly III documentation explicitly recommends making a mysqldump of your own and not relying on anything in the app itself for backup purposes. Kind of sad this was not mentioned in the export page that sure looked a lot like a backup :D After doing all that work all over again I needed to make something new, not to feel like I wasted days of work for no real gain. So I started solving a problem I had had for a while already - how do I add cash transactions to the system when I am out of the house with just my phone in hand? So far my workaround has been just sending myself messages in WhatsApp with the amount and description of any cash expenses. Two solutions are possible: app and bot. There are actually multiple Android-based phone apps that work with the Firefly III API to do full financial management from the phone. However, after trying it out, that is not what I will be using most of the time. First of all this requires your Firefly III instance to be accessible from the Internet. Either via direct API access using some port forwarding and secured with HTTPS and good access tokens, or via a VPN server redirect that is installed on both HA and your phone. Tailscale was really easy to get working. But the power has its drawbacks - adding a new cash transaction requires opening the app, choosing the new transaction view, entering description and amount, choosing "Cash" as the source account and optionally choosing a destination expense account, choosing category and budget, and then submitting the form to the server. Sadly none of that really works if you have no Internet or bad Internet at the place where you are using cash. And it's just too many steps. Annoying. An easier alternative is setting up a Telegram bot - it runs in a custom Docker container right next to your Firefly (via Portainer) and you talk to it via a custom Telegram chat channel that you create very easily and quickly. And then you can just tell it "Coffee 5" and it will create a transaction from the (default) cash account with an amount of 5 and the description "Coffee". This part also works if you are offline at the moment - the bot will receive the message once you get back online. You can use the Telegram bot menu system to edit the transaction to add categories or expense accounts, but this part only works if you are online. And the Firefly instance does not have to be online at all. Really nifty. So next week I will need to write up all the regular payments as bills in Firefly (again) and then I can start writing a Python script to predict my (financial) future!
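For completeness, what such an app or bot ultimately does is one authenticated POST against the Firefly III API. A hedged sketch, assuming a personal access token and an instance reachable at firefly.example.lan; the endpoint and field names follow my recollection of the v1 transaction API, so verify them against the API documentation before relying on them:

# record "Coffee 5" as a cash withdrawal via the Firefly III API
curl -s -X POST 'http://firefly.example.lan/api/v1/transactions' \
  -H 'Authorization: Bearer YOUR_TOKEN' \
  -H 'Content-Type: application/json' \
  -d '{"transactions":[{"type":"withdrawal","date":"2023-10-22","amount":"5","description":"Coffee","source_name":"Cash"}]}'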

7 October 2023

Andrew Cater: Point release weekend for Debian: two releases this weekend: 202311071653

Over in Cambridge with RattusRattus, Sledge, egw and Isy. Andy is very kindly putting us up.

We're almost all of the way through testing 12.2 and some of the way through testing 11.8.

It's a LONG day - heads down into laptops and relatively quiet - I think we're all tired and we've a way to go yet.


10 September 2023

Freexian Collaborators: Debian Contributions: /usr-merge updates, Salsa CI progress, DebConf23 lead-up, and more! (by Utkarsh Gupta)

Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

/usr-merge work, by Helmut Grohne, et al. Given that we now have consensus on moving forward by moving aliased files from / to /usr, we will also run into the problems that the file move moratorium was meant to prevent. The way forward is detecting them early and applying workarounds on a per-package basis. Said detection is now automated using the Debian Usr Merge Analysis Tool. As problems are reported to the bug tracking system, they are connected to the reports if properly usertagged. Bugs and patches for problem categories DEP17-P2 and DEP17-P6 have been filed. After consensus has been reached on the bootstrapping matters, debootstrap has been changed to swap the initial unpack and merging to avoid unpack errors due to pre-existing links. This is a precondition for having base-files install the aliasing symbolic links eventually. It was identified that the root filesystem used by the Debian installer is still unmerged and a change has been proposed. debhelper was changed to recognize systemd units installed to /usr. A discussion with the CTTE and release team on repealing the moratorium has been initiated.
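For anyone unfamiliar with what "aliased" means here: on a merged-/usr system the affected top-level directories are just symbolic links into /usr, which is easy to check on any Debian installation. A small sketch (the example output in the comment is what a typical merged system prints; details vary):

# on a merged-/usr system these are symlinks into /usr
ls -ld /bin /sbin /lib
# e.g. /bin -> usr/bin, /sbin -> usr/sbin, /lib -> usr/lib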

Salsa CI work, by Santiago Ruano Rincón August was a busy month in the Salsa CI world. Santiago reviewed and merged a bunch of MRs that have improved the project in different aspects: The aptly job got two MRs from Philip Hands. With the first one, aptly can now export a couple of variables in a dotenv file, and with the second, it can include packages from multiple artifact directories. These MRs lay the groundwork for improving how to test reverse dependencies with Salsa CI. Santiago is working on documenting this. As a result of the mass bug filing done in August, Salsa CI now includes a job to test how a package builds twice in a row, thanks to MRs from Sebastiaan Couwenberg and Johannes Schauer Marin Rodrigues. Last but not least, Santiago helped Johannes Schauer Marin Rodrigues to complete the support for arm64-only pipelines.

DebConf23 lead-up, by Stefano Rivera Stefano wears a few hats in the DebConf organization, and in the lead-up to the conference in mid-September they've all been quite busy. As one of the treasurers of DebConf 23, there has been a final budget update, and quite a few payments to coordinate from Debian's Trusted Organizations. We try to close the books from the previous conference at the next one, so a push was made to get DebConf 22 account statements out of TOs and record them in the conference ledger. As a website developer, we had a number of registration-related tasks, emailing attendees and trying to estimate numbers for food and accommodation. As a conference committee member, the job was mostly taking calls and helping the local team to make decisions on urgent issues. For example, getting conference visas issued to attendees required getting political approval from the Indian government. We only discovered the full process for this too late to clear some complex cases, so this required some hard calls on skipping some countries from the application list, allowing everyone else to get visas in time. Unfortunate, but necessary.

Miscellaneous contributions
  • Raphaël Hertzog updated gnome-shell-extension-hamster to a new upstream git snapshot that is compatible with GNOME Shell 44; it was recently uploaded to Debian unstable/testing. This extension makes it easy to start/stop tracking time with Hamster Time Tracker. Very handy for consultants like us who are billing their work per hour.
  • Raphaël also updated zim to the latest upstream release (0.74.2). This is a desktop wiki that can be very useful as a note-taking tool to build your own personal knowledge base or even to manage your personal todo lists.
  • Utkarsh reviewed and sponsored some uploads from mentors.debian.net.
  • Utkarsh helped the local team and the bursary team with some more DebConf activities and helped finalize the data.
  • Thorsten tried to update package hplip. Unfortunately upstream added some new compressed files that need to appear uncompressed in the package. Even though this sounded like an easy task, which seemed to be already implemented in the current debian/rules, the new type of files broke this implementation and made the package no longer buildable. The problem has been solved and the upload will happen soon.
  • Helmut sent 7 patches for cross build failures. Since dpkg-buildflags now defaults to issue arm64-specific compiler flags, more care is needed to distinguish between build architecture flags and host architecture flags than previously.
  • Stefano pushed the final bit of the tox 4 transition over the line in Debian, allowing dh-python and tox 4 to migrate to testing. We got caught up in a few unusual bugs in tox and the way we run it in Debian package building (which had to change with tox 4). This resulted in a couple of patches upstream.
  • Stefano visited Haifa, Israel, to see the proposed DebConf 24 venue and meet with the local team. While the venue isn't committed yet, we have high hopes for it.

1 September 2023

Simon Josefsson: Trisquel on ppc64el: Talos II

The release notes for Trisquel 11.0 Aramo mention support for POWER and ARM architectures, however the download area only contains links for x86, and forum posts suggest there is a lack of instructions on how to run Trisquel on non-x86. Since the release of Trisquel 11 I have been busy migrating x86 machines from Debian to Trisquel. One would think that I would be finished after this time period, but re-installing and migrating machines is really time consuming, especially if you allow yourself to be distracted every time you notice something that Really Ought to be improved. Rabbit holes all the way down. One of my production machines is running Debian 11 bullseye on a Talos II Lite machine from Raptor Computing Systems, and migrating the virtual machines running on that host (including the VM that serves this blog) to an x86 machine running Trisquel felt unsatisfying to me. I want to migrate my computing towards hardware that harmonizes with the FSF's Respects Your Freedom and not away from it. Here I had to choose between using the non-free software present in newer Debian or the non-free software implied by most x86 systems: not an easy choice. So I have ignored the dilemma for some time. After all, the machine was running Debian 11 bullseye, which was released before Debian started to require use of non-free software. With the end-of-life date for bullseye approaching, it seems that this isn't a sustainable choice. There is a report open about providing ppc64el ISOs that was created by Jason Self shortly after the release, but for many months nothing happened. About a month ago, Luis Guzmán mentioned an initial ISO build and I started testing it. The setup has worked well for a month, and with this post I want to contribute instructions on how to get it up and running since this is still missing. The setup of my soon-to-be new production machine: According to the notes in issue 14 the ISO image is available at https://builds.trisquel.org/debian-installer-images/ and the following commands download, integrity-check and write it to a USB stick:
wget -q https://builds.trisquel.org/debian-installer-images/debian-installer-images_20210731+deb11u8+11.0trisquel14_ppc64el.tar.gz
tar xfa debian-installer-images_20210731+deb11u8+11.0trisquel14_ppc64el.tar.gz ./installer-ppc64el/20210731+deb11u8+11/images/netboot/mini.iso
echo '6df8f45fbc0e7a5fadf039e9de7fa2dc57a4d466e95d65f2eabeec80577631b7  ./installer-ppc64el/20210731+deb11u8+11/images/netboot/mini.iso' | sha256sum -c
sudo wipefs -a /dev/sdX
sudo dd if=./installer-ppc64el/20210731+deb11u8+11/images/netboot/mini.iso of=/dev/sdX conv=sync status=progress
Sadly, no hash checksums or OpenPGP signatures are published. Power off your device, insert the USB stick, and power it up, and you will see a Petitboot menu offering to boot from the USB stick. For some reason, "Expert Install" was the default in the menu, so instead I selected "Default Install" for the regular experience. For this post, I will ignore BMC/IPMI, as interacting with it is not necessary. Make sure not to connect the BMC/IPMI ethernet port unless you are willing to enter that dungeon. The VGA console works fine with a normal USB keyboard, and you can choose to use only the second enP4p1s0f1 network card in the network card selection menu. If you are familiar with Debian netinst ISOs, the installation is straightforward. I complicated the setup by partitioning two RAID1 partitions on the two NVMe sticks, one RAID1 for a 75GB ext4 root filesystem (discard,noatime) and one RAID1 for a 900GB LVM volume group for virtual machines, plus two 20GB swap partitions, one on each of the NVMe sticks (to silence a warning about lack of swap; I'm not sure swap is still a good idea?). The 3x18TB disks use DM-integrity with RAID1; however, the installer does not support DM-integrity, so I had to create it after the installation (a sketch of this step follows at the end of this post). There are two additional matters worth mentioning: I have re-installed the machine a couple of times, and have now finished installing the production setup. I haven't run into any serious issues, and the system has been stable. Time to wrap up, and celebrate that I now run an operating system aligned with the Free System Distribution Guidelines on hardware that aligns with Respects Your Freedom. Happy Hacking indeed!
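Here is a minimal sketch of the post-installation DM-integrity step mentioned above (device names, partition numbers and the array name are assumptions, not taken from the post, and the real setup spans three 18TB disks rather than the two shown here):
# Hedged sketch: wrap each member partition in dm-integrity, then assemble
# the mdadm RAID1 on top of the resulting /dev/mapper devices.
sudo integritysetup format /dev/sdY1
sudo integritysetup open /dev/sdY1 int-sdY1
sudo integritysetup format /dev/sdZ1
sudo integritysetup open /dev/sdZ1 int-sdZ1
sudo mdadm --create /dev/md/storage --level=1 --raid-devices=2 /dev/mapper/int-sdY1 /dev/mapper/int-sdZ1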

28 August 2023

Jonathan McDowell: OMGWTFBBQ 2023

[Photo: a person wearing an OMGWTFBBQ Catering t-shirt standing in front of a BBQ, with various uncooked burgers visible.] As is traditional for the UK August Bank Holiday weekend, I made my way to Cambridge for the Debian UK BBQ. As was pointed out, we've been doing this for more than 20 years now, and it's always good to catch up with old friends and meet new folk. Thanks to Collabora, Codethink, and Andy for sponsoring a bunch of tasty refreshments. And, of course, thanks to Steve for hosting us all.

27 August 2023

Steve McIntyre: We're back!

It's August Bank Holiday Weekend and we're in Cambridge. It must be the Debian UK OMGWTFBBQ! We're about halfway through, and we've already polished off lots and lots of good food and beer. Lars is making pancakes as I write this. :-) We had an awesome game of Mao last night. People are having fun! Many thanks to a number of awesome friendly people for again sponsoring the important refreshments for the weekend. It's hungry/thirsty work, celebrating like this!

Gunnar Wolf: Interested in adopting the RPi images for Debian?

Back in June 2018, Michael Stapelberg put the Raspberry Pi image building up for adoption. He created the first set of unofficial, experimental Raspberry Pi images for Debian. I promptly answered him, and while it took me some time to actually wrap my head around Michael's work, I managed to eventually do so. By December, I started pushing some updates. Not only that: I didn't think much about it in the beginning, as the needed non-free package was called raspi3-firmware, but by early 2019 I had it running for all of the then-available Raspberry families (so the package was naturally renamed to raspi-firmware). I got my Raspberry Pi 4 at DebConf19 (thanks to Andy, who brought it from Cambridge), and it soon joined the happy Debian family. The images are built daily, and are available at https://raspi.debian.net. In the process, I also adopted Lars' great vmdb2 image building tool, and have kept it decently up to date (yes, I'm currently lagging behind, but I'll get to it soonish). Anyway... This year, I have been seriously neglecting the Raspberry builds. I have simply not had time to regularly test built images, nor to debug why the builder has not picked up building for trixie (testing). And my time availability is not going to improve any time soon. We are close to one month away from moving for six months to Paraná (Argentina), where I'll be focusing on my PhD. And while I do contemplate taking my Raspberries along, I do not foresee being able to put much energy into them. So... This is basically a call for adoption for the Raspberry Debian images building service. I do intend to stick around and try to help. It's not only me (although I'm responsible for the build itself); we have a nice and healthy group of Debian people hanging out in the #debian-raspberrypi channel on OFTC IRC. Don't be afraid, and come ask. I hope giving this project up for adoption will breathe new life into it!
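For anyone who wants to try the images mentioned above, a hedged sketch of fetching and writing one to an SD card (the file name, URL path and target device are assumptions; check https://raspi.debian.net for the actual download links):
# Hypothetical example: download a daily image and write it to an SD card.
wget https://raspi.debian.net/daily/raspi_4_bookworm.img.xz
xzcat raspi_4_bookworm.img.xz | sudo dd of=/dev/sdX bs=4M conv=fsync status=progress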
