incoming(), which summarises the state of the incoming directories at CRAN. I happen to like having these things at my (shell) fingertips, so it goes along with the (still draft) wrapper ciw.r that will be part of the next littler release. For example, when I do this right now as I type this, I see:
edd@rob:~$ ciw.r
Folder Name Time Size Age
<char> <char> <POSc> <char> <difftime>
1: waiting maximin_1.0-5.tar.gz 2024-03-13 22:22:00 20K 2.48 hours
2: inspect GofCens_0.97.tar.gz 2024-03-13 21:12:00 29K 3.65 hours
3: inspect verbalisr_0.5.2.tar.gz 2024-03-13 20:09:00 79K 4.70 hours
4: waiting rnames_1.0.1.tar.gz 2024-03-12 15:04:00 2.7K 33.78 hours
5: waiting PCMBase_1.2.14.tar.gz 2024-03-10 12:32:00 406K 84.32 hours
6: pending MPCR_1.1.tar.gz 2024-02-22 11:07:00 903K 493.73 hours
edd@rob:~$
r. Good enough for me. From a well-connected EC2 instance it is about 800ms on the command-line. When I do it from here inside an R session it is maybe 700ms. And doing it over in Europe is faster still. (I am using ping=FALSE for these to omit the default sanity check of can I haz networking? to speed things up. The check adds another 200ms or so.)
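As a rough illustration (not from the post itself, and timings will obviously vary with machine and network), one could time the default invocation, which skips the connectivity check unless -z is given:
$ time ciw.r -l 5     # default folders, top five rows, no connectivity check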
The function (and the wrapper) offer a ton of options too; this is ridiculously easy to do thanks to the docopt package:
edd@rob:~$ ciw.r -x
Usage: ciw.r [-h] [-x] [-a] [-m] [-i] [-t] [-p] [-w] [-r] [-s] [-n] [-u] [-l rows] [-z] [ARG...]
-m --mega use 'mega' mode of all folders (see --usage)
-i --inspect visit 'inspect' folder
-t --pretest visit 'pretest' folder
-p --pending visit 'pending' folder
-w --waiting visit 'waiting' folder
-r --recheck visit 'recheck' folder
-a --archive visit 'archive' folder
-n --newbies visit 'newbies' folder
-u --publish visit 'publish' folder
-s --skipsort skip sorting of aggregate results by age
-l --lines rows print top 'rows' of the result object [default: 50]
-z --ping run the connectivity check first
-h --help show this help text
-x --usage show help and short example usage
where ARG... can be one or more file names, directories, or package names.
Examples:
ciw.r -ip # run in 'inspect' and 'pending' mode
ciw.r -a # run with mode 'auto' resolved in incoming()
ciw.r # run with defaults, same as '-itpwr'
When no argument is given, 'auto' is selected, which corresponds to 'inspect', 'waiting',
'pending', 'pretest', and 'recheck'. Selecting '-m' or '--mega' selects all folders.
Folder-selecting arguments are cumulative, but 'mega' is a single selection of all folders
(i.e. 'inspect', 'waiting', 'pending', 'pretest', 'recheck', 'archive', 'newbies', 'publish').
ciw.r is part of littler which brings 'r' to the command-line.
See https://dirk.eddelbuettel.com/code/littler.html for more information.
edd@rob:~$
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
hz.tools posts will be tagged #hztools.
The observed value at each antenna is the cos and sin of the multiplied phase (in the range of 0 to tau), assuming the transmitter is emitting a carrier wave at a static amplitude and all clocks are in perfect sync.
let observed_phases: Vec<Complex> = antennas
    .iter()
    .map(|antenna| {
        let distance = (antenna - tx).magnitude();
        let distance = distance - (distance as i64 as f64);
        (distance / wavelength) * TAU
    })
    .map(|phase| Complex(phase.cos(), phase.sin()))
    .collect();
let beamformed_phases: Vec<Complex> = ...;
let magnitude = beamformed_phases
    .iter()
    .zip(observed_phases.iter())
    .map(|(beamformed, observed)| observed * beamformed)
    .reduce(|acc, el| acc + el)
    .unwrap()
    .abs();
Each result is plotted as an (x, y, z) point at (azimuth, elevation, magnitude). The color attached to that point is based on its distance from (0, 0, 0). I opted to use the Life Aquatic table for this one.
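For reference, one common convention for that mapping (the post does not spell out which one it uses) is:
x = magnitude * cos(elevation) * cos(azimuth)
y = magnitude * cos(elevation) * sin(azimuth)
z = magnitude * sin(elevation)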
After this process is complete, I have a point cloud of ((x, y, z), (r, g, b)) points. I wrote a small program using kiss3d to render the point cloud using tons of small spheres, and write out the frames to a set of PNGs, which get compiled into a GIF.
Now for the fun part, let's take a look at some radiation patterns!
The antennas all share the same position on the y and z axis, and are separated by some offset in the x axis. This configuration can sweep 180 degrees (not the full 360), but can't be steered in elevation at all.
Let's take a look at what this looks like for a well constructed 1x4 phased array:
And now let's take a look at the renders as we play with the configuration of this array and make sure things look right. Our initial quarter-wavelength spacing is very effective and has some outstanding performance characteristics. Let's check to see that everything looks right as a first test.
Nice. Looks perfect. When pointing forward at (0, 0), we'd expect to see a torus, which we do. As we sweep between 0 and 360, astute observers will notice the pattern is mirrored along the axis of the antennas: when the beam is facing forward to 0 degrees, it'll also receive at 180 degrees just as strongly. There's a small sidelobe that forms when it's configured along the array, but it also becomes the most directional, and the sidelobes remain fairly small.
z axis, and separated by a fixed offset in either the x or y axis from their neighbor, forming a square when viewed along the x/y axis.
Let's take a look at what this looks like for a well constructed 2x2 phased array:
Let's do the same as above and take a look at the renders as we play with the configuration of this array and see what things look like. This configuration should suppress the sidelobes and give us good performance, and even give us some amount of control in elevation while we're at it.
Sweet. Heck yeah. The array is quite directional in the configured direction, and can even sweep a little bit in elevation, a definite improvement over the 1x4 above.
linux-image-armmp-lpae kernel per the ARMMP page)
There is another wrinkle: Debian doesn't support running 32-bit ARM kernels on 64-bit ARM CPUs, though it does support running a 32-bit userland on them. So we will wind up with a system with kernel packages from arm64 and everything else from armhf. This is a perfectly valid configuration, as arm64, like x86_64, is multiarch (that is, the CPU can natively execute both the 32-bit and 64-bit instructions).
(It is theoretically possible to crossgrade a system from 32-bit to 64-bit userland, but that felt like a rather heavy lift for dubious benefit on a Pi; nevertheless, if you want to make this process even more complicated, refer to the CrossGrading page.)
Run df -h /boot to see how big it is. I recommend 200MB at minimum. If your /boot is smaller than that, stop now (or use some other system to shrink your root filesystem and rearrange your partitions; I've done this, but it's outside the scope of this article.)
You need to have stable power. Once you begin this process, your pi will mostly be left in a non-bootable state until you finish. (You did make a backup, right?)
Raspberry Pi OS has a default user pi that can use sudo to gain root without a password. I think this is an insecure practice, but assuming you haven't changed it, you will need to ensure it still works once you move to Debian. Raspberry Pi OS had a patch in their sudo package to enable it, and that will be removed when Debian's sudo package is installed. So, put this in /etc/sudoers.d/010_picompat:
pi ALL=(ALL) NOPASSWD: ALL
passwd command as root to do so.
hcitool dev > /root/bluetooth-from-raspbian.txt
I don't use Bluetooth, but this should let you develop a script to bring it up properly.
wget http://http.us.debian.org/debian/pool/main/d/debian-archive-keyring/debian-archive-keyring_2023.3+deb12u1_all.deb
Use sha256sum to verify the checksum of the downloaded file, comparing it to the package page on the Debian site.
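For instance (illustrative; compare the output against the checksum listed on packages.debian.org):
sha256sum debian-archive-keyring_2023.3+deb12u1_all.deb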
Now, you'll install it with:
dpkg -i debian-archive-keyring_2023.3+deb12u1_all.deb
Now, edit /etc/apt/sources.list and all the files in /etc/apt/sources.list.d. Most likely you will want to delete or comment out all lines in all files there. Replace them with something like:
deb http://deb.debian.org/debian/ bookworm main non-free-firmware contrib non-free
deb http://security.debian.org/debian-security bookworm-security main non-free-firmware contrib non-free
deb https://deb.debian.org/debian bookworm-backports main non-free-firmware contrib non-free
dpkg --add-architecture arm64
apt-get update
The firmware partition is mounted at /boot by Raspberry Pi OS, but Debian's scripts assume it will be at /boot/firmware. We need to fix this. First:
umount /boot
mkdir /boot/firmware
Next, edit /etc/fstab and change the mount point for /boot to be /boot/firmware. Now:
mount -v /boot/firmware
cd /boot/firmware
mv -vi * ..
We will come back to /boot/firmware later.
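Going back to that fstab edit: the relevant line would end up looking something like this afterwards (the PARTUUID here is purely illustrative; keep whatever device identifier your fstab already uses, and only change the mount point):
PARTUUID=abcdef00-01  /boot/firmware  vfat  defaults  0  2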
apt-get install linux-image-arm64
apt-get install firmware-brcm80211=20230210-5
apt-get install raspi-firmware
If that fails, re-run the apt-get install firmware-brcm80211 command and then proceed. There are a few packages that Raspbian marked as newer than the version in bookworm (whether or not they really are), and that's one of them.
Now, we need to configure /etc/default/raspi-firmware before proceeding. Edit that file.
First, uncomment (or add) a line like this:
KERNEL_ARCH="arm64"
In /boot/cmdline.txt you can find your old Raspbian boot command line. It will say something like:
root=PARTUUID=...
Note the PARTUUID. Back in /etc/default/raspi-firmware, set a line like this:
ROOTPART=PARTUUID=abcdef00
The device name may change from /dev/mmcblk0 to /dev/mmcblk1 when switching to Debian's kernel. raspi-firmware will encode the current device name in /boot/firmware/cmdline.txt by default, which will be wrong once you boot into Debian's kernel. The PARTUUID approach lets it work regardless of the device name.
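If you want to double-check the PARTUUID of your root partition rather than copying it from cmdline.txt, something like this works (the device name is illustrative; use whatever your root partition actually is):
blkid /dev/mmcblk0p2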
dpkg --purge raspberrypi-kernel
apt-get -u upgrade
apt full-upgrade
apt-get --purge autoremove
apt list '~o'
apt purge '~o'
apt-get --purge remove bluez
apt-get install wpasupplicant parted dosfstools wireless-tools iw alsa-tools
apt-get --purge autoremove
apt-get install firmware-linux
firmware-atheros, firmware-libertas, and firmware-realtek.
Here's how to resolve it, with firmware-realtek as an example:
Visit https://packages.debian.org/PACKAGENAME (for instance, https://packages.debian.org/firmware-realtek). Note the version number in bookworm; in this case, 20230210-5.
apt-get install firmware-realtek=20230210-5
Then re-run apt-get install firmware-linux and make sure it runs cleanly.
apt-get install firmware-atheros firmware-libertas firmware-realtek firmware-linux
apt list '?narrow(?installed, ?not(?origin(Debian)))'
You can add --mark-auto to your apt-get install command line to allow the package to be autoremoved later if the things depending on it go away.
If you aren't going to use Bluetooth, I recommend apt-get --purge remove bluez as well. Sometimes it can hang at boot if you don't fix it up as described above.
Now, set up networking using files in /etc/network/interfaces.d. First, eth0 should look like this:
allow-hotplug eth0
iface eth0 inet dhcp
iface eth0 inet6 auto
wlan0 should look like this:
allow-hotplug wlan0
iface wlan0 inet dhcp
wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
Check the interface names with ifconfig or ip addr. If you see a long-named interface such as enx<something> or wlp<something>, copy the eth0 file to one named after the enx interface, or the wlan0 file to one named after the wlp interface, and edit the internal references to eth0/wlan0 in this new file to use the long interface name.
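For example (the interface name here is made up; substitute whatever ip addr actually reports):
cp /etc/network/interfaces.d/eth0 /etc/network/interfaces.d/enxb827eb123456
sed -i 's/eth0/enxb827eb123456/g' /etc/network/interfaces.d/enxb827eb123456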
If using wifi, verify that your SSIDs and passwords are in /etc/wpa_supplicant/wpa_supplicant.conf. It should have lines like:
network={
    ssid="NetworkName"
    psk="passwordHere"
}
Debian uses isc-dhcp-client rather than dhcpcd. Verify the system is in the correct state:
apt-get install isc-dhcp-client
apt-get --purge remove dhcpcd dhcpcd-base dhcpcd5 dhcpcd-dbus
apt-get install sysfsutils
Then put this in a file at /etc/sysfs.d/local-raspi-leds.conf:
class/leds/ACT/brightness = 1
class/leds/ACT/trigger = mmc1
To make sure the /boot/firmware files are updated, run update-initramfs -u. Verify that root in /boot/firmware/cmdline.txt references the PARTUUID as appropriate. Verify that /boot/firmware/config.txt contains the lines arm_64bit=1 and upstream_kernel=1. If not, go back to the section on modifying /etc/default/raspi-firmware and fix it up.
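A quick, illustrative way to run those checks (just a sketch; adapt as needed):
update-initramfs -u
grep -o 'root=[^ ]*' /boot/firmware/cmdline.txt
grep -E 'arm_64bit|upstream_kernel' /boot/firmware/config.txt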
reboot
/boot/firmware FAT partition. Otherwise, you've at least got the kernel going and can troubleshoot like usual from there.
debci was initially announced on that month's Misc Developer News, and later uploaded to Debian. It's been continuously developed for the last 10 years, evolved from a single shell script running tests in a loop into a distributed system with 47 geographically-distributed machines as of writing this piece, became part of the official Debian release process gating migrations to testing, had 5 Summer of Code and Outreachy interns working on it, and processed more than 40 million test runs.
In those years, Debian CI has received contributions from a lot of people, but I would like to give special credit to the following:
Plug in a USB stick - use dmesg or your favourite method to see how it is identified.
Make a couple of mount points under /mnt - /mnt/data and /mnt/cdrom
1. Grab a USB stick and partition it using MBR. Make a single VFAT
partition, type 0xEF (i.e. EFI System Partition).
For a USB stick (identified as sdX) below:
$ sudo parted --script /dev/sdX mklabel msdos
$ sudo parted --script /dev/sdX mkpart primary fat32 0% 100%
$ sudo mkfs.vfat /dev/sdX1
$ sudo mount /dev/sdX1 /mnt/data/
Download an arm64 netinst.iso
https://cdimage.debian.org/debian-cd/current/arm64/iso-cd/debian-12.2.0-arm64-netinst.iso
2. Copy the complete contents of partition *1* from a Debian arm64
installer image into the filesystem (partition 1 is the installer
stuff itself) on the USB stick, in /
$ sudo kpartx -v -a debian-12.2.0-arm64-netinst.iso
# Mount the first partition on the ISO and copy its contents to the stick
$ sudo mount /dev/mapper/loop0p1 /mnt/cdrom/
$ sudo rsync -av /mnt/cdrom/ /mnt/data/
$ sudo umount /mnt/cdrom
3. Copy the complete contents of partition *2* from that Debian arm64
installer image into that filesystem (partition 2 is the ESP) on
the USB stick, in /
# Same story with the second partition on the ISO
$ sudo mount /dev/mapper/loop0p2 /mnt/cdrom/
$ sudo rsync -av /mnt/cdrom/ /mnt/data/
$ sudo umount /mnt/cdrom
$ sudo kpartx -d debian-12.2.0-arm64-netinst.iso
$ sudo umount /mnt/data
4. Grab the rpi edk2 build from https://github.com/pftf/RPi4/releases
(I used 1.35) and extract it. I copied the files there into *2*
places for now on the USB stick:
/ (so the Pi will boot using it)
/rpi4 (so we can find the files again later)
5. Add the preseed.cfg file (attached) into *both* of the two initrd
files on the USB stick
- /install.a64/initrd.gz and
- /install.a64/gtk/initrd.gz
cpio is an awful tool to use :-(. In each case:
$ cp /path/to/initrd.gz .
$ gunzip initrd.gz
$ echo preseed.cfg | cpio -H newc -o -A -F initrd
$ gzip -9v initrd
$ cp initrd.gz /path/to/initrd.gz
If you look at the preseed file, it will do a few things (a rough sketch of the corresponding directives follows after this list):
- Use an early_command to unmount /media (to work around Debian bug
#1051964)
- Register a late_command call for /cdrom/finish-rpi (the next
file - see below) to run at the end of the installation.
- Force grub installation also to the EFI removable media path,
needed as the rpi doesn't store EFI boot variables.
- Stop the installer asking for firmware from removable media (as
the rpi4 will ask for broadcom bluetooth fw that we can't
ship. Can be ignored safely.)
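I don't have the attached preseed.cfg in front of me, but based on that description it would contain directives along these lines (a sketch only; the exact commands and paths are assumptions):
# work around Debian bug #1051964 by unmounting /media early on
d-i preseed/early_command string umount /media || true
# run the finish-rpi script from the installer media at the end of the install
d-i preseed/late_command string /cdrom/finish-rpi
# also install grub to the removable media path (the rpi4 keeps no EFI boot variables)
d-i grub-installer/force-efi-extra-removable boolean true
# don't prompt for (Broadcom Bluetooth) firmware from removable media
d-i hw-detect/load_firmware boolean false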
6. Copy the finish-rpi script (attached) into / on the USB stick. It
will be run at the end of the installation, triggered via the
preseed. It does a couple of things:
- Copy the edk2 firmware files into the ESP on the system that's
just been installed
- Remove shim-signed from the installed system, as there's a bug
that causes it to fail on rpi4. I need to dig into this to see
what the issue is.
That's it! Run the installer as normal, all should Just Work (TM).
Bluetooth didn't quite work: raspberrypi-firmware didn't install until adding a symlink for /boot/efi to /boot/firmware
20231127 - This may not be necessary because raspberrypi-firmware path has been fixed
.gitignore
file, was bug 774109. It added a script to install the prerequisites to build Firefox on macOS (still called OSX back then), and that would print a message inviting people to obtain a copy of the source code with either Mercurial or Git. That was a precursor to the current bootstrap.py, from September 2012.
Following that, as far as I can tell, the first real incursion of Git in the Firefox source tree tooling happened in bug 965120. A few days earlier, bug 952379 had added a mach clang-format
command that would apply clang-format-diff
to the output from hg diff
. Obviously, running hg diff
on a Git working tree didn't work, and bug 965120 was filed, and support for Git was added there. That was in January 2014.
A year later, when the initial implementation of mach artifact
was added (which ultimately led to artifact builds), Git users were an immediate thought. But while they were considered, it was not to support them, but to avoid actively breaking their workflows. Git support for mach artifact
was eventually added 14 months later, in March 2016.
From gecko-dev to git-cinnabar
Let's step back a little here, back to the end of 2014. My user experience with Mercurial had reached a level of dissatisfaction that was enough for me to decide to take that script from a couple years prior and make it work for incremental updates. That meant finding a way to store enough information locally to be able to reconstruct whatever the incremental updates would be relying on (guess why other tools hid a local Mercurial clone under the hood). I got something working rather quickly, and after talking to a few people about this side project at the Mozilla Portland All Hands and seeing their excitement, I published a git-remote-hg initial prototype on the last day of the All Hands.
Within weeks, the prototype gained the ability to directly push to Mercurial repositories, and a couple months later, was renamed to git-cinnabar. At that point, as a Git user, instead of cloning the gecko-dev repository from GitHub and switching to a local Mercurial repository whenever you needed to push to a Mercurial repository (i.e. the aforementioned Try server, or, at the time, for reviews), you could just clone and push directly from/to Mercurial, all within Git. And it was fast too. You could get a full clone of mozilla-central in less than half an hour, when at the time, other similar tools would take more than 10 hours (needless to say, it's even worse now).
Another couple months later (we're now at the end of April 2015), git-cinnabar became able to start off a local clone of the gecko-dev repository, rather than clone from scratch, which could be time consuming. But because git-cinnabar and the tool that was updating gecko-dev weren't producing the same commits, this setup was cumbersome and not really recommended. For instance, if you pushed something to mozilla-central with git-cinnabar from a gecko-dev clone, it would come back with a different commit hash in gecko-dev, and you'd have to deal with the divergence.
Eventually, in April 2020, the scripts updating gecko-dev were switched to git-cinnabar, making the use of gecko-dev alongside git-cinnabar a more viable option. Ironically(?), the switch occurred to ease collaboration with KaiOS (you know, the mobile OS born from the ashes of Firefox OS). Well, okay, in all honesty, when the need of syncing in both directions between Git and Mercurial (we only had ever synced from Mercurial to Git) came up, I nudged Mozilla in the direction of git-cinnabar, which, in my (biased but still honest) opinion, was the more reliable option for two-way synchronization (we did have regular conversion problems with hg-git, nothing of the sort has happened since the switch).
One Firefox repository to rule them all
For reasons I don't know, Mozilla decided to use separate Mercurial repositories as "branches". With the switch to the rapid release process in 2011, that meant one repository for nightly (mozilla-central), one for aurora, one for beta, and one for release. And with the addition of Extended Support Releases in 2012, we now add a new ESR repository every year. Boot to Gecko also had its own branches, and so did Fennec (Firefox for Mobile, before Android). There are a lot of them.
And then there are also integration branches, where developer's work lands before being merged in mozilla-central (or backed out if it breaks things), always leaving mozilla-central in a (hopefully) good state. Only one of them remains in use today, though.
I can only suppose that the way Mercurial branches work was not deemed practical. It is worth noting, though, that Mercurial branches are used in some cases, to branch off a dot-release when the next major release process has already started, so it's not a matter of not knowing the feature exists or some such.
In 2016, Gregory Szorc set up a new repository that would contain them all (or at least most of them), which eventually became what is now the mozilla-unified repository. This would e.g. simplify switching between branches when necessary.
7 years later, for some reason, the other "branches" still exist, but most developers are expected to be using mozilla-unified. Mozilla's CI also switched to using mozilla-unified as base repository.
Honestly, I'm not sure why the separate repositories are still the main entry point for pushes, rather than going directly to mozilla-unified, but it probably comes down to switching being work, and not being a top priority. Also, it probably doesn't help that working with multiple heads in Mercurial, even (especially?) with bookmarks, can be a source of confusion. To give an example, if you aren't careful, and do a plain clone of the mozilla-unified repository, you may not end up on the latest mozilla-central changeset, but rather, e.g. one from beta, or some other branch, depending which one was last updated.
Hosting is simple, right?
Put your repository on a server, install hgweb or gitweb, and that's it? Maybe that works for... Mercurial itself, but that repository "only" has slightly over 50k changesets and less than 4k files. Mozilla-central has more than an order of magnitude more changesets (close to 700k) and two orders of magnitude more files (more than 700k if you count the deleted or moved files, 350k if you count the currently existing ones).
And remember, there are a lot of "duplicates" of this repository. And I didn't even mention user repositories and project branches.
Sure, it's a self-inflicted pain, and you'd think it could probably(?) be mitigated with shared repositories. But consider the simple case of two repositories: mozilla-central and autoland. You make autoland use mozilla-central as a shared repository. Now, you push something new to autoland, it's stored in the autoland datastore. Eventually, you merge to mozilla-central. Congratulations, it's now in both datastores, and you'd need to clean-up autoland if you wanted to avoid the duplication.
Now, you'd think mozilla-unified would solve these issues, and it would... to some extent. Because that wouldn't cover user repositories and project branches briefly mentioned above, which in GitHub parlance would be considered as Forks. So you'd want a mega global datastore shared by all repositories, and repositories would need to only expose what they really contain. Does Mercurial support that? I don't think so (okay, I'll give you that: even if it doesn't, it could, but that's extra work). And since we're talking about a transition to Git, does Git support that? You may have read about how you can link to a commit from a fork and make-pretend that it comes from the main repository on GitHub? At least, it shows a warning, now. That's essentially the architectural reason why. So the actual answer is that Git doesn't support it out of the box, but GitHub has some backend magic to handle it somehow (and hopefully, other things like Gitea, Girocco, Gitlab, etc. have something similar).
Now, to come back to the size of the repository. A repository is not a static file. It's a server with which you negotiate what you have against what it has that you want. Then the server bundles what you asked for based on what you said you have. Or in the opposite direction, you negotiate what you have that it doesn't, you send it, and the server incorporates what you sent it. Fortunately the latter is less frequent and requires authentication. But the former is more frequent and CPU intensive. Especially when pulling a large number of changesets, which, incidentally, cloning is.
"But there is a solution for clones" you might say, which is true. That's clonebundles, which offload the CPU intensive part of cloning to a single job scheduled regularly. Guess who implemented it? Mozilla. But that only covers the cloning part. We actually had laid the ground to support offloading large incremental updates and split clones, but that never materialized. Even with all that, that still leaves you with a server that can display file contents, diffs, blames, provide zip archives of a revision, and more, all of which are CPU intensive in their own way.
And these endpoints are regularly abused, and cause extra load to your servers, yes plural, because of course a single server won't handle the load for the number of users of your big repositories. And because your endpoints are abused, you have to close some of them. And I'm not mentioning the Try repository with its tens of thousands of heads, which brings its own sets of problems (and it would have even more heads if we didn't fake-merge them once in a while).
Of course, all the above applies to Git (and it only gained support for something akin to clonebundles last year). So, when the Firefox OS project was stopped, there wasn't much motivation to continue supporting our own Git server, Mercurial still being the official point of entry, and git.mozilla.org was shut down in 2016.
The growing difficulty of maintaining the status quo
Slowly, but steadily in more recent years, as new tooling was added that needed some input from the source code manager, support for Git was more and more consistently added. But at the same time, as people left for other endeavors and weren't necessarily replaced, or more recently with layoffs, resources allocated to such tooling have been spread thin.
Meanwhile, the repository growth didn't take a break, and the Try repository was becoming an increasing pain, with push times quite often exceeding 10 minutes. The ongoing work to move Try pushes to Lando will hide the problem under the rug, but the underlying problem will still exist (although the last version of Mercurial seems to have improved things).
On the flip side, more and more people have been relying on Git for Firefox development, to my own surprise, as I didn't really push for that to happen. It just happened organically, by ways of git-cinnabar existing, providing a compelling experience to those who prefer Git, and, I guess, word of mouth. I was genuinely surprised when I recently heard the use of Git among moz-phab users had surpassed a third. I did, however, occasionally orient people who struggled with Mercurial and said they were more familiar with Git, towards git-cinnabar. I suspect there's a somewhat large number of people who never realized Git was a viable option.
But that, on its own, can come with its own challenges: if you use git-cinnabar without being backed by gecko-dev, you'll have a hard time sharing your branches on GitHub, because you can't push to a fork of gecko-dev without pushing your entire local repository, as they have different commit histories. And switching to gecko-dev when you weren't already using it requires some extra work to rebase all your local branches from the old commit history to the new one.
Clone times with git-cinnabar have also started to go a little out of hand in the past few years, but this was mitigated in a similar manner as with the Mercurial cloning problem: with static files that are refreshed regularly. Ironically, that made cloning with git-cinnabar faster than cloning with Mercurial. But generating those static files is increasingly time-consuming. As of writing, generating those for mozilla-unified takes close to 7 hours. I was predicting clone times over 10 hours "in 5 years" in a post from 4 years ago, I wasn't too far off. With exponential growth, it could still happen, although to be fair, CPUs have improved since. I will explore the performance aspect in a subsequent blog post, alongside the upcoming release of git-cinnabar 0.7.0-b1. I don't even want to check how long it now takes with hg-git or git-remote-hg (they were already taking more than a day when git-cinnabar was taking a couple hours).
I suppose it's about time that I clarify that git-cinnabar has always been a side-project. It hasn't been part of my duties at Mozilla, and the extent to which Mozilla supports git-cinnabar is in the form of taskcluster workers on the community instance for both git-cinnabar CI and generating those clone bundles. Consequently, that makes the above git-cinnabar specific issues a Me problem, rather than a Mozilla problem.
Taking the leap
I can't talk for the people who made the proposal to move to Git, nor for the people who put a green light on it. But I can at least give my perspective.
Developers have regularly asked why Mozilla was still using Mercurial, but I think it was the first time that a formal proposal was laid out. And it came from the Engineering Workflow team, responsible for issue tracking, code reviews, source control, build and more.
It's easy to say "Mozilla should have chosen Git in the first place", but back in 2007, GitHub wasn't there, Bitbucket wasn't there, and all the available options were rather new (especially compared to the then 21 years-old CVS). I think Mozilla made the right choice, all things considered. Had they waited a couple years, the story might have been different.
You might say that Mozilla stayed with Mercurial for so long because of the sunk cost fallacy. I don't think that's true either. But after the biggest Mercurial repository hosting service turned off Mercurial support, and the main contributor to Mercurial going their own way, it's hard to ignore that the landscape has evolved.
And the problems that we regularly encounter with the Mercurial servers are not going to get any better as the repository continues to grow. As far as I know, all the Mercurial repositories bigger than Mozilla's are... not using Mercurial. Google has its own closed-source server, and Facebook has another of its own, and it's not really public either. With resources spread thin, I don't expect Mozilla to be able to continue supporting a Mercurial server indefinitely (although I guess Octobus could be contracted to give a hand, but is that sustainable?).
Mozilla, being a champion of Open Source, also doesn't live in a silo. At some point, you have to meet your contributors where they are. And the Open Source world is now predominantly using Git. I'm sure the vast majority of new hires at Mozilla in the past, say, 5 years, know Git and have had to learn Mercurial (although they arguably didn't need to). Even within Mozilla, with thousands(!) of repositories on GitHub, Firefox is now actually the exception rather than the norm. I should even actually say Desktop Firefox, because even Mobile Firefox lives on GitHub (although Fenix is moving back in together with Desktop Firefox, and the timing is such that that will probably happen before Firefox moves to Git).
Heck, even Microsoft moved to Git!
With a significant developer base already using Git thanks to git-cinnabar, and all the constraints and problems I mentioned previously, it actually seems natural that a transition (finally) happens. However, had git-cinnabar or something similarly viable not existed, I don't think Mozilla would be in a position to take this decision. On one hand, it probably wouldn't be in the current situation of having to support both Git and Mercurial in the tooling around Firefox, nor the resource constraints related to that. But on the other hand, it would be farther from supporting Git and being able to make the switch in order to address all the other problems.
But... GitHub?
I hope I made a compelling case that hosting is not as simple as it can seem, at the scale of the Firefox repository. It's also not Mozilla's main focus. Mozilla has enough on its plate with the migration of existing infrastructure that does rely on Mercurial to understandably not want to figure out the hosting part, especially with limited resources, and with the mixed experience hosting both Mercurial and git has been so far.
After all, GitHub couldn't even display things like the contributors' graph on gecko-dev until recently, and hosting is literally their job! They still drop the ball on large blames (thankfully we have searchfox for those).
Where does that leave us? Gitlab? For those criticizing GitHub for being proprietary, that's probably not open enough. Cloud Source Repositories? "But GitHub is Microsoft" is a complaint I've read a lot after the announcement. Do you think Google hosting would have appealed to these people? Bitbucket? I'm kind of surprised it wasn't in the list of providers that were considered, but I'm also kind of glad it wasn't (and I'll leave it at that).
I think the only relatively big hosting provider that could have made the people criticizing the choice of GitHub happy is Codeberg, but I hadn't even heard of it before it was mentioned in response to Mozilla's announcement. But really, with literal thousands of Mozilla repositories already on GitHub, with literal tens of millions repositories on the platform overall, the pragmatic in me can't deny that it's an attractive option (and I can't stress enough that I wasn't remotely close to the room where the discussion about what choice to make happened).
"But it's a slippery slope". I can see that being a real concern. LLVM also moved its repository to GitHub (from a (I think) self-hosted Subversion server), and ended up moving off Bugzilla and Phabricator to GitHub issues and PRs four years later. As an occasional contributor to LLVM, I hate this move. I hate the GitHub review UI with a passion.
At least, right now, GitHub PRs are not a viable option for Mozilla, for their lack of support for security related PRs, and the more general shortcomings in the review UI. That doesn't mean things won't change in the future, but let's not get too far ahead of ourselves. The move to Git has just been announced, and the migration has not even begun yet. Just because Mozilla is moving the Firefox repository to GitHub doesn't mean it's locked in forever or that all the eggs are going to be thrown into one basket. If bridges need to be crossed in the future, we'll see then.
So, what's next?
The official announcement said we're not expecting the migration to really begin until six months from now. I'll swim against the current here, and say this: the earlier you can switch to git, the earlier you'll find out what works and what doesn't work for you, whether you already know Git or not.
While there is not one unique workflow, here's what I would recommend anyone who wants to take the leap off Mercurial right now:
Install git-cinnabar where mach bootstrap would install it:
$ mkdir -p ~/.mozbuild/git-cinnabar
$ cd ~/.mozbuild/git-cinnabar
$ curl -sOL https://raw.githubusercontent.com/glandium/git-cinnabar/master/download.py
$ python3 download.py && rm download.py
Add git-cinnabar to your PATH. Make sure to also set that wherever you keep your PATH up-to-date (.bashrc or wherever else).
$ PATH=$PATH:$HOME/.mozbuild/git-cinnabar
$ git init
$ git remote add origin https://github.com/mozilla/gecko-dev
$ git remote update origin
$ git remote set-url origin hg::https://hg.mozilla.org/mozilla-unified
$ git config --local remote.origin.cinnabar-refs bookmarks
$ git remote update origin --prune
$ git -c cinnabar.refs=heads fetch hg::$PWD refs/heads/default/*:refs/heads/hg/*
This will create a bunch of hg/<sha1>
local branches, not all relevant to you (some come from old branches on mozilla-central). Note that if you're using Mercurial MQ, this will not pull your queues, as they don't exist as heads in the Mercurial repo. You'd need to apply your queues one by one and run the command above for each of them.
$ git -c cinnabar.refs=bookmarks fetch hg::$PWD refs/heads/*:refs/heads/hg/*
This will create hg/<bookmark_name>
branches.
$ git reset $(git cinnabar hg2git $(hg log -r . -T '{node}'))
This will take a little moment because Git is going to scan all the files in the tree for the first time. On the other hand, it won't touch their content or timestamps, so if you had a build around, it will still be valid, and mach build
won't rebuild anything it doesn't have to.
$ git branch <branch_name> $(git cinnabar hg2git <hg_sha1>)
At this point, you should have everything available on the Git side, and you can remove the .hg
directory. Or move it into some empty directory somewhere else, just in case. But don't leave it here, it will only confuse the tooling. Artifact builds WILL be confused, though, and you'll have to ./mach configure
before being able to do anything. You may also hit bug 1865299 if your working tree is older than this post.
If you have any problem or question, you can ping me on #git-cinnabar or #git on Matrix. I'll put the instructions above somewhere on wiki.mozilla.org, and we can collaboratively iterate on them.
Now, what the announcement didn't say is that the Git repository WILL NOT be gecko-dev, doesn't exist yet, and WON'T BE COMPATIBLE (trust me, it'll be for the better). Why did I make you do all the above, you ask? Because that won't be a problem. I'll have you covered, I promise. The upcoming release of git-cinnabar 0.7.0-b1 will have a way to smoothly switch between gecko-dev and the future repository (incidentally, that will also allow to switch from a pure git-cinnabar clone to a gecko-dev one, for the git-cinnabar users who have kept reading this far).
What about git-cinnabar?
With Mercurial going the way of the dodo at Mozilla, my own need for git-cinnabar will vanish. Legitimately, this begs the question whether it will still be maintained.
I can't answer for sure. I don't have a crystal ball. However, the needs of the transition itself will motivate me to finish some long-standing things (like finalizing the support for pushing merges, which is currently behind an experimental flag) or implement some missing features (support for creating Mercurial branches).
Git-cinnabar started as a Python script, it grew a sidekick implemented in C, which then incorporated some Rust, which then cannibalized the Python script and took its place. It is now close to 90% Rust, and 10% C (if you don't count the code from Git that is statically linked to it), and has sort of become my Rust playground (it's also, I must admit, a mess, because of its history, but it's getting better). So the day to day use with Mercurial is not my sole motivation to keep developing it. If it were, it would stay stagnant, because all the features I need are there, and the speed is not all that bad, although I know it could be better. Arguably, though, git-cinnabar has been relatively stagnant feature-wise, because all the features I need are there.
So, no, I don't expect git-cinnabar to die along Mercurial use at Mozilla, but I can't really promise anything either.
Final words
That was a long post. But there was a lot of ground to cover. And I still skipped over a bunch of things. I hope I didn't bore you to death. If I did and you're still reading... what's wrong with you? ;)
So this is the end of Mercurial at Mozilla. So long, and thanks for all the fish. But this is also the beginning of a transition that is not easy, and that will not be without hiccups, I'm sure. So fasten your seatbelts (plural), and welcome the change.
To circle back to the clickbait title, did I really kill Mercurial at Mozilla? Of course not. But it's like I stumbled upon a few sparks and tossed a can of gasoline on them. I didn't start the fire, but I sure made it into a proper bonfire... and now it has turned into a wildfire.
And who knows? 15 years from now, someone else might be looking back at how Mozilla picked Git at the wrong time, and that, had we waited a little longer, we would have picked some yet to come new horse. But hey, that's the tech cycle for you.
My habit has been to suspend Vim, run the git command directly, and then foreground Vim again (fg). Over the last few years I've been trying more and more to call vim-fugitive from within Vim instead. (I still do rebases and most merges the old-fashioned way.)
For the most part Gitsigns is a nice passive addition to that, but it can also
do a lot of useful things that Fugitive also does. Previewing changed hunks in a
little floating window, in particular when resolving an awkward merge conflict, is
very handy.
The above picture shows it in action. I've changed two lines and added another
three in the top-left buffer. Gitsigns tries to choose colours from the currently
active colour scheme (in this case, tender)
xzcat haos_generic-x86-64-11.0.img.xz | dd of=/dev/mmcblk0 bs=1M
That just worked, perfectly and really fast. If you want to use a GUI in a live environment, then the gnome-disk-utility ("Disks" in the Gnome menu) with "Restore Disk Image ..." on a partition works just as well. It even supports decompressing the XZ images directly while writing.
But that image is small, will it not have a ton of unused disk space behind the fixed install partition? Yes,
it will ... until first boot. The HA OS takes over the empty space after its install partition on the first
boot-up and just grows its main partition to take up all the remaining space. Smart. After first boot is
completed, the first boot wizard can be accessed via your web browser and one of the prominent buttons there
is restoring from backup. So you just give it the backup file and wait. Sadly the restore does not actually give
any kind of progress indication, so your only way to figure out when it is done is opening the same web address in another browser tab and refreshing periodically - after restoring from backup it just boots into the same config as it had before - all the settings, all the devices, all the history is preserved. Even authentication tokens are preserved, so if you had the Home Assistant mobile app installed on your phone (both for remote access and to send location info and phone state, like charging, to HA to trigger automations) then it will just suddenly start working again without further actions needed from your side. That is an almost perfect backup/restore experience.
The first thing you get for using the OS version of HA is easy automatic updates that also automatically take a backup before upgrading, so if anything breaks you can roll back with one click. There is also a command-line tool that allows you to upgrade, but also to downgrade, ha-core and other modules. I had to use it today as HA version 23.10.4 actually broke support for the Sonoff bridge that I am using to control Zigbee devices, which are like 90% of all smart devices in my home. Really helpful stuff, but not a must have.
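If I remember the CLI syntax right (treat this as an assumption rather than gospel), a downgrade with that tool looks roughly like this, with the version number being whatever you want to roll back to:
ha core update --version 2023.10.3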
What is a must have, and what you can (really) only get with Home Assistant Operating System, are Addons. Some
addons are just normal servers you can run alongside HA on the same HA OS server, like MariaDB or Plex or a
file server. That is not the most important bit, but even there the software comes pre-configured to use in
a home server configuration and has a very simple config UI to pre-configure key settings, like users,
passwords and database access for MariaDB - you can literally, in a few clicks and a few strings, make several users, each with access to its own database. A couple more clicks and the DB is running and will be kept restarted in case of failures.
But the real gems in the Home Assistant Addon Store are modules that extend Home Assistant core functionality in ways that would be really hard or near impossible to configure in Home Assistant Container manually, especially because no documentation has ever existed for such manual config - everyone just tells you to install the addon from the HA Addon store or from HACS. Or you can read the addon metadata in various repos and figure out what containers it actually runs, with what settings and configs, and what hooks it puts into the HA Core to make them cooperate. And then do it all over again when a new version breaks everything 6 months later, when you have already forgotten everything. Among the addons that show up immediately after installation are addons for the new Matter server, a MariaDB and MQTT server (that other addons can use for data storage and message exchange), Z-Wave support, ESPHome integration, a very handy File manager that includes editors to edit Home Assistant configs directly in the browser, and an SSH/Terminal addon that both allows SSH connections and provides a web-based terminal giving access to the OS itself and to a command-line interface, for example to do package downgrades if needed or see detailed logs. And that is also where you can get the features that are the focus this year for HA developers - voice enablers.
However, that is only the beginning. Like in Debian, you can add additional repositories to expand your list of
available addons. Unlike Debian, most of the amazing software that is available for Home Assistant is outside the main, official addon store. For now I have added the most popular addon repository - HACS (Home Assistant Community Store) - and a repository maintained by Alexbelgium. The first includes things like NodeRED (a workflow
based automation programming UI), Tailscale/Wirescale for VPN servers, motionEye for CCTV control, Plex for
home streaming. HACS also includes a lot of HA UI enhancement modules, like themes, custom UI control panels
like Mushroom or mini-graph-card and integrations that provide more advanced functions, but also require
more knowledge to use, like Local Tuya - that is harder to set up, but allows fully local control of (normally)
cloud-based devices. And it has AppDaemon - basically a Python-based automation framework where you put in Python scripts that get run in a special environment where they get fed events from Home Assistant and can
trigger back events that can control everything HA can and also do anything Python can do. This I will need
to explore later.
And the repository by Alex includes the thing that is actually the focus of this blog post (I know :D) -
Firefly III addon and Firefly Importer addon that you can then add to your Home Assistant OS with a few
clicks. It also has all kinds of addons for NAS management, photo/video server, book servers and
Portainer, which lets us set up and run any Docker container inside the HA OS structure. HA OS will detect this and warn you
about unsupported processes running on your HA OS instance (nice security feature!), but you can just dismiss
that. This will be very helpful very soon.
This whole environment of OS and containers and apps really made me think - what was missing in Debian that made the talented developers behind all of that spend the immense time and effort to set up a completely new OS and app infrastructure and develop a completely parallel developer community for Home Assistant apps, interfaces and configurations? Is there anything that can still be done to bring the HA community and the general open source and Debian communities closer together? HA devs are not doing anything wrong: they are using the best open source can provide, they bring it to people who could not install and use it otherwise, and they are contributing fixes and improvements as well. But there must be some way to do this better, together.
So I installed MariaDB and created a user and database for Firefly. I installed Firefly III and configured it to use the MariaDB with the web config UI. When I went into the Firefly III web UI I was confronted with the normal wizard to set up a new instance. And no reference to any backup restore. Hmm, ok. Maybe that goes via the Importer? So I made an access token again, configured the Importer to use that, and configured the Nordlinger bank connection settings. Then I tried to import the export that I downloaded from Firefly III before. The importer did not auto-recognise the format. Turns out it is just a list of transactions ...
It can only be barely useful if you first manually create all the asset accounts with the same names as
before and even then you'll again have to deal with resolving the problem of transfers showing up twice.
And all of your categories (that have not been used yet) are gone, your automation rules and bills are gone,
your budgets and piggy banks are gone. Boooo. It will be easier for me to recreate my account data from
bank exports again than to resolve data in that transaction export.
Turns out that the Firefly III documentation explicitly recommends making a mysqldump of your own and not relying on anything in the app itself for backup purposes. Kind of sad this was not mentioned in the export page that sure looked a lot like a backup :D
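In case it helps anyone, such a backup is essentially a one-liner (the database and user names here are placeholders; use whatever you configured for Firefly):
mysqldump -u firefly -p firefly > firefly-backup-$(date +%F).sql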
After doing all that work all over again I needed to make something new not to feel like I wasted days of
work for no real gain. So I started solving a problem I had for a while already - how do I add cash
transactions to the system when I am out of the house with just my phone in hand? So far my workaround has
been just sending myself messages in WhatsApp with the amount and description of any cash expenses. Two
solutions are possible: app and bot.
There are actually multiple Android-based phone apps that work with the Firefly III API to do full
financial management from the phone. However, after trying it out, that is not what I will be using most
of the time. First of all this requires your Firefly III instance to be accessible from the Internet. Either
via direct API access using some port forwarding and secured with HTTPS and good access tokens, or via
a VPN server redirect that is installed on both HA and your phone. Tailscale was really easy to get working.
But the power has its drawbacks - adding a new cash transaction requires opening the app, choosing new
transaction view, entering description, amount, choosing "Cash" as source account and optionally choosing
destination expense account, choosing category and budget and then submitting the form to the server.
Sadly none of that really works if you have no Internet or bad Internet at the place where you are using
cash. And it's just too many steps. Annoying.
An easier alternative is setting up a Telegram bot -
it is running in a custom Docker container right
next to your Firefly (via Portainer) and you talk to it via a custom Telegram chat channel that you create
very easily and quickly. And then you can just tell it "Coffee 5" and it will create a transaction from the (default) cash account for the amount of 5 with the description "Coffee". This part also works if you are offline at the moment - the bot will receive the message once you get back online. You can use the Telegram bot menu system to edit the transaction to add categories or expense accounts, but this part only works if you are online. And the Firefly instance does not have to be online at all. Really nifty.
So next week I will need to write up all the regular payments as bills in Firefly (again) and then I can start
writing a Python script to predict my (financial) future!
As files are moved from / to /usr, we will also run into the problems that the file move
moratorium was meant to prevent. The way forward is detecting them early and
applying workarounds on a per-package basis. Said detection is now automated
using the Debian Usr Merge Analysis Tool.
As problems are reported to the bug tracking system, they are connected to the
reports if properly usertagged. Bugs and patches for problem categories
DEP17-P2 and DEP17-P6 have been filed.
After consensus has been reached
on the bootstrapping matters, debootstrap
has been
changed to swap the initial unpack and merging
to avoid unpack errors due to pre-existing links. This is a precondition for
having base-files
install the aliasing symbolic links eventually.
It was identified that the root filesystem used by the Debian installer is
still unmerged and a
change has been proposed.
debhelper
was changed to
recognize systemd units installed to /usr.
A discussion with the CTTE and release team on repealing the moratorium has
been initiated.
dpkg-buildflags now defaults to issuing arm64-specific compiler flags, so more care is needed to distinguish between build architecture flags and host architecture flags than previously.
wget -q https://builds.trisquel.org/debian-installer-images/debian-installer-images_20210731+deb11u8+11.0trisquel14_ppc64el.tar.gz
tar xfa debian-installer-images_20210731+deb11u8+11.0trisquel14_ppc64el.tar.gz ./installer-ppc64el/20210731+deb11u8+11/images/netboot/mini.iso
echo '6df8f45fbc0e7a5fadf039e9de7fa2dc57a4d466e95d65f2eabeec80577631b7  ./installer-ppc64el/20210731+deb11u8+11/images/netboot/mini.iso' | sha256sum -c
sudo wipefs -a /dev/sdX
sudo dd if=./installer-ppc64el/20210731+deb11u8+11/images/netboot/mini.iso of=/dev/sdX conv=sync status=progress
Sadly, no hash checksums or OpenPGP signatures are published.
Power off your device, insert the USB stick, and power it up, and you see a Petitboot menu offering to boot from the USB stick. For some reason, the "Expert Install"
was the default in the menu, so instead I selected "Default Install" for the regular experience. For this post, I will ignore BMC/IPMI, as interacting with it is not necessary. Make sure to not connect the BMC/IPMI ethernet port unless you are willing to enter that dungeon. The VGA console works fine with a normal USB keyboard, and you can choose to use only the second enP4p1s0f1
network card in the network card selection menu.
If you are familiar with Debian netinst ISOs, the installation is straightforward. I complicate the setup by partitioning two RAID1 partitions on the two NVMe sticks, one RAID1 for a 75GB ext4 root filesystem (discard,noatime) and one RAID1 for a 900GB LVM volume group for virtual machines, and two 20GB swap partitions on each of the NVMe sticks (to silence a warning about lack of swap; I'm not sure swap is still a good idea?). The 3x18TB disks use DM-integrity with RAID1; however, the installer does not support DM-integrity so I had to create it after the installation.
There are two additional matters worth mentioning:
The archive.trisquel.org hostname and path values are available as defaults, so I just press enter and fix this after the installation has finished. You may want to have the hostname/path of your local mirror handy, to speed things up.
The default kernel is linux-image-generic, which gives me a predictable 5.15 Linux-libre kernel, although you may want to choose linux-image-generic-hwe-11.0 for a more recent 6.2 Linux-libre kernel. Maybe this is intentional debian-installer behaviour for non-x86 platforms?
The package was originally called raspi3-firmware, but by early 2019, I had it
running for all of the then-available Raspberry families (so the package was
naturally renamed to
raspi-firmware
). I got my
Raspberry Pi 4 at DebConf19 (thanks to Andy, who brought it from Cambridge), and
it soon joined the happy Debian family. The images are built daily, and are
available at https://raspi.debian.net.
In the process, I also adopted Lars' great vmdb2 image building tool, and have kept it decently up to date (yes, I'm currently lagging behind, but I'll get to it soonish).
Anyway... This year, I have been seriously neglecting the Raspberry builds. I
have simply not had time to regularly test built images, nor to debug why the
builder has not picked up building for trixie (testing). And my time
availability is not going to improve any time soon.
We are close to one month away from moving for six months to Paraná (Argentina), where I'll be focusing on my PhD. And while I do contemplate taking my Raspberries along, I do not foresee being able to put much energy into them.
So... This is basically a call for adoption for the Raspberry Debian images building service. I do intend to stick around and try to help. It's not only me (although I'm responsible for the build itself); we have a nice and healthy group of Debian people hanging out in the #debian-raspberrypi channel in OFTC IRC.
Don't be afraid, and come ask. I hope giving this project up for adoption will breathe new life into it!