Search Results: "Wouter Verhelst"

16 October 2023

Wouter Verhelst: New toy: ASUS ZenScreen Go MB16AHP

A while ago, I saw Stefano's portable monitor, and thought it was very useful. Personally, I rent a desk at an office space where I have a 27" Dell monitor; but I do sometimes use my laptop away from that desk, and then I do sometimes miss the external monitor. So a few weeks before DebConf, I bought me one myself.

The one I got is about a mid-range model; there are models that are less than half the price of the one that I bought, and there are models that are more than double its price, too. ASUS has a very wide range of these monitors; the cheapest model that I could find locally is a 720p monitor that only does USB-C and requires power from the connected device, which presumably would halve my laptop's battery life if I were to connect it with no external power available. More expensive models have features such as wifi connectivity and miracast support, builtin batteries, more connection options, and touchscreen fanciness. While I think some of these features are not worth the money, I do think that a builtin battery has its uses, and that I would want a decent resolution, so I got a FullHD model with builtin battery.

(Photo: the monitor next to the laptop.)

The device comes with a number of useful accessories: a USB-C to USB-C cable for the USB-C connectivity as well as to charge the battery; an HDMI-to-microHDMI cable for HDMI connectivity; a magnetic sleeve that doubles as a back stand; a beefy USB-A charger and USB-A-to-USB-C convertor (yes, I know); and a... pen. No, really, a pen. You can write with it. Yes, on paper. No, not a stylus. It's really a pen. Sigh, OK. This one:

(Photo: the pen.)

OK, believe me now? Good. Don't worry, I was as confused about this as you just were when I first found that pen. Why would anyone do that, I thought. So I read the manual. Not something I usually do with new hardware, but here you go. It turns out that the pen doubles as a kickstand. If you look closely at the picture of the laptop and the monitor above, you may see a little hole at the bottom right of the monitor, just to the right of the power button/LED. The pen fits right there. Now I don't know what the exact thought process was here, but I imagine it went something like this:

It's an interesting concept, especially given the fact that the magnetic sleeve works very well as a stand. But hey.

Anyway, the monitor is very nice; the battery lives longer than the battery of my laptop usually does, so that's good, and it allows me to have a dual-monitor setup when I'm on the road. And when I'm at the office? Well, now I have a triple-monitor setup. That works well, too.

16 August 2023

Wouter Verhelst: Perl test suites in GitLab

I've been maintaining a number of Perl software packages recently. There's SReview, my video review and transcoding system of which I split off Media::Convert a while back; and as of about a year ago, I've also added PtLink, an RSS aggregator (with future plans for more than just that). All these come with extensive test suites which can help me ensure that things continue to work properly when I play with things; and all of these are hosted on salsa.debian.org, Debian's gitlab instance. Since we're there anyway, I configured GitLab CI/CD to run a full test suite of all the software, so that I can't forget, and also so that I know sooner rather than later when things start breaking.

GitLab has extensive support for various test-related reports, and while it took a while to be able to enable all of them, I'm happy to report that today, my perl test suites generate all three possible reports. They are:

  • a JUnit report of the test results;
  • the overall code coverage percentage, extracted from the job output;
  • a Cobertura coverage report.

Additionally, I also store the native perl Devel::Cover report as job artifacts, as they show some information that GitLab does not.

It's important to recognize that not all data is useful. For instance, the JUnit report allows for a test name and for details of the test. However, the module that generates the JUnit report from TAP test suites does not make a distinction here; both the test name and the test details are reported as the same. Additionally, the time a test took is measured as the time between the end of the previous test and the end of the current one; there is no "start" marker in the TAP protocol. That being said, it's still useful to see all the available information in GitLab. And it's not even all that hard to do:
test:
  stage: test
  image: perl:latest
  coverage: '/^Total.* (\d+.\d+)$/'
  before_script:
    - cpanm ExtUtils::Depends Devel::Cover TAP::Harness::JUnit Devel::Cover::Report::Cobertura
    - cpanm --notest --installdeps .
    - perl Makefile.PL
  script:
    - cover -delete
    - HARNESS_PERL_SWITCHES='-MDevel::Cover' prove -v -l -s --harness TAP::Harness::JUnit
    - cover
    - cover -report cobertura
  artifacts:
    paths:
    - cover_db
    reports:
      junit: junit_output.xml
      coverage_report:
        path: cover_db/cobertura.xml
        coverage_format: cobertura
Let's expand on that a bit.

The first three lines should be clear for anyone who's used GitLab CI/CD in the past. We create a job called test; we start it in the test stage, and we run it in the perl:latest docker image. Nothing spectacular here.

The coverage line contains a regular expression. This is applied by GitLab to the output of the job; if it matches, then the first bracket match is extracted, and whatever it contains is assumed to be the code coverage percentage for the code; it will be reported as such in the GitLab UI for the job that was run, and graphs may be drawn to show how the coverage changes over time. Additionally, merge requests will show the delta in the code coverage, which may help in deciding whether to accept a merge request. This particular regular expression will match on a line that the cover program generates on standard output.

The before_script section installs various perl modules we'll need later on. First, we install ExtUtils::Depends. My code uses ExtUtils::MakeMaker, which ExtUtils::Depends depends on (no pun intended); obviously, if your perl code doesn't use that, then you don't need to install it. The next three modules -- Devel::Cover, TAP::Harness::JUnit and Devel::Cover::Report::Cobertura -- are necessary for the reports, and you should include them if you want to copy what I'm doing. Next, we install declared dependencies, which is probably a good idea for you as well, and then we run perl Makefile.PL, which will generate the Makefile. If you don't use ExtUtils::MakeMaker, update that part to do what your build system uses. That should be fairly straightforward.

You'll notice that we don't actually use the Makefile. This is because we only want to run the test suite, which in our case (since these are PurePerl modules) doesn't require us to build the software first. One might consider that this makes the call of perl Makefile.PL useless, but I think it's a useful test regardless; if that fails, then obviously we did something wrong and shouldn't even try to go further.

The actual tests are run inside a script snippet, as is usual for GitLab. However, we do a bit more than you would normally expect; this is required for the reports that we want to generate. Let's unpack what we do there:
cover -delete
This deletes any coverage database that might exist (e.g., due to caching or some such). We don't actually expect any coverage database, but it doesn't hurt.
HARNESS_PERL_SWITCHES='-MDevel::Cover'
This tells the TAP harness that we want it to load the Devel::Cover addon, which can generate code coverage statistics. It stores that in the cover_db directory, and allows you to generate all kinds of reports on the code coverage later (but we don't do that here, yet).
prove -v -l -s
Runs the actual test suite, with verbose output, shuffling (aka, randomizing) the test suite, and adding the lib directory to perl's include path. This works for us, again, because we don't actually need to compile anything; if you do, then -b (for blib) may be required.

ExtUtils::MakeMaker creates a test target in its Makefile, and usually this is how you invoke the test suite. However, it's not the only way to do so, and indeed if you want to generate a JUnit XML report then you can't use it. Instead, in that case, you need to use prove directly, so that you can tell it to load the TAP::Harness::JUnit module by way of the --harness option, which will then generate the JUnit XML report. By default, the JUnit XML report is generated in a file junit_output.xml. It's possible to customize the filename for this report, but GitLab doesn't care and neither do I, so I don't. Uploading the JUnit XML report tells GitLab which tests were run and how they did.

Finally, we invoke the cover script twice to generate two coverage reports; once we generate the default report (which generates HTML files with detailed information on all the code that was triggered in your test suite), and once with the -report cobertura parameter, which generates the cobertura XML format.

Once we've generated all our reports, we then need to upload them to GitLab in the right way. The native perl report, which is in the cover_db directory, is uploaded as a regular job artifact, which we can then look at through a web browser, and the two XML reports are uploaded in the correct way for their respective formats.

All in all, I find that doing this makes it easier to understand how my code is tested, and why things go wrong when they do.
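If the CI job behaves unexpectedly, it can help to run the same sequence locally before pushing. A minimal sketch, assuming the modules from the before_script above are already installed and you're in the project root:

# Run the test suite locally with the same coverage and JUnit settings as the CI job
cover -delete
HARNESS_PERL_SWITCHES='-MDevel::Cover' prove -v -l -s --harness TAP::Harness::JUnit
cover                      # prints the summary table ending in the "Total" line the coverage regex matches
cover -report cobertura    # writes cover_db/cobertura.xml

# Inspect the results
xdg-open cover_db/coverage.html   # the detailed HTML coverage report
less junit_output.xml             # the JUnit report GitLab would ingest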

23 July 2023

Wouter Verhelst: Debconf Videoteam sprint in Paris, France, 2023-07-20 - 2023-07-23

The DebConf video team has been sprinting in preparation for DebConf 23, which will happen in Kochi, India, in September of this year. Present were Nicolas "olasd" Dandrimont, Stefano "tumbleweed" Rivera, and yours truly. Additionally, Louis-Philippe "pollo" Véronneau and Carl "CarlFK" Karsten joined the sprint remotely from across the pond.

Thank you to the DPL for agreeing to fund flights, food, and accommodation for the team members. We would also like to extend a special thanks to the Association April for hosting our sprint at their offices.

We made a lot of progress: It is now Sunday the 23rd at 14:15, and while the sprint is coming to an end, we haven't quite finished yet, so some more progress can still be made. Let's see what happens by tonight. All in all, though, we believe that the progress we made will make the DebConf Videoteam's work a bit easier in some areas, and will make things work better in the future. See you in Kochi!

27 June 2023

Wouter Verhelst: The future of the eID on RHEL

Since before I got involved in the eID back in 2014, we have provided official packages of the eID for Red Hat Enterprise Linux. Since RHEL itself requires a license, we did this, first, by using buildbot and mock on a Fedora VM to set up a CentOS chroot in which to build the RPM package. Later this was migrated to using GitLab CI and to using docker rather than VMs, in an effort to save some resources. Even later still, when Red Hat made CentOS no longer be a downstream of RHEL, we migrated from building in a CentOS chroot to building in a Rocky chroot, so that we could continue providing RHEL-compatible packages. Now, as it seems that Red Hat is determined to make that impossible too, I investigated switching to actually building inside a RHEL chroot rather than a derivative one. Let's just say that might be a challenge...
[root@b09b7eb7821d ~]# mock --dnf --isolation=simple --verbose -r rhel-9-x86_64 --rebuild eid-mw-5.1.11-0.v5.1.11.fc38.src.rpm --resultdir /root --define "revision v5.1.11"
ERROR: /etc/pki/entitlement is not a directory is subscription-manager installed?
Okay, so let's fix that.
[root@b09b7eb7821d ~]# dnf install -y subscription-manager
(...)
Complete!
[root@b09b7eb7821d ~]# mock --dnf --isolation=simple --verbose -r rhel-9-x86_64 --rebuild eid-mw-5.1.11-0.v5.1.11.fc38.src.rpm --resultdir /root --define "revision v5.1.11"
ERROR: No key found in /etc/pki/entitlement directory.  It means this machine is not subscribed.  Please use 
  1. subscription-manager register
  2. subscription-manager list --all --available (available pool IDs)
  3. subscription-manager attach --pool <POOL_ID>
If you don't have Red Hat subscription yet, consider getting subscription:
  https://access.redhat.com/solutions/253273
You can have a free developer subscription:
  https://developers.redhat.com/faq/
Okay... let's fix that too, then.
[root@b09b7eb7821d ~]# subscription-manager register
subscription-manager is disabled when running inside a container. Please refer to your host system for subscription management.
Wut.
[root@b09b7eb7821d ~]# exit
wouter@pc220518:~$ apt-cache search subscription-manager
wouter@pc220518:~$
As I thought, yes. Having to reinstall the docker host machine with Fedora just so I can build Red Hat chroots seems like a somewhat excessive requirement, and not something I think we'll be doing any time soon. We'll see what the future brings, I guess.
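For comparison, the Rocky-based builds that we fall back on do work fine from a Debian host; a rough sketch (the exact mock config name is an assumption and depends on the mock-core-configs version you have installed):

# Build the same SRPM in a Rocky Linux 9 chroot instead of a RHEL one;
# no subscription-manager or entitlement certificates are needed here.
mock --dnf --isolation=simple --verbose -r rocky-9-x86_64 \
    --rebuild eid-mw-5.1.11-0.v5.1.11.fc38.src.rpm \
    --resultdir /root --define "revision v5.1.11"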

9 June 2023

Wouter Verhelst: Planet Debian rendered with PtLink

As I blogged before, I've been working on a Planet Venus replacement. This is necessary, because Planet Venus, unfortunately, has not been maintained for a long time, and is a Python 2 (only) application which has never been updated to Python 3. Python not being my language of choice, and my having plans to do far more than just the "render RSS streams" functionality that Planet Venus does, meant that I preferred to write "something else" (in Perl) rather than updating Planet Venus to modern Python. Planet Grep has been running PtLink for over a year now, and my plan had been to update the code so that Planet Debian could run it too, but that has been taking a bit longer. This month, I have finally been able to work on this, however. This screenshot shows two versions of Planet Debian: The rendering on the left is by Planet Venus, the one on the right is by PtLink. It's not quite ready yet, but getting there. Stay tuned.

12 November 2022

Wouter Verhelst: Day 3 of the Debian Videoteam Sprint in Cape Town

The Debian Videoteam has been sprinting in Cape Town, South Africa -- mostly because with Stefano here for a few months, four of us (Jonathan, Kyle, Stefano, and myself) actually are in the country on a regular basis. In addition to that, two more members of the team (Nicolas and Louis-Philippe) are joining the sprint remotely (from Paris and Montreal).

(Photo: Kyle and Stefano working on things, with me behind the camera and Jonathan busy elsewhere.)

We've made loads of progress! Some highlights: The sprint isn't over yet (we're continuing until Sunday), but loads of things have already happened. Stay tuned!

30 August 2022

Wouter Verhelst: Not currently uploading

A notorious ex-DD decided to post garbage on his site in which he links my name to the suicide of Frans Pop, and mentions that my GPG key is currently disabled in the Debian keyring, along with some manufactured screenshots of the Debian NM site that allegedly show I'm no longer a DD. I'm not going to link to the post -- he deserves to be ridiculed, not given attention.

Just to set the record straight, however: Frans Pop was my friend. I never treated him with anything but respect. I do not know why he chose to take his own life, but I grieved for him for a long time. It saddens me that Mr. Notorious believes it a good idea to drag Frans' name through the mud like this, but then, one can hardly expect anything else from him by this point.

Although his post is mostly garbage, there is one bit of information that is correct, and that is that my GPG key is currently no longer in the Debian keyring. Nothing sinister is going on here, however; the simple fact of the matter is that I misplaced my OpenPGP key card, which means there is a (very very slight) chance that a malicious actor (like, perhaps, Mr. Notorious) would get access to my GPG key and abuse that to upload packages to Debian. Obviously we can't have that -- certainly not from him -- so for that reason, I asked the Debian keyring maintainers to please disable my key in the Debian keyring. I've ordered new cards; as soon as they arrive I'll generate a new key and perform the necessary steps to get my new key into the Debian keyring again.

Given that shipping key cards to South Africa takes a while, this has taken longer than I would have initially hoped, but I'm hoping at this point that by about mid-September this hurdle will have been cleared, meaning I will be able to exercise my rights as a Debian Developer again. As for Mr. Notorious, one can only hope he will get the psychiatric help he very obviously needs, sooner rather than later, because right now he appears to be more like a goat yelling in the desert. Ah well.

22 August 2022

Wouter Verhelst: Remote notification

Sometimes, it's useful to get a notification that a command has finished doing something you were waiting for:
make my-large-program && notify-send "compile finished" "success" || notify-send "compile finished" "failure"
This will send a notification message with the title "compile finished", and a body of "success" or "failure" depending on whether the command completed successfully, and allows you to minimize (or otherwise hide) the terminal window while you do something else, which can be a very useful thing to do. It works great when you're running something on your own machine, but what if you're running it remotely? There might be something easy to do, but I whipped up a bit of Perl instead:
#!/usr/bin/perl -w
use strict;
use warnings;
use Glib::Object::Introspection;
Glib::Object::Introspection->setup(
    basename => "Notify",
    version => "0.7",
    package => "Gtk3::Notify",
);
use Mojolicious::Lite -signatures;
Gtk3::Notify->init();
get '/notify' => sub ($c) {
    my $msg = $c->param("message");
    if(!defined($msg)) {
        $msg = "message";
    }
    my $title = $c->param("title");
    if(!defined($title)) {
        $title = "title";
    }
    app->log->debug("Sending notification '$msg' with title '$title'");
    my $n = Gtk3::Notify::Notification->new($title, $msg, "");
    $n->show;
    $c->render(text => "OK");
};
app->start;
This requires the packages libglib-object-introspection-perl, gir1.2-notify-0.7, and libmojolicious-perl to be installed, and can then be started like so:
./remote-notify daemon -l http://0.0.0.0:3000/
(assuming you did what I did and saved the above as "remote-notify") Once you've done that, you can just curl a notification message to yourself:
curl 'http://localhost:3000/notify?title=test&message=test+body'
Doing this via localhost is rather silly (much better to use notify-send for that), but it becomes much more interesting if you want to send notifications to your laptop from a remote system. An obvious TODO would be to add in some form of security, but that's left as an exercise to the reader...
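One way to wire that up, sketched under the assumption that the remote machine can't reach your laptop directly: forward a port from the remote host back to the notification daemon with ssh -R, and curl it when the long-running command finishes. The hostname and build command are just placeholders.

# On the laptop: make port 3000 on the remote host reach the local daemon
ssh -R 3000:localhost:3000 build-host

# On the remote host: notify the laptop when the job is done
make big-build && \
    curl 'http://localhost:3000/notify?title=build&message=success' || \
    curl 'http://localhost:3000/notify?title=build&message=failure'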

12 August 2022

Wouter Verhelst: Upgrading a Windows 10 VM to Windows 11

I run Debian on my laptop (obviously); but occasionally, for $DAYJOB, I have some work to do on Windows. In order to do so, I have had a Windows 10 VM in my libvirt configuration that I can use. A while ago, Microsoft released Windows 11. I recently found out that all the components for running Windows 11 inside a libvirt VM are available, and so I set out to upgrade my VM from Windows 10 to Windows 11. This wasn't as easy as I thought, so here's a bit of a writeup of all the things I ran into, and how I fixed them.

Windows 11 has a number of hardware requirements that aren't necessary for Windows 10. The most important three are: a modern enough processor, a TPM 2.0 module, and Secure Boot. So let's see about all three.

A modern enough processor

If your processor isn't modern enough to run Windows 11, then you can probably forget about it (unless you want to use qemu JIT compilation -- I dunno, probably not going to work, and also not worth it if it were). If it is, all you need is the "host-passthrough" setting in libvirt, which I've been using for a long time now. Since my laptop is less than two months old, that's not a problem for me.

A TPM 2.0 module

My Windows 10 VM did not have a TPM configured, because it wasn't needed. Luckily, a quick web search told me that enabling that is not hard. All you need to do is:
  • Install the swtpm and swtpm-tools packages
  • Add the TPM module by adding the following XML snippet to your VM configuration:
    <devices>
      <tpm model='tpm-tis'>
        <backend type='emulator' version='2.0'/>
      </tpm>
    </devices>
    
    Alternatively, if you prefer the graphical interface, click on the "Add hardware" button in the VM properties, choose the TPM, set it to Emulated, model TIS, and set its version to 2.0.
You're done! Well, with this part, anyway. Read on.

Secure boot

Here is where it gets interesting. My Windows 10 VM was old enough that it was configured for the older i440fx chipset. This one is limited to PCI and IDE, unlike the more modern q35 chipset (which supports PCIe and SATA, and does not support IDE nor SATA in IDE mode). There is a UEFI/Secure Boot-capable BIOS for qemu, but it apparently requires the q35 chipset.

Fun fact (which I found out the hard way): Windows stores where its boot partition is somewhere. If you change the hard drive controller from an IDE one to a SATA one, you will get a BSOD at startup. In order to fix that, you need a recovery drive. To create the virtual USB disk for it, go to the VM properties, click "Add hardware", choose "Storage", choose the USB bus, and then under "Advanced options", select the "Removable" option, so it shows up as a USB stick in the VM. Note: this takes a while to do (took about an hour on my system), and your virtual USB drive needs to be 16G or larger (I used the libvirt default of 20G).

There is no possibility, using the buttons in the virt-manager GUI, to convert the machine from i440fx to q35. However, that doesn't mean it's not possible to do so. I found that the easiest way is to use the direct XML editing capabilities in the virt-manager interface; if you edit the XML in an editor it will produce error messages if something doesn't look right and tell you to go and fix it, whereas the virt-manager GUI will actually fix things itself in some cases (and will produce helpful error messages if not). What I did was:
  • Take backups of everything. No, really. If you fuck up, you'll have to start from scratch. I'm not responsible if you do. (A command-line sketch for this, using virsh, follows the list below.)
  • Go to the Edit->Preferences option in the VM manager, then on the "General" tab, choose "Enable XML editing"
  • Open the Windows VM properties, and in the "Overview" section, go to the "XML" tab.
  • Change the value of the machine attribute of the domain.os.type element, so that it says pc-q35-7.0.
  • Search for the domain.devices.controller element that has pci in its type attribute and pci-root in its model one, and set the model attribute to pcie-root instead.
  • Find all domain.devices.disk.target elements, setting their dev=hdX to dev=sdX, and bus="ide" to bus="sata"
  • Find the USB controller (domain.devices.controller with type="usb") and set its model to qemu-xhci. You may also want to add ports="15" if you didn't have that yet.
  • Perhaps also add a few PCIe root ports:
    <controller type="pci" index="1" model="pcie-root-port"/>
    <controller type="pci" index="2" model="pcie-root-port"/>
    <controller type="pci" index="3" model="pcie-root-port"/>
    
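If you prefer the command line over the virt-manager XML tab, the same can be done with virsh; a rough sketch, assuming the VM is called w10 (substitute your own domain name):

# Keep a backup of the current definition before touching anything
virsh dumpxml w10 > w10-backup.xml

# Open the domain XML in $EDITOR and apply the changes listed above;
# virsh validates the result when you save and quit
virsh edit w10

# If things go horribly wrong, restore the original definition
virsh define w10-backup.xml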
I figured out most of this by starting the process for creating a new VM, selecting the "Modify configuration before installation" option on the last page of the wizard that pops up, going to the "XML" tab on the "Overview" section of the new window that shows up, and then comparing that against what my current VM had. Also, it took me a while to get this right, so I might have forgotten something. If virt-manager gives you an error when you hit the Apply button, compare notes against the VM that you're in the process of creating, and copy/paste things from there to the old VM to make the errors go away. As long as you don't remove configuration that is critical for things to start, this shouldn't break matters permanently (but hey, use your backups if you do break -- you have backups, right?)

OK, cool, so now we have a Windows VM that is... unable to boot. Remember what I said about Windows storing where the controller is? Yeah, there you go. Boot from the virtual USB disk that you created above, and select the "Fix the boot" option in the menu. That will fix it. Ha ha, only kidding. Of course it doesn't. I honestly can't tell you everything that I fiddled with, but I think the bit that eventually fixed it was where I chose "safe mode", which caused the system to do a hiccup, a regular reboot, and then suddenly everything was working again. Meh. Don't throw the virtual USB disk away yet, you'll still need it.

Anyway, once you have it booting again, you will now have a machine that theoretically supports Secure Boot, but you're still running off an MBR partition. I found a procedure on how to convert things from MBR to GPT that was written almost 10 years ago, but surprisingly it still works, except for the bit where the procedure suggests you use diskmgmt.msc (for one thing, that was renamed; and for another, it can't touch the partition table of the system disk either). The last step in that procedure says to "restart your computer!", which is fine, except at this point you obviously need to switch over to the TianoCore firmware, otherwise you're trying to read a UEFI boot configuration on a system that only supports MBR booting, which obviously won't work. In order to do that, you need to add a loader element to the domain.os element of your libvirt configuration:
<loader readonly="yes" type="pflash">/usr/share/OVMF/OVMF_CODE_4M.ms.fd</loader>
When you do this, you'll note that virt-manager automatically adds an nvram element. That's fine, let it. I figured this out by looking at the documentation for enabling Secure Boot in a VM on the Debian wiki, and using the same trick as for how to switch chipsets that I explained above.

Okay, yay, so now secure boot is enabled, and we can install Windows 11! All good? Well, almost. I found that once I enabled secure boot, my display reverted to a 1024x768 screen. This turned out to be because I was using older unsigned drivers, and since we're using Secure Boot, that's no longer allowed, which means Windows reverts to the default VGA driver, and that only supports the 1024x768 resolution. Yeah, I know. The solution is to download the virtio-win ISO from one of the links in the virtio-win github project, connect it to the VM, go to Device manager, select the display controller, click on the "Update driver" button, tell the system that you have the driver on your computer, browse to the CD-ROM drive, check the "include subdirectories" option, and then let Windows do its thing. While there, it might be good to do the same thing for unrecognized devices in the device manager, if any.

So, all I have to do next is to get used to the completely different user interface of Windows 11. Sigh. Oh, and to rename the "w10" VM to "w11", or some such. Maybe.

23 July 2022

Wouter Verhelst: Planet Grep now running PtLink

Almost 2 decades ago, Planet Debian was created using the "planetplanet" RSS aggregator. A short while later, I created Planet Grep using the same software.

Over the years, the blog aggregator landscape has changed a bit. First of all, planetplanet was abandoned, forked into Planet Venus, and then abandoned again. Second, the world of blogging (aka the "blogosphere") has largely faded away, and the more modern world uses things like "Social Networks", etc, making blogs less relevant these days. A blog aggregator community site is still useful, however, and so I've never taken Planet Grep down, even though over the years the number of blogs carried on Planet Grep has been shrinking.

In the past almost 20 years, I've just run Planet Grep on my personal server, upgrading its Debian release from whichever was the most recent stable release in 2005 to buster, never encountering any problems. That all changed when I did the upgrade to Debian bullseye, however. Planet Venus is a Python 2 application, which was never updated to Python 3. Since Debian bullseye drops support for much of Python 2, focusing only on Python 3 (in accordance with python upstream's policy on the matter), I have had to run Planet Venus from inside a VM for a while now, which works as a short-term solution but not as a long-term one.

Although there are other implementations of blog aggregation software out there, I wanted to stick with something (mostly) similar. Additionally, I have been wanting to add functionality to it to also pull stuff from Social Networks, where possible (and legal, since some of these have... scary Terms Of Use documents). So, as of today, Planet Grep is no longer powered by Planet Venus, but instead by PtLink. Rather than Python, it is written in Perl (a language with which I am more familiar), and there are plans for me to extend things in ways that have little to do with blog aggregation anymore...

There are a few other Planets out there that also use Planet Venus at this point -- Planet Debian and Planet FSFE are two that I'm currently already aware of, but I'm sure there might be more, too. At this point, PtLink is not yet at feature parity with Planet Venus -- as shown by the fact that it can't yet build either Planet Debian or Planet FSFE successfully. But I'm not stopping my development here, and hopefully I'll have something that successfully builds both of those soon, too.

As a side note, PtLink is not intended to be bug compatible with Planet Venus. For one example, the configuration for Planet Grep contains an entry for Frederic Descamps, but somehow Planet Venus failed to fetch his feed. With the switch to PtLink, that seems fixed, and now some entries from Frederic seem to appear. I'm not going to be "fixing" that feature... but of course there might be other issues that will appear. If that's the case, let me know. If you're reading this post through Planet Grep, consider this a public service announcement for the possibility (hopefully a remote one) of minor issues.

20 May 2022

Wouter Verhelst: Faster tar

I have a new laptop. The new one is a Dell Latitude 5521, whereas the old one was a Dell Latitude 5590. As both the old and the new laptops are owned by the people who pay my paycheck, I'm supposed to copy all my data off the old laptop and then return it to the IT department. A simple way of doing this (and what I'd usually use) is to just rsync the home directory (and other relevant locations) to the new machine. However, for various reasons I didn't want to do that this time around; for one, my home directory on the old laptop is a bit of a mess, and a new laptop is an ideal moment in time to clean that up. If I were to just rsync over the new home directory, then, well. So instead, I'm creating a tar ball. The first attempt was quite slow:
tar cvpzf wouter@new-laptop:old-laptop.tar.gz /home /var /etc
The problem here is that the default compression algorithm, gzip, is quite slow, especially if you use the default non-parallel implementation. So we tried something else:
tar cvpf wouter@new-laptop:old-laptop.tar.gz -Ipigz /home /var /etc
Better, but not quite great yet. The old laptop now has bursts where it maxes out its CPU, but it doesn't even come close to saturating the gigabit network cable between the two. Tar can also compress with the LZ4 algorithm. That algorithm doesn't compress very well, but it's the best algorithm if "speed" is the most important consideration. So I could do that:
tar cvpf wouter@new-laptop:old-laptop.tar.gz -Ilz4 /home /var /etc
The trouble with that, however, is that the tarball will then be quite big. So why not use the CPU power of the new laptop?
tar cvpf - /home /var /etc | ssh new-laptop "pigz > old-laptop.tar.gz"
Yeah, that's much faster. Except, now the network speed becomes the limiting factor. We can do better.
tar cvpf - -Ilz4 /home /var /etc | ssh new-laptop "lz4 -d | pigz > old-laptop.tar.gz"
This uses about 70% of the link speed, just over one core on the old laptop, and 60% of CPU time on the new laptop. After also adding a bit of --exclude="*cache*", to avoid files we don't care about, things go quite quickly now: somewhere between 200 and 250G (uncompressed) was transferred into a 74G file, in 20 minutes. My first attempt hadn't even done 10G after an hour!
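Putting the pieces together, the full command line I ended up with looks roughly like this (a sketch; the exclude pattern and hostname are the ones mentioned above, adjust to taste):

# LZ4 on the old laptop keeps the CPU ahead of the network;
# the new laptop decompresses and recompresses with parallel gzip
tar cvpf - -Ilz4 --exclude="*cache*" /home /var /etc | \
    ssh new-laptop "lz4 -d | pigz > old-laptop.tar.gz"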

17 January 2022

Wouter Verhelst: Different types of Backups

In my previous post, I explained how I recently set up backups for my home server to be synced using Amazon's services. I received a (correct) comment on that by Iustin Pop which pointed out that while it is reasonably cheap to upload data into Amazon's offering, the reverse -- extracting data -- is not as cheap. He is right, in that extracting data from S3 Glacier Deep Archive costs over an order of magnitude more than it costs to store it there on a monthly basis -- in my case, I expect to have to pay somewhere in the vicinity of 300-400 USD for a full restore. However, I do not consider this to be a major problem, as these backups are only meant to cover the rarer of the two types of backup needs.

There are two reasons why you should have backups. The first is the most common one: "oops, I shouldn't have deleted that file". This happens reasonably often; people will occasionally delete or edit a file that they did not mean to, and then they will want to recover their data. At my first job, a significant part of my work was to handle recovery requests from users who had accidentally deleted a file that they still needed. Ideally, backups to handle this type of situation are easily accessible to end users, and are performed reasonably frequently. A system that automatically creates and deletes filesystem snapshots (such as the zfsnap script for ZFS snapshots, which I use on my server) works well. The crucial bit here is to ensure that it is easier to copy an older version of a file than it is to start again from scratch -- if a user must file a support request that may or may not be answered within a day or so, it is likely they will not do so for a file they were working on for only half a day, which means they lose half a day of work in such a case. If, on the other hand, they can just go into the snapshots directory themselves and it takes them all of two minutes to copy their file, then they will also do that for files they only created half an hour ago, so they don't even lose half an hour of work and can get right back to it. This means that backup strategies to mitigate the "oops I lost a file" case ideally do not involve off-site file storage, and instead are performed online.

The second case is the much rarer one, but (when required) has the much bigger impact: "oops the building burned down". Variants of this can involve things like lightning strikes, thieves, earthquakes, and the like; in all cases, the point is that you want to be able to recover all your files, even if every piece of equipment you own is no longer usable. That being the case, you will first need to replace that equipment, which is not going to be cheap, and it is also not going to be an overnight thing. In order to still be useful after you have lost all your equipment, these backups must also be stored off-site, and should preferably be offline backups, too. Since replacing your equipment is going to cost you time and money, it's fine if restoring the backups is going to take a while -- you can't really restore from backup any time soon anyway. And since you will lose a number of days of content that you can't create when you can only fall back on your off-site backups, it's fine if you also lose a few days of content that you will have to re-create.
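To make the "oops I lost a file" case concrete: with a snapshotting setup like the zfsnap one described above, recovering yesterday's version of a file is roughly this (pool, dataset, and snapshot names are made up for the example):

# List the snapshots that exist for the dataset holding the home directories
ls /tank/home/.zfs/snapshot/

# Copy the file straight out of a snapshot; no support request needed
cp /tank/home/.zfs/snapshot/daily-2022-01-16/wouter/report.txt ~/report.txt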
All in all, the two types of backups have opposing requirements: "oops I lost a file" backups should be performed often and should be easily available; "oops I lost my building" backups should not be easily available, and are ideally done less often, so you don't pay a high amount of money for storing your off-site backups. In my opinion, if you have good "lost my file" backups, then it's also fine if the recovery of your backups is a bit more expensive. You don't expect to ever have to pay for these; you may end up in a situation where you don't have a choice, and then you'll be happy that the choice is there, but as long as you can reasonably pay for the worst case scenario of a full restore, it's not a case you should be worried about much. As such, and given that a full restore from Amazon Storage Gateway is going to be somewhere between 300 and 400 USD in my case -- a price I can afford, although it's not something I want to pay every day -- I don't think it's a major issue that extracting data is significantly more expensive than uploading data. But of course, this is something everyone should consider for themselves...

16 January 2022

Wouter Verhelst: Backing up my home server with Bacula and Amazon Storage Gateway

I have a home server. Initially conceived and sized so I could digitize my (rather sizeable) DVD collection, I started using it for other things; I added a few play VMs on it, started using it as a destination for the deja-dup-based backups of my laptop and the Time Machine-based ones of the various Macs in the house, and used it as the primary location of all the photos I've taken with my cameras over the years (currently taking up somewhere around 500G) as well as those that were taken at our wedding (another 100G). To add to that, I've copied the data that my wife had on various older laptops and external hard drives onto this home server as well, so that we don't lose the data should something happen to one or more of these bits of older hardware. Needless to say, the server was running full, so a few months ago I replaced the 4x2T hard drives that I originally put in the server with 4x6T ones, and there was much rejoicing.

But then I started considering what I was doing. Originally, the intent was for the server to contain DVD rips of my collection; if I were to lose the server, I could always re-rip the collection and recover that way (unless something happened that caused me to lose both at the same time, of course, but I consider that sufficiently unlikely that I don't want to worry about it). Much of the new data on the server, however, cannot be recovered like that; if the server dies, I lose my photos forever, with no way of recovering them. Obviously that can't be okay. So I started looking at options to create backups of my data, preferably in ways that make it easily doable for me to automate the backups -- because backups that have to be initiated are backups that will be forgotten, and backups that are forgotten are backups that don't exist. So let's not try that.

When I was still self-employed in Belgium and running a consultancy business, I sold a number of lower-end tape libraries for which I then configured bacula, and I preferred a solution that would be similar to that without costing an arm and a leg. I did have a look at a few second-hand tape libraries, but even second hand these are still way outside what I can budget for this kind of thing, so that was out too. After looking at a few solutions that seemed very hackish and would require quite a bit of handholding (which I don't think is a good idea), I remembered that a few years ago, I had a look at the Amazon Storage Gateway for a customer. This gateway provides a virtual tape library with 10 drives and 3200 slots (half of which are import/export slots) over iSCSI. The idea is that you install the VM on a local machine, you connect it to your Amazon account, you connect your backup software to it over iSCSI, and then it syncs the data that you write to Amazon S3, with the ability to archive data to S3 Glacier or S3 Glacier Deep Archive. I didn't end up using it at the time because it required a VMWare virtualization infrastructure (which I'm not interested in), but I found out that these days, they also provide VM images for Linux KVM-based virtual machines (amongst others), so that changes things significantly. After making a few calculations, I figured out that for the amount of data that I would need to back up, I would require a monthly budget of somewhere between 10 and 20 USD if the bulk of the data would be on S3 Glacier Deep Archive. This is well within my means, so I gave it a try.
The VM's technical requirements state that you need to assign four vCPUs and 16GiB of RAM, which just so happens to be the exact amount of RAM and CPU that my physical home server has. Obviously we can't do that. I tried getting away with 4GiB and 2 vCPUs, but that didn't work; the backup failed out after about 500G out of 2T had been written, due to the VM running out of resources. On the VM's console I found complaints that it required more memory, and I saw it mention something in the vicinity of 7GiB instead, so I decided to try again, this time with 8GiB of RAM rather than 4. This worked, and the backup was successful. As far as bacula is concerned, the tape library is just a (very big...) normal tape library, and I got data throughput of about 30M/s while the VM's upload buffer hadn't run full yet, with things slowing down to pretty much my Internet line speed when it had. With those speeds, Bacula finished the backup successfully in "1 day 6 hours 43 mins 45 secs", although the storage gateway was still uploading things to S3 Glacier for a few hours after that. All in all, this seems like a viable backup solution for large(r) amounts of data, although I haven't yet tried to perform a restore.
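For reference, hooking the backup host up to the gateway's virtual tape library is plain iSCSI; a rough sketch using open-iscsi (the gateway IP address is made up, and the target names will differ per gateway):

# Discover the iSCSI targets (tape drives and the medium changer) the gateway exposes
iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260

# Log in to the discovered targets; the devices then show up as /dev/st* and /dev/sg*
iscsiadm -m node --login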

28 November 2021

Wouter Verhelst: GR procedures and timelines

A vote has been proposed in Debian to change the formal procedure in Debian by which General Resolutions (our name for "votes") are proposed. The original proposal is based on a text by Russ Allberry, which changes a number of rules to be less ambiguous and, frankly, less weird.

One thing Russ' proposal does, however, which I am absolutely not in agreement with, is to add an absolutely hard time limit after three weeks. That is, in the proposed procedure, the discussion time will be two weeks initially (unless the Debian Project Leader chooses to reduce it, which they can do by up to one week), and it will be extended if more options are added to the ballot; but after three weeks, no matter where the discussion stands, the discussion period ends and Russ' proposed procedure forces us to go to a vote, unless all proposers of ballot options agree to withdraw their option.

I believe this is a big mistake. I think any procedure we come up with should allow for the possibility that we may end up with a situation where everyone agrees that extending the discussion time a short while is a good idea, without necessarily resetting the whole discussion time to another two weeks (modulo a decision by the DPL). At the same time, any procedure we come up with should try to avoid the possibility of process abuse by people who would rather delay a vote ad infinitum than see it voted upon. A hard time limit certainly does that; but I believe it causes more problems than it solves.

I think instead that it is necessary for any procedure to allow for the discussion time to be extended as long as a strong enough consensus exists that this would be beneficial. As such, I have proposed an amendment to Russ' proposal (a full version of my proposed constitution can be seen on salsa) that hopefully solves these issues in a novel way: it allows anyone to request an extension to the discussion time, which then needs to be sponsored according to the same rules as a new ballot option. If the time extension is successfully created, those who supported the extension can then also no longer propose any new ones. Additionally, after 4 weeks, the proposed procedure allows anyone to object, so that 4 weeks is probably the practical limit -- although the possibility remains to extend the discussion time further if enough support exists for that (and not enough to end it). The full rules involve slightly more than that (I don't like to put too much formal language in a blog post), but they're not too complicated, I think.

That proposal has received a number of seconds, but after a week it hasn't yet reached the constitutional requirement for the option to be on the ballot. So, I guess this is a public request for more support for my proposal. If you're a Debian Developer and you agree with me that my proposed procedure is better than the alternative, please step forward and let yourself be heard. Thanks!

27 September 2021

Wouter Verhelst: SReview::Video is now Media::Convert

SReview, the video review and transcode tool that I originally wrote for FOSDEM 2017 but which has since been used for debconfs and minidebconfs as well, has long had a sizeable component for inspecting media files with ffprobe, and generating ffmpeg command lines to convert media files from one format to another. This component, SReview::Video (plus a number of supporting modules), is really not tied very much to the SReview webinterface or the transcoding backend. That is, the webinterface and the transcoding backend obviously use the ffmpeg handling library, but they don't provide any services that SReview::Video could not live without. It did use the configuration API that I wrote for SReview, but disentangling that turned out to be very easy. As I think SReview::Video is actually an easy to use, flexible API, I decided to refactor it into Media::Convert, and have just uploaded the latter to CPAN itself. The intent is to refactor the SReview webinterface and transcoding backend so that they will also use Media::Convert instead of SReview::Video in the near future -- otherwise I would end up maintaining everything twice, and then what's the point. This hasn't happened yet, but it will soon (this shouldn't be too difficult after all). Unfortunately Media::Convert doesn't currently install cleanly from CPAN, since I made it depend on Alien::ffmpeg which currently doesn't work (I'm in communication with the Alien::ffmpeg maintainer in order to get that resolved), so if you want to try it out you'll have to do a few steps manually. I'll upload it to Debian soon, too.

27 May 2021

Wouter Verhelst: SReview and pandemics

The pandemic was a bit of a mess for most FLOSS conferences. The two conferences that I help organize -- FOSDEM and DebConf -- are no exception. In both conferences, I do essentially the same work: as a member of both video teams, I manage the postprocessing of the video recordings of all the talks that happened at the respective conference(s). I do this by way of SReview, the online video review and transcode system that I wrote, which essentially crowdsources the manual work that needs to be done, and automates as much as possible of the workflow.

The original version of SReview consisted of a database, a (very basic) Mojolicious-based webinterface, and a bunch of perl scripts which would build and execute ffmpeg command lines using string interpolation. For a quick hack that I needed to get working while writing it in my spare time over half a year, that approach was workable, and it resulted in successful postprocessing after FOSDEM 2017, and a significant improvement in time from the previous years. However, I did not end development with that, and since then I've replaced the string interpolation by an object oriented API for generating ffmpeg command lines, as well as modularized the webinterface. Additionally, I've had help reworking the user interface into a system that is somewhat easier to use than my original interface, and have slowly but surely added more features to the system so as to make it more flexible, as well as support more types of environments for the system to run in.

One of the major issues that still remains with SReview is that the administrator's interface is pretty terrible. I had been planning on revamping that for 2020, but then massive amounts of people got sick, travel was banned, and both the conferences that I work on were converted to online-only conferences. These have some very specific requirements; e.g., both conferences allowed people to upload a prerecorded version of their talk, rather than doing the talk live; since preprocessing a video is, technically, very similar to postprocessing it, I adapted SReview to allow people to upload a video file that it would then validate (in terms of length, codec, and apparent resolution). This sounds like an easy thing to do, but I decided to implement this functionality so that it would also allow future use for in-person conferences, where occasionally a speaker requests that modifications be made to the video file in a way that SReview is unable to do. This made it marginally more involved, but at least it will mean that a feature which I had planned to implement some years down the line is now already implemented. The new feature works quite well, and I'm happy I've implemented it in the way that I have.

In order for the "upload" processing and the "post-event" processing not to be confused, however, I decided to import the conference schedules twice: once as the conference itself, and once as a shadow version of that conference for the prerecordings. That way, I could track the progress through the system of the prerecording completely separately from the progress of the postprocessing of the video (which adds opening/closing credits, and transcodes to multiple variants of the same video). Schedule parsing was something that had not been implemented in a generic way yet, however; since that made doubling the schedule in that way rather complex, I decided to bite the bullet and (finally) implement schedule parsing in a generic way.
Currently, schedule parsers exist for two formats (Pentabarf XML and the Wafer variant of that same format which is almost, but not quite, entirely the same). The API for that is quite flexible, and I'm happy with the way things have been implemented there. I've also implemented a set of "virtual" parsers, which allow mangling the schedule in various ways (by either filtering out talks that we don't want, or by generating the shadow version of the schedule that I talked about earlier).

While the SReview settings have reasonable defaults, occasionally the output of SReview is not entirely acceptable, due to more complicated matters that then result in encoding artifacts. As a result, the DebConf video team has been doing a final review step, completely outside of SReview, to ensure that such encoding artifacts don't exist. That seemed suboptimal, so recently I've been working on integrating that into SReview as well. First tests have been run, and seem to be acceptable, but there's still a few loose ends to be finalized. As part of this, I've also reworked the way comments could be entered into the system. Previously the presence of a comment would signal that the video has some problems that an administrator needed to look at. Unfortunately, that was causing some confusion, with some people even thinking it's a good place to enter a "thank you for your work" style of comment... which it obviously isn't. Turning it into a "comment log" system instead fixes that, and also allows for better two-way communication between administrators and reviewers. Hopefully that'll improve things in that area as well.

Finally, the audio normalization in SReview -- for which I've long used bs1770gain -- is having problems. First of all, bs1770gain will sometimes alter the timing of the video or audio file that it's passed, which is very problematic if I want to process it further. There is an ffmpeg loudnorm filter which implements the same algorithm, so that should make things easier to use. Secondly, the author of bs1770gain is a strange character that I'd rather not be involved with. Before I knew about the loudnorm filter I didn't really have a choice, but now I can just rip bs1770gain out and replace it by the loudnorm filter. That will fix various other bugs in SReview, too, because SReview relies on behaviour that isn't actually there (but which I didn't know at the time when I wrote it).

All in all, the past year-and-a-bit has seen a lot of development for SReview, with multiple features being added and a number of long-standing problems being fixed. Now if only the pandemic would subside, allowing the whole "let's do everything online only" wave to cool down a bit, so that I can finally make time to implement the admin interface...
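For the curious, replacing bs1770gain boils down to running the audio through ffmpeg's loudnorm filter; a minimal sketch (the filename and target values are just examples, and SReview's actual invocation is generated through Media::Convert rather than typed by hand):

# Normalize the audio to an EBU R128-style target while copying the video stream untouched
ffmpeg -i input.mkv -c:v copy -af loudnorm=I=-23:LRA=7:TP=-2 -c:a aac output.mkv

A more faithful, linear normalization needs loudnorm's two-pass mode (measure first with print_format=json, then feed the measured_* values back in), which is closer to what bs1770gain does.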

Wouter Verhelst: Freenode

Bye, Freenode I have been on Freenode for about 20 years, since my earliest involvement with Debian in about 2001. When Debian moved to OFTC for its IRC presence way back in 2006, I hung around on Freenode somewhat since FOSDEM's IRC channels were still there, as well as for a number of other channels that I was on at the time (not anymore though). This is now over and done with. What's happening with Freenode is a shitstorm -- one that could easily have been fixed if one particular person were to step down a few days ago, but by now is a lost cause. At any rate, I'm now lurking, mostly for FOSDEM channels, on libera.chat, under my usual nick, as well as on OFTC.

22 March 2021

Wouter Verhelst: Twenty years of Debian

Ten years ago, I reflected on the fact that -- by that time -- I had been in Debian for just over ten years. This year, in early February, I've passed the twenty year milestone. As I'm turning 43 this year, I will have been in Debian for half my life in about three years. Scary thought, that.

In the past ten years, not much has changed, and yet at the same time, much has. I became involved in the Debian video team; I stepped down from the m68k port; and my organizing of the Debian devroom at FOSDEM resulted in me eventually joining the FOSDEM orga team, where I eventually ended up also doing video. As part of my video work, I wrote SReview, for which in these COVID-19 times I have had to write new code and/or fix bugs in much of my spare time. I was a candidate for the position of DPL one more time, without being elected. I was also a candidate for the technical committee a few times, also without success. I also added a few packages to the list of packages that I maintain for Debian; most obviously this includes SReview, but there's also things like extrepo and policy-rcd-declarative, both fairly recent packages that I hope will improve Debian as a whole in the longer term.

On a more personal level, at one debconf I met a wonderful girl with whom I have now just celebrated my first wedding anniversary. Before that could happen, I had to move to South Africa two years ago. Moving is an involved process at any one time; moving to a different continent altogether is even more so. As it would have been complicated and involved to remain a business owner of a Belgian business while living 9500km away from the country, I sold my shares to my (now ex) business partner; it turned the page of a 15-year chapter of my life, something I could not do without feelings one way or the other.

The things I do in Debian have changed over the past twenty years. I was the maintainer of the second-highest number of packages in the project when I maintained the Linux Gazette packages; I've been an m68k porter; I've been an AM, and briefly even an NM frontdesk member; I've been a DPL candidate three times, and a TC candidate twice. At the turn of my first decade of being a Debian Developer, I noted that people started to recognize my name, and that I started to be one of the Debian Developers who had been with the project longer than most. This has, obviously, not changed. New in the "I'm getting old" department is the fact that during the last Debconf, I noticed for the first time that there was a speaker who had been alive for less long than I had been a Debian Developer. I'm assuming these types of things will continue happening in the next decade, and that the future will bring more of these kinds of changes that will make me feel older as I and the project mature more. I'm looking forward to it.

Here's to you, Debian; may you continue to influence my life, in good ways and in bad (but hopefully mostly good), as well as continue to inspire me to improve the world, as you have over the past twenty years!

17 January 2021

Wouter Verhelst: SReview 0.6

... isn't ready yet, but it's getting there. I had planned to release a new version of SReview, my online video review and transcoding system that I wrote originally for FOSDEM but is being used for DebConf, too, after it was set up and running properly for FOSDEM 2020. However, things got a bit busy (both in my personal life and in the world at large), so it fell a bit by the wayside. I've now also been working on things a bit more, in preparation for an improved administrator's interface, and have started implementing a REST API to deal with talks etc through HTTP calls. This seems to be coming along nicely, thanks to OpenAPI and the Mojolicious plugin for parsing that. I can now design the API nicely, and autogenerate client side libraries to call them. While at it, because libmojolicious-plugin-openapi-perl isn't available in Debian 10 "buster", I moved the docker containers over from stable to testing. This revealed that both bs1770gain and inkscape changed their command line incompatibly, resulting in me having to work around those incompatibilities. The good news is that I managed to do so in a way that keeps running SReview on Debian 10 viable, provided one installs Mojolicious::Plugin::OpenAPI from CPAN rather than from a Debian package. Or installs a backport of that package, of course. Or, heck, uses the Docker containers in a kubernetes environment or some such -- I'd love to see someone use that in production. Anyway, I'm still finishing the API, and the implementation of that API and the test suite that ensures the API works correctly, but progress is happening; and as soon as things seem to be working properly, I'll do a release of SReview 0.6, and will upload that to Debian. Hopefully that'll be soon.
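In case you want to run the current code on Debian 10 yourself, the workaround boils down to pulling that one plugin from CPAN; a minimal sketch (assuming cpanminus is installed):

# Install the OpenAPI plugin from CPAN, since buster doesn't ship it as a package
cpanm Mojolicious::Plugin::OpenAPI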

Wouter Verhelst: Dear Google

... Why do you have to be so effing difficult about a YouTube API project that is used for a single event per year?

FOSDEM creates 600+ videos on a yearly basis. There is no way I am going to manually upload 600+ videos through your webinterface, so we use the API you provide, using a script written by Stefano Rivera. This script grabs video filenames and metadata from a YAML file, and then uses your APIs to upload said videos with said metadata. It works quite well. I run it from cron, and it uploads files until the quota is exhausted, then waits until the next time the cron job runs. It runs so well that the first time we used it, we could upload 50+ videos on a daily basis, and so the uploads were done as soon as all the videos were created, which was a few months after the event. Cool!

The second time we used the script, it did not work at all. We asked one of our keynote speakers, who happened to be some hotshot at your company, to help us out. He contacted the YouTube people, and whatever had been broken was quickly fixed, so yay, uploads worked again. I found out later that this is actually a normal thing if you don't use your API quota for 90 days or more. Because it's happened to us every bloody year.

For the 2020 event, rather than going through back channels (which happened to be unavailable this edition), I tried to use your normal ways of unblocking the API project. This involves creating a screencast of a bloody command line script and describing various things that don't apply to FOSDEM and ghaah shoot me now so meh, I created a new API project instead, and had the uploads go through that. Doing so gives me a limited quota that only allows about 5 or 6 videos per day, but that's fine, it gives people subscribed to our channel the time to actually watch all the videos while they're being uploaded, rather than being presented with a boatload of videos that they can never watch in a day. Also it doesn't overload subscribers, so yay.

About three months ago, I started uploading videos. Since then, every day, the "fosdemtalks" channel on YouTube has published five or six videos. Given that, imagine my surprise when I found this in my mailbox this morning...

(Screenshot: Google lies, claiming that my YouTube API project isn't being used for 90 days and informing me that it will be disabled.)

This is an outright lie, Google. The project was created 90 days ago, yes, that's correct. It has been used every day since then to upload videos. I guess that means I'll have to deal with your broken automatic content filters to try and get stuff unblocked...

... or I could just give up and not do this anymore. After all, all the FOSDEM content is available on our public video host, too.
