
1 February 2023

Paul Wise: FLOSS Activities January 2023

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Administration
  • Debian BTS: unarchive/reopen/triage bugs for reintroduced packages cycle/pygopherd and ask about guile-2.2 reintroduction bugs
  • Debian IRC: fix topic/info of obsolete channel
  • Debian wiki: unblock IP addresses, approve accounts, approve domains.

Communication
  • Respond to queries from Debian users and contributors on the mailing lists and IRC

Sponsors

The celery, docutils, pyemd work was sponsored. All other work was done on a volunteer basis.

31 January 2023

Jonathan McDowell: Enabling retrogaming with Kodi on Debian

For some reason my son has started to be really into watching playthroughs of Mario and similar games on YouTube. I don't understand the appeal, but it's less distracting as background than Paw Patrol, so I'm not complaining. He's not quite at the stage where he's ready to play the games himself, but it's coming. So I figured it would be neat to sort out some retrogaming bits ready for when that happens.

I already have a Kodi box underneath the TV; it doesn't get as much use these days as a lot of our viewing is through commercial streaming services, but it's got all of our DVDs ripped so is still useful. Recent versions of Kodi have support for games as well, so I decided it would be perfect if I could tie in to that. However. The normal way of doing this seems to be to download someone's pre-rolled setup, and I'd much rather be able to get the bits I need from Debian, as that's what the machine is running (it does a few minor things other than Kodi).

The best retrogaming environment out there seems to be RetroArch. It's available for Linux/OS X/Windows and RetroPie provides a nice easy standalone setup if you're not interested in the Kodi side. If you are, then game.libretro provides a wrapper for libretro cores under Kodi. This seemed like the right track. Unfortunately RetroArch and related packages were in need of some love in Debian. So I ended up engaging in some yak shaving to try and get to where I wanted to be.

First up was RetroArch itself, which was over 4 years out of date, at 1.7.3. It turned out that wanted an updated assets package, which contains the necessary icons etc for the interface. RetroArch is only a frontend. To actually play games you need a suitable core. The first one I tried was genesisplusgx (I have fond memories of Sonic from the Master System era), which again was several years out of date. I pulled recent git (I wish folk would tag releases at least every now and then) and updated things. And successfully managed to play Sonic (badly, I am way out of practice).

genesisplusgx is in non-free, due to a prohibition on commercial distribution. So it's not actually part of Debian. I switched my attention to libretro-bsnes-mercury, which would then allow SNES emulation and is part of main. Again, not too hard to update, some packaging cleanups, and I was playing Super Mario. Again, badly.

That meant I knew I had working emulation with libretro cores. It was time to integrate with Kodi. That meant taking game.libretro, filing an ITP and doing a bunch of bits to get it ready to upload (including introducing a retroarch-dev binary package that contains the appropriate include files as part of retroarch). It sat in NEW for a while (including an initial reject because I'd missed an attribution in debian/copyright), and was accepted yesterday.

There's a final piece of the puzzle, and that's the Kodi config that ties together the libretro core with game.libretro and presents the emulator to Kodi as a fully fledged add-on. The kodi-game folk have a neat tool, kodi-game-scripting, which automates the heavy lifting of producing this config. I've done some local modifications that make it a bit more useful for producing config that can be embedded in the Debian libretro-* packages directly, which I should upload somewhere, but it's all a bit rough and ready at present. It was enough to allow me to produce some kodi-game-libretro-bsnes-* packages as part of libretro-bsnes-mercury. With that complete, all the packages needed for playing SNES games under Kodi are now present in Debian.
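A rough sketch of pulling all of this in on a Debian system (package names as discussed above; libretro-bsnes-mercury-balanced is one plausible binary package name, so check apt search output for your suite first):

# Install the RetroArch frontend, its assets, a SNES libretro core from
# main, and the Kodi wrapper (hypothetical selection of package names).
sudo apt install retroarch retroarch-assets \
    libretro-bsnes-mercury-balanced kodi-game-libretro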
I need to upload libretro-bsnes-mercury to unstable (it went to experimental while waiting for kodi-game-libretro to be accepted), and kodi-game-libretro needs another source-only upload, but once that's done both should be in good shape to migrate to testing and be part of the upcoming bookworm release.

What else is there to do? I'd like to get Kodi config included in the other libretro packages that are already part of Debian. That's going to need the Controller Topology Project to be packaged so that the controller details are available (I was lucky in that the SNES controller is already part of the Kodi package). I need to work out if I can turn kodi-game-scripting into some sort of dh helper to help automate things. But I've done some local testing with genesisplusgx and it works as expected. The other thing is that games are not yet first class citizens in Kodi; the normal browser interface you get for movies, music and TV shows is not available for games. Currently I've been trying out the ROM Collection Browser, though I find its automated scraping isn't as good as I'd like. A friend has recommended the Advanced Emulator Launcher, but I haven't taken a look at it. Either way I'd like to ultimately get one of them packaged up as well, though not in time for bookworm.

Anyway. My hope is that these updated and new packages prove useful to someone else. You can tell I'm biased towards 90s era consoles, but if you've enough CPU grunt there are a bunch of more recent cores available too. Big thanks to the Debian FTP Master team for letting these through NEW so close to release. And all the upstream devs - RetroArch is a great framework from my perspective as a user, and the Kodi Game folk have done massive amounts of work that made my life much easier when preparing things for Debian.

Dirk Eddelbuettel: #39: Faster Feedback Systems - A Continuous Integration Example

Welcome to the 39th post in the relatively randomly recurring rants, or R4 for short. Today's post picks up where the previous post, #38: Faster Feedback Systems, started. As we argued in #38, the need for fast feedback loops is fairly universal and widespread. Fairly shortly after we posted #38, useR! 2022 happened and one presentation had the key line
Waiting 1-24 minutes for a build to finish can be a massive time suck.
which we would like to come back to today. Furthermore, the inimitable @b0rk had a fabulous tweet just weeks later stating the same point, "debugging strategy: shorten your feedback loop", as a key in a successful debugging strategy. So in sum: shorter is better. Nobody likes to wait. And shorter, i.e. faster, is a key and recurrent theme in the R4 series. Today we have a fairly nice illustration of two aspects we have stressed before; the combined effects can be staggering, as we show below.

The example is motivated by a truly surprising (we are being generous here) comment we received as an aside when discussing the eternal topic of whether R users do, or do not, have a choice when picking packages, or approaches, or verses. To our surprise, we were told that packages are "not substitutable". Which is both demonstrably false (see below) and astonishing, as it came from an academic, i.e. someone trained and paid to abstract superfluous detail away and recognise and compare core features of items under investigation. Truly stunning. But I digress.

CRAN by now has many packages, slowly moving in on 20,000 in total, and is a unique success we have commented on time and time again. By now many packages shadow or duplicate each other, reinvent one another, or revise/review. One example is the pair of packages accessing PostgreSQL databases. There are several, but two are key. The older one is RPostgreSQL, which has been around since Sameer Kumar Prayaga wrote it as a Google Summer of Code student in 2008 under my mentorship. The package has been maintained well (if quietly) for many years by Tomoaki Nishiyama. The other entrant is more recent, though not new, and is RPostgres by Kirill Müller and others. We will for the remainder of this post refer to these two as the "tiny" and the "tidy" version, as either can be seen as a representative of a certain 'verse.

The aforementioned comment on non-substitutability struck us as eminently silly, so we set out to prove just how absurd it really is. About a year ago we set up a pair of GitHub repos with minimal code, a pair we called lim-tiny and lim-tidy. Our conjecture was (and is!) that less is more, just as post #34, titled Less Is More, argued with respect to package dependencies. Each of the repos just does one thing: a query to a (freely accessible but remote) PostgreSQL database. The tiny version just retrieves a data frame, using only the dependencies needed for RPostgreSQL, namely DBI, and nothing else. The tidy version retrieves a tibble and has access to everything else that comes when installing RPostgres: DBI, dplyr, and magrittr, plus of course their respective dependencies. We were able to let the code run in (very default) GitHub Actions on a weekly schedule without intervention, apart from one change to the SQL query when the remote server (providing public bioinformatics data) changed its schema slightly, plus one update to the action yaml code version. No other changes.

We measure the time a standard continuous integration run takes in total using the default GitHub Action setup in the tidy case (which relies on RSPM/PPM binaries, caching, and so on, and does not rebuild from source each time), and our own r-ci script (introduced in #32) for CI with R in the tiny case. The latter switched to using r2u during the course of this experiment, but already had access to binaries via c2d4u, so it is always binaries-based (see e.g. #37 or #13 for more). The chart shows the evolution of these run-times over the course of the year, with one weekly run each.
[Chart: tiny vs tidy CI timing impact]
This study reveals a few things. The key point is that while the net time to fire off a single PostgreSQL query is likely (near-)identical, the net cost of continuous integration is not. In this setup, it is about twice the run-time, simply because Less Is More, which (in this setup) comes out to being about twice as fast. And that is a valid (and concrete, and verifiable) illustration of the overall implicit cost of adding dependencies to creating and using development, test, or run-time environments. Faster feedback loops make for faster builds and faster debugging. Consider using fewer dependencies, and/or using binaries as provided by r2u.
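One rough way to feel the dependency-cost difference locally (this is not the benchmark from the repos above, and from-source install times vary considerably by machine; the package names are as discussed in the post) is to time a fresh from-source install of each stack:

# Time installing the tiny stack (RPostgreSQL, needing only DBI) versus
# the tidy stack (RPostgres and its larger dependency tree) from source.
time Rscript -e 'install.packages("RPostgreSQL", repos="https://cloud.r-project.org")'
time Rscript -e 'install.packages("RPostgres", repos="https://cloud.r-project.org")'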

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

30 January 2023

Arturo Borrero González: Debian and the adventure of the screen resolution

I read somewhere a nice meme about Linux: Do you want an operating system or do you want an adventure? I love it, because it is so true. What you are about to read is my adventure to set a usable screen resolution in a fresh Debian testing installation.

The context is that I have two different Lenovo Thinkpad laptops with 16-inch screens and NVIDIA graphics cards. They are both installed with the latest Debian testing. I use the closed-source NVIDIA drivers (they seem to work better than the nouveau module). The desktop manager and environment that I use is lightdm + XFCE4. The monitor native resolution in both machines is very high: 3840x2160 (or 4K UHD if you will).

The thing is that both laptops show an identical problem: when freshly installed with the Debian default config, the native resolution is in use. For a 16-inch laptop, this high resolution means that the font is tiny. Therefore, the raw native resolution renders the machine almost unusable. This is a picture of what you get by running htop in the console (tty1, the terminal you would get by hitting CTRL+ALT+F1) with the default install:

[Picture: Linux tty console with high resolution and tiny fonts]

Everything in the system is affected by this:
  1. the grub menu is unreadable. Thankfully the right option is selected by default.
  2. the tty console, with the boot splash by systemd, is unreadable as well. There are some colors, so you can at least see some systemd stuff happening in green.
  3. when lightdm starts, the resolution remains very high. You can barely click the login button.
  4. when XFCE4 starts, it is a pain to navigate the menu and click the right buttons to set a more reasonable resolution.
The adventure begins after installing the system. Each of these four points must be fixed by hand by the user.

XFCE4

Point #4 is the easiest. Navigate with the mouse pointer to the tiny Applications menu, then Settings, then Displays. This is more or less the same as in every other desktop operating system. There are no further actions required to persist this setting. Thank you, XFCE4.

lightdm

Point #3, about lightdm, is more tricky to solve. It involves running xrandr when lightdm sets up the display. Nobody will tell you this trick. You have to search for it on the internet. Thankfully it is a common problem, and a person who knows what to search for can find good results. The file /etc/lightdm/lightdm.conf needs to contain something like this:
[LightDM]
[Seat:*]
# set up correct display resolution
display-setup-script=sh -c -- "xrandr -s 1920x1080"
By the way, depending on your system hardware setup, you may also need an additional call to xrandr here. If you want to plug in an HDMI monitor, chances are you require something like xrandr --setprovideroutputsource NVIDIA-G0 modesetting && xrandr --auto to instruct the NVIDIA graphics card to work well with the kernel graphics system. In my case, one of my laptops requires it, so I have:
[LightDM]
[Seat:*]
# don't ask me to type my username
greeter-hide-users=false
# set up correct display resolution, and prepare NVIDIA card for HDMI output
display-setup-script=sh -c "xrandr -s 1920x1080 && xrandr --setprovideroutputsource NVIDIA-G0 modesetting && xrandr --auto"
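If you are unsure which resolution to put in such a script, xrandr itself can tell you what the hardware supports; these are standard xrandr invocations, run from a graphical session on the affected machine:

# List connected outputs and the modes each one supports.
xrandr --query
# Try a candidate mode immediately, without touching lightdm.conf.
xrandr -s 1920x1080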
grub

Point #1, about the grub menu, is also not trivial to solve, but it is widely documented on the internet. Grub allows you to set arbitrary graphical modes. In Debian systems, adding something like GRUB_GFXMODE=1024x768 to /etc/default/grub and then running sudo update-grub should do the magic.

console

So we get to point #2, about the tty1 console. For months, I've been investing my scarce personal time into trying to solve this annoyance. There is a lot of conflicting information about this on the internet: plenty of misleading solutions, essays about the framebuffer, kernel modesetting, and other stuff I don't want to read just to get my tty1 into a readable state. People point in different directions, like using GRUB_GFXPAYLOAD_LINUX=keep in /etc/default/grub. Which is a good solution, but it won't work: my best bet is that the kernel indeed keeps the resolution as told by grub, but the moment systemd loads the nvidia driver, it enables 4K on the display and the console gets the high resolution.

Actually, for a few weeks, I blamed plymouth. Because the plymouth service is loaded early by systemd, it could be responsible for setting some of the display settings. It actually contains an (undocumented) DeviceScale configuration option that is seemingly aimed at integrating into high resolution scenarios. I played with it to no avail.

Some folks from IRC suggested reconfiguring the console-setup package, until then unknown to me. Running sudo dpkg-reconfigure console-setup would indeed show a menu to select some preferences for the console, including font size. But apparently a freshly installed system already uses the biggest possible font, so this was a dead end.

Another option I evaluated for a few days was touching the kernel framebuffer settings. I honestly don't understand this, and all the solutions pointing to fbset didn't work for me anyway. This is the default framebuffer configuration in one of the laptops:
user@debian:~$ fbset -i

mode "3840x2160"
    geometry 3840 2160 3840 2160 32
    timings 0 0 0 0 0 0 0
    accel true
    rgba 8/16,8/8,8/0,0/0
endmode
Frame buffer device information:
    Name        : i915drmfb
    Address     : 0
    Size        : 33177600
    Type        : PACKED PIXELS
    Visual      : TRUECOLOR
    XPanStep    : 1
    YPanStep    : 1
    YWrapStep   : 0
    LineLength  : 15360
    Accelerator : No
Playing with these numbers, I was able to modify the geometry of the console, only to reduce the panel to a tiny square in the console display (with equally small fonts anyway). If it was possible to scale or resize the panel in any other way, I was unable to understand how to do so by reading the associated docs.

One day, out of despair, I tried disabling kernel modesetting (or KMS). It indeed got me a more readable tty1, only to prevent the whole graphic stack from starting, with Xorg complaining about the lack of kernel modesetting. After lots of wasted time, I decided to blame the NVIDIA graphics card. Because why not: a closed-source module in my system looks fishy. I registered on their official forum and wrote a message about my suspicion of the module, asking for advice on how to modify the driver's default resolution. I was hoping that something like modprobe nvidia my_desired_resolution=1920x1080 could exist. Apparently not :-(

I was about to give up. I had walked every corner of the known internet. I even tried summoning the ancient gods: I used ChatGPT. I asked the AI god for mercy, for a working solution, to no avail. Then I decided to change the kind of queries I was issuing to the search engines (don't ask me, I no longer remember). Eventually I landed on this askubuntu.com page. The question described the exact same problem I was experiencing. Finally, that was encouraging! I was not alone in my adventure after all! The solution section included a font size I hadn't seen before in my previous tests: 16x32. More excitement! I did all the steps. I installed the xfonts-terminus package, and in the file /etc/default/console-setup I put:
ACTIVE_CONSOLES="/dev/tty[1-6]"
CHARMAP="ISO-8859-15"
CODESET="guess"
FONTFACE="Terminus"
FONTSIZE="16x32"
VIDEOMODE=
Then I ran setupcon from a tty, and the miracle happened! I finally got a bigger font in the tty1 console! It turned out a potential solution was playing with console-setup, which I had tried without success before. I'm not even sure if the additional package was required. This is how my console looks now:

[Picture: Linux tty console with high resolution but not so tiny fonts]

The truth is the solution is satisfying only to a degree. I'm a person with good eyesight and can work with these slightly larger fonts. I'm not sure if I can get larger fonts using this method, honestly. After some searching, I discovered that some folks had already managed to describe the problem in detail and filed a proper bug report in Debian, see #595696, opened more than 10 years ago.

2023 is the year of Linux on the desktop

Nope. I honestly don't see how this disconnected pile of settings can all be reconciled together. Can we please have a systemd-whatever that homogenizes all of this mess? I'm referring to grub + kernel drivers + console + lightdm + XFCE4.

Next adventure

When I lock the desktop (with CTRL+ALT+L) and close the laptop lid to suspend it, then reopen it and type the login info into the lightdm greeter, the desktop environment never loads: black screen. I have already tried the first few search results without luck. Perhaps the nvidia card is to blame this time? Perhaps poorly coupled power management by the different system software pieces? Who knows what's going on here. This will probably be my next Debian desktop adventure.

Russell Coker: Links January 2023

The Intercept has an amusing and interesting article about senior Facebook employees testifying that they don't know where Facebook stores all its data on users [1]. One lesson all programmers can learn from this is to document all these things in an orderly manner.

Cory Doctorow wrote a short informative article about inflation from a modern monetary theory perspective [2].

Russ Allbery wrote an insightful blog post about effective altruism and respect for disadvantaged people [3]. GiveDirectly sounds good.

The Conversation has an interesting article about the Google and Apple app stores providing different versions of apps for users in different regions [4]. Apparently there are specific versions to comply with GDPR and versions that differ in adverts. The hope that GDPR would affect enough people to become essentially a world-wide standard was apparently overly optimistic. We need political lobbying in all countries for laws like the GDPR to force the app stores to give us the better versions of apps.

Arya Voronova wrote an informative article about USB-C and extension or data blocker cables [5]. USB just keeps getting more horrible as a technology while getting more useful in functionality. Laptops and phones catching fire will probably become more common in the future.

John McBride wrote an insightful article about the problems in the security of the software supply chain [6]. His main suggestion for addressing the problems is "If you are on a team that relies on some piece of open source software, allocate real engineering time to contributing"; the problem with this is that real engineering time means real money, and companies don't want to do that. Maybe having companies contribute moderate amounts of money to a foundation that hires people would be a viable option.

Tom's Guide has an interesting article describing problems with the Tesla [7]. It doesn't cover things like autopilot driving over children and bikers, but instead covers issues of the user interface that make it less pleasant to drive and also remove concentration from the road.

The BBC has an interesting article about the way mathematical skill is correlated with the way language is used to express numbers [8]. Every country with a lesser way of expressing numbers should switch to some variation of the East-Asian way.

Science 2.0 has an interesting blog post about the JP Aerospace plans to use airships to get most of the way through the atmosphere and then a plane to get to orbit [9]. It's a wild idea but seems plausible. The idea of going to space in balloons seems considerably scarier to me than current spacecraft.

Interesting list of red team and physical entry gear with links to YouTube videos showing how to use them [10].

The Verge has an informative summary of the way Elon mismanaged Twitter after taking it over [11].

Reproducible Builds (diffoscope): diffoscope 234 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 234. This version includes the following changes:
[ FC Stegerman ]
* test_text_proper_indentation requires at least file version 5.44.
  (Closes: reproducible-builds/diffoscope#329)
You can find out more by visiting the project homepage.
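As a minimal usage sketch (the .deb file names here are placeholders), comparing two builds of a package looks like this:

# Compare two builds; the exit status is non-zero when the inputs
# differ, and reports can be written as text and/or HTML.
diffoscope --text diff.txt --html diff.html package_1.deb package_2.deb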

29 January 2023

Dirk Eddelbuettel: RcppTOML 0.2.2 on CRAN: Now with macOS-on-Intel Builds

Just days after a build-fix release (for aarch64) and still only a few weeks after the 0.2.0 release of RcppTOML and its switch to toml++, we have another bugfix release 0.2.2 on CRAN, also bringing release 3.3.0 of toml++ (even if we had large chunks of 3.3.0 already incorporated).

TOML is a file format that is most suitable for configurations, as it is meant to be edited by humans but read by computers. It emphasizes strong readability for humans while at the same time supporting strong typing as well as immediate and clear error reports. On small typos you get parse errors, rather than silently corrupted garbage. Much preferable to any and all of XML, JSON or YAML, though sadly these may be too ubiquitous now. TOML is frequently used with projects such as the Hugo static blog compiler, or the Cargo system of Crates (aka "packages") for the Rust language.

The package was building fine on Intel-based macOS provided the versions were recent enough. CRAN, however, aims for the broadest possible reach of binaries and builds on a fairly ancient macOS 10.13 with clang version 10. This confused toml++ into (wrongly) concluding it could not build when it in fact can. After a hint from Simon that Apple in their infinite wisdom redefines clang version ids, this has been reflected in version 3.3.0 of toml++ by Mark, so we should now build everywhere. Big thanks to everybody for the help. The short summary of changes follows.

Changes in version 0.2.2 (2023-01-29)
  • New toml++ version 3.3.0 with fix to permit compilation on ancient macOS systems as used by CRAN for the Intel-based builds.

Courtesy of my CRANberries, there is a diffstat report for this release. More information is on the RcppTOML page. Please use the GitHub issue tracker for issues and bug reports. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Petter Reinholdtsen: Is the desktop recommending your program for opening its files?

Linux desktop systems have standardized how programs present themselves to the desktop system. If a package includes a .desktop file in /usr/share/applications/, Gnome, KDE, LXDE, Xfce and the other desktop environments will pick up the file and use its content to generate the menu of available programs in the system. A lesser known fact is that a package can also explain to the desktop system how to recognize the files created by the program in question, and have it used to open these files on request, for example via a GUI file browser.

A while back I ran into a package that did not tell the desktop system how to recognize its files, and thus was not used to open its files in the file browser, and fixed it. In the process I wrote a simple debian/tests/ script to ensure the setup keeps working. It might be useful for other packages too, to ensure any future version of the package keeps handling its own files.

For this to work the file format needs a useful MIME type that can be used to identify the format. If the file format does not yet have a MIME type, it should define one and preferably also register it with IANA to ensure the MIME type string is reserved. The script uses the xdg-mime program from xdg-utils to query the database of standardized package information and ensure it returns sensible values. It also needs the location of an example file for xdg-mime to guess the format of.
#!/bin/sh
#
# Author: Petter Reinholdtsen
# License: GPL v2 or later at your choice.
#
# Validate the MIME setup, making sure motor files have
# application/vnd.openmotor+yaml associated with them and are connected
# to the openmotor desktop file.
retval=0
mimetype="application/vnd.openmotor+yaml"
testfile="test/data/real/o3100/motor.ric"
mydesktopfile="openmotor.desktop"
filemime="$(xdg-mime query filetype "$testfile")"
if [ "$mimetype" != "$filemime" ] ; then
    retval=1
    echo "error: xdg-mime claim motor file MIME type is $filemine, not $mimetype"
else
    echo "success: xdg-mime report correct mime type $mimetype for motor file"
fi
desktop=$(xdg-mime query default "$mimetype")
if [ "$mydesktopfile" != "$desktop" ]; then
    retval=1
    echo "error: xdg-mime claim motor file should be handled by $desktop, not $mydesktopfile"
else
    echo "success: xdg-mime agree motor file should be handled by $mydesktopfile"
fi
exit $retval
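For completeness, this is roughly what the other half of the setup could look like: the MIME registration and .desktop association that the script above verifies. This is a hypothetical sketch based on the names used in the script; the real openmotor packaging may differ:

# Hypothetical shared-mime-info file the package could ship as
# /usr/share/mime/packages/openmotor.xml to declare the format:
#
#   <?xml version="1.0" encoding="UTF-8"?>
#   <mime-info xmlns="http://www.freedesktop.org/standards/shared-mime-info">
#     <mime-type type="application/vnd.openmotor+yaml">
#       <comment>OpenMotor motor design</comment>
#       <glob pattern="*.ric"/>
#     </mime-type>
#   </mime-info>
#
# And the matching association in openmotor.desktop:
#
#   MimeType=application/vnd.openmotor+yaml;
#
# After installing such files, refresh the desktop databases:
sudo update-mime-database /usr/share/mime
sudo update-desktop-database /usr/share/applications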
It is a simple way to ensure your users are not very surprised when they try to open one of your file formats in their file browser. As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Gunnar Wolf: miniDebConf Tamil Nadu 2023

Greetings from Viluppuram, Tamil Nadu, South India! As a preparation and warm-up for DebConf in September, the Debian people in India have organized a miniDebConf. Well, I don't want to be unfair to them: they have been regularly organizing miniDebConfs for over a decade, and while most of the attendees are students local to this state in South India (the very tip of the country; Tamil Nadu is the Eastern side, and Kerala, where Kochi is and DebConf will be held, is the Western side), I have talked with attendees from very different regions of this country. This miniDebConf is somewhat similar to similarly-scoped events I have attended in Latin America: it is mostly an outreach conference, but it's also a great opportunity for DDs in India to meet in the famous hallway track.

India is incredibly multicultural. Today at the hotel, I was somewhat surprised to see people from Kerala trying to read a text written in Tamil: not only are the languages different, but the writing systems are as well. From what I read, Tamil script is a bit simpler than Kerala's Malayalam, although they come from similar roots.

Of course, my school of thought is that, whenever you visit a city, culture or country that differs from the place you were born, a fundamental component to explore and to remember is food! And one of the things I most looked forward to on this trip was precisely that. I arrived at Chennai Airport (MAA) at 8:15 local time yesterday morning, so I am far from an expert, but I have been given (and most happily received) biryani three times (pictured in the photo by this paragraph). It is delicious, although I cannot yet describe the borders of what should or should not be considered proper biryani: the base dish is rice, and you go mixing it with different sauces or foods. What managed to surprise us foreigners is, strangely, well known to us all: there is no spoon. No, the food is not pushed to your mouth using metal or wooden utensils. Not even using a tortilla as back home, or by breaking bits of the injera that serves also as a dish, as in Ethiopia. Sure, there is naan, but it is completely optional, and would be a bit too much for as big a dish as what we got. Biryani is eaten with the tools natural to us primates: the fingers. We have learnt some different techniques, but so far I am still using the base technique (thumb-finger-middle).

I'm closing the report with the photo of the closing of the conference as it happens. And I will, of course, share our adventures as they unfold in the next couple of days. Because, well, we have finished with the conference-y part of the trip, but we have a full week of (pre-)DebConf work ahead of us!

Valhalla's Things: Hello World!

Posted on January 29, 2023
Welcome to my new blog! Or rather, strictly speaking, welcome to my first blog! Back when everybody had a blog, I had an old-fashioned personal website where pages were organized by topic rather than by date, so now that blogs are dead (or so they say), I guess it's time for me to have one :) .

The old website is still online, but updating it is getting harder, both for organizational reasons and because the static generator I've used is no longer supported and requires python 2; lately I've started publishing some specific categories of material such as sewing patterns on their own websites (you can find a list on the about page), but I was missing a place to post about the history and experiences of the things I publish elsewhere, as well as more uncategorized things.

Of course I've chosen to use a static site generator, but since I'm picky I've discarded most of the common ones, mostly because they enforce assumptions I don't agree with. On the other hand, one day I wouldn't mind learning a bit of Haskell, so I decided to look for trouble and use Hakyll, hoping that nobody will add a level 20 wandering monster to it. Right now I'm using mostly the default theme, because I know that if I start fiddling with it I'm never going to start posting content; maybe one day I'll decide to completely change the look.

As for contents, on the bits side of things you can expect me to talk about Debian, the Fediverse, Python, my inventory managing program and a bit of arduino-level electronics; on the atoms side you should definitely expect sewing, modern and mostly historically inspired, some fiber crafts (spinning, knitting, crochet), maybe some bad attempts at painting (watercolours and acrylics), printing (screen and linocut), things that are written on paper and, well, any other craft I happen to collect. Before the blog gets too full I plan to add tag management to help people who are only interested in some of these contents.

This being a blog, of course it has an atom feed you can add to your favourite (and ideally self-hosted) rss reader, and since this is a personal blog there will be no periodicity; posts will happen when I have something to say on some topic. Until next time!

28 January 2023

Scarlett Gately Moore: KDE Snaps, Debian uploads and much more in the works!

[Photo: Witch Wells, AZ snow]
It has been a very busy few weeks as we endured snowstorm after snowstorm! I have made some progress on the Mycroft in Debian adventure! This will slow down as we enter the freeze for bookworm, and there is no way we will make it into bookworm as there are some significant issues to solve.

On the KDE side of things: in the Snap arena, I have made my first significant contribution to snapcraft upstream! It has been a great learning experience as I convert my Ruby knowledge to Python. Formatting is something I need to get used to! https://github.com/snapcore/snapcraft/pull/4023 Snaps have been on hold due to the kde-neon extension not having core22 support, and the above pull request fixes that. Meanwhile, I have been working on getting core20 apps (22.08.3, the final KDE apps version for this base) rebuilt for security updates.

As many of you know, I am seeking employment. I am a hard worker that thrives on learning new things. I am a self-starter, a knowledge sponge, and eager to be an asset to <insert your company here>! Meanwhile, as interview processes are much longer than I remember and the industry is exploding in layoffs, I am coming up short on living expenses as my unemployment lingers on. Please consider donating to my GoFundMe. Thank you for your consideration. GoFundMe

Emmanuel Kasper: Table of correspondence between AWS / Azure / Red Hat OpenShift Container Platform / upstream projects

If you know the Amazon Web Services or Azure portfolio, and you are interested in OpenShift or the OKD OpenShift community distribution, this is a table of corresponding technologies. OpenShift is Red Hat's Kubernetes distribution: it is basically the upstream Kubernetes delivered with monitoring, logging, CI/CD, an underlying OS, and tested upgrade paths not found with a manual kubernetes.io kubeadm install. After passing the two corresponding certifications, my opinion on cloud operators is that they are very much a step back in the direction of proprietary software. You can rebuild their cloud stack with open-source components, but it is also a lot of integration work, similar to using the Linux From Scratch distribution instead of something like Debian. A good middle point are the OpenShift and OKD Kubernetes distributions, which integrate the most common cloud components but allow an installation on your own hardware or cloud provider of your choice.
AWS | Azure | OpenShift | OpenShift upstream project
CloudTrail | | Kubernetes API Server audit log | Kubernetes
CloudWatch | Azure Monitor, Azure Log Analytics | OpenShift Monitoring | Prometheus, Kubernetes Metrics
AWS Artifact | | Compliance Operator | OpenSCAP
AWS Trusted Advisor | Azure Advisor | Insights |
AWS Marketplace | | Red Hat Marketplace | Operator Hub
AWS Identity and Access Management (IAM) | Azure Active Directory, Azure AD DS | Red Hat SSO | Keycloak
AWS Elastic Beanstalk | Azure App Services | OpenShift Source2Image (S2I) | Source2Image (S2I)
AWS S3 | Azure Blob Storage** | ODF Rados Gateway | Rook RGW
AWS Elastic Block Storage | Azure Disk Storage | ODF Rados Block Device | Rook RBD
AWS Elastic File System | Azure Files | ODF Ceph FS | Rook CephFS
AWS ELB Classic | Azure Load Balancer | MetalLB Operator | MetalLB
AWS ELB Application Load Balancer | Azure Application Gateway | OpenShift Router | HAProxy
Amazon Simple Notification Service | | OpenShift Streams for Apache Kafka | Apache Kafka
Amazon GuardDuty | Microsoft Defender for Cloud | API Server audit log review, ACS Runtime detection | Stackrox
Amazon Inspector | Microsoft Defender for Cloud | Quay.io container scanner, ACS Vulnerability Assessment | Clair, Stackrox
AWS Lambda | Azure Serverless | OpenShift Serverless* | Knative
AWS Key Management System | Azure Key Vault | could be done with Hashicorp Vault | Vault
AWS WAF | | NGINX Ingress Controller Operator with ModSecurity | NGINX ModSecurity
Amazon ElastiCache | | Redis Enterprise Operator | Redis, memcached as alternative
AWS Relational Database Service | Azure SQL | Crunchy Data Operator | PostgreSQL
 | Azure Arc | OpenShift ACM | Open Cluster Management
AWS Scaling Group | Azure Scale Set | OpenShift Autoscaler | OKD Autoscaler

* OpenShift Serverless requires the application to be packaged as a container, something AWS Lambda does not require.
** Azure Blob Storage covers the object storage use case of S3, but is itself not S3 compatible.

Russ Allbery: Review: The Library of the Dead

Review: The Library of the Dead, by T.L. Huchu
Series: Edinburgh Nights #1
Publisher: Tor
Copyright: 2021
Printing: 2022
ISBN: 1-250-76777-6
Format: Kindle
Pages: 329
The Library of the Dead is the first book in a post-apocalyptic (sort of) urban fantasy series set in Edinburgh, written by Zimbabwean author (and current Scotland resident) T.L. Huchu. Ropa is a ghosttalker. This means she can see people who have died but are still lingering because they have unfinished business. She can stabilize them and understand what they're saying with the help of her mbira. At the age of fourteen, she's the sole source of income for her small family. She lives with her grandmother and younger sister in a caravan (people in the US call it an RV), paying rent to an enterprising farmer turned landlord. Ropa's Edinburgh is much worse off than ours. Everything is poorer, more run-down, and more tenuous, but other than a few hints about global warming, we never learn the history. It reminded me a bit of the world in Octavia Butler's Parable of the Sower in the feel of civilization crumbling without a specific cause. Unlike that series, The Library of the Dead is not about the collapse or responses to it. The partial ruin of the city is the mostly unremarked backdrop of Ropa's life. Much of the book follows Ropa's daily life carrying messages for ghosts and taking care of her family. She does discover the titular library when a wealthier friend who got a job there shows it off to her, but it has no significant role in the plot. (That was disappointing.) The core plot, once Ropa is convinced by her grandmother to focus on it, is the missing son of a dead woman, who turns out to not be the only missing child. This is urban fantasy with the standard first-person perspective, so Ropa is the narrator. This style of book needs a memorable protagonist, and Ropa is certainly that. She's a talker who takes obvious delight in narrating her own story alongside a constant patter of opinions, observations, and Scottish dialect. Ropa is also poor. That last may not sound that notable; a lot of urban fantasy protagonists are not well-off. But most of them feel culturally middle-class in a way that Ropa does not. Money may be a story constraint in other books, but it rarely feels like a life constraint and experience the way it does here. It's hard to describe the difference in tone succinctly, since it's a lot of small things: the constant presence of money concerns, the frustration of possessions that are stolen or missing and can't be replaced, the tedious chores one has to do when there's no money, even the language and vulgarity Ropa uses. This is rare in fantasy and excellent characterization work. Given that, I am still frustrated with myself over how much I struggled with Ropa as a narrator. She's happy to talk about what is happening to her and what she's learning about (she listens voraciously to non-fiction while running messages), but she deflects, minimizes, or rushes past any mention of what she's feeling. If you don't like the angst that's common from urban fantasy protagonists, this may be the book for you. I have complained about that angst before, and therefore feel like this should have been the book for me, but apparently I need a minimum level of emotional processing and introspection from the narrator. Ropa is utterly unwilling to do any of that. It's possible to piece together what she's feeling and worrying about, but the reader has to rely on hints and oblique comments that she passes over quickly. It didn't help that Ropa is not interested in the same things in her world that I was interested in. 
She's not an unreliable narrator in the conventional sense; she doesn't lie to the reader or intentionally hide information. And yet, the experience of reading this book was, for me, similar to reading a book with an unreliable narrator. Ropa consistently refused to look at what I wanted her to look at or think about what I wanted her to think about. For example, when she has an opportunity to learn magic through books from the titular library, her initial enthusiasm is infectious. Huchu does a great job showing the excitement of someone who likes new ideas and likes telling other people about the neat things she just learned. But when things don't work the way she expected from the books, she doesn't follow up, experiment, or try to understand why. When her grandmother tries to explain something to her from a different angle, she blows her off and refuses to pay attention. And when she does get magic to work, she never tries to connect that to her previous understanding. I kept waiting for Ropa to try to build her own mental model of magic, but she would only toy with an idea for a few pages and then put it down and never mention it again. This is not a fault in the book, just a mismatch between the book and what I wanted to read. All of this is consistent with Ropa's defensive strategies, emotional resiliency, and approach to understanding the world. (I strongly suspect Huchu was giving Ropa some ADHD characteristics, and if so, I think he got it spot on.) Given that, I tried to pivot to appreciating the characterization and the world, but that ran into another mismatch I had with this book, and the reason why I passed on it when it initially came out. I tend to avoid fantasy novels about ghosts. This is not because I mind ghosts themselves, but I've learned from experience that authors who write about ghosts usually also write about other things that I don't want to read about. That unfortunately was the case here; The Library of the Dead was too far into horror for me. There's child abuse, drugs, body horror, and similar nastiness here, more than I wanted in my head. Ropa's full-speed-ahead attitude and refusal to dwell on anything made it a bit easier to read, but it was still too much for me. Ropa is a great character who is refreshingly different than the typical urban fantasy protagonist, and the few hints of the magical library and world background we get were intriguing. This book was not for me, but I can see why other people will love it. Followed by Our Lady of Mysterious Ailments. Rating: 6 out of 10

27 January 2023

Matthew Garrett: Further adventures in Apple PKCS#11 land

After my previous efforts, I wrote up a PKCS#11 module of my own that had no odd restrictions about using non-RSA keys and I tested it. And things looked much better - ssh successfully obtained the key, negotiated with the server to determine that it was present in authorized_keys, and then went to actually do the key verification step. At which point things went wrong - the Sign() method in my PKCS#11 module was never called, and a strange
debug1: identity_sign: sshkey_sign: error in libcrypto
sign_and_send_pubkey: signing failed for ECDSA "testkey": error in libcrypto

error appeared in the ssh output. Odd. libcrypto was originally part of OpenSSL, but Apple ship the LibreSSL fork. Apple don't include the LibreSSL source in their public source repo, but do include OpenSSH. I grabbed the OpenSSH source and jumped through a whole bunch of hoops to make it build (it uses the macosx.internal SDK, which isn't publicly available, so I had to cobble together a bunch of headers from various places), and also installed upstream LibreSSL with a version number matching what Apple shipped. And everything worked - I logged into the server using a hardware-backed key.

Was the difference in OpenSSH or in LibreSSL? Telling my OpenSSH to use the system libcrypto resulted in the same failure, so it seemed pretty clear this was an issue with the Apple version of the library. The way all this works is that when OpenSSH has a challenge to sign, it calls ECDSA_do_sign(). This then calls ECDSA_do_sign_ex(), which in turn follows a function pointer to the actual signature method. By default this is a software implementation that expects to have the private key available, but you can also register your own callback that will be used instead. The OpenSSH PKCS#11 code does this by calling EC_KEY_set_method(), and as a result calling ECDSA_do_sign() ends up calling back into the PKCS#11 code that then calls into the module that communicates with the hardware and everything works.

Except it doesn't under macOS. Running under a debugger and setting a breakpoint on ECDSA_do_sign(), I saw that we went down a code path with a function called ECDSA_do_sign_new(). This doesn't appear in any of the public source code, so seems to be an Apple-specific patch. I pushed Apple's libcrypto into Ghidra and looked at ECDSA_do_sign() and found something that approximates this:
nid = EC_GROUP_get_curve_name(curve);
if (nid == NID_X9_62_prime256v1) {
  return ECDSA_do_sign_new(dgst,dgst_len,eckey);
}
return ECDSA_do_sign_ex(dgst,dgst_len,NULL,NULL,eckey);
What this means is that if you ask ECDSA_do_sign() to sign something on a Mac, and if the key in question corresponds to the NIST P256 elliptic curve type, it goes down the ECDSA_do_sign_new() path and never calls the registered callback. This is the only key type supported by the Apple Secure Enclave, so I assume it's special-cased to do something with that. Unfortunately the consequence is that it's impossible to use a PKCS#11 module that uses Secure Enclave keys with the shipped version of OpenSSH under macOS. For now I'm working around this with an SSH agent built using Go's agent module, forwarding most requests through to the default session agent but appending hardware-backed keys and implementing signing with them, which is probably what I should have done in the first place.
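For context, this is roughly the setup that exercises the failing path, and the workaround (the module and socket paths are placeholders; -I is OpenSSH's standard option for loading a PKCS#11 provider):

# Direct PKCS#11 use: the path that fails in Apple's libcrypto when
# the key is a P256 Secure Enclave key.
ssh -I /usr/local/lib/my-pkcs11-module.dylib user@server
# Workaround: run a custom agent holding the hardware-backed keys and
# point ssh at its socket instead.
export SSH_AUTH_SOCK=$HOME/.ssh/hw-agent.sock
ssh user@server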


26 January 2023

Matt Brown: Goals for 2023

This is the second of a two-part post covering my goals for 2023. See the first part to understand the vision, mission and strategy driving these goals. I want to thank my friend Nat, and Will Larson, whose annual reviews I've always enjoyed reading, for inspiring me to write these posts. I've found the process of articulating my motivations and goals very useful for clarifying my thoughts and creating tangible next steps. I'm grateful for that in and of itself, but I also hope that by publishing this you too might find it interesting, and the additional public accountability it creates will be a positive encouragement to me.

2023 Goals

My focus for 2023 is to bootstrap a business that I can use to build software that solves real problems (see the strategy from the previous post for more details on this). I'm going to track this via three goals:
  1. Execute a series of successful consulting engagements, building a reputation for myself and leaving happy customers willing to provide testimonials that support a pipeline of future opportunities.
  2. Grow my product development skill set by taking several ideas to MVP stage with customer feedback received, and launching at least one product which generates revenue and has growth potential.
  3. Develop and maintain a broad professional network.

Consulting

Based on my background and experience, I plan to target my consulting across three areas:
  1. Leadership - building and growing operationally focused software teams following SRE/devops principles. A typical engagement may involve helping a client establish a brand new SRE/devops practice, or to strengthen and mature the existing practices used to build and operate reliable software in their team(s).
  2. Architecture - applying deep technical expertise to the design of large software systems, particularly focusing on their reliability and operability. A typical engagement may involve design input and decision making support for key aspects of a new system, providing external review and analysis to improve an existing system, or delivering actionable, tactical next steps during or immediately after a reliability crisis.
  3. Technology Strategy - translating high-level business needs into a technical roadmap that provides understandable explanations of the value software can deliver in that context, and the iterative series of appropriately sized projects required to realise it. A typical customer for this would be a small to medium sized business outside of the software industry with a desire to use software in a transformative way to improve their business but who does not employ the necessary in-house expertise to lead that transition.

Product Development

There are three, currently extremely high-level, product ideas that I'm excited to explore:
  1. Improve co-ordination of electricity resources to accelerate the electrification of NZ's energy demand and the transition to a zero-carbon grid. NZ has huge potential to be a world-leader in decarbonising energy use through electrification, but requires a massive transition to realise the benefits. Many of the challenges to that transition involve coordination of an order of magnitude more distributed energy resources (DER) in a much more dynamic and software-oriented manner than the electricity industry is traditionally experienced with. The concept of improving DER coordination is not novel, but our grid has unique characteristics that mean we're likely to need to build localised solutions. There is a strong match between my experience with large, high-reliability distributed software systems and this need. With renewed motivation in the industry for rapid progress, and many conversations and consultations still in their early stages, this is a very compelling space to explore with the intent of developing a more detailed product opportunity to pursue.
  2. Reduce agricultural emissions by making high performance farm management, including effortless compliance reporting, straightforward, fun and effective for busy farmers. NZ's commitments to reduce agricultural emissions (our largest single sector) place increased compliance and reporting burdens on busy farmers who don't want to report the same data multiple times to different regulators and authorities. In tandem, rising business costs and constraints drive a need for continuous improvements in efficiency, performance and farm management processes in order to remain profitable. This in turn drives increases in complexity and the volume of data that farmers must work with. Many industry organisations and associated software developers offer existing products aimed at addressing aspects of these problems, but anecdotal feedback indicates these are poorly integrated, piecemeal solutions that are often frustrating to use - a burden rather than a source of continuous improvement. It looks like there could be an opportunity for a delightful, comprehensive farm management and reporting system to disrupt the industry and help farmers run more profitable and sustainable farms while also reducing compliance costs and effort.
  3. Lower sickness rates and improve cognitive performance by enabling every indoor space to benefit from continuous ventilation monitoring and reporting. Indoor air quality is important in reducing disease transmission risk and promoting optimal cognitive performance, but despite the current pandemic temporarily raising its profile, a focus on indoor air quality generally remains under the radar for most people. One factor contributing to this is the lack of widely available systems for continuously monitoring and reporting on air quality. I built https://co2mon.nz/ to help address this problem in my children's school during 2022. I see potential to further grow this business through marketing and raising awareness of the value of ventilation monitoring in all indoor environments.
In addition to these mission-aligned product ideas, I'm also interested in exploring the creation of small to medium sized SaaS applications that deliver useful value by serving the needs of a specialised or niche business or industry. Even when not directly linked to the overall mission, the development and operation of products of this type can support the strategy. Each application adds direct revenue and also contributes to achieving better economies of scale in the many backend processes and infrastructure required to deliver secure, reliable and performant software systems.

Developing my professional network

To help make this goal more actionable and measurable I will track 3 sub-goals:
  1. To build a professional relationship with at least 30 new people this year, meaning that we've met and had a decent conversation at least a couple of times and keep in touch at least every few months in some form.
  2. To publish a piece of writing on this site at least once a week, and for many of those to generate interesting conversations and feedback. I'll use this as an opportunity to explore product ideas, highlight my consulting expertise and generally contribute interesting technical content into the world.
  3. To support the growth of my local technical community by volunteering my experience and knowledge with others through activities such as mentoring, conference talks and similar.

Next Steps

Over the coming weeks I'll write more about each of these topics - you can use the box in the sidebar (or on the front page, if you're on a phone) to be notified when I post new writing (there's also an RSS feed here, for the geeks). I'd love to have your feedback and engagement on these goals too - please drop me an email with your thoughts or even book a meeting - it won't be a distraction to me, you'll be helping me meet my goal of developing and maintaining my network :)

Louis-Philippe Véronneau: Montreal Subway Foot Traffic Data, 2022 edition

For the fourth year in a row, I've asked the Société de Transport de Montréal, Montreal's transit agency, for the foot traffic data of Montreal's subway. By clicking on a subway station, you'll be redirected to a graph of the station's foot traffic.

Licences

Shirish Agarwal: Minidebconf Tamilnadu 2023, Tinnitus, Cooking, Books and Series.

First up is Minidebconf Tamilnadu 2023, which will be held on 28-29 January 2023. You can find the rest of the details here. I do hope we get to see/hear some good stuff from the Minidebconf. Best of luck to all those who are applying.

Tinnitus

During the lockdown of March 2020, I became aware of noise in my ears and subsequently major hearing loss. It took me quite a while to learn that tinnitus happens both to those who have hearing loss and to those who do not. I keep running into threads like this, and, as shared by someone, nobody knows what really causes it. I did try some of the apps (an app called Resound on Android) that are supposed to tackle tinnitus, but it hasn't helped much. There is this, but at least for me, right now, it is pretty speculative. Also this, again highly speculative.

Cooking

After mum passed away, I haven't cooked anything. This used to give me pleasure but now just doesn't feel right. Cooking is something you enjoy when you are doing it for somebody else and not just for yourself, at least that's how I feel, and with that goes the curiosity to know more recipes. I do wanna buy a wok at some time, but when and how I just don't know.

Books

I have been reading books quite a bit, and due to that had to again revisit and understand ISBN. Perhaps I might have shared it before. It really is something, the history of ISBN. And that correlates with the book I read, Raising Steam by Terry Pratchett. Raising Steam is the 40th book in the Discworld series, and it basically romanticizes and reminisces about how the idea of an engine was born, then a steam engine, and how railways actually started. A lot of history and experiences from the early years of steam railways have been taken and transplanted into the book. Also how railways are and can be successful, if only they are invested in wisely and maintenance is done. This is where imagination and reality come apart, as maintenance isn't done and then you have issues. While this is and was in the UK, a similar situation exists in India and many other places around the world, and it doesn't matter whether it is private or public. Exceptions are Germany and France, but that may be due to labor movements that happened and were successful, unlike in other places. I could go on, but then it would become a different article in itself. Suffice to say there is much to learn, and you need serious people to look after it. Both in the UK and India we lack that. And not just in railways but civil aviation too, but again that is a story in itself.

Web-series Apart from books, I have been watching web-series. Willow is a good one that I enjoyed even though I hadn't seen the earlier movie. There has been a flurry of movies and web-series with the end of the year and the beginning of 2023, and yet I have tried to be a bit selective about what I want to watch. If it has crime, fantasy or drama, then I usually like it. For example, I saw Blackout and was pretty much engrossed in what would happen next. It also leads you to ask questions about centralization vs. de-centralization of power and other utilities, and makes a case for communities to have their own utilities apart from the grid as a fallback. How we would do that over decades or centuries is perhaps a different question altogether. There were two books that kind of stood out for me. The first was Ian Rankin's 'Naming of the Dead'. The book is about a cynical John Rebus, a man after my own heart; I am probably going to buy a few more in his series. In a way it also tells you why the UK is the way it is right now. Another book that I liked was Shades of Grey by Jasper Fforde. This is one of the books that Mum would have clearly liked. It is pretty unusual while at the same time very close to 1984 and other such dystopian novels. The main trope of the book is what colour you can see and how much of it you can see. The main character is somebody who can see Red, around the age of 20. One of the interesting aspects of the book is 'de-facting', which closely resembles the post-truth world where alternative facts can be made out of thin air and don't need any scientific evidence to back them up. In Jasper's world, people don't care about how things work; most technology is banned, curiosity is considered harmful, and those who show it are murdered one way or the other. Interestingly, just last year the author decided to start book 2 of what is supposed to be a 3-book series. This also tells you, in a way, why the U.S. is in such a precarious situation. A part of it is also due to the media being in the hands of a chosen few - the same goes for the UK and India, almost an oligopoly.

The Great Escape This is also a book, but also about the experiences of people - not in the 19th-20th century but today - that tells you slavery and human-trafficking are alive and well. This piece from NPR tells you about an MNC and Indian workers. What I found interesting is that there is barely a mention of the Indian Embassy, which is supposed to help Indian people. I do know for a fact that India's embassies have seen a drastic shortage of both people and materials ever since the new Govt. came into place nine years ago. Incidentally, the BBC shared a documentary about the 2002 Gujarat riots, and it has been censored in India. They keep quiet about the UK Govt., which did find that the Chief Minister was directly responsible for the killings; in fact his number 2, Amit Shah, had said that 'we would do 2002 again' in the election cycle barely a month ago. But sadly, no hate-speech FIR or any other action was taken against Mr. Shah. There have been attempts by people to showcase the documentary. For example, JNU tried it and rowdies from the ABVP (an arm of the BJP) created violence. Even the questions that have been asked by the Wire, the GOI will not acknowledge. Interestingly, all of India's edtechs have taken a beating in the last 6-8 months, including the biggest, BYJU's. Sharing a story from 2021 when things were at their best; today all of them are at the bottom. In fact, the public has grown wary as the prices of the courses have kept increasing and most case studies have been found to be fake. The general outlook on jobs and growth has also been pessimistic. Most companies have been shedding jobs by the truckload, most in the I.T. sector but in other sectors as well. Hospitality and related sectors have taken a huge beating, part of it post-pandemic, part of it the Govt.'s refusal to either spend money or make any positive policies for infrastructure, education, medical care - you name it; they think the private sector has all the answers, which has been proven wrong again and again. I did not want to end on a discordant note, but things are the way they are.

Bálint Réczey: How to speed up your next build 5-20x with Firebuild?

TL;DR: Just prefix your build command (or any command) with firebuild:
firebuild <build command>
OK, but how does it work? Firebuild intercepts all processes started by the command to cache their outputs. The next time the command, or any of its descendant commands, is executed with the same parameters, inputs and environment, the outputs are replayed (the command is shortcut) from the cache instead of running the command again. This is similar to how ccache and other compiler-specific caches work, but firebuild can shortcut any deterministic command, not only a specific list of compilers. Since the inputs of each command are determined at run time, firebuild does not need a complete dependency graph maintained in the source, like Bazel does. It can work with any build system that does not implement its own caching mechanism. Determinism of commands is detected at run time by preloading libfirebuild.so and interposing standard library calls and syscalls. If the command's and all its descendants' inputs are available when the command starts, and all outputs can be calculated from the inputs, then the command can be shortcut; otherwise it will be executed again. The interception comes with a 5-10% overhead, but rebuilds can be 5-20 times faster, or even more, depending on the changes between the builds.

Can I try it? It is already available in Debian Unstable and Testing and in Ubuntu's development release, and the latest stable version is back-ported to supported Ubuntu releases via a PPA.

How can I analyze my builds with firebuild? Firebuild can generate an HTML report showing each command's contribution to the build time. Below are the before and after reports of json4s, a Scala project. The command call graphs (the lower ones) show that java (scalac) took 99% of the original build. Since the scalac invocations are shortcut (cutting the second build's time to less than 2% of the first one's), they don't even show up in the accelerated second build's call graph. What's left to be executed again in the second run are env, perl, make and a few simple commands. The upper graphs are the process trees, with expandable nodes (in blue) also showing which command invocations were shortcut (green). Clicking on a node shows details of the command and the reason if it was not shortcut.

Could I accelerate my project more? Firebuild works best for builds with CPU-intensive processes, and it comes with defaults to not cache very quick commands, such as sh, grep, sed, etc., because caching those would take cache space and shortcutting them may not speed up the build that much. They can still be shortcut with their parent command. Firebuild's strength is that it can find shortcutting points in the process tree automatically: e.g. from sh -c 'bash -c "sh -c echo Hello World!"' bash would be shortcut, but none of the sh commands would be cached. In typical builds there are many such commands from the skip_cache list. Caching those commands with firebuild -o 'processes.skip_cache = []' can improve acceleration and make the reports smaller. Firebuild also supports several debug flags, and -d proc helps finding the reasons for not shortcutting some commands:
...
FIREBUILD: Command "/usr/bin/make" can't be short-cut due to: Executable set to be not shortcut,  ExecedProcess 1329.2, running, "make -f debian/rules build", fds=[ FileFD fd=0  FileOFD ...
FIREBUILD: Command "/usr/bin/sort" can't be short-cut due to: Process read from inherited fd ,  ExecedProcess 4161.1, running, "sort", fds=[ FileFD fd=0  FileOFD ...
FIREBUILD: Command "/usr/bin/find" can't be short-cut due to: fstatfs() family operating on fds is not supported,  ExecedProcess 1360.1, running, "find -mindepth 1 ...
...
make, ninja and other incremental build tool binaries are not shortcut because they compare the timestamps of files, but they are fast at least, and every build step they perform can still be shortcut. Ideally, the slower build steps that could not be shortcut can be re-implemented in ways that can be shortcut, by avoiding tools that perform unsupported operations. I hope these tools help speed up your build with very little effort, but if not, and you find something to fix or improve in firebuild itself, please report it or just leave feedback! Happy speeding, but not on public roads!
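Putting the pieces above together, a minimal sketch of the workflow as shell commands (the apt package name and the use of make are illustrative assumptions; the -o and -d flags are the ones shown earlier):

# Install firebuild (package name assumed; packages exist in Debian unstable/testing and an Ubuntu PPA)
sudo apt install firebuild
# First build: runs normally while firebuild populates its cache (5-10% interception overhead)
firebuild make
# Rebuild: deterministic commands are shortcut (replayed) from the cache
make clean && firebuild make
# Optionally also cache very quick commands (sh, grep, sed, ...) that are skipped by default
firebuild -o 'processes.skip_cache = []' make
# Print the reasons why particular commands could not be shortcut
firebuild -d proc make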

Matt Brown: Vision, Mission and Strategy

This is the first of a two-part post, covering high-level thoughts around my motivations and vision. Make sure to check out the second part for my specific goals for 2023. A new year is upon us! My plan was to be 6 months into the journey of starting a business by this point. I made some very tentative progress towards that goal in 2022, registering a company and starting some consulting work, but on the whole I've found it much harder than expected to gather the necessary energy to begin that journey in earnest.

Reflection I'm excited about the next chapter of my career, so the fact that I've been struggling to get started has been frustrating. The only upside is that the delay has given me plenty of time to reflect on the last few years, learn what I can from them, and draw some lessons to help better manage and sustain my energy going forward.

Purpose A large part of what I've realised is that I should have left Google years ago. It was a great place to work, and I'm incredibly grateful for everything I learned and received during my time there. For years it was my dream job, but my happiness had been declining, and instead of taking the (relatively small) risk of leaving for the unknown, I tried several variations of team and role in the hope of restoring the dream. The reality is that a significant chunk of my motivation and energy comes from being able to link my work back to a bigger purpose that delivers concrete positive impact in the world. I felt that link through Google's mission to make information universally accessible and useful for the first 10-11 years, but for the latter 4-5 years my ability to see that link was tenuous at best, and trying to push through the challenges presented without that link providing a reliable source of energy is what drove my unhappiness and led to needing a longer break to recharge. I expect the challenges of starting a business to be even greater than what I experienced at Google, so the lesson I'm taking from this is that it's crucial for me to understand the link between my work and the bigger purpose, with concrete positive impact in the world, that I'm aiming to contribute to.

Community The second factor that I've slowly come to realise has been missing from my career in the last few years is participation in a professional community and a variety of enriching interpersonal relationships. As much as I value and need this type of interaction, fostering and sustaining it unfortunately doesn't come naturally to me. Working remotely since 2016 and then taking a 9-month break out of the industry have not been particularly helpful contributors to building and maintaining a wide network either! The lesson here is simply that I'm going to need to push past my comfort zone in reaching out and introducing myself to a range of people in order to grow my professional network, and equally I need to be diligent and disciplined in making time to maintain and regularly connect with the people whom I respect and find energising to interact with.

Personal Influences Lastly, I've been reflecting on a set of principles that are important to me. These are not so much new lessons as confirmation to myself of what I value moving forward. There are many things I could include here, but to keep it somewhat brief, the key influences on my thinking are:
  • Independence - I can't entirely explain why or where it comes from, but since the start of my professional career (which I consider to be my consulting/freelancing development during high school) I've understood that I'm far more motivated by building and growing my own business than I am by working for someone else. Working for myself has always felt like the default and sensible course - I'm excited to get back to that.
  • Openness - Open is better than closed, in terms of software, business model and organisational processes. This continues to be a strong belief and something I want to uphold in my business endeavours. Competition should be based on superior technical quality or service, not artificial constraints or barriers to entry that lock customers and users into a single solution or market. Protocols and networks should be open for wide participation and easily accessible to new entrants and competition.
  • People first - This applies both to how we work with each other - respectfully, valuing diversity and with integrity, and to how we apply technology to our world - with consideration for all stakeholders it may affect and awareness of both the intended and potential unintended impacts.

Framework Using Vision, Mission and Strategy as a planning framework has worked quite well for me when building and growing teams over the years, so I plan to re-use it personally to help organise the above reflections into a hopefully cohesive plan that results in some useful 2023 goals.

Vision Software systems contribute direct and meaningful impact to solving real problems in our world. Each word has a fair bit of meaning behind it for me, so breaking it down a little bit:
  • software systems - excite me because software is eating the world and has significant potential to do good.
  • contribute - Software alone doesn't solve problems, and misapplied it can easily make things worse. To contribute, software needs to be designed intentionally and evaluated with an awareness of the risks it could pose within the complex system that is our modern world.
  • direct and meaningful impact - I'm not looking for broad outcomes like improving productivity or communication, which apply generally across many problems. I want to see software applied to solve specific blockers whose removal unlocks significant progress towards solving a problem.
  • real - as opposed to straightforward problems; the type of issue where acknowledging it as a real problem often ends the conversation, because it feels too big to tackle. Climate change and pandemic risk are examples of real problems. Decentralising finance or selling more widgets are not.
  • in our world - is mostly filler to round out the sentence nicely, but I do think we should probably sort out the mess we're making on our own planet before trying to colonise anywhere else.

Mission To lead the development and operation of software systems that deliver new opportunities for individuals, businesses and communities to solve the real problems in their community. Again breaking down the intent a little bit:
  • lead - having a meaningful impact on real problems is a big job. I won't succeed as a one-man band. It will require building and growing a larger team.
  • development and operation - development is fun and necessary, but I also wanted to highlight that the ongoing operation and integration of those software systems into the broader social and human systems of our world is an equally important and ongoing need.
  • new opportunities - are important to drive and motivate investment in the adoption of technology. Building or operating a system that maintains the status quo is not motivating for me.
  • individuals, businesses and communities - aka everyone! But each of these groups (examples, not an exhaustive list) will have diverse roles, needs and interactions with the software, which must be considered to ensure the system achieves the desired contribution and impact.
  • their community - refines the ambition from the vision to an achievable scope of action within which to execute the mission. We won't solve our problems by targeting one big global fix, but if we each participate in solving the problems in our community, collectively it will make a difference.

Strategy Build a sustainable business that provides a home and infrastructure to support a continuous cycle of development, validation and growth of software systems fulfilling the mission and vision above.
  • Accumulate meaningful impact via a portfolio of systems rather than one big bet.
  • Focus on opportunities that promote the decarbonisation of our economy (the most pressing problem our society faces), but without ignoring compelling opportunities to contribute impact to other real problems as well.
  • Favour the marathon over the sprint - while being first can be fun and convey benefits, it's often the fast-followers who learn from the initial mistakes and deliver lasting change and broader impact.
In keeping with the final bullet point, I aim to evaluate the strategy against a long-term view of success. What excites me about it is that it has the potential to provide structure and clarity for my work while also enabling many future paths - from operating a portfolio of micro-SaaS products that each solve real problems for a specific niche or community, to diving deep into a single compelling opportunity for a year or two, to joining with others to partner on shared ventures - or some combination of all three and other variations in between.

Your Thoughts I consider this a first draft, which I intend to revise and evolve further over the next 6-12 months. I don't plan major changes to the intent or underlying ideas, but finding the best words to express and convey that intent clearly is not something I expect to get right on the first take. I'd love to have your feedback and engagement as I move forward with this strategy - please use the box in the sidebar (or on the front page, if you're on a phone) to be notified when I post new writing, drop me an email with your thoughts or even book a meeting to say hi and discuss something in detail.

Goals for 2023 Next up - check out part two of this post to see my goals for 2023.

Dirk Eddelbuettel: RcppTOML 0.2.1 on CRAN: Small Build Fix for Some Arches

Two weeks after the release of RcppTOML 0.2.0 and the switch to toml++, we have a quick bugfix release 0.2.1. TOML is a file format that is most suitable for configurations, as it is meant to be edited by humans but read by computers. It emphasizes strong readability for humans while at the same time supporting strong typing as well as immediate and clear error reports. On small typos you get parse errors, rather than silently corrupted garbage. Much preferable to any and all of XML, JSON or YAML - though sadly these may be too ubiquitous now. TOML is frequently used with projects such as the Hugo static blog compiler, or the Cargo system of Crates (aka 'packages') for the Rust language. Some architectures, aarch64 included, got confused over float16, which is of course a tiny two-byte type nobody should need. After consulting with Mark, we concluded to (at least for now) simply override this, excluding the use of float16. The short summary of changes follows.
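As a quick illustration of the format (a minimal sketch; the file name and keys are made up, and R with RcppTOML installed is assumed), a small TOML file can be written and parsed from the shell with the package's parseTOML():

# Write a tiny TOML configuration (illustrative content only)
cat > config.toml <<'EOF'
title = "example"

[server]
host = "localhost"
port = 8080
EOF
# Parse it with RcppTOML's parseTOML() and print the resulting R structure
Rscript -e 'str(RcppTOML::parseTOML("config.toml"))'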

Changes in version 0.2.1 (2023-01-25)
  • Explicitly set -DTOML_ENABLE_FLOAT16=0 to permit compilation on some architectures stumbling over the type.

Courtesy of my CRANberries, there is a diffstat report for this release. More information is on the RcppTOML page. Please use the GitHub issue tracker for issues and bug reports. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
